In a guest column, these lawyers explain the pros and cons of using AI for hiring.

Workplace automation has entered the human resources department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed as saving time, improving workforce quality, and eliminating unlawful hiring biases. But is AI really incapable of discrimination? And can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination, and federal law holds companies accountable for their hiring decisions, even when those decisions are made inside a black box. The technology can mitigate bias, but only if it is used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, a lack of intent does not absolve an employer of liability. The law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk with AI lies in disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a protected group, such as one defined by race, color, national origin, sex, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from certain zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from zip codes with different racial compositions. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is composed largely of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reluctant to reveal proprietary details, but it is the employer that will ultimately be responsible for any discriminatory impact. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
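The audit described above can be made concrete. One widely used screening check, drawn from the EEOC's "four-fifths" rule of thumb rather than from anything this column prescribes, compares selection rates across demographic groups. A minimal sketch in Python, using hypothetical applicant counts:

```python
# Four-fifths (80%) rule of thumb: a group's selection rate below 80% of the
# highest group's rate may indicate adverse impact and warrant closer review.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, applied)."""
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag each group whose selection rate falls below
    threshold * (highest group's selection rate)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Hypothetical audit data for an AI resume-screening tool.
data = {"group_a": (48, 100), "group_b": (30, 100)}
flags = adverse_impact_flags(data)
# group_b's rate (0.30) is 62.5% of group_a's (0.48), so only group_b is flagged.
```

A flag under this heuristic is not proof of unlawful disparate impact, but it is the kind of signal an employer would want to catch, and investigate, before a plaintiff's lawyer does.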

Third, employers should stay abreast of developments in the law. This is an emerging field, and state legislators have taken notice. Illinois recently passed legislation governing the use of AI in the workplace, and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as their creators and users. Careful implementation can make AI a solution to discrimination rather than a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.

UH receives $2.6M gift to support opioid addiction research and treatment

drug research

The estate of Dr. William A. Gibson has granted the University of Houston a $2.6 million gift to support and expand its opioid addiction research, including the development of a fentanyl vaccine that could block the drug's ability to enter the brain.

The gift builds upon a previous donation from the Gibson estate that honored the scientist’s late son Michael, who died from drug addiction in 2019. The original donation established the Michael C. Gibson Addiction Research Program in UH's department of psychology. The latest donation will establish the Michael Conner Gibson Endowed Professorship in Psychology and the Michael Conner Gibson Research Endowment in the College of Liberal Arts and Social Sciences.

“This incredibly generous gift will accelerate UH’s addiction research program and advance new approaches to treatment,” Daniel O’Connor, dean of the College of Liberal Arts and Social Sciences, said in a news release.

The Michael C. Gibson Addiction Research Program is led by UH professor of psychology Therese Kosten and Colin Haile, a founding member of the UH Drug Discovery Institute. Currently, the program produces high-profile drug research, including the fentanyl vaccine.

According to UH, the vaccine can eliminate the drug’s “high” and could have major implications for the nation’s opioid epidemic, as research reveals Opioid Use Disorder (OUD) is treatable.

The endowed professorship is combined with a one-to-one match from the Aspire Fund Challenge, a $50 million grant program established in 2019 by an anonymous donor. UH says the program has helped the university increase its number of endowed chairs and professorships, including this new position in the department of psychology.

“Our future discoveries will forever honor the memory of Michael Conner Gibson and the Gibson family,” O’Connor added in the release. “And I expect that the work supported by these endowments will eventually save many thousands of lives.”

CenterPoint and partners launch AI initiative to stabilize the power grid

AI infrastructure

Houston-based utility company CenterPoint Energy is one of the founding partners of a new AI infrastructure initiative called Chain Reaction.

Software companies NVIDIA and Palantir have joined CenterPoint in forming Chain Reaction, which is aimed at speeding up AI buildouts for energy producers and distributors, data centers and infrastructure builders. Among the initiative’s goals are to stabilize and expand the power grid to meet growing demand from data centers, and to design and develop large data centers that can support AI activity.

“The energy infrastructure buildout is the industrial challenge of our generation,” Tristan Gruska, Palantir’s head of energy and infrastructure, said in a news release. “But the software that the sector relies on was not built for this moment. We have spent years quietly deploying systems that keep power plants running and grids reliable. Chain Reaction is the result of building from the ground up for the demands of AI.”

CenterPoint serves about 7 million customers in Texas, Indiana, Minnesota and Ohio. After Hurricane Beryl struck Houston in July 2024, CenterPoint committed to building a resilient power grid for the region and chose Palantir as its “software backbone.”

“Never before have technology and energy been so intertwined in determining the future course of American innovation, commercial growth, and economic security,” Jason Wells, chairman, president and CEO of CenterPoint, added in the release.

In November, the utility company got the go-ahead from the Public Utility Commission of Texas for a $2.9 billion upgrade of its Houston-area power grid. CenterPoint serves 2.9 million customers in a 12-county territory anchored by Houston.

A month earlier, CenterPoint launched a $65 billion, 10-year capital improvement plan to support rising demand for power across all of its service territories.

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.

Houston researchers develop material to boost AI speed and cut energy use

AI research

A team of researchers at the University of Houston has developed an innovative thin-film material that they believe will make AI devices faster and more energy efficient.

AI data centers consume massive amounts of electricity and rely on large cooling systems to operate, driving up overall energy consumption.

“AI has made our energy needs explode,” Alamgir Karim, Dow Chair and Welch Foundation Professor at the William A. Brookshire Department of Chemical and Biomolecular Engineering at UH, explained in a news release. “Many AI data centers employ vast cooling systems that consume large amounts of electricity to keep the thousands of servers with integrated circuit chips running optimally at low temperatures to maintain high data processing speed, have shorter response time and extend chip lifetime.”

In a report recently published in ACS Nano, Karim and a team of researchers introduced a specialized two-dimensional thin-film dielectric, or electrical insulator. The film, which does not conduct electricity, could replace traditional, heat-generating components in the integrated circuit chips that are essential hardware for AI.

The thin-film material aims to reduce the significant energy cost and heat produced by the high-performance computing necessary for AI.

Karim and his former doctoral student, Maninderjeet Singh, used Nobel Prize-winning organic framework materials to develop the film. Singh, now a postdoctoral researcher at Columbia University, developed the materials during his doctoral training at UH, along with Devin Shaffer, a UH professor of civil engineering, and doctoral student Erin Schroeder.

Their study shows that dielectrics with high permittivity (high-k) store more electrical energy and dissipate more of it as heat than low-k materials do. Karim focused on low-k materials made from light elements, like carbon, that would allow chips to run cooler and faster.
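As background the release does not spell out, standard dielectric physics connects permittivity to both effects: the energy stored per unit volume in a dielectric, and the power dissipated as heat under an alternating field, both scale with the relative permittivity k (the loss term also involves the material's loss tangent, tan δ):

```latex
u = \tfrac{1}{2}\,\varepsilon_0\, k\, E^2,
\qquad
P_{\mathrm{loss}} \propto \omega\,\varepsilon_0\, k \tan\delta\, E^2
```

All else being equal, a low-k film therefore both stores and dissipates less energy, which is consistent with the cooler, faster chips the team describes.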

The team then created new materials with carbon and other light elements, forming covalently bonded sheetlike films with highly porous crystalline structures using a process known as synthetic interfacial polymerization. Then they studied their electronic properties and applications in devices.

According to the report, the film was suitable for high-voltage, high-power devices while maintaining thermal stability at elevated operating temperatures.

“These next-generation materials are expected to boost the performance of AI and conventional electronics devices significantly,” Singh added in the release.