In a guest column, these lawyers explain the pros and cons of using AI for hiring.

Workplace automation has entered the human resources department. Companies increasingly rely on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed as saving time, improving workforce quality, and eliminating unlawful hiring biases. But is AI truly incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination, and federal law holds companies accountable for their hiring decisions, even if those decisions were made inside a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the employers who use them from liability. The law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk for AI lies in disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as one defined by race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from zip codes with different racial compositions. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is largely composed of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.

Steps to mitigate risk

There are a handful of steps employers can take to use these technologies while remaining compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reluctant to reveal proprietary details, but the employer will ultimately be responsible for any discriminatory impact. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
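To make the audit step concrete, here is a minimal sketch, assuming hypothetical applicant data, of how an employer might apply the EEOC's "four-fifths rule" of thumb, under which a selection rate for any group that is less than 80 percent of the highest group's rate is conventionally treated as initial evidence of adverse impact. The group names, counts, and function names below are illustrative, not drawn from any particular AI tool.

```python
# Illustrative disparate-impact audit using the EEOC's four-fifths rule.
# All data here is hypothetical; a real audit would use the AI tool's
# actual screening outcomes and involve counsel and statistical experts.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the top group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) < 0.8 for group, rate in rates.items()}

# Hypothetical screening results: (candidates advanced, total applicants).
results = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% rate; 30/48 = 62.5% of top rate, so flagged
}

for group, flagged in four_fifths_check(results).items():
    status = "potential adverse impact" if flagged else "within guideline"
    print(f"{group}: {status}")
```

The four-fifths rule is a guideline rather than a legal safe harbor; a statistically significant disparity can matter even when the ratio clears 80 percent, so results like these are a starting point for review, not a conclusion.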

Third, employers should stay abreast of developments in the law. This is an emerging field, and state legislators have taken notice. Illinois recently passed legislation governing the use of AI in hiring, and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.



Rice University's edtech company receives $90M to lead NSF research hub


An educational technology company based at Rice University has received $90 million to create and lead a research and development hub for inclusive learning and education research. It's the largest research award in the history of the university.

OpenStax received the grant funding from the U.S. National Science Foundation for a five-year project to create the R&D hub, called SafeInsights, which "will enable extensive, long-term research on the predictors of effective learning while protecting student privacy," reads a news release from Rice. It's the NSF's largest single investment commitment to national-scale education R&D infrastructure.

“We are thrilled to announce an investment of $90 million in SafeInsights, marking a significant step forward in our commitment to advancing scientific research in STEM education,” NSF Director Sethuraman Panchanathan says in the release. “There is an urgent need for research-informed strategies capable of transforming educational systems, empowering our nation’s workforce and propelling discoveries in the science of learning.

"By investing in cutting-edge infrastructure and fostering collaboration among researchers and educators, we are paving the way for transformative discoveries and equitable opportunities for learners across the nation.”

SafeInsights is funded through NSF’s Mid-scale Research Infrastructure-2 (Mid-scale RI-2) program and will act as a central hub for 80 partners and collaborating institutions.

“SafeInsights represents a pivotal moment for Rice University and a testament to our nation’s commitment to educational research,” Rice President Reginald DesRoches adds. “It will accelerate student learning through studies that result in more innovative, evidence-based tools and practices.”

Richard Baraniuk, the Rice professor who founded OpenStax, will lead SafeInsights. He hopes the initiative will enable progress for students learning in a wide variety of contexts.

“Learning is complex," Baraniuk says in the release. "Research can tackle this complexity and help get the right tools into the hands of educators and students, but to do so, we need reliable information on how students learn. Just as progress in health care research sparked stunning advances in personalized medicine, we need similar precision in education to support all students, particularly those from underrepresented and low-income backgrounds.”


2 Houston startups selected by US military for geothermal projects


Two clean energy companies in Houston have been recruited for geothermal projects at U.S. military installations.

Fervo Energy is exploring the potential for a geothermal energy system at Naval Air Station Fallon in Nevada.

Meanwhile, Sage Geosystems is working on an exploratory geothermal project for the Army’s Fort Bliss post in Texas. The Bliss project is the third U.S. Department of Defense geothermal initiative in the Lone Star State.

“Energy resilience for the U.S. military is essential in an increasingly digital and electric world, and we are pleased to help the U.S. Army and [the Defense Innovation Unit] to support energy resilience at Fort Bliss,” Cindy Taff, CEO of Sage, says in a news release.

A spokeswoman for Fervo declined to comment.

Andy Sabin, director of the Navy’s Geothermal Program Office, says in a military news release that previous geothermal exploration efforts indicate the Fallon facility “is ideally suited for enhanced geothermal systems to be deployed onsite.”

As for the Fort Bliss project, Michael Jones, a project director in the Army Office of Energy Initiatives, says it’ll combine geothermal technology with innovations from the oil and gas sector.

“This initiative adds to the momentum of Texas as a leader in the ‘geothermal anywhere’ revolution, leveraging the robust oil and gas industry profile in the state,” says Ken Wisian, associate director of the Environmental Division at the U.S. Bureau of Economic Geology.

The Department of Defense kicked off its geothermal initiative in September 2023. Specifically, the Army, Navy, and Defense Innovation Unit launched four exploratory geothermal projects at three U.S. military installations.

One of the three installations is the Air Force’s Joint Base San Antonio. Canada-based geothermal company Eavor is leading the San Antonio project.

Another geothermal company, Atlanta-based Teverra, was tapped for an exploratory geothermal project at the Army’s Fort Wainwright in Alaska. Teverra maintains an office in Houston.

------

This article originally ran on EnergyCapital.