In a guest column, these lawyers explain the pros and cons of using AI for hiring. Photo via Getty Images

Workplace automation has entered the human resources department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed as saving time, improving workforce quality, and eliminating unlawful hiring biases. But is AI really incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination and federal law holds companies accountable for their hiring decisions, even if those decisions were made in a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the employers who use them from liability. This is because the law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk for AI lies with disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) disproportionately harms a particular protected group, such as one defined by race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from zip codes with different racial compositions. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is largely composed of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reluctant to disclose details about proprietary technology, but employers will ultimately be responsible for discriminatory impacts. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
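One common audit the column alludes to can be sketched in a few lines. The EEOC's Uniform Guidelines describe a "four-fifths rule" of thumb: a selection rate for any group that is less than 80 percent of the highest group's rate may indicate adverse impact. Below is a minimal, hypothetical illustration of that check; the group names and counts are invented for the example, and a real audit would involve larger samples and statistical testing.

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (applicants, hires)."""
    return {g: hires / applicants for g, (applicants, hires) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Hypothetical applicant pools after running an AI screening tool.
example = {
    "group_a": (200, 60),  # 30% selection rate
    "group_b": (150, 30),  # 20% selection rate -> 0.20/0.30 = 67% of top
}
flags = four_fifths_check(example)
```

Here group_b's rate is only about two-thirds of group_a's, so the check flags it; that result would not prove discrimination, but it is the kind of signal that should prompt a closer look at the tool and its input data.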

Third, employers should stay abreast of developments in the law. This is an emerging field, and state legislatures have taken notice. Illinois recently enacted legislation governing the use of AI in the workplace, and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.


3 Houston innovators who made headlines in May 2025

Innovators to Know

Editor's note: Houston innovators are making waves this month with revolutionary VC funding, big steps towards humanoid robotics, and software that is impacting the agriculture sector. Here are three Houston innovators to know right now.

Zach Ellis, founder and partner of South Loop Ventures

Zach Ellis. Photo via LinkedIn

Zach Ellis Jr., founder and general partner of South Loop Ventures, says the firm wants to address the "billion-dollar blind spot" of inequitable distribution of venture capital to underrepresented founders of color. The Houston-based firm recently closed its debut fund for more than $21 million. Learn more.

Ty Audronis, CEO and founder of Tempest Droneworx

Ty Audronis, center. Photo via LinkedIn.

Ty Audronis and his company, Tempest Droneworx, made a splash at SXSW Interactive 2025, winning the Best Speed Pitch award at the annual festival. The company is known for its flagship product, Harbinger, a software solution that agnostically gathers data at virtually any scale and presents that data in easy-to-understand visualizations using a video game engine. Audronis says his company won based on its merits and the impact it's making and will make on the world, beginning with agriculture. Learn more.

Nicolaus Radford, CEO of Persona AI

Nicolaus Radford. Image via LinkedIn

Houston-based Persona AI and CEO Nicolaus Radford continue to make steps toward deploying a rugged humanoid robot, and with that comes the expansion of its operations at Houston's Ion. Radford and company will establish a state-of-the-art development center in the prominent corner suite on the first floor of the building, with the expansion slated to begin in June. “We chose the Ion because it’s more than just a building — it’s a thriving innovation ecosystem,” Radford says. Learn more.

Houston university to launch artificial intelligence major, one of first in nation

BS in AI

Rice University announced this month that it plans to introduce a Bachelor of Science in AI in the fall 2025 semester.

The new degree program will be part of the university's department of computer science in the George R. Brown School of Engineering and Computing and is one of only a few like it in the country. It aims to focus on "responsible and interdisciplinary approaches to AI," according to a news release from the university.

“We are in a moment of rapid transformation driven by AI, and Rice is committed to preparing students not just to participate in that future but to shape it responsibly,” Amy Dittmar, the Howard R. Hughes Provost and executive vice president for academic affairs, said in the release. “This new major builds on our strengths in computing and education and is a vital part of our broader vision to lead in ethical AI and deliver real-world solutions across health, sustainability and resilient communities.”

John Greiner, an assistant teaching professor of computer science in Rice's online Master of Computer Science program, will serve as the new program's director. Vicente Ordóñez-Román, an associate professor of computer science, was also instrumental in developing and approving the new major.

Until now, Rice students could study AI through elective courses and an advanced degree. The new bachelor's degree program opens up deeper learning opportunities to undergrads by blending traditional engineering and math requirements with other courses on ethics and philosophy as they relate to AI.

“With the major, we’re really setting out a curriculum that makes sense as a whole,” Greiner said in the release. “We are not simply taking a collection of courses that have been created already and putting a new wrapper around them. We’re actually creating a brand new curriculum. Most of the required courses are brand new courses designed for this major.”

Students in the program will also benefit from resources through Rice’s growing AI ecosystem, like the Ken Kennedy Institute, which focuses on AI solutions and ethical AI. The university also opened its new AI-focused "innovation factory," Rice Nexus, earlier this year.

“We have been building expertise in artificial intelligence,” Ordóñez-Román added in the release. “There are people working here on natural language processing, information retrieval systems for machine learning, more theoretical machine learning, quantum machine learning. We have a lot of expertise in these areas, and I think we’re trying to leverage that strength we’re building.”