In a guest column, these lawyers explain the pros and cons of using AI for hiring.

Workplace automation has entered the human resources department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed as saving time, improving the quality of a workforce, and eliminating unlawful hiring biases. But is AI truly incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from producing discriminatory outcomes, and federal law holds companies accountable for their hiring decisions, even those made inside a black box. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the employers that use them from liability. This is because the law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk for AI lies in disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, defined by characteristics such as race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from certain zip codes that are home predominantly to one race or color. An AI tool may favor those zip codes, disfavoring applicants from other zip codes with different racial compositions. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is composed largely of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reluctant to disclose details about proprietary information, but employers will ultimately be responsible for discriminatory impacts. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
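To make the auditing step concrete: one common screen for disparate impact, drawn from the EEOC's Uniform Guidelines, is the "four-fifths rule," which flags a selection process when any group's selection rate falls below 80 percent of the highest group's rate. The sketch below is purely illustrative (the group labels and numbers are hypothetical, not any vendor's actual audit), but it shows the kind of arithmetic such an audit involves.

```python
# Illustrative four-fifths-rule check for adverse impact.
# A ratio below 0.8 between a group's selection rate and the
# highest group's rate is a common signal for closer review.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True for groups that pass the four-fifths screen."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical results from an AI resume-screening tool:
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(results))
# group_b's rate (0.30) against group_a's (0.48) yields a ratio
# of 0.625, below 0.8, so group_b is flagged for review.
```

Passing such a screen does not prove a tool is lawful, and failing it does not prove discrimination; it is simply a first-pass statistic that counsel and vendors can use to decide where deeper analysis is needed.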

Third, employers should stay abreast of developments in the law. This is an emerging field, and state legislators have taken notice. Illinois recently passed legislation governing the use of AI in the workplace, and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.


Austin company to bring AI-powered school to The Woodlands

AI education

Austin-based Alpha School, which operates AI-powered private schools, is opening its first Houston-area location in The Woodlands.

The 8,000-square-foot school, scheduled to be ready for the 2026-27 academic year, initially will serve students in kindergarten through eighth grade. Alpha says the school will offer “open workshop spaces and innovative classrooms that support personalized instruction, core academics, leadership development, and real-world life skills.”

Alpha sets aside two hours each school day for the AI-driven, self-paced study of core subjects like math, reading and science. The rest of each school day consists of life-skills workshops focusing on topics such as leadership and financial literacy.

Alpha’s school in The Woodlands has begun accepting applications for the 2026-27 school year. Annual tuition costs $40,000.

“The Woodlands is one of the most dynamic, forward-thinking communities in Texas, and Alpha is proud to bring an innovative educational model that complements its strong academic foundation,” says Rachel Goodlad, head of expansion for Alpha.

Founded in 2014, Alpha School combines adaptive technology-driven instruction with immersive life-skills workshops. Its model emphasizes mastery-based learning in core subjects alongside development of communication, critical thinking, financial literacy and leadership skills. It operates more than 15 schools across the country.

Elsewhere in Texas, Alpha operates schools in Austin, Brownsville, Fort Worth and Plano. Alpha also operates 12 Texas Sports Academy campuses in Texas, including locations in Houston, Pearland and Richmond, along with a NextGen Academy esports school in Austin, a school for gifted students in Georgetown, and lower-cost Nova Academy campuses in Austin and Bastrop.

Alpha has fans and critics. While supporters tout students’ high achievement rates, detractors complain about the high tuition and the AI-influenced depersonalization of education.

“Students and our country need to be in relationship with other human beings,” Randi Weingarten, president of the American Federation of Teachers, a teachers union, tells The New York Times. “When you have a school that is strictly AI, it is violating that core precept of the human endeavor and of education.”

Alpha co-founder MacKenzie Price, a podcaster and social media influencer, doesn’t share Weingarten’s views.

“Parents and teachers: We need to embrace this change,” Price wrote after President Trump signed an executive order promoting AI in schools.

The Times notes that Alpha doesn’t employ AI as a tutor or a supplement. Rather, the newspaper says, AI is “the school’s primary educational driver to move students through academic content.”

Houston researcher secures $1.7M to develop drug for aggressive form of breast cancer

cancer research

A University of Houston researcher has joined a $3.2 million effort to develop a new drug designed to attack a cancer-driving protein commonly found in triple-negative breast cancer.

Triple-negative breast cancer (TNBC) is one of the most difficult-to-treat forms of cancer and accounts for 10 percent to 15 percent of all breast cancer cases. The disease gets its name because tumors associated with it test negative for estrogen receptors, progesterone receptors and excess HER2 protein, making it difficult to target. Due to this, TNBC is often treated with general chemotherapy, which can come with negative side effects and drug resistance, according to UH.

UH College of Pharmacy research associate professor Wei Wang is developing a drug that can target the disease more specifically. The drug will target MDM2, a protein often overproduced in TNBC that also contributes to faster tumor growth.

Wang is working on a team led by Wei Li, director of the University of Tennessee Health Science Center College of Pharmacy’s Drug Discovery Center. Wang has received $1.7 million of the project’s funding to support the research.

Wang and UH professor of pharmacology and toxicology Ruiwen Zhang have discovered a compound that can break down MDM2. In early laboratory models, the compound has shown the ability to shrink tumors.

Wang and Zhang will focus on understanding how the treatment works and monitoring its effectiveness in models that closely mirror human disease.

“We will study how the drug targets MDM2 and evaluate the most promising drug candidates to determine effective dosing, understand how the drug behaves in the body, compare it with existing treatments and assess early safety,” Wang said in a news release.

Li’s team at the University of Tennessee will be working on the chemistry and drug design end of the project.

“This work could lead to an entirely new class of therapies for triple-negative breast cancer,” Li added in the release. “We’re hopeful that by directly removing the MDM2 protein from cancer cells, we can help more patients respond to treatment regardless of their tumor type.”