guest column

Here's what Houston employers need to know about using artificial intelligence in the hiring process

In a guest column, these lawyers explain the pros and cons of using AI for hiring. Photo via Getty Images

Workplace automation has entered the human resources department. Companies increasingly rely on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed as ways to save time, improve the quality of a workforce, and eliminate unlawful hiring biases. But is AI really incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination, and federal law holds companies accountable for their hiring decisions, even if those decisions were made in a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the employers who use them from liability. The law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk with AI lies in disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as a group defined by race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from zip codes with different racial compositions. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is composed largely of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.
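To make that mechanism concrete, here is a minimal, hypothetical sketch of how a screening model trained on past hiring decisions can learn a zip code as a stand-in for a protected characteristic. The data, features, and model below are invented for illustration and are not drawn from any real vendor's product.

```python
# Hypothetical illustration: a screening model trained on past hires can learn
# a zip code as a proxy for a protected group. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicant pool: zip_a tracks one demographic group; years of
# experience is the only job-relevant feature.
group = rng.integers(0, 2, n)            # 0 or 1, the protected group
zip_a = (group == 0).astype(int)         # zip code correlates with the group
experience = rng.normal(5, 2, n)

# Past hiring favored group 0 regardless of experience -- that bias is now
# baked into the labels a screening tool would learn from.
hired = ((experience > 5) | (group == 0)).astype(int)

X = np.column_stack([experience, zip_a])
model = LogisticRegression().fit(X, hired)

print("weight on experience:", round(model.coef_[0][0], 2))
print("weight on zip code:  ", round(model.coef_[0][1], 2))
# The zip-code weight comes out large and positive: the model replicates the
# old bias even though the protected group was never given to it directly.
```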

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reluctant to reveal details about proprietary technology, but employers will ultimately be responsible for any discriminatory impact. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
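One widely used screening check for such an audit is the "four-fifths rule" from the EEOC's Uniform Guidelines: if a protected group's selection rate is less than 80 percent of the most-favored group's rate, the tool may be producing an adverse impact. The sketch below assumes the employer can export how many applicants from each group the tool advanced; the counts are made up for illustration.

```python
# Minimal sketch of a four-fifths (80%) rule check on an AI screening tool's
# output. Assumes exported counts of applicants and selections per group;
# the sample numbers below are invented.
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (num_selected, num_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # Ratio of each group's selection rate to the most-favored group's rate.
    return {g: rate / top for g, rate in rates.items()}

example = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in adverse_impact_ratios(example).items():
    flag = "REVIEW: below 0.80 threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_b's 30% rate is only 0.63 of group_a's 48% rate, well under the 80%
# guideline, which would warrant a closer look at the tool and its input data.
```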

Third, employers should stay abreast of developments in the law. This is an emerging field, and state legislatures have taken notice. Illinois recently passed legislation governing the use of AI in hiring, and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.

Trending News

From a low-cost vaccine to an app that can help reduce exposure, here are the latest COVID-focused and Houston-based research projects. Photo via Getty Images

While it might seem like the COVID-19 pandemic has settled down for the time being, plenty of innovative research is underway on affordable vaccines and tech-enabled protection against the spread of the virus.

Some of that research is happening right here in Houston. Here are two innovative projects in the works at local institutions.

UH researcher designs app to monitor best times to shop

A UH professor is putting safe shopping at your fingertips. Photo via UH.edu

When is the best time to run an errand in the pandemic era we currently live in? There might be an app for that. Albert Cheng, professor of computer science and electrical and computer engineering at the University of Houston, is working on a real-time COVID-19 infection risk assessment and mitigation system. He presented his plans at the Institute of Electrical and Electronics Engineers conference on HPC for Urgent Decision Making, and the work will be published in IEEE Xplore.

Cheng's work analyzes up-to-date data from multiple open sources to see when is the best time to avoid crowds and accomplish activities outside the home.

"Preliminary work has been performed to determine the usability of a number of COVID-19 data websites and other websites such as grocery stores and restaurants' popular times and traffic," Cheng says in a UH release. "Other data, such as vaccination rates and cultural factors (for example, the percentage of people willing to wear facial coverings or masks in an area), are also used to determine the best grocery store to shop in within a time frame."

To use the app, a user would input their intended destinations, the farthest distance they are willing to travel, and the time frame of the trip. The risk assessment and mitigation system, or RT-CIRAM, then "provides as output the target location and the time interval to reach there that would reduce the chance of infections," says Cheng.
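The published description covers only the system's inputs and outputs, so the following is a purely hypothetical sketch of that kind of ranking, with invented names, weights, and data: score each candidate store and time slot by crowding and local case rates, filter by the user's distance and time constraints, and return the lowest-risk option.

```python
# Purely hypothetical sketch of the kind of ranking RT-CIRAM's description
# implies. The scoring weights and data are invented for illustration and
# are not taken from the actual system.
from dataclasses import dataclass

@dataclass
class Slot:
    store: str
    hour: int           # 24-hour clock
    distance_miles: float
    crowd_level: float  # 0..1, from "popular times"-style data
    case_rate: float    # recent cases per 100k in the store's area

def risk(slot, w_crowd=0.7, w_cases=0.3):
    # Weighted blend of crowding and local prevalence (illustrative weights).
    return w_crowd * slot.crowd_level + w_cases * (slot.case_rate / 100.0)

def best_slot(slots, max_distance, earliest, latest):
    feasible = [s for s in slots
                if s.distance_miles <= max_distance and earliest <= s.hour <= latest]
    return min(feasible, key=risk) if feasible else None

slots = [
    Slot("Grocery A", 9,  2.0, 0.30, 40.0),
    Slot("Grocery A", 17, 2.0, 0.85, 40.0),
    Slot("Grocery B", 9,  6.5, 0.20, 25.0),
]
print(best_slot(slots, max_distance=5.0, earliest=8, latest=20))
# Picks Grocery A at 9:00 -- B is less crowded but outside the 5-mile limit.
```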

There's a lot to it, says Cheng, and the process is highly reliant on technology.

"We are leveraging urgent high-performance cloud computing, coupled with time-critical scheduling and routing techniques, along with our expertise in real-time embedded systems and cyber-physical systems, machine learning, medical devices, real-time knowledge/rule-based decision systems, formal verification, functional reactive systems, virtualization and intrusion detection," says Cheng.

2 Houston institutions team up with immunotherapy company for new vaccine for Africa

The new vaccine will hopefully help mitigate spread of the disease in Sub-Saharan Africa. Photo via bcm.edu

Baylor College of Medicine and Texas Children's Hospital have teamed up with ImmunityBio Inc. — a clinical-stage immunotherapy company — under a licensing agreement to develop a safe, effective and affordable COVID-19 vaccine.

BCM has licensed to ImmunityBio a recombinant protein COVID-19 vaccine candidate developed at the Texas Children's Hospital Center for Vaccine Development. According to the release, the company negotiated the license with the BCM Ventures team in hopes that the vaccine could address the current pandemic needs in South Africa.

"We hope that our COVID-19 vaccine for global health might become an important step towards advancing vaccine development capacity in South Africa, and ultimately for all of Sub-Saharan Africa," says Dr. Peter Hotez, professor and dean of the National School of Tropical Medicine at Baylor and co-director of the Texas Children's Hospital Center for Vaccine Development.

ImmunityBio, which was founded in 2014 by Dr. Patrick Soon-Shiong, is working on innovative immunotherapies that address serious unmet needs in infectious diseases, according to a news release from BCM.

"There is a great need for second-generation vaccines, which are accessible, durable and offer broad protection against the emerging variants," says Soon-Shiong. "ImmunityBio has executed on a heterologous ("mix-and-match") strategy to develop a universal COVID-19 vaccine. To accomplish this, we have embarked upon large-scale good manufacturing practices and development of DNA (adenovirus), RNA (self-amplifying mRNA) and subunit protein (yeast) vaccine platforms. This comprehensive approach will leverage our expertise in these platforms for both infectious disease and cancer therapies."
