"Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers." Photo via UH.edu

We are currently in the midst of what some have called the "wild west" of AI. Though healthcare is one of the most heavily regulated sectors, the regulation of AI in this space is still in its infancy. The rules are being written as we speak. We are playing catch-up, learning how to reap the benefits these technologies offer while minimizing potential harms from tools that have already been deployed.

AI systems in healthcare can exacerbate existing inequities. We've seen this play out with real-world consequences, from racial bias in the American justice system and credit scoring to gender bias in resume-screening applications. Programs that are designed to bring machine "objectivity" and ease to our systems end up reproducing and upholding biases with no means of accountability.

The algorithm itself is seldom the problem. It is often the data used to program the technology that merits concern. But this is about far more than ethics and fairness. Building AI tools that take account of the whole picture of healthcare is fundamental to creating solutions that work.

The Algorithm is Only as Good as the Data

By nature of our own human systems, datasets are almost always partial and rarely ever fair. As Linda Nordling comments in the Nature article "A fairer way forward for AI in healthcare," "this revolution hinges on the data that are available for these tools to learn from, and those data mirror the unequal health system we see today."

Take, for example, the finding that Black people in US emergency rooms are 40 percent less likely to receive pain medication than are white people, and Hispanic patients are 25 percent less likely. Now, imagine the dataset these findings are based on is used to train an algorithm for an AI tool that would be used to help nurses determine if they should administer pain relief medication. These racial disparities would be reproduced and the implicit biases that uphold them would remain unquestioned, and worse, become automated.

We can attempt to reduce these biases by removing from the training data the variables we believe cause them, but hidden patterns that correlate with demographic data will remain. An algorithm cannot take in the nuances of the full picture; it can only learn from patterns in the data it is presented with.
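To make this concrete, here is a minimal sketch (synthetic data and hypothetical variable names, not any real system) of how a model trained without a sensitive attribute can still reproduce a disparity through a correlated proxy such as a zip code:

```python
# A minimal sketch: dropping a sensitive attribute does not remove bias
# when a correlated proxy remains in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (0/1) and a proxy that correlates with it (e.g., zip code).
group = rng.integers(0, 2, n)
zip_code = group + rng.normal(0, 0.3, n)   # strongly correlated proxy
severity = rng.normal(0, 1, n)             # legitimate clinical signal

# Historical labels reflect biased past decisions: group 1 was under-treated.
treated = (severity + 1.0 - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the sensitive attribute -- only severity and the proxy.
X = np.column_stack([severity, zip_code])
model = LogisticRegression().fit(X, treated)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted treatment rate for group {g}: {rate:.2f}")
# The disparity persists: zip_code stands in for the removed attribute.
```

Even though the sensitive column never enters training, the proxy carries the same signal, which is why deleting demographic fields alone rarely removes the bias.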

Bias Creep

Data bias creeps into healthcare in unexpected ways. Consider the fact that animal models used in laboratories across the world to discover and test new pain medications are almost entirely male. As a result, many medications, including pain medication, are not optimized for females. So, it makes sense that even common pain medications like ibuprofen and naproxen have been proven to be more effective in men than women and that women tend to experience worse side effects from pain medication than men do.

In reality, male rodents aren't perfect test subjects either. Studies have also shown that both female and male rodents' responses to pain levels differ depending on the sex of the human researcher present. The stress response elicited in rodents to the olfactory presence of a sole male researcher is enough to alter their responses to pain.

While this example may seem to be a departure from AI, it is in fact deeply connected — the current treatment choices we have access to were implicitly biased before the treatments ever made it to clinical trials. The challenge of AI equity is not a purely technical problem, but a very human one that begins with the choices that we make as scientists.

Unequal Data Leads to Unequal Benefits

In order for all of society to enjoy the many benefits that AI systems can bring to healthcare, all of society must be equally represented in the data used to train these systems. While this may sound straightforward, it's a tall order to fill.

Data from some populations don't always make it into training datasets. This can happen for a number of reasons. Some data are less accessible, or are never collected at all, due to existing systemic challenges such as a lack of access to digital technology, or simply because they are deemed unimportant. Predictive models are created by categorizing data in a meaningful way. But because there's generally less of it, "minority" data tends to be an outlier in datasets and is often wiped out as spurious in order to create a cleaner model.
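As an illustration, here is a minimal sketch (synthetic numbers and an illustrative cleaning rule, not any real pipeline) of how a routine outlier filter can disproportionately discard records from an under-represented group:

```python
# A minimal sketch: a standard "3 standard deviations" cleaning rule
# removes far more records from a small group whose values cluster
# differently than the majority's.
import numpy as np

rng = np.random.default_rng(1)

# 95% majority, 5% minority; the minority's lab values cluster differently.
majority = rng.normal(100, 10, 9_500)
minority = rng.normal(130, 10, 500)
values = np.concatenate([majority, minority])
is_minority = np.concatenate([np.zeros(9_500, bool), np.ones(500, bool)])

# Keep only values within three standard deviations of the overall mean.
z = (values - values.mean()) / values.std()
kept = np.abs(z) < 3

print(f"majority records kept: {kept[~is_minority].mean():.1%}")
print(f"minority records kept: {kept[is_minority].mean():.1%}")
# The "cleaner" model comes at the cost of the minority group's data.
```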

Data source matters because this detail unquestionably affects the outcome and interpretation of healthcare models. In sub-Saharan Africa, young women are diagnosed with breast cancer at a significantly higher rate than their counterparts in the Global North. This reveals the need for AI tools and healthcare models tailored to this demographic group, as opposed to AI tools used to detect breast cancer that are trained only on mammograms from the Global North. Likewise, a growing body of work suggests that algorithms used to detect skin cancer tend to be less accurate for Black patients because they are trained mostly on images of light-skinned patients. The list goes on.

We are creating tools and systems that have the potential to revolutionize the healthcare sector, but the benefits of these developments will only reach those represented in the data.

So, what can be done?

Part of the challenge in getting bias out of data is that high-volume, diverse, and representative datasets are not easy to access. Training datasets that are publicly available tend to be extremely narrow, low-volume, and homogeneous — they only capture a partial picture of society. At the same time, a wealth of diverse health data is captured every day in many healthcare settings, but data privacy laws make accessing these more voluminous and diverse datasets difficult.

Data protection is of course vital. Big Tech and governments do not have the best track record when it comes to the responsible use of data. However, if transparency, education, and consent for the sharing of medical data were more purposefully regulated, far more diverse and high-volume datasets could contribute to fairer representation across AI systems and yield better, more accurate outcomes for AI-driven healthcare tools.

But data sharing and access is not a complete fix to healthcare's AI problem. Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers. At the end of the day, we want to teach our algorithms to make good choices but we are still figuring out what good choices should look like for ourselves.

AI presents the opportunity to bring greater personalization to healthcare, but it equally presents the risk of entrenching existing inequalities. We have the opportunity in front of us to take a considered approach to data collection, regulation, and use that will provide a fuller and fairer picture and enable the next steps for AI in healthcare.

------

Angela Wilkins is the executive director of the Ken Kennedy Institute at Rice University.

UH is officially part of an initiative to diversify machine learning research. Photo courtesy of University of Houston

University of Houston joins $50M initiative to expand and diversify AI and machine learning research

money moves

A $50 million grant from the National Institutes of Health is expanding research in machine learning and artificial intelligence, and the University of Houston now has a seat at the table.

UH has joined in on a national initiative to increase the diversity of artificial intelligence researchers, according to a news release from the school. Thanks to a $50 million grant from the National Institutes of Health, the University of North Texas Health Science Center will lead the coordinating center of the AIM-AHEAD program, which stands for Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity.

"Beyond health care, AI has been used in areas from facial recognition to self-driving cars and beyond, but there is an extreme lack of diversity among the developers of AI/ML tools. Many studies have shown that flawed AI systems and algorithms perpetuate gender and racial biases and have resulted in untoward outcomes," says Bettina Beech, chief population health officer at the University of Houston and newly named AIM-AHEAD coordinating center team member.

The initiative will bring together collaborators and experts across AI and machine learning, health equity research, data science training, data infrastructure, and more. The other institutions involved include: University of Colorado-Anschutz Medical Center in Aurora; University of California, Los Angeles; Meharry Medical College in Nashville; Morehouse School of Medicine in Atlanta; Johns Hopkins University; and Vanderbilt University Medical Center.

"This network will be foundational to achieving the goals of the AIM-AHEAD program, which include providing more inclusive data for health disparities research, and enhancing the diversity of AI/ML leadership," says Susan Gregurick, NIH associate director for data science, in the release.

Unfortunately, AI — designed by humans — mimics human decision-making through its choice of algorithms, which means the same biases that shape human judgment find their way into AI decision-making too. These gaps can lead to continued disparities and inequities for underrepresented communities, especially in regard to health care, job hiring, and more.

"AI solutions need to be implemented in a responsible manner and are now guided by AI ethical FAIR (findable, accessible, interoperable, reusable) principles," says Beech in the release. "The AIM-AHEAD project directly connects with the University of Houston's plan to train and diversify the future workforce in population health, increase the use of digital tools for chronic disease self-management, and to advance population health research."

Bettina Beech is the chief population health officer at the University of Houston and newly named AIM-AHEAD coordinating center team member. Photo via UH.edu

This Houston startup has a game-changing technology for deep learning. Photo via Getty Images

Houston artificial intelligence startup raises $6M in seed funding

money moves

A computer science professor at Rice University raised seed funding last month to grow his company, which is focused on democratizing artificial intelligence tools.

ThirdAI, founded by Anshumali Shrivastava in April, raised $6 million in a seed funding round from three California-based VCs — Neotribe Ventures and Cervin Ventures, which co-led the round with support from Firebolt Ventures.

Shrivastava, CEO, co-founded the company with Tharun Medini, a recent Ph.D. who graduated under Shrivastava from Rice's Department of Electrical and Computer Engineering. Medini serves as the CTO of ThirdAI — pronounced "third eye." The startup is building the next generation of scalable and sustainable AI tools and deep learning systems.

"We are democratizing artificial intelligence through software innovations," says Shrivastava in a news release from Rice. "Our innovation would not only benefit current AI training by shifting to lower-cost CPUs, but it should also allow the 'unlocking' of AI training workloads on GPUs that were not previously feasible."

The technology ThirdAI is working with comes from 10 years of deep learning research and innovation. The company's technology has the potential to make computing 15 times faster.

"ThirdAI has developed a breakthrough approach to train deep learning models with a large number of parameters that run efficiently on general purpose CPUs. This technology has the potential to result in a gigantic leap forward in the accuracy of deep learning models," per and announcement from Cervin Ventures. "Our investment in ThirdAI was a no-brainer and we are fortunate to have had the opportunity to invest."

Anshumali Shrivastava is an associate professor of computer science at Rice University. Photo via rice.edu

In a guest column, these lawyers explain the pros and cons of using AI for hiring. Photo via Getty Images

Here's what Houston employers need to know about using artificial intelligence in the hiring process

guest column

Workplace automation has entered the human resource department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed to save time, improve the quality of a workforce, and eliminate unlawful hiring biases. But is AI incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination and federal law holds companies accountable for their hiring decisions, even if those decisions were made in a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the companies that use them from liability. This is because the law contemplates both intentional discrimination (disparate treatment) as well as unintentional discrimination (disparate impact). The larger risk for AI lies with disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as one defined by race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from certain zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from other zip codes of different racial composition. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is largely composed of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.
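To see the mechanism at work, here is a minimal sketch (toy resumes and hypothetical scores, not any vendor's product) of a screener that ranks candidates by similarity to sample resumes: equally qualified applicants score lower simply for lacking the samples' incidental traits, such as a shared school or zip code.

```python
# A minimal sketch: similarity-based screening reproduces the profile
# of its sample resumes, incidental traits included.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sample resumes from "high performers" -- all sharing incidental traits
# (same university, same zip code) alongside the actual job skills.
samples = [
    "python data analysis state university 77005 lacrosse club",
    "python machine learning state university 77005 rowing team",
]

candidates = {
    "similar background": "python data analysis state university 77005",
    "different background": "python data analysis community college 77021",
}

vec = TfidfVectorizer().fit(samples + list(candidates.values()))
sample_matrix = vec.transform(samples)

for name, resume in candidates.items():
    score = cosine_similarity(vec.transform([resume]), sample_matrix).max()
    print(f"{name}: {score:.2f}")
# Both candidates list identical skills; the gap in scores comes entirely
# from the school and zip code shared with the sample resumes.
```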

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reticent to disclose details about proprietary information, but employers will ultimately be responsible for discriminatory impacts. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
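One widely used audit heuristic is the Equal Employment Opportunity Commission's "four-fifths rule": if any group's selection rate falls below 80 percent of the highest group's rate, the outcome is generally treated as evidence of adverse impact. Here is a minimal sketch (illustrative group names and counts):

```python
# A minimal sketch of a four-fifths-rule audit: compare each group's
# selection rate against the highest group's rate.
def adverse_impact_ratios(selections: dict) -> dict:
    """selections maps group -> (number selected, number of applicants)."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

audit = adverse_impact_ratios({
    "group_a": (48, 100),  # 48 percent selected
    "group_b": (30, 100),  # 30 percent selected
})
for group, ratio in audit.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
# group_b's ratio is 0.30 / 0.48 = 0.62 -- below 0.8, so the tool's
# outcomes warrant closer review before continued use.
```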

Third, employers should stay abreast of developments in the law. This is an emerging field and state legislators have taken notice. Illinois recently passed regulation governing the use of AI in the workplace and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.

Jim Havelka, founder and CEO of InformAI, joins the Houston Innovators Podcast to discuss the difference his technology can make on the health care industry. Photo courtesy of InformAI

Houston health tech founder shares the monumental impact data can have on health care

HOUSTON INNOVATORS PODCAST EPISODE 68

Hospitals are processing massive amounts of data on a daily basis — but few are putting this information to work in life-saving ways. A Houston company is seeking to change that.

InformAI has created several tech products to allow hospitals to tap into their data for game-changing health care.

"The convergence of technology, data, and deep learning has really opened up an avenue to look at large volumes of information and look at patterns that can be helpful in patient diagnosis and treatment planning," says CEO Jim Havelka on this week's episode of the Houston Innovators Podcast.

The InformAI team has developed two platforms within which each of the company's tech products operates. One focuses on medical images and looks for subtle patterns of a medical condition, while the other can mine patient information to identify patient risk predictors.

Currently, InformAI's sinusitis-focused product is going through the Food and Drug Administration approval process. About a quarter of the population has sinus-related issues, and the technology can help with treatment and diagnosis, Havelka says.

"The data that we train our algorithms on are equivalent of 30 careers of a typical ear, nose, and throat surgeon. We see 30 times more patients in our training set than an ENT physician would see in a lifetime," Havelka says. "Being able to bring into play the patterns and unique subtleties that this data can bring into the decision making only makes the ENT more productive and more efficient, as well as creates better outcomes for patients."

InformAI has received venture capital support as well as a National Science Foundation award to advance its work. The company hopes to introduce a new round of funding later this year.

Havelka doesn't mince words when it comes to the importance of InformAI being located in Houston. The company's team works out of JLABS @ TMC as well as TMC Innovation Institute.

"Those relationships have been very helpful in getting data to build these particular products," Havelka says. "Just the Texas Medical Center alone has roughly 10 million patient encounters every year. The ability to get access to data and, equally important, the medical experts has been a tremendous benefit to InformAI."

Havelka discusses more about the revolutionary technology InformAI is working on — as well as advice he has for other health tech founders — on the episode. Listen to the full interview below — or wherever you stream your podcasts — and subscribe for weekly episodes.


Artificial intelligence is changing Houston — one industry at a time. Photo via Getty Images

3 ways artificial intelligence is changing Houston's future

Guest column

Artificial intelligence is the buzzword of the decade. From grocery shopping assistance to personal therapy apps, AI has sunk its teeth into every single industry. Houston is no exception to the AI boom. Enterprise-level companies and startups are already flocking to H-town to make their mark in AI and machine learning.

Since the world is generating more data every minute — 1,736 terabytes to be exact — Houston-based companies are already thinking ahead about how to make sense of all of that information in real-time. That's where AI comes in. By 2021, 80 percent of emerging technologies will have AI foundations — Houston is already ninth on the list of AI-ready cities in the world.

AI and machine learning can process large amounts of data quickly and use that data to inform decisions much like a human would. Here are three ways Houston-based companies are using these emerging technologies to revolutionize the city's future.

Health care

The health care industry is primed for AI's personalization capabilities. Each patient that doctors and nurses encounter has different symptoms, health backgrounds, and prescriptions that providers have to keep track of. Managing that amount of information can be dangerous if done incorrectly. With AI, diseases are diagnosed more quickly, medications are administered more accurately, and nurses have help monitoring patients.

Decisio Health Inc., a Houston-based health tech startup, has already made its mark in the health care industry with AI software that is helping to tackle the COVID-19 pandemic. The software, developed in collaboration with GE Healthcare Inc., allows health care providers to remotely monitor patients. By looking at data from ventilators, patient monitoring systems, health records, and other data sources, doctors can make better decisions about patients from a safe distance.

Climate change

Climate change isn't solved overnight. It's an issue that covers water salinity, deforestation, and even declining bee populations. With a problem as large as climate change, huge amounts of data are collected and need to be analyzed. AI can interpret all of that information, show possible future outcomes, track current weather patterns, and find solutions to environmental destruction.

One Houston-based company in the energy tech industry, Enovate Upstream, has created a new AI platform that will help digitize the oil and gas sector. Its AI-powered platform looks at data from digital drilling, digital completions, and digital production to give oil companies real-time production forecasting. That work will hopefully make oil production more efficient and reduce carbon emissions. Since oil drilling and fracking are a major cause for concern around climate change, this efficiency could make a difference in slowing climate change and push the industry as a whole to be more climate-conscious.

Energy

Energy is an industry rich with data opportunities — and as Houston's energy sector grows, AI has become a core part of its work. Houston's large influence in the energy sector has primed it for AI integration from startups like Adapt2 Solutions Inc. By using AI and machine learning in its software, Adapt2 hopes to help energy companies make strategic predictions on how to serve energy to the public efficiently. That work has become especially important in the wake of COVID-19 and the resulting changes in energy needs.

Another Houston-based company using AI to influence the energy industry is the retail energy startup Evolve Energy. Its AI and machine learning system helps customers find better prices on fluctuating renewable resources — saving them money on electricity and reducing emissions. The positive public feedback on its AI model shows how energy companies can use emerging technologies like AI in ways that benefit their communities.

The bottom line

Houston is more primed than most cities to integrate AI and machine learning into every industry. While there are valid concerns as to how much we should lean on technology for necessary daily tasks, it's clear that AI isn't going anywhere. And it's clear that Houston is currently taking the right steps to continue its lead in this emerging AI market.

------

Natasha Ramirez is a Utah-based tech writer.


Houston innovator joins VC world to increase her social impact

Q&A

Kelly Avant didn't exactly pave a linear career path for herself. After majoring in gender studies, volunteering in the Peace Corps, and even attending law school, she identified a way to make a bigger impact: venture capital.

"VC is an awesome way to shape the future in a more positive way because you literally get to wire money to the most innovative thinkers, who are building solutions to the world’s problems," Avant tells InnovationMap.

Avant joined the Mercury Fund team last year as an MBA associate before joining full time as investment associate. Now, after completing her MBA from Rice University this month, Avant tells InnovationMap why she's excited about this new career in investment in a Q&A.

InnovationMap: From law school and the Peace Corps, what drew you to start a career in the VC world?

Kelly Avant: I graduated from Rice University with an MBA, started scouting for an investment firm in my first year, and by the summer after my first year I was essentially working full time as an intern with Mercury. But, I like to tell people about my undergraduate degree in gender studies and rhetoric from a little ski college in Colorado. If you meet someone else in venture capital with a degree in gender studies, please connect us, but I think I might be the only one. I’ll spare you what I used to think — and say — about business students, but I have really come full circle.

I always thought I would work in a nonprofit space, but after serving in Cambodia with the Peace Corps, working for the National Domestic Violence Hotline, and briefly attending Emory Law School with the intention of becoming a civil rights lawyer, I found that time and time again the root of the problem was a lack of resources. The world’s problems were not going to be solved with my idealism alone.

The problem with operating as a nonprofit under capitalism is that you are basically always pandering to the interests of the donors. The NFL was a key sponsor of The National Domestic Violence Hotline. The United States has a complicated, to put it lightly, relationship with Cambodia and Vietnam. It became pretty clear that the donor/nonprofit relationship was oftentimes putting the wrong party in the driver’s seat. I was, and still am, very interested in alternative financing for nonprofits. I became convinced that the most exciting businesses were building solutions to the world’s problems while also turning a profit, which allows them to survive and have a sustainable positive impact.

VC is an awesome way to shape the future in a more positive way because you literally get to wire money to the most innovative thinkers, who are building solutions to the world’s problems.

IM: What are some companies you’re excited about?

KA: There are a couple of super interesting founders I’ve met and directly engaged with. To name a few: CiviTech, DonateStock, and Polco.

I’m very proud to work on Mercury investments like Houston’s own Topl, which has built an extremely lightweight and energy-efficient blockchain that enables tracking of ethical supply chains from the initial interaction.
I’m also excited about Mercury’s investment in Zirtue, which enables relationship-based, peer-to-peer lending to solve the massive problem of predatory payday loans.

We have so many awesome founders in our portfolio. The best part about working in VC is meeting passionate innovators every day. I get excited to go to work every day and help them build better solutions.

IM: Why are you so passionate about bringing diversity and inclusion into Mercury?

KA: I love working with exciting, highly capable, super smart people. That category includes so many people who have been historically excluded. As an investment team member at Mercury, I do have a voice, and I have an obligation to use that voice to speak highly of the best people in rooms of influence.

IM: With your new role, what are you most focused on?

KA: In my new role, I am identifying and researching high potential investments. We’re building out a Mercury educational series to lift the veil of VC. We want to facilitate a series that gives all founders the basic skills to pass VC due diligence and have the opportunity to build the next innovative companies. My goal is ultimately to produce the best returns possible for our investors, and we can’t accomplish that goal unless we’re building out resources to meet the best founders and help them grow.

------

This conversation has been edited for brevity and clarity.

Houston college system plans to open $30M resiliency-focused center

to the rescue

Houston’s initiative to protect the city from catastrophes is getting a big boost from Houston Community College.

The college is developing the Resilience Center of Excellence to aid the city’s resilience campaign. At the heart of this project is the 65,000-square-foot, $30 million Resiliency Operations Center, which will be built on a five-acre site at HCC’s Northeast campus. The complex is scheduled to open in 2024.

HCC estimates the operations center will train about 3,000 to 4,000 local first responders, including police officers and firefighters, during the first three years of operation. They’ll be instructed to prepare for, manage, and respond to weather, health, and man-made hazards such as hurricanes, floods, fires, chemical spills, and winter freezes.

According to The Texas Tribune, the operations center will include flood-simulation features like a 39-foot-wide swift water rescue channel, a 15-foot-deep dive area, and a 100-foot-long “rocky gorge” of boulders.

The college says the first-in-the-nation Resilience Center of Excellence will enable residents, employers, civic organizations, neighborhoods, and small businesses to obtain education and certification aimed at improving resilience efforts.

“Our objective is to protect the well-being of our citizens and our communities and increase economic stability,” Cesar Maldonado, chancellor of HCC, said when the project was announced.

Among the programs under the Resilience Center of Excellence umbrella will be non-credit courses focusing on public safety and rescue, disaster management, medical triage, and debris removal.

Meanwhile, the basic Resilience 101 program will be available to businesses and community organizations, and the emergency response program is geared toward individuals, families, and neighborhoods.

HCC’s initiative meshes with the City of Houston’s Resilient Houston, a strategy launched in 2020 that’s designed to protect Houston against disasters. As part of this strategy, the city has hired a chief resilience and sustainability officer, Priya Zachariah.

“Every action we take and investment we make should continue to improve our collective ability to withstand the unexpected shocks and disruptions when they arrive — from hurricanes to global pandemics, to extreme heat or extreme cold,” Mayor Sylvester Turner said last year. “The time is now to stop doing things the way we’ve always done them because the threats are too unpredictable.”

In an InnovationMap guest column published in February 2021, Richard Seline, co-founder of the Houston-based Resilience Innovation Hub, wrote that the focus of resilience initiatives should be pre-disaster risk mitigation.

“There is still work to be done from a legislative and governmental perspective, but more and more innovators — especially in Houston — are proving to be essential in creating a better future for the next historic disaster we will face,” Seline wrote.