Angela Wilkins joins the Houston Innovators Podcast to discuss the intersection of data and health care.

When most people hear about Houston startup Starling Medical, they might think about how much potential the medical device company has in the field of urinalysis diagnostics. But that's not quite where Angela Wilkins's head went.

Wilkins explains on the Houston Innovators Podcast that when she met the company's co-founders, Hannah McKenney and Drew Hendricks, she recognized them as very promising startup leaders taking action on a real health care problem. Starling's device can collect urine and run diagnostics right from a patient's toilet.

"It was one of those things where I just thought, 'They're going to get a bunch of data soon,'" Wilkins says. "The opportunity is just there, and I was really excited to come on and build their AI platform and the way they are going to look at data."

For about a year, Wilkins supported the startup as an adviser. Now, she's working more hands-on as chief data officer as the company grows.



Wilkins, who serves as a mentor and adviser for several startups, has a 20-year career in Houston across all sides of the innovation equation, working first at Baylor College of Medicine before co-founding Mercury Data Science — now OmniScience. Most recently she served as executive director of the Ken Kennedy Institute at Rice University.

This variety on her resume makes her a super-connector, a benefit to all the startups she works with, she explains. The decision to transition to a startup team means she gets to work hands-on in building a technology while bringing in her experience from other institutions.

"I think I've really learned how to partner with those institutions," she says on the show. "I've really learned how to make those bridges, and that's a big challenge that startups face."

"When we talk about the Houston innovation ecosystem, it's something we should be doing better at because we have so many startups and so many places that would like to use better technology to solve problems," she continues.

Wilkins has data and artificial intelligence on the mind in everything she does, and she even serves on a committee at the state level to learn and provide feedback on how Texas should be regulating AI.

"At the end of the day, the mission is to put together a report and strategy on how we think Texas should think about AI," she explains. "It's beyond just using an algorithm, they need infrastructure."

Colorado is the first state to pass legislation surrounding AI, and Wilkins says all eyes are on how execution of that new law will go.

"We should have technology that can be double checked to make sure we're applying it in a way that's fair across all demographics. It's obvious that we should do that — it's just very hard," she says.

"Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers." Photo via UH.edu

Houston expert explains health care's inequity problem

guest column

We are currently in the midst of what some have called the "wild west" of AI. Though healthcare is one of the most heavily regulated sectors, the regulation of AI in this space is still in its infancy. The rules are being written as we speak. We are playing catch-up: learning how to reap the benefits these technologies offer while minimizing their potential harms only after they have been deployed.

AI systems in healthcare can exacerbate existing inequities. We've seen this play out in real-world consequences, from racial bias in the American justice system and credit scoring to gender bias in resume-screening applications. Programs designed to bring machine "objectivity" and ease to our systems end up reproducing and upholding biases with no means of accountability.

The algorithm itself is seldom the problem. It is often the data used to program the technology that merits concern. But this is about far more than ethics and fairness. Building AI tools that take account of the whole picture of healthcare is fundamental to creating solutions that work.

The Algorithm is Only as Good as the Data

By nature of our own human systems, datasets are almost always partial and rarely ever fair. As Linda Nordling comments in the Nature article "A fairer way forward for AI in healthcare," "this revolution hinges on the data that are available for these tools to learn from, and those data mirror the unequal health system we see today."

Take, for example, the finding that Black people in US emergency rooms are 40 percent less likely to receive pain medication than are white people, and Hispanic patients are 25 percent less likely. Now, imagine the dataset these findings are based on is used to train an algorithm for an AI tool that would help nurses determine whether to administer pain relief medication. These racial disparities would be reproduced, and the implicit biases that uphold them would remain unquestioned and, worse, become automated.

We can attempt to mitigate these biases by removing from training the data we believe causes the bias, but hidden patterns that correlate with demographic data will remain. An algorithm cannot take in the nuances of the full picture; it can only learn from patterns in the data it is presented with.
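To make that mechanism concrete, here is a toy sketch with entirely made-up data (the groups, regions, and treatment labels are hypothetical, chosen only for illustration). Even though the demographic column is never shown to the model, a correlated proxy feature lets it reproduce the disparity baked into the historical labels:

```python
from collections import Counter

# Synthetic historical records: group B was under-treated, and group B
# also happens to live in region 1. "region" is the proxy feature.
records = [{"group": "A", "region": 0, "treated": 1} for _ in range(50)]
records += [{"group": "B", "region": 1, "treated": 0} for _ in range(50)]

# "Train" a trivial model on region alone: predict the majority
# historical label for each region. The group column is never used.
by_region = {}
for r in records:
    by_region.setdefault(r["region"], []).append(r["treated"])
model = {region: Counter(labels).most_common(1)[0][0]
         for region, labels in by_region.items()}

# The model's recommendation still differs by group, via the proxy.
pred_for_group_a = model[0]  # typical group-A patient lives in region 0
pred_for_group_b = model[1]  # typical group-B patient lives in region 1
print("Group A recommendation:", pred_for_group_a)
print("Group B recommendation:", pred_for_group_b)
```

Because region perfectly tracks group membership in this toy data, dropping the demographic column changes nothing: the model still withholds treatment from group B. Real proxies are noisier, but the mechanism is the same.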

Bias Creep

Data bias creeps into healthcare in unexpected ways. Consider the fact that animal models used in laboratories across the world to discover and test new pain medications are almost entirely male. As a result, many medications, including pain medication, are not optimized for females. So, it makes sense that even common pain medications like ibuprofen and naproxen have been proven to be more effective in men than women and that women tend to experience worse side effects from pain medication than men do.

In reality, male rodents aren't perfect test subjects either. Studies have also shown that both female and male rodents' responses to pain levels differ depending on the sex of the human researcher present. The stress response elicited in rodents to the olfactory presence of a sole male researcher is enough to alter their responses to pain.

While this example may seem to be a departure from AI, it is in fact deeply connected — the current treatment choices we have access to were implicitly biased before the treatments ever made it to clinical trials. The challenge of AI equity is not a purely technical problem, but a very human one that begins with the choices that we make as scientists.

Unequal Data Leads to Unequal Benefits

In order for all of society to enjoy the many benefits that AI systems can bring to healthcare, all of society must be equally represented in the data used to train these systems. While this may sound straightforward, it's a tall order to fill.

Data from some populations don't always make it into training datasets. This can happen for a number of reasons. Some data may not be as accessible or it may not even be collected at all due to existing systemic challenges, such as a lack of access to digital technology or simply being deemed unimportant. Predictive models are created by categorizing data in a meaningful way. But because there's generally less of it, "minority" data tends to be an outlier in datasets and is often wiped out as spurious in order to create a cleaner model.
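A toy illustration of that cleaning step, using synthetic numbers assumed purely for the sketch: a naive outlier filter keyed to the overall mean is dominated by the majority group, so it ends up discarding exactly the minority group's records:

```python
import statistics

# Synthetic dataset: 95 majority records clustered near 10,
# 5 minority records clustered near 30.
data = [("majority", 10 + (i % 3)) for i in range(95)]
data += [("minority", 30 + i) for i in range(5)]

values = [v for _, v in data]
mean = statistics.mean(values)       # dominated by the majority group
stdev = statistics.pstdev(values)

# Naive cleaning: keep only points within 2 standard deviations
# of the overall mean.
cleaned = [(g, v) for g, v in data if abs(v - mean) <= 2 * stdev]

def kept(group):
    return sum(1 for g, _ in cleaned if g == group)

print(kept("majority"), "of 95 majority records kept")
print(kept("minority"), "of 5 minority records kept")
```

With these numbers every majority record survives the filter and every minority record is flagged as "spurious" and dropped, leaving a cleaner model that has never seen the minority population at all.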

Data source matters because this detail unquestionably affects the outcome and interpretation of healthcare models. In sub-Saharan Africa, young women are diagnosed with breast cancer at a significantly higher rate. This reveals the need for AI tools and healthcare models tailored to this demographic group, as opposed to AI tools used to detect breast cancer that are only trained on mammograms from the Global North. Likewise, a growing body of work suggests that algorithms used to detect skin cancer tend to be less accurate for Black patients because they are trained mostly on images of light-skinned patients. The list goes on.

We are creating tools and systems that have the potential to revolutionize the healthcare sector, but the benefits of these developments will only reach those represented in the data.

So, what can be done?

Part of the challenge in getting bias out of data is that high-volume, diverse, and representative datasets are not easy to access. Training datasets that are publicly available tend to be extremely narrow, low-volume, and homogeneous; they capture only a partial picture of society. At the same time, a wealth of diverse health data is captured every day in many healthcare settings, but data privacy laws make accessing these more voluminous and diverse datasets difficult.

Data protection is of course vital. Big Tech and governments do not have the best track record when it comes to the responsible use of data. However, if transparency, education, and consent for the sharing of medical data were more purposefully regulated, far more diverse and high-volume datasets could contribute to fairer representation across AI systems and result in better, more accurate results for AI-driven healthcare tools.

But data sharing and access is not a complete fix to healthcare's AI problem. Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers. At the end of the day, we want to teach our algorithms to make good choices, but we are still figuring out what good choices should look like for ourselves.

AI presents the opportunity to bring greater personalization to healthcare, but it equally presents the risk of entrenching existing inequalities. We have the opportunity in front of us to take a considered approach to data collection, regulation, and use that will provide a fuller and fairer picture and enable the next steps for AI in healthcare.

------

Angela Wilkins is the executive director of the Ken Kennedy Institute at Rice University.



Houston medical device startup implants artificial heart in first human patient

big win

Heart health tech company BiVACOR and The Texas Heart Institute announced that they successfully implanted the company's first Total Artificial Heart in a human at Baylor St. Luke’s Medical Center in the Texas Medical Center.

The milestone is part of an FDA-approved early feasibility study that will test the safety and performance of the TAH device, which is based on a magnetically levitated rotor that takes over functions of a failing heart while a patient is awaiting a heart transplant, according to a statement from the organizations.

The "bridge-to-transplant" device could support an active adult male, as well as many women and children suffering from severe biventricular heart failure or univentricular heart failure.

"With heart failure remaining a leading cause of mortality globally, the BiVACOR TAH offers a beacon of hope for countless patients awaiting a heart transplant,” Dr. Joseph Rogers, president and CEO of THI and national principal investigator on the research, says in a statement. “We are proud to be at the forefront of this medical breakthrough, working alongside the dedicated teams at BiVACOR, Baylor College of Medicine, and Baylor St. Luke’s Medical Center to transform the future of heart failure therapy for this vulnerable population.”

BiVACOR received approval from the FDA for the early feasibility study in late 2023 and has four other patients enrolled in the study. At the time the study was approved, 10 hospitals were enrolled as possible sites.

“I’m incredibly proud to witness the successful first-in-human implant of our TAH. This achievement would not have been possible without the courage of our first patient and their family, the dedication of our team, and our expert collaborators at The Texas Heart Institute ... our TAH brings us one step closer to providing a desperately needed option for people with end-stage heart failure who require support while waiting for a heart transplant. I look forward to continuing the next phase of our clinical trial,” Daniel Timms, PhD, founder and CTO of BiVACOR, adds.

About 100,000 patients suffering from severe heart failure could benefit from BiVACOR’s artificial heart, the company says. Globally, only about 6,000 heart transplants are performed each year, while 26 million people worldwide are affected by heart failure.

BiVACOR was founded in 2008 and maintains its headquarters in Houston, along with offices in Huntington Beach, California, and Brisbane, Australia.

To date, the company has raised about $50.8 million, according to CB Insights, including an $18 million round in 2023 and a $22 million round in 2021.

Earlier this year, BiVACOR named a new CEO in Jim Dillon, a longtime executive in the medical device sector.

Last summer, Rogers joined the Houston Innovators Podcast to share his excitement with THI's innovations.


Here's how much it takes to earn a top 1 percent salary in Texas

wealthy lifestyle

With two Houston-area neighbors cashing in among the wealthiest suburbs in America, Houstonians may be wondering how much money they need to make to secure a place in the top one percent of earners. According to a new study from SmartAsset, the pre-tax salary required to be considered one of the highest earners in Texas amounts to $762,090 in 2024.

Texas has the 14th highest pre-tax salary needed to be considered in the top one percent of earners in the U.S. for the second year in a row. Texas' income threshold is not too far off from the national average, which is $787,712.

The study further revealed 126,128 Texans are within the top one percent of earners. For more context, the U.S. Census Bureau says over 30 million people lived in Texas as of 2022, and Houston's population grew to 2.3 million people in 2023.

Connecticut continues to lead the nation with the highest income threshold required to be in the top one percent, with residents needing to make over $1.15 million pre-tax.

To determine the income needed to be in the top one percent of earners in each state, SmartAsset analyzed 2021 IRS data for individual tax filers, which is the most recent year where data was available. Income data was then adjusted to June 2024 dollars.

Compared to SmartAsset's 2023 report, Texans now need to make $130,241 more in 2024 to maintain their status as one of the highest earners in the state. Last year, the income threshold was $631,849.

If Houstonians aim to be within the top five percent of earners in Texas, the pre-tax income threshold is drastically lower, at $280,676. However, for many Houston residents, achieving even a "middle class" status means making between $40,280 and $120,852 a year.

Meanwhile, the study says the median income in the U.S. comes out to roughly $75,000, and half of Americans are making even less than that. The income disparity is plain: the top earners make at least 10 times the national median income.

The report goes on to say top-earning Americans make up a "disproportionately large part of the tax base," as their income places them in the 37 percent federal tax bracket. (That is, if these high earners are even paying taxes in the first place, considering America's wealthiest are already evading over $150 billion a year in taxes.)

"While state and local level taxes may impact the spread of high earners in those areas, the cost of living can also be drastically different nationwide," the report said. "As a result, what it takes to be considered a top one percent income earner can differ by over $500,000 from state to state."

The top 10 states with the highest thresholds to be considered in the top one percent of earners in the U.S. are:

  • No. 1 – Connecticut ($1,152,254)
  • No. 2 – Massachusetts ($1,113,662)
  • No. 3 – California ($1,035,673)
  • No. 4 – Washington ($989,649)
  • No. 5 – New Jersey ($975,645)
  • No. 6 – New York ($965,645)
  • No. 7 – Colorado ($865,700)
  • No. 8 – Florida ($852,206)
  • No. 9 – Wyoming ($843,121)
  • No. 10 – New Hampshire ($811,098)
The full report can be found on smartasset.com.

------

This article originally ran on CultureMap.