Sieve Health is an AI cloud-based SaaS platform designed to automate and accelerate matching patients with clinical trials. Photo via Getty Images

On many occasions in her early career, Dr. Arti Bhosale, co-founder and CEO of Sieve Health, found herself frustrated with having to manually sift through thousands of digital files.

The documents, each containing the medical records of a patient seeking advanced treatment through a clinical trial, were always there to review — and there were always more to read.

Despite the tediousness of prescreening, which could take years, the idea of missing a patient and not giving them the opportunity to go through a potentially life-altering trial is what kept her going. The one she didn’t read could have slipped through the cracks and potentially not given someone care they needed.

“Those stories have stayed with me,” she says. “That’s why we developed Sieve.”

When standard health care is not an option, advances in medical treatment may be offered through clinical trials. But matching patients to those trials is one of the longest-standing problems in the health care industry. Now, with the help of new technology, the solution to the bottleneck may be an automated approach.

“Across the globe, more than 30 percent of clinical trials shut down as a result of not enrolling enough patients,” says Bhosale. “Of the trials that remain, 80 percent never end up reaching their target enrollment and are shut down by the FDA.”

In 2020, Bhosale and her team developed Sieve Health, an AI cloud-based SaaS platform designed to automate and accelerate matching patients with clinical trials and increase access to clinical trials.

Sieve’s main goal is to reduce the administrative burden involved in matching patients to trials, which in turn accelerates trial execution. The platform performs the matching for physicians, study sponsors, and research sites, streamlining operations for faster trial enrollment.

The technology mimics the traditional enrollment process but automates it — reading and reviewing medical notes the same way a human would.
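
For readers curious what automated prescreening looks like in principle, here is a minimal, purely illustrative sketch in Python. It is not Sieve's proprietary system; the record fields, criteria, and function names are all invented for the example, and real platforms must parse free-text clinical notes rather than tidy structured fields.

```python
# Illustrative only: a toy rule-based prescreener, NOT Sieve Health's
# actual matching technology. It assumes hypothetical structured
# patient records and a trial with simple inclusion/exclusion criteria.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    age: int
    diagnoses: set[str]       # e.g., condition labels from the chart
    medications: set[str]

@dataclass
class Trial:
    trial_id: str
    min_age: int
    max_age: int
    required_diagnoses: set[str]    # inclusion criteria
    excluded_medications: set[str]  # exclusion criteria

def is_eligible(patient: Patient, trial: Trial) -> bool:
    """Apply inclusion/exclusion rules the way a manual chart review would."""
    if not (trial.min_age <= patient.age <= trial.max_age):
        return False
    if not trial.required_diagnoses <= patient.diagnoses:
        return False
    if patient.medications & trial.excluded_medications:
        return False
    return True

def prescreen(patients: list[Patient], trial: Trial) -> list[str]:
    """Scan the entire record database instead of sampling charts by hand."""
    return [p.patient_id for p in patients if is_eligible(p, trial)]

# Example: one eligible and one ineligible record.
patients = [
    Patient("P-001", 54, {"type2_diabetes"}, {"metformin"}),
    Patient("P-002", 71, {"type2_diabetes"}, {"warfarin"}),
]
trial = Trial("T-100", 40, 65, {"type2_diabetes"}, {"warfarin"})
print(prescreen(patients, trial))  # ['P-001']
```

The value of automation here is coverage: the loop scans every record, so an eligible patient cannot slip through simply because no one had time to open their chart.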

“I would have loved to use something like this when I was on the front lines,” says Bhosale, who worked in clinical research for over 12 years. “Can you imagine going through 10,000 records manually? Some of the bigger hospitals have upwards of 100,000 records and you still have to manually review those charts to make sure that the patient is eligible for the trial. That process is called prescreening. It is painful.”

Because physicians wear many hats and have many clinical efforts on their plates, research tends to fall to the bottom of the to-do list. Finding 10 to 20 eligible patients can take a research team 15 to 20 months on average — and five of those patients typically end up unenrolling, she says.

“We have designed the platform so that the magic can happen in the background, and it allows the physician and research team to get a jumpstart,” she says. “They don’t have to worry about reviewing 10,000 records — they know what their efforts are going to be and will ensure that the entire database has been scanned.”

With Sieve, the team helped commercial pilot programs build a curated data pool for their trials, cutting the administrative burden and search time to less than a week.

Sieve is an early-stage startup, and its commercial platform has been rolled out. Currently, the team is conducting commercial projects with various research sites and hospitals.

“Our focus now is seeing how many providers we can connect into this,” she says. “There’s a bigger pool out there who want to participate in research but don’t know where to start. That’s where Sieve is stepping in and enabling them to do this — partnering with those and other groups in the ecosystem to bring trials to wherever the physicians and the patients are.”

Arti Bhosale is the co-founder and CEO of Sieve Health. Photo courtesy of Sieve

Let's talk about dark data — what it means and how to navigate it. Graphic by Miguel Tovar/University of Houston

Houston expert: Navigating dark data within research and innovation

houston voices

Is it necessary to share ALL your data? Is transparency a good thing, or does it make researchers “vulnerable,” as author Nathan Schneider suggests in the Chronicle of Higher Education article “Why Researchers Shouldn’t Share All Their Data”?

Dark Data Defined

Dark data is defined as the universe of information an organization collects, processes and stores – oftentimes for compliance reasons. Dark data never makes it to the official publication part of the project. According to the Gartner Glossary, “storing and securing data typically incurs more expense (and sometimes greater risk) than value.”

This topic is reminiscent of the file drawer effect, the phenomenon in which a study’s results influence whether the study is ever published. Negative results can be just as important as results that confirm a hypothesis.

It can be argued that publication bias, the pressure to publish only positive research that supports the PI’s hypothesis, is not good science. In an article in the Indian Journal of Anaesthesia, Priscilla Joys Nagarajan et al. wrote: “It is speculated that every significant result in the published world has 19 non-significant counterparts in file drawers.” That’s one definition of dark data.

Total Transparency

But what to do with all your excess information that did not make it to publication, most likely because of various constraints? Should everything, meaning every little tidbit, be readily available to the research community?

Schneider doesn’t think it should be. In his article, he writes that he hides some findings in a paper notebook or behind a password, and he keeps interviews and transcripts offline altogether to protect his sources.

Open-source

Open-source software communities tend to regard total transparency as inherently good. What are the advantages of total transparency? You may make connections between projects that you wouldn’t have otherwise. You can easily reproduce a peer’s experiment. You can even become more meticulous in your note-taking and experimental methods since you know it’s not private information. Similarly, journalists will recognize this thought pattern as the recent, popular call to engage in “open journalism.” Essentially, an author’s entire writing and editing process can be recorded, step by step.

TMI

This trend has led researchers to open-source platforms like Jupyter and GitHub, which record every change that occurs along a project’s timeline. But are unorganized, excessive amounts of unpublishable data really what transparency means? Or do they confuse those looking for meaningful research that is meticulously curated?

The Big Idea

And what about the “vulnerability” claim? Sharing every edit and every new direction taken opens a scientist up to scoffers, and even harassment. In industry, this kind of radical transparency can extend to publishing salaries, which can feel unfair to underrepresented, marginalized populations.

In Model View Culture, Ellen Marie Dash wrote: “Let’s give safety and consent the absolute highest priority, with openness and transparency prioritized explicitly below those. This means digging deep, properly articulating in detail what problems you are trying to solve with openness and transparency, and handling them individually or in smaller groups.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

"Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers." Photo via UH.edu

Houston expert explains health care's inequity problem

guest column

We are currently in the midst of what some have called the "wild west" of AI. Though healthcare is one of the most heavily regulated sectors, the regulation of AI in this space is still in its infancy. The rules are being written as we speak. We are playing catch-up by learning how to reap the benefits these technologies offer while minimizing any potential harms once they've already been deployed.

AI systems in healthcare exacerbate existing inequities. We've seen this play out in real-world consequences, from racial bias in the American justice system and credit scoring to gender bias in resume screening applications. Programs that are designed to bring machine "objectivity" and ease to our systems end up reproducing and upholding biases with no means of accountability.

The algorithm itself is seldom the problem. It is often the data used to program the technology that merits concern. But this is about far more than ethics and fairness. Building AI tools that take account of the whole picture of healthcare is fundamental to creating solutions that work.

The Algorithm is Only as Good as the Data

By nature of our own human systems, datasets are almost always partial and rarely ever fair. As Linda Nordling comments in the Nature article "A fairer way forward for AI in healthcare": "this revolution hinges on the data that are available for these tools to learn from, and those data mirror the unequal health system we see today."

Take, for example, the finding that Black people in US emergency rooms are 40 percent less likely to receive pain medication than are white people, and Hispanic patients are 25 percent less likely. Now, imagine the dataset these findings are based on is used to train an algorithm for an AI tool that would be used to help nurses determine if they should administer pain relief medication. These racial disparities would be reproduced and the implicit biases that uphold them would remain unquestioned, and worse, become automated.

We can attempt to mitigate these biases by removing the data we believe causes the bias from training, but hidden patterns that correlate with demographic data will remain. An algorithm cannot take in the nuances of the full picture; it can only learn from patterns in the data it is presented with.
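
A toy example, with invented records and rates, makes the point concrete: drop the protected attribute from the training data, and a correlated field such as zip code lets even the simplest model reproduce the historical disparity.

```python
# A minimal, purely hypothetical sketch of "proxy" bias: even after the
# protected attribute (race) is dropped, a correlated field (zip code)
# lets a model reproduce the historical disparity in the training labels.
from collections import defaultdict

# Toy historical records: (zip_code, race, received_pain_medication).
# The disparity is baked into the labels, as in the ER statistics above.
records = [
    ("77001", "white", True), ("77001", "white", True),
    ("77001", "white", True), ("77001", "white", False),
    ("77002", "black", True), ("77002", "black", False),
    ("77002", "black", False), ("77002", "black", False),
]

# "Train" the simplest possible model on race-blind data: predict the
# historical treatment rate for each zip code.
rates = defaultdict(list)
for zip_code, _race, treated in records:
    rates[zip_code].append(treated)
model = {z: sum(v) / len(v) for z, v in rates.items()}

print(model)  # {'77001': 0.75, '77002': 0.25}
# Because zip code correlates with race in this toy data, the race-blind
# model still recommends pain relief three times as often for one group.
```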

Bias Creep

Data bias creeps into healthcare in unexpected ways. Consider the fact that animal models used in laboratories across the world to discover and test new pain medications are almost entirely male. As a result, many medications, including pain medication, are not optimized for females. So, it makes sense that even common pain medications like ibuprofen and naproxen have been proven to be more effective in men than women and that women tend to experience worse side effects from pain medication than men do.

In reality, male rodents aren't perfect test subjects either. Studies have also shown that both female and male rodents' responses to pain levels differ depending on the sex of the human researcher present. The stress response elicited in rodents to the olfactory presence of a sole male researcher is enough to alter their responses to pain.

While this example may seem to be a departure from AI, it is in fact deeply connected — the current treatment choices we have access to were implicitly biased before the treatments ever made it to clinical trials. The challenge of AI equity is not a purely technical problem, but a very human one that begins with the choices that we make as scientists.

Unequal Data Leads to Unequal Benefits

In order for all of society to enjoy the many benefits that AI systems can bring to healthcare, all of society must be equally represented in the data used to train these systems. While this may sound straightforward, it's a tall order to fill.

Data from some populations don't always make it into training datasets. This can happen for a number of reasons. Some data may not be as accessible or it may not even be collected at all due to existing systemic challenges, such as a lack of access to digital technology or simply being deemed unimportant. Predictive models are created by categorizing data in a meaningful way. But because there's generally less of it, "minority" data tends to be an outlier in datasets and is often wiped out as spurious in order to create a cleaner model.
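
A hypothetical sketch shows how easily this happens: a routine outlier filter, applied to data where a small subgroup presents differently, silently drops exactly that subgroup. The numbers below are invented.

```python
# How routine outlier cleaning can erase a minority subgroup: a standard
# z-score filter drops the rows that came from the smaller, systematically
# different population. All values here are invented for illustration.
import statistics

# Toy measurements: 18 from a majority group clustered near 5.0,
# 2 from a minority group near 9.0 (e.g., a different presentation).
values = [5.0, 4.8, 5.2, 5.1, 4.9, 5.0, 5.3, 4.7, 5.1, 5.0,
          4.9, 5.2, 5.0, 4.8, 5.1, 5.0, 4.9, 5.2, 9.1, 8.9]

mean = statistics.mean(values)
stdev = statistics.stdev(values)

# Keep only points within 2 standard deviations of the mean.
cleaned = [v for v in values if abs(v - mean) / stdev <= 2.0]

print(len(values), len(cleaned))  # 20 18 -- both minority points removed
```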

Data source matters because this detail unquestionably affects the outcome and interpretation of healthcare models. In sub-Saharan Africa, young women are diagnosed with breast cancer at a significantly higher rate. This reveals the need for AI tools and healthcare models tailored to this demographic group, as opposed to AI tools used to detect breast cancer that are only trained on mammograms from the Global North. Likewise, a growing body of work suggests that algorithms used to detect skin cancer tend to be less accurate for Black patients because they are trained mostly on images of light-skinned patients. The list goes on.

We are creating tools and systems that have the potential to revolutionize the healthcare sector, but the benefits of these developments will only reach those represented in the data.

So, what can be done?

Part of the challenge in getting bias out of data is that high volume, diverse and representative datasets are not easy to access. Training datasets that are publicly available tend to be extremely narrow, low-volume, and homogenous—they only capture a partial picture of society. At the same time, a wealth of diverse health data is captured every day in many healthcare settings, but data privacy laws make accessing these more voluminous and diverse datasets difficult.

Data protection is of course vital. Big Tech and governments do not have the best track record when it comes to the responsible use of data. However, if transparency, education, and consent for the sharing of medical data were more purposefully regulated, far more diverse and high-volume datasets could contribute to fairer representation across AI systems and produce better, more accurate results for AI-driven healthcare tools.

But data sharing and access is not a complete fix to healthcare's AI problem. Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers. At the end of the day, we want to teach our algorithms to make good choices but we are still figuring out what good choices should look like for ourselves.

AI presents the opportunity to bring greater personalization to healthcare, but it equally presents the risk of entrenching existing inequalities. We have the opportunity in front of us to take a considered approach to data collection, regulation, and use that will provide a fuller and fairer picture and enable the next steps for AI in healthcare.

------

Angela Wilkins is the executive director of the Ken Kennedy Institute at Rice University.

In a guest column, these lawyers explain the pros and cons of using AI for hiring. Photo via Getty Images

Here's what Houston employers need to know about using artificial intelligence in the hiring process

guest column

Workplace automation has entered the human resource department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed to save time, improve the quality of a workforce, and eliminate unlawful hiring biases. But is AI incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination and federal law holds companies accountable for their hiring decisions, even if those decisions were made in a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the employers that use them from liability. This is because the law contemplates both intentional discrimination (disparate treatment) as well as unintentional discrimination (disparate impact). The larger risk for AI lies with disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from certain zip codes that are home to predominately one race or color. An AI tool may favor those zip codes, disfavoring applicants from other zip codes of different racial composition. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is largely comprised of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.
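
To see how such proxies skew rankings, consider this deliberately simplified sketch. It is hypothetical and is not any vendor's actual algorithm; the candidates, zip codes, and email domains are invented. It ranks candidates by how many features they share with resumes from the existing workforce, and the skill-irrelevant fields do the damage.

```python
# A simplified, hypothetical resume screener of the kind described above:
# it ranks candidates by feature overlap with resumes of current "high
# performers." Nothing here is any real product's algorithm.

def overlap_score(candidate: dict, exemplars: list[dict]) -> float:
    """Average fraction of each exemplar's features the candidate shares."""
    scores = []
    for ex in exemplars:
        shared = sum(1 for k in ex if candidate.get(k) == ex[k])
        scores.append(shared / len(ex))
    return sum(scores) / len(scores)

# Exemplars drawn from a homogeneous existing workforce: same zip code,
# ".edu" email addresses. Neither field measures job skill.
exemplars = [
    {"skill": "python", "zip": "77005", "email_domain": "rice.edu"},
    {"skill": "python", "zip": "77005", "email_domain": "uh.edu"},
]

candidates = {
    "A": {"skill": "python", "zip": "77005", "email_domain": "rice.edu"},
    "B": {"skill": "python", "zip": "77026", "email_domain": "gmail.com"},
}

for name, c in candidates.items():
    print(name, round(overlap_score(c, exemplars), 2))
# A scores 0.83, B scores 0.33: identical skills, but the zip-code and
# email-domain proxies push the out-of-group candidate down the ranking.
```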

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reticent to disclose details about proprietary information, but employers will ultimately be responsible for discriminatory impacts. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
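
One widely used screen for such an audit is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a protected group's selection rate below 80 percent of the highest group's rate is treated as evidence of adverse impact. Below is a minimal sketch of that check; the applicant counts are invented for illustration, and a real audit calls for legal and statistical review.

```python
# A sketch of the EEOC "four-fifths rule" disparate-impact screen.
# The counts are hypothetical; this is an illustration, not legal advice.

def adverse_impact(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate as a ratio of the highest rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 27}   # rates: 0.30 vs 0.18

for group, ratio in adverse_impact(selected, applicants).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: {ratio:.2f} ({flag})")
# group_a: 1.00 (ok); group_b: 0.60 (POTENTIAL ADVERSE IMPACT)
```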

Third, employers should stay abreast of developments in the law. This is an emerging field and state legislators have taken notice. Illinois recently passed regulation governing the use of AI in the workplace and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.

Artificial intelligence is changing Houston — one industry at a time. Photo via Getty Images

3 ways artificial intelligence is changing Houston's future

Guest column

Artificial intelligence is the buzzword of the decade. From grocery shopping assistance to personal therapy apps, AI has sunk its teeth into every single industry. Houston is no exception to the AI boom. Enterprise-level companies and startups are already flocking to H-town to make their mark in AI and machine learning.

Since the world is generating more data every minute — 1,736 terabytes to be exact — Houston-based companies are already thinking ahead about how to make sense of all of that information in real-time. That's where AI comes in. By 2021, 80 percent of emerging technologies will have AI foundations — Houston is already ninth on the list of AI-ready cities in the world.

AI and machine learning can process large amounts of data quickly and use that data to inform decisions much like a human would. Here are three ways Houston-based companies are using these emerging technologies to revolutionize the city's future.

Health care

The health care industry is primed for AI's personalization capabilities. Each patient that doctors and nurses encounter has different symptoms, health backgrounds, and prescriptions for care teams to keep track of. Managing that amount of information incorrectly can be dangerous. With AI, diseases are diagnosed more quickly, medications are administered more accurately, and nurses have help monitoring patients.

Decisio Health Inc., a Houston-based health tech startup, has already made its mark in the health care industry with AI software that helped tackle the COVID-19 pandemic. Its software, developed in collaboration with GE Healthcare Inc., allows health care providers to remotely monitor patients. By looking at data from ventilators, patient monitoring systems, health records, and other sources, doctors can make better decisions about patients from a safe distance.

Climate change

Climate change won't be solved overnight. It's an issue that covers water salinity, deforestation, and even declining bee populations. With a problem as large as climate change, huge amounts of data are collected and need to be analyzed. AI can interpret all of that information, model possible future outcomes, track current weather patterns, and help find solutions to environmental destruction.

One Houston-based company in the energy tech industry, Enovate Upstream, has created a new AI platform that will help digitize the oil and gas sector. Its AI-powered platform analyzes data from digital drilling, digital completions, and digital production to give oil companies real-time production forecasting. The goal is to make oil production more efficient and reduce carbon emissions. Since drilling and fracking are major sources of climate concern, that efficiency could help slow climate change and make the industry as a whole more climate-conscious.

Energy

Energy is an industry rich with data opportunities, and as Houston's energy sector grows, AI has become a core part of its work. Houston's large influence in the energy sector has primed it for AI integration from startups like Adapt2 Solutions Inc., which uses AI and machine learning in its software to help energy companies make strategic predictions about how to serve energy to the public efficiently. That work has become especially important in the wake of COVID-19 and the resulting shifts in energy needs.

Another Houston-based company using AI to influence the energy industry is the retail energy startup Evolve Energy. Its AI and machine learning system helps customers find better prices on fluctuating renewable resources, saving them money on electricity and reducing emissions. The positive public feedback on its AI model shows how energy companies can use emerging technologies like AI to benefit their communities.

The bottom line

Houston is more primed than most cities to integrate AI and machine learning into every industry. While there are valid concerns as to how much we should lean on technology for necessary daily tasks, it's clear that AI isn't going anywhere. And it's clear that Houston is currently taking the right steps to continue its lead in this emerging AI market.

------

Natasha Ramirez is a Utah-based tech writer.

Houston innovator on seeing a greener future on built environment

HOUSTON INNOVATORS PODCAST EPISODE 162

An architect by trade, Anas Al Kassas is used to solving problems in his line of work; every project requires building designers to be innovative and creative. A few years ago, Kassas took that problem-solving background into the entrepreneurship world to scale a process for retrofitting window facades for energy efficiency.

“If you look at buildings today, they are the largest energy-consuming sector — more than industrial and more than transportation,” Kassas, founder and CEO of INOVUES, says on the Houston Innovators Podcast. “They account for up to 40 percent of energy consumption and carbon emissions.”

To meet their climate goals, companies within the built environment are making moves to transition to electric systems. This has to be done with energy efficiency in mind, otherwise it will result in grid instability.

"Energy efficiency goes hand in hand with energy transition," he explains.

Kassas says that he first had the idea for his company when he was living in Boston. He chose to start the business in Houston, attracted to the city by its central location, affordable labor market, and manufacturing opportunities.

Last year, INOVUES raised its first round of funding — a $2.75 million seed round — to scale up the team and identify the best markets to target customers. Kassas says he was looking for regions with rising energy rates and sizable incentives for companies making energy efficient changes.

"We were able to now implement our technology in over 4 million square feet of building space — from Boston, Seattle, Los Angeles, New York City, Portland, and very soon in Canada," he says.

Notably missing from that list are any Texas cities. Kassas says he believes Houston is a great city for startups, and his operations and manufacturing are based here, but he hasn't yet seen the right opportunity and adoption locally.

"Unfortunately most of our customers are not in Texas," "A lot of work can be done here to incentivize building owners. There are a lot of existing buildings and construction happening here, but there has to be more incentives."

Kassas shares more about his growth over the past year, as well as what he has planned for 2023 on the podcast. Listen to the interview below — or wherever you stream your podcasts — and subscribe for weekly episodes.

Houston SPAC announces merger with Beaumont-based tech company in deal valued at $100M

speaking of spacs

A Houston SPAC, or special purpose acquisition company, has announced the company it plans to merge with in the new year.

Beaumont-based Infrared Cameras Holdings Inc., a provider of thermal imaging platforms, and Houston-based SportsMap Tech Acquisition Corp. (NASDAQ: SMAP), a publicly traded SPAC with $117 million held in trust, announced a merger agreement that will take ICI public.

The blank-check company, which went public in the fall of last year, is led by CEO and chairman David Gow. Gow is also chairman and CEO of Gow Media, which owns digital media outlets SportsMap, CultureMap, and InnovationMap, as well as the SportsMap Radio Network, ESPN 97.5 and 92.5.

The deal is expected to close in the first half of 2023, according to a news release. The combined company will take the Infrared Cameras Holdings Inc. name and will be listed on NASDAQ under a new ticker symbol.

“ICI is extremely excited to partner with David Gow and SportsMap as we continue to deliver our innovative software and hardware solutions," says Gary Strahan, founder and CEO of ICI, in the release. "We believe our software and sensor technology can change the way companies across industries perform predictive maintenance to ensure reliability, environmental integrity, and safety through AI and machine learning.”

Strahan will continue to serve as CEO of the combined company, and Gow will become chairman of the board. The transaction values the combined company at a pre-money equity valuation of $100 million, according to the release, and existing ICI shareholders will roll 100 percent of their equity into the combined company as part of the transaction.

“We believe ICI is poised for strong growth," Gow says in the release. "The company has a strong value proposition, detecting the overheating of equipment in industrial settings. ICI also has assembled a strong management team to execute on the opportunity. We are delighted to combine our SPAC with ICI.”

Founded in 1995, ICI provides infrared and imaging technology — as well as service, training, and equipment repairs — to various businesses and individuals across industries.

Report: Federal funding, increased life science space drive industry growth in Houston

by the numbers

Federal funding, not venture capital, continues to be the main driver of growth in Houston’s life sciences sector, a new report suggests.

The new Houston Life Science Insight report from commercial real estate services company JLL shows Houston accounted for more than half (52.7 percent) of total funding from the National Institutes of Health (NIH) across major Texas markets through the third quarter of this year. NIH funding in the Houston area totaled $769.6 million for the first nine months of 2022, exceeding the five-year average by 19.3 percent.

VC funding for Houston’s life sciences sector pales in comparison.

For the first nine months of this year, companies in life sciences raised $147.3 million in VC, according to the report. Based on that figure, Houston's 2022 life sciences VC funding is on pace to meet or surpass its totals for every recent year except 2021, which JLL describes as an "outlier" when it comes to annual VC hauls for the region's life sciences companies.

JLL notes that “limited venture capital interest in private industry has remained a challenge for the city’s life sciences sector. Furthermore, it may persist as venture capital strategies are reevaluated and investment strategies shift toward near-term profits.”

While life sciences VC funding has a lot of ground to cover to catch up with NIH funding, there are other bright spots for the sector.

One of those bright spots is the region’s rising amount of life sciences space.

The Houston area boasts more than 2.4 million square feet of space for life sciences operations, with another 1.1 million under construction and an additional 1.5 million square feet on the drawing board, the report says. This includes a soon-to-open lab spanning 25,000 square feet in the first phase of Levit Green.

A second bright spot is the migration of life sciences companies to the region. Two Southern California-based life sciences companies, Cellipoint Bioservices and Obagi Cosmeceuticals, plan to move their headquarters and relocate more than half of their employees to The Woodlands by the first half of 2023, according to the report.

“Houston’s low tax rate and cost of living were primary drivers for the decisions, supported by a strong labor pool that creates advantages for companies’ expansion and relocation considerations,” JLL says.