Just like any workplace, labs can get toxic. Graphic by Miguel Tovar/University of Houston

There are many types of toxic bosses. The Micromanager. The Narcissist. The Incompetent Boss. The list goes on. But labs led by toxic PIs don’t just make for an abysmal workplace; they can actually encourage research misconduct.

According to Charles Wood, author of “When lab leaders take too much control,” there are two types of toxic labs most at risk for this type of behavior: the executive model and the competition model.

Executive model

Wood described the executive approach to lab management as one where the mentor sets expectations for trainees, often with a particular goal in mind. In its negative form, this includes specifying experimental outcomes and instructing trainees on particular experiments to achieve a desired result.

It comes as no surprise that experimenting with the answer already in mind goes against scientific principles. Spiking biological samples and manipulating instruments have both been suspected in labs, according to the U.S. government’s Office of Research Integrity. The first line of defense is having the investigators replicate their experiment under close supervision. If the allegations are found to be credible, the consequences of misconduct can include debarment from further federal funding and sequestration of data.

Competition model

The competition model pits graduate students or postdocs against one another. Whoever gets the result first is rewarded, while the others are punished. This makes a perfect breeding ground for misconduct. Imagine a foreign student whose citizenship status hinges on whether they can produce the results their PI wants. Of the competition model, Wood said that what students and postdocs learn can be catastrophic: “competition over collaboration and conformity over creativity.” He posits that researchers who emerge from such a toxic lab may drop out of science completely or go on to run their own labs in the same toxic way.

A correlation between mentors and ethical decision-making

In “Environmental influences on ethical decision making: Climate and environmental predictors of research integrity,” published in the journal Ethics & Behavior, Michael D. Mumford and colleagues found that for first-year doctoral students, “environmental experiences (including professional leadership) exert stronger effects on ethical decision making than the climate of the work group.”

Wood also noted that, regardless of the management style, certain scientists may be more prone to cheating. However, active involvement and openness by the principal investigator can serve as a preventive measure against this.

What can you do about it?

In the “Toxic Boss Syndrome: How To Recover and Get Your Mojo Back” episode of his Better Humans podcast, Chris Sowers shared how a few toxic bosses affected his job performance, self-esteem and even interpersonal relationships. His first piece of advice is to get out quickly, even if you need to take a pay cut; he says a few thousand dollars are not worth the hit to your mental and physical health.

Vetting your lab’s PI will help enormously. Does the PI have a good track record of being a fair and kind mentor?

“If your principal investigator starts to exhibit toxic behavior, address this with him or her,” said Wood. He goes on to advise that “if you find yourself in a truly toxic environment, seek guidance from a graduate coordinator, assistant dean or other authority figure who oversees the pre- or postdoctoral training programs — and ask for help in finding another mentor.”

The Big Idea

No one has the time or energy to spare for a toxic workplace, and the costs of manipulating data are far too high. For one, all authors on a paper can be held responsible for the misconduct, not to mention the physical and mental stress a toxic lab will invite into your life.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

"ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.” Graphic byMiguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

houston voices

Researchers have to write extremely specific papers that require higher-order thinking. Will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and, most importantly, the content that a scientist or researcher would want to publish? And should it be able to?

Kirstin Holzschuh, executive director of the University of Houston’s Research Integrity and Oversight (RIO) Office, puts it this way: “Scientists are out-of-the-box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the way the chatbot synthesizes its training data into “original” text nothing more than “cut and paste” and “pastiche.” The program’s algorithm has also been likened to auto-completion.

However, practical use cases are emerging that blur the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies, even if just how competent it is remains to be seen. As researchers look for efficiencies in their work, ChatGPT and other AI tools will only become more appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at the University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help researchers avoid self-plagiarism, which could be a benefit. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement like that, however, will likely not pass muster in your paper.

Even when a researcher is as transparent as possible about how AI was used in the course of research or in drafting a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses, in whole or in part, to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that study information that is conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information could be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, but informed consent is not valid if the participant can’t understand the risks, etc. “There is readability software, but it only rates the grade-level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (which is the range that Institutional Review Boards prefer).
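
As a rough illustration of the grade-level check Nebeker describes, here is a minimal sketch in Python that scores a consent passage with the standard Flesch-Kincaid grade-level formula. The syllable-counting heuristic and the sample excerpt are simplifications invented for this example; actual readability software and IRB expectations will differ.

```python
import re


def estimate_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels, minimum of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(estimate_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59


if __name__ == "__main__":
    # Hypothetical consent excerpt used only to demonstrate the check.
    consent_excerpt = (
        "Participation in this study is voluntary. "
        "You may withdraw at any time without penalty or loss of benefits."
    )
    grade = flesch_kincaid_grade(consent_excerpt)
    print(f"Estimated grade level: {grade:.1f}")
    if grade > 8:
        print("Above the sixth-to-eighth grade target; consider simplifying the language.")
```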

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators on how to strike an effective balance between the use of these innovative technologies and the university’s academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Understanding the Fly America Act is important for all researchers planning government-funded travel. Graphic by Miguel Tovar/University of Houston

What Houston researchers should know about the Fly America Act

houston voices

Commercial aviation witnessed a transformative shift following World War II. Air travel, initially reserved largely for military purposes, opened up to civilians, and commercial flying began to flourish as people embraced its convenience. This surge in air travel highlighted the need to regulate the industry.

In response, the Federal Aviation Administration, or FAA, emerged from the Federal Aviation Act of 1958, later becoming a component of the Department of Transportation when that department began operating in 1967.

The evolution of air travel regulation continued in 1974 with the enactment of the Fly America Act. Designed to safeguard U.S. interests in international air travel funded by the government, the act prioritizes U.S. airline carriers. It serves both to support domestic airlines and to promote the U.S. aviation industry on a global stage when passengers travel on federal funds.

What some might not know is that this legislation can affect researchers and their organizations.

Importance for researchers

The Fly America Act applies not only to federal government employees but also to their dependents, grantees and other travelers funded by federal resources. Even foreign researchers visiting the U.S. under federally funded grants must choose U.S. flag air carriers for their travel.

A U.S. flag air carrier should not be confused with a traditional flag carrier, an airline that has historically been government-owned or is otherwise closely tied to the identity of a particular country, like British Airways or Aeroméxico. U.S. flag air carriers encompass a wide range of airlines, including smaller entities like Air Wisconsin Airlines and Avelo, a new carrier based in Houston. You can find a comprehensive list of U.S. flag air carriers here.

Navigating exceptions

While the Fly America Act carries strict guidelines, exceptions do exist. Instances where no U.S. flag air carriers serve the destination or where such carriers would extend the trip by over 24 hours warrant special consideration. In these cases, maintaining meticulous records is essential in order to validate the use of non-U.S. airlines. A list of exceptions can be found here.

Open Skies Agreements introduce another facet to the Fly America Act. These agreements between the U.S. Government and other countries enable travelers, including researchers, to use foreign air carriers for government-funded international travel. Several countries, including those in the European Union, Australia, Switzerland, and Japan, maintain Open Skies Agreements. Flights on British Airways are no longer permitted under an Open Skies Agreement due to the United Kingdom’s exit from the European Union. Proper documentation is essential when claiming a Fly America Act exception, even if covered under an Open Skies Agreement. Detailed travel itineraries, internal agency forms, and evidence of a Fly America exception must be included in travel receipts.
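
To make that decision flow concrete, here is a minimal sketch, in Python, of how a travel office might triage a federally funded booking against the exceptions described above. The Itinerary fields and the fly_america_compliant helper are hypothetical, and the logic is a simplification of this article's summary, not agency guidance.

```python
from dataclasses import dataclass


@dataclass
class Itinerary:
    """Hypothetical fields summarizing a federally funded flight booking."""
    carrier_is_us_flag: bool          # ticketed on a U.S. flag air carrier
    us_flag_service_available: bool   # a U.S. flag carrier serves the origin-destination pair
    extra_hours_if_us_flag: float     # added travel time (hours) if a U.S. flag carrier were used
    open_skies_applies: bool          # foreign carrier covered by an applicable Open Skies Agreement


def fly_america_compliant(trip: Itinerary) -> tuple[bool, str]:
    """Simplified illustration of the exceptions named in the article; real
    determinations depend on agency rules and documentation requirements."""
    if trip.carrier_is_us_flag:
        return True, "U.S. flag air carrier used; no exception needed."
    if not trip.us_flag_service_available:
        return True, "Exception: no U.S. flag carrier serves this route (document it)."
    if trip.extra_hours_if_us_flag > 24:
        return True, "Exception: U.S. flag routing would extend travel by more than 24 hours (document it)."
    if trip.open_skies_applies:
        return True, "Exception: covered by an Open Skies Agreement (document it)."
    return False, "Not compliant: cost or convenience alone is not an exception."


if __name__ == "__main__":
    ok, reason = fly_america_compliant(
        Itinerary(carrier_is_us_flag=False, us_flag_service_available=True,
                  extra_hours_if_us_flag=3.0, open_skies_applies=False)
    )
    print(ok, reason)
```

Even in this toy form, every non-U.S. routing returns a reason string, echoing the point that claiming an exception always requires documentation.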

The Big Idea

Compliance with the Fly America Act ensures your travel expenses are reimbursable on government grants. It’s important to remember that cost and convenience are not exceptions to the act. A thorough understanding of the Fly America Act’s provisions and exceptions is a must before you book your next flight.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

How this UH pharmacologist tapped into the humanities to improve health outcomes in her study. Graphic by Miguel Tovar/University of Houston

Integrating humanities can improve health outcomes, Houston researcher finds

houston voices

The humanities, encompassing language, literature, arts, and history, are often seen as separate from scientific research in universities.

However, they hold the potential to contribute to improved health outcomes by fostering empathy, understanding of cultural context, and enhancing human connections. Meghana Trivedi, an associate professor of Pharmacy Practice and Translational Research at the University of Houston, has embarked on a research project to explore this transformative potential.

Using media to improve medicine adherence

Trivedi, a pharmacologist focusing on developing new drugs for breast cancer treatment, noticed a common issue among breast cancer patients — non-adherence to medication instructions, particularly among minority patients with low socioeconomic status. This non-adherence increases the risk of recurrence and poses a significant national problem.

Trivedi sought to understand the reasons behind this non-adherence and discovered that traditional educational pamphlets were ineffective in improving medication adherence. Instead, she decided to explore a theater-based approach.

Trivedi’s research involves developing a culturally sensitive educational video for breast cancer patients, written by a local African American playwright in collaboration with the clinical team and input from African American survivors. Additionally, her team is testing the effectiveness of motivational interviewing, where pharmacy students personally engage with patients to address barriers to adherence and suggest solutions. Trivedi believes that these approaches, rooted in the humanities, will demonstrate their impact on health outcomes.

The role of humanities in team science

Recognizing the value of interdisciplinary collaboration, Trivedi incorporated the arts and humanities into her research by partnering with professors from the School of Theatre and Dance and the Valenti School of Communication at UH. This collaboration has highlighted the importance of incorporating arts into STEM fields, leading Trivedi to advocate for STEAM (Science, Technology, Engineering, Arts, and Mathematics).

Trivedi emphasizes that effective team science requires shared interest in the project, transparency, and honesty among team members. “We are a very efficient team working to achieve a common and important goal – to improve treatment outcomes and survival in patients.”

The Big Idea

Trivedi’s research underscores the role of social and cultural factors in medication adherence among specific patient groups. By posing her research question outside of her discipline, Trivedi engaged new collaborators who became invested in — and contributed directly to — positive health outcomes. This interdisciplinary approach, combining the insights of the humanities and the collaborative nature of team science, facilitated the development of novel solutions to enhance medication adherence.

------

This article originally appeared on the University of Houston's The Big Idea. Cory Thaxton, the author of this piece, is the communications coordinator for The Division of Research.

The process of breaking up research is a dangerous one, according to UH's Big Idea. Graphic by Miguel Tovar/University of Houston

University of Houston: Why breaking up research studies does more harm than good

houston voices

Salami slicing, the practice of breaking a single study up into smaller “slices” and publishing them in more than one journal, is broadly discouraged and considered unethical. Why does the practice persist? What do PIs believe they gain by doing it?

Two problems

Breaking up research into smaller slices can have serious consequences for scientific integrity. Researchers, especially younger researchers, may get used to looking at data in smaller pieces and not as a whole. This is dangerous from an academic perspective, because valuable conclusions that could have been drawn if the data were presented as a whole are overlooked.

Further, salami slicing of data may do more harm than good to a researcher’s career over time because it significantly reduces their chances of publishing in high impact journals, thereby lessening the weight of their accrued body of work.

One reason salami slicing persists is that there is a veritable avalanche of papers vying for publication, and the number seems to be steadily increasing.

“The academic market became more competitive after the nation’s economic downturn, in 2008,” said Rodica Damian, UH associate professor of psychology. “We saw a lot of competition between those with Ph.D.s and those who were conducting postdoc research. Before, you needed a postdoc if you were in Biology, for instance – but you didn’t need one if you had a doctorate in Psychology. That is no longer the case.”

Another reason salami slicing might persist is that advisors may suggest to a graduate student that they write a series of simpler papers as opposed to a more complex paper consisting of multiple measurements. A researcher might get these “single-lens papers” published much more quickly than their multi-faceted counterparts, due to the amount of background research the journal’s editors need to do on the more complicated papers.

How to avoid self-plagiarism

Salami slicing is not necessarily self-plagiarism, but the practice often features a large amount of “text overlap,” according to Miguel Roig, Ph.D., writing on the website of the Office of Research Integrity at the U.S. Department of Health and Human Services. One example Roig gives is as follows:

“Several months ago, for example, we received a manuscript describing a controlled intervention in a birthing center. The authors sent the results on the mothers to us, and the results on the infants to another journal. The two outcomes would have more appropriately been reported together…The important point is that readers need to be made aware that the data being reported were collected in the context of a larger study.”

The Big Idea

An article published by the NIH suggests this rule of thumb: “If the ‘slice’ of the study in question tests a different hypothesis as opposed to the larger study or has a distinct methodology or populations being studied, then it is acceptable to publish it separately.”

However, when a colleague is trying to do a meta-analysis, they need to know what your study actually measured. “One thing you can do to avoid salami slicing,” said Damian, “is to pre-register all the projects you’re planning to do from a specific data set. Then ask yourself, do they use different hypotheses, measures, literatures, etc.”

After all is said and done, are they substantively and methodologically different research papers? If so, they can be sent to separate journals.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Houston cardiology-focused tech platform exits to private equity, plans to scale

making moves

A Houston health tech startup founded only last year has exited to a New York private equity firm.

CardioOne, which built a physician enablement platform for independent cardiologists, has been acquired by WindRose Health Investors. The complete terms of the deal were not disclosed, but according to a WindRose news release, the firm will provide up to $100 million of additional capital to go toward supporting CardioOne's growth.

The fresh influx of capital will go toward expanding and enhancing existing service options. The CardioOne leadership team will continue to be at the helm of the startup.

"We are excited for the opportunity to partner with WindRose as CardioOne embarks on its next chapter of growth," Dr. Jasen Gundersen, CardioOne's CEO and co-founder, says in the release. "We believe that working with WindRose, which has a history of successfully partnering with companies to help navigate the transition to value-based care, will empower us to continue supporting independent cardiologists while developing additional solutions that maximize each practice's potential in the shift to VBC arrangements."

Last year, CardioOne raised an $8 million seed round and announced key partnerships at clinics in New Jersey, Florida, and Pennsylvania, in addition to existing relationships in Texas and Maryland. CardioOne also partnered with MedAxiom, an organizational performance solutions provider in the industry.

"CardioOne's unique, physician-aligned model meets the market where it is and positions the Company to take advantage of the growing desire among cardiologists to maintain their independence," Oliver Moses, managing partner with WindRose, adds. "We believe CardioOne delivers a compelling tech-enabled offering to the independent cardiology market and has significant growth potential as the Company builds upon its momentum in 2023. We are excited to join forces with Jasen and his team as they continue to build upon the differentiated platform they have created."

Houston health tech company scores $2.2M grant to use AI to make organ transplants smarter, more successful

future of medicine

The National Institutes of Health has awarded a Houston medtech company a $2.2 million Fast-Track to Phase 2 grant. InformAI will use the money for product development and commercialization of its AI-enabled organ transplant informatics platform.

Last year, InformAI CEO Jim Havelka told InnovationMap, “A lot of organs are harvested and discarded.”

TransplantAI aims to address that problem, along with organ scarcity and inefficiency in allocating the precious resource.

How does it work? Machine learning and deep learning models trained on a million donor transplants inform the AI, which uses more than 500 clinical parameters to determine the best recipient for each available organ. Organ transplant centers and organ procurement organizations (OPOs) will be able to use the product to decide how to allocate each organ in real time. Ultimately, the tool will serve 250 transplant centers and 56 OPOs around the United States.
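
As a purely illustrative toy, and not InformAI's actual model, the sketch below shows how a trained scoring model could rank candidate recipients for a single available organ from a handful of clinical features. The features, weights and logistic scoring are invented for illustration; the real platform reportedly draws on more than 500 clinical parameters.

```python
import numpy as np

# Toy feature matrix: each row is one candidate recipient, each column a stand-in
# for the kinds of clinical parameters a real model might consume (for example,
# waitlist urgency, predicted post-transplant survival, size/blood-type match).
candidates = np.array([
    [0.9, 0.6, 1.0],   # recipient A
    [0.4, 0.8, 1.0],   # recipient B
    [0.7, 0.7, 0.0],   # recipient C (poor size/blood-type match)
])

weights = np.array([1.5, 2.0, 1.0])   # hypothetical learned coefficients
bias = -2.0


def rank_recipients(features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Score each candidate with a logistic model and return indices, best first."""
    logits = features @ weights + bias
    scores = 1.0 / (1.0 + np.exp(-logits))   # predicted probability of a good outcome
    return np.argsort(-scores), scores


order, scores = rank_recipients(candidates)
for rank, idx in enumerate(order, start=1):
    print(f"{rank}. recipient {'ABC'[idx]} (score {scores[idx]:.2f})")
```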

The NIH grant goes toward developing more precise models for heart and lung transplantation (kidney and liver algorithms are further along in development thanks to a previous award from the National Science Foundation), as well as Phase 2 efforts to fully commercialize TransplantAI.

"There is an urgent need for improved and integrated predictive clinical insights in solid organ transplantation, such as for real-time assessment of waitlist mortality and the likelihood of successful post-transplantation outcomes," according to the grant’s lead clinical investigator, Abbas Rana, associate professor of surgery at Baylor College of Medicine.

“This information is essential for healthcare teams and patients to make informed decisions, particularly in complex cases where expanded criteria allocation decisions are being considered," Rana continues. "Currently, the separation of donor and recipient data into different systems requires clinical teams to conduct manual, parallel reviews for pairing assessments. Our team, along with those at other leading transplant centers nationwide, receives hundreds of organ-recipient match offers weekly.”

Organ transplantation is moving into the future, and TransplantAI is at the forefront.