Researchers at Baylor College of Medicine’s Human Genome Sequencing Center have trained an AI assistant to explain genetic test results to patients. Photo via Getty Images

Artificial intelligence has a lot of potential in the health care setting, and one Houston institution is exploring one particular use.

Researchers at Baylor College of Medicine's Human Genome Sequencing Center have trained an AI assistant to explain genetic test results to patients. According to findings published in the Journal of the American Medical Informatics Association (JAMIA), the team has developed generative AI to understand and interpret genetic tests. They have also tested its accuracy against OpenAI's ChatGPT 3.5.

“We created a chatbot that can provide guidance on general pharmacogenomic testing, dosage implications, and the side effects of therapeutics, and address patient concerns,” explains first author Mullai Murugan in a press release. Murugan is director of software engineering and programming at the Human Genome Sequencing Center. “We see this tool as a superpowered assistant that can increase accessibility and help both physicians and patients answer questions about genetic test results.”

The initial chatbot training specifically targeted pharmacogenomic testing for statins: a patient's potential response, as dictated by genetics, to cholesterol-lowering drugs.

Murugan explains why the team decided to create its own chatbot: the key publication on statin pharmacogenomics was published in May 2022, four months after ChatGPT 3.5's training cutoff date of January 2022. Her team's technology, by contrast, uses Retrieval-Augmented Generation (RAG) and was trained on the most recent guidelines.
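The release doesn't detail the implementation, but retrieval-augmented generation generally works by pulling relevant passages from a curated corpus at query time and handing them to the model alongside the question. Below is a minimal sketch of that pattern in Python; the guideline snippets, retriever, and prompt format are illustrative assumptions, not Baylor's actual system.

```python
# Minimal RAG sketch: retrieve guideline passages relevant to a patient
# question, then ground the model's prompt in the retrieved text.
# The corpus below is hypothetical, standing in for curated statin
# pharmacogenomics guidelines; this is not Baylor's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "SLCO1B1 decreased-function genotypes raise simvastatin myopathy risk.",
    "For poor-function SLCO1B1 genotypes, consider an alternative statin or a lower dose.",
    "CYP2C9 variants affect fluvastatin exposure and may warrant dose adjustment.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the answer in retrieved guidelines."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the guideline excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What does my SLCO1B1 result mean for simvastatin?"))
```

Because the guideline text is retrieved at query time rather than baked into the model's weights, updating the corpus keeps answers current; that is how a RAG system sidesteps a fixed training cutoff.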

How did the two AI assistants compare? Four experts on cardiology and pharmacogenomics rated both chatbots on accuracy, relevancy, risk management, and language clarity, among other factors. Though the two scored similarly on language clarity, Baylor's chatbot scored 85 percent in accuracy and 81 percent in relevancy, compared to ChatGPT's 58 percent in accuracy and 62 percent in relevancy, when asked questions from health care providers.

“We are working to fine-tune the chatbot to better respond to certain questions, and we want to get feedback from real patients,” Murugan says. “Based on this study, it is very clear that there is a lot of potential here.” Nonetheless, Murugan emphasized that there is much work still to be done before the program is ready for clinical applications. That includes training the chatbot to explain results in the language used by genetic counselors. Funds from the NIH’s All of Us Research Program helped to make the research possible.

There are three topics in particular that business owners should refresh and/or make sure they include in their HR policies and employee handbook. Photo via Getty Images

3 things Houston companies need to freshen up when it comes to their HR practices

guest column

Just as we typically look to freshen up our homes this time of year, the same needs to be done for employee handbooks. Employee handbooks streamline HR operations, mitigate risks and set expectations to protect a business from negative workplace behavior by outlining employee policies and procedures.

There are three topics in particular that business owners should refresh and/or make sure they include in their HR policies and employee handbook: in-office attendance, social media and artificial intelligence (AI).

In-office attendance

When taking a closer look at hybrid workplace policies, the in-office attendance policies should align with your organizational goals. Whether you decide to implement hybrid work permanently or eventually return to being in the office completely, the return-to-office (RTO) policies should reflect those goals.

Clear expectations are especially important when defining office attendance rules. When attendance policies are set, employees respond best when they are fair, accessible and easily understood. Detailed policies outlining the nuances and consequences can help reduce noncompliance while supporting accountability.

Policies need consistent enforcement to be effective. Hybrid policies set prior to or during the pandemic may now be loosely enforced. A policy may require employees to be in the office three days a week, yet carry no accountability for missing the mandate. Not enforcing attendance policies can give the impression that it is okay to violate other policies, too. Reviewing your policies allows you to course correct and write a policy reflecting your corporate culture and goals. You'll then be able to reintroduce the attendance policy and enforce it across the board as intended.

Social media

You are hard-pressed to find an employee without a social media account, whether it is TikTok or LinkedIn. If your business does not have a social media policy with guidelines for employees' online behavior, now is the time to put one in place. If you do have a policy, social media changes quickly enough to warrant an annual review.

Social media policies should set boundaries between personal and professional use of social media. Employee activity on social media outside of work can influence business, as employees are often seen as reflecting the company. It is also important to note that social media policies should be based on input from senior management, HR, legal and IT, not just marketing.

The social media policy should delineate between an employee’s personal and professional use, establish a code of conduct and outline its use as part of crisis communications. Social media can just as easily elevate your brand, and you can potentially ask employees to share positive work experiences online.

Cybersecurity should also be addressed in social media policies. As it has become more common for hackers to infiltrate personal emails and social media accounts, policies can prohibit employees from storing company documents in their personal social media and email accounts for security purposes.

Artificial Intelligence (AI)

AI seems to be changing the way we do business daily. However, policies governing company use of AI are lacking at many organizations. Research from McKinsey indicates that only one in five employers has established policies governing employees' use of AI.

AI technology has already streamlined many business practices, but it can also present major risks. Inaccuracy is one: if employees use generative AI to help complete writing tasks, for instance, the system may not generate accurate or original information.

As we learn the evolving and complex nuances of AI, creating a policy needs careful attention. You may consider developing an AI team to write a comprehensive, well-researched AI policy tailored to your organization. This working group should gather insights from leaders within the organization, including frontline managers, to fully understand how employees use, or might use, AI. This team should be charged with considering the ethical aspects of AI’s use and ensuring the policy aligns with company values.

One of the most critical elements of the policy is an accountability process or system. The policy should clearly outline any corrective action or disciplinary steps associated with using AI in a manner that harms the business and/or its clients. Just as important, the policy should outline how to use AI and how to avoid misusing it. Since AI continues to evolve month to month, this is a policy that will require more attention and revision throughout the year.

Keeping a critical eye on HR policies is an important part of business success. Setting aside time to review, update and even create new policies now – before being faced with an issue – can potentially mitigate costly challenges down the road.

------

Karen Leal is a performance specialist with Houston-based Insperity, a provider of human resources services offering a suite of scalable HR solutions.

"ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines." Graphic by Miguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

houston voices

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston's Executive Director of the Research Integrity and Oversight (RIO) Office Kirstin Holzschuh puts it this way: "Scientists are out-of-the-box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines."

“A toy, not a tool”

The Atlantic published "ChatGPT Is Dumber Than You Think," with a subtitle advising readers to "Treat it like a toy, not a tool." The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about "ChatGPT in the style of Ian Bogost." The unimaginative but overall passable introduction to his article was proof that "any responses it generates are likely to be shallow and lacking in depth and insight."

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind "A Skeptical Take on the AI Revolution." Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the "original" text it produces nothing more than "cut and paste" and "pastiche." The algorithm used by the program has been likened to auto-completion, as well.

However, practical use cases are increasingly emerging, blurring the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies, even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: "AI's usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet 'science' with no basis in reproducibility?" It's "likely a combination of both," she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that "we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism." Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. "It's really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock," Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. "Informed consent is incredibly important for traditional research," she said. If the company is not receiving federal money for the research, there isn't a requirement to obtain informed consent. "[Consent] is a cornerstone of ethical practices, but when you don't have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research."

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process might be improved with ChatGPT. For instance, understanding complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary, but informed consent is not valid if the participant can't understand the risks and requirements. "There is readability software, but it only rates the grade level of the narrative; it does not rewrite any text for you," Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
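That workflow is easy to picture in code. The sketch below scores a consent passage with the standard Flesch-Kincaid formula via the textstat library and, if it reads above the range Nebeker cites, builds a rewrite prompt; the sample text is invented, and the actual LLM call is left as a placeholder since the article names no particular model or API.

```python
# Sketch of the consent-readability workflow Nebeker describes: score the
# text, then ask an LLM to simplify it if it reads above a sixth-to-eighth
# grade level. textstat's scoring call is a real library function; the LLM
# call is left abstract because the choice of model/API is an assumption.
import textstat

TARGET_GRADE = 8.0  # upper end of the range IRBs reportedly prefer

consent_text = (
    "Participants will undergo longitudinal pharmacogenomic assessment to "
    "characterize interindividual variability in statin-associated myopathy."
)

grade = textstat.flesch_kincaid_grade(consent_text)
print(f"Flesch-Kincaid grade level: {grade:.1f}")

if grade > TARGET_GRADE:
    prompt = (
        "Rewrite the following informed-consent text at a sixth-to-eighth "
        f"grade reading level, keeping every risk and requirement:\n\n{consent_text}"
    )
    # simplified = llm_client.complete(prompt)  # placeholder for any real API;
    # in practice you would re-score the output and iterate until it passes.
    print(prompt)
```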

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford's approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators on how to strike an effective balance between the use of these innovative technologies and the university's academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Allganize recently closed a $20 million series B round of funding, bringing its total amount raised to $35 million. Graphic via allganize.ai

Houston AI company raises $35M, plans for Japanese IPO

fresh funding

A Houston tech startup built around artificial intelligence has announced two rounds of funding as it continues developing its product and plans an IPO in Japan.

Allganize recently closed a $20 million series B round of funding, bringing its total amount raised to $35 million, according to the company. Allganize developed Alli, an all-in-one platform for enabling large language models that's used by more than 200 enterprise and public companies globally, including Sumitomo Mitsui Banking Corporation, Nomura Securities, Hitachi, Fujitsu, and KB Securities.

The funding will go toward expanding corporate-specific LLM app markets and growing enterprise automation AI in the United States, Korea, and Japan. The company aims to list on the Japanese stock exchange by 2025.

"This investment accelerates our journey towards global expansion and achieving a milestone of listing on the Japanese stock exchange by 2025. Our focus is on leveraging LLMs to revolutionize work productivity. We are dedicated to empowering companies to develop custom LLM applications, enabling practical tasks execution and work automation,” Changsu Lee, CEO of Allganize, says in a news release.

In the latest round, InterVest and Murex Partners joined existing investors ATINUM Investment and Stonebridge Ventures.

"Allganize's generative AI-based services have garnered acclaim for their technological excellence and practicality among global financial firms. We foresee substantial revenue growth following this investment," Kang Dong-min, vice president of Murex, says in the release.

Allganize was founded in 2017 in California and has offices in Houston, Seoul, and Tokyo. The company's customers range from insurance and financial services to oil and gas, construction, and more.

WellnessWits, founded by Kike Oduba to enhance patient-physician interaction, has integrated AI with the help of IBM. Photo via WellnessWits.com

Houston health tech startup taps into IBM tech for AI integration

teaming up

A Houston startup aimed at transforming healthcare with solutions for chronic disease and its prevention has teamed up with IBM technology.

WellnessWits has embedded IBM watsonx Assistant into its app for both iOS and Android. By making generative AI part of the app, WellnessWits now offers AI-based chat functionality.

The chat feature allows patients to get information on chronic disease more quickly than ever, even before meeting with their physician. It also helps patients schedule appointments with doctors who specialize in a host of chronic conditions.
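WellnessWits hasn't published its integration code, but IBM's ibm-watson Python SDK shows the general shape of a server-side call to watsonx Assistant. In the sketch below, the API key, service URL, assistant ID, and sample question are all placeholders.

```python
# Server-side sketch of querying IBM watsonx Assistant via the ibm-watson
# Python SDK. All credentials and IDs are placeholders; this illustrates
# the general integration pattern, not WellnessWits' actual code.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# Stateless message: no server-side session needed for a one-off question.
response = assistant.message_stateless(
    assistant_id="YOUR_ASSISTANT_ID",  # placeholder
    input={"message_type": "text",
           "text": "How do I prepare for a diabetes group visit?"},
).get_result()

# Print any text responses the assistant returned.
for item in response["output"]["generic"]:
    if item["response_type"] == "text":
        print(item["text"])
```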

“I founded WellnessWits as a platform for shared medical appointments where doctors with large patient loads can see them in groups and offer collective shared medical experiences to people suffering from chronic conditions. The goal is to bridge this divide, leveraging the strength of digital communities to enhance the overall well-being and healthcare experiences of individuals everywhere,” WellnessWits Founder and CEO Dr. Kike Oduba, a physician and informatician, writes in a blog post.

Oduba founded the company in 2018. In its early years, she participated in IBM's First AI Cohort for Underrepresented Founders Program. She believes that by using watsonx Assistant in her technology arsenal, WellnessWits will be a means to "digitize, standardize and automate care plans for patients dealing with chronic diseases."

Diabetes management is a cornerstone of the company, but Oduba says that WellnessWits is similarly committed to impacting health issues including cancer, chronic pain, mental health, and obesity.

“By partnering with IBM, and academic leaders in institutions like The University of Texas, we aim to extend our reach and impact,” Oduba says.

In addition to the IBM program, Oduba and WellnessWits were selected for MassChallenge's recent cohort.


Texas lands in top 10 states expected to be most financially affected by weather events

report

Texas — home to everything from tornadoes to hurricanes — cracks the top 10 of a new report ranking states based on impact from weather-related events.

SmartAsset's new report factored in data from the Federal Emergency Management Agency to identify which states face the most financial risk from various weather events. The report ranked states by total expected annual financial losses per person; Texas ranked No. 10.

"With a variety of environmental events affecting the wide stretch of the United States, each state is subject to its own risks," reads the report. "Particularly, tornadoes, wildfires, hurricanes, flooding, landslides, lightning and drought, among other events, can cause damage to buildings, agriculture and individuals alike. When considering insurance, residents and business owners in each state should account for historic and projected losses due to environmental events in their financial plans."

In Texas, the total expected annual loss per person is estimated at $283.15. The report broke down each weather event as follows (a quick tally of these line items appears after the list):

  • Coastal flooding: $1.49
  • Drought: $3.48
  • Earthquake: $1.71
  • Heat wave: $8.16
  • Hurricane: $89.22
  • Riverine flooding: $66.05
  • Strong wind: $5.37
  • Tornado: $71.04
  • Wildfire: $8.26
  • Winter weather: $1.96
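Notably, the itemized categories above sum to less than the state total; the "other events" the report mentions, such as lightning and landslides, presumably account for the remainder. A quick Python check:

```python
# Tally the itemized Texas per-person losses from the list above and compare
# against the reported $283.15 total; the gap reflects hazards the report
# mentions (e.g., lightning, landslides) but does not break out here.
texas_losses = {
    "coastal flooding": 1.49, "drought": 3.48, "earthquake": 1.71,
    "heat wave": 8.16, "hurricane": 89.22, "riverine flooding": 66.05,
    "strong wind": 5.37, "tornado": 71.04, "wildfire": 8.26,
    "winter weather": 1.96,
}
itemized = sum(texas_losses.values())
print(f"Itemized per-person loss: ${itemized:.2f}")          # $256.74
print(f"Unitemized remainder:    ${283.15 - itemized:.2f}")  # $26.41
```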
Louisiana ranked as No. 1 on the list with $555.55 per person. The state with the lowest expected loss per person from weather events was Ohio with only $63.89 estimated per person.


------

This article originally ran on EnergyCapital.

Exclusive: Houston hydrogen spinout names energy industry veteran as CEO

good as gold

Cleantech startup Gold H2, a spinout of Houston-based energy biotech company Cemvita, has named oil and gas industry veteran Prabhdeep Singh Sekhon as its CEO.

Sekhon previously held roles at companies such as NextEra Energy Resources and Hess. Most recently, he was a leader on NextEra’s strategy and business development team.

Gold H2 uses microbes to convert oil and gas in old, uneconomical wells into clean hydrogen, an approach that taps into a multibillion-dollar clean hydrogen market.

Gold H2, previously a business unit within Cemvita, spun out last year with Cemvita co-founder Moji Karimi leading the transition. The spinout came after the company successfully piloted its microbial hydrogen technology, producing hydrogen for less than 80 cents per kilogram.

“I was drawn to Gold H2 because of its innovative mission to support the U.S. economy in this historical energy transition,” Sekhon says in a news release. “Over the last few years, my team [at NextEra] was heavily focused on the commercialization of clean hydrogen. When I came across Gold H2, it was clear that it was superior to each of its counterparts in both cost and [carbon intensity].”

Gold H2 explains that oil and gas companies have wrestled for decades with what to do with exhausted oil fields. With Gold H2’s first-of-its-kind biotechnology, these companies can find productive uses for oil wells by producing clean hydrogen at a low cost, the startup says.

“There is so much opportunity ahead of Gold H2 as the first company to use microbes in the subsurface to create a clean energy source,” Sekhon says. “Driving this dynamic industry change to empower clean hydrogen fuel production will be extremely rewarding.”

In 2022, Gold H2 celebrated its successful Permian Basin pilot and raised early-stage funding. In addition to Gold H2, Cemvita also spun out a resource mining operation called Endolith. In a podcast episode, Karimi discussed Cemvita's growth and spinout opportunities.

Rice University's student startup competition names 2024 winners, awards $100,000 in prizes

taking home the W

A group of Rice University student-founded companies shared $100,000 of cash prizes at an annual startup competition.

Rice's Liu Idea Lab for Innovation and Entrepreneurship (Lilie) named the 2024 winners of its H. Albert Napier Rice Launch Challenge, hosted earlier this month. HEXASpec, a company that's created a new material to improve heat management for the semiconductor industry, won the top prize and $50,000 cash.

Founded by Rice Ph.D. candidates Tianshu Zhai and Chen-Yang Lin, who are part of Lilie's 2024 Innovation Fellows program, HEXASpec is improving efficiency and sustainability within the semiconductor industry, which consumes millions of gallons of water to cool data centers. According to Rice's news release, HEXASpec's next-generation chip packaging offers "20 times higher thermal conductivity and improved protection performance, cooling the chips faster and reducing the operational surface temperature."

The rest of the winners included:

  • Second place and $25,000: CoFlux Purification
  • Third place and $15,000: Bonfire
  • Outstanding Achievement in Social Impact Award and $1,500: EmpowerU
  • Outstanding Achievement in Artificial Intelligence and $1,000: Sups and Levytation
  • Outstanding Achievement in Consumer Goods Prize and $1,000: The Blind Bag
  • Frank Liu Jr. Prize for Creative Innovations in Music, Fashion and the Arts and $1,500: Melody
  • Outstanding Achievement in Climate Solutions Prizes and $1,000: Solidec and HEXASpec
  • Outstanding Undergraduate Startup Award and $2,500: Women’s Wave
  • Audience Choice Award and $2,000: CoFlux Purification

The NRLC, open to Rice students, is Lilie's hallmark event. Last year's winner was fashion tech startup Goldie.

“We are the home of everything entrepreneurship, innovation and research commercialization for the entire Rice student, faculty and alumni communities,” Kyle Judah, executive director at Lilie, says in a news release. “We’re a place for you to immerse yourself in a problem you care about, to experiment, to try and fail and keep trying and trying and trying again amongst a community of fellow rebels, coloring outside the lines of convention."

This year, the competition started with 100 student venture teams before being whittled down to the final five at the championship. The program is supported by Lilie's mentor team, Frank Liu and the Liu Family Foundation, Rice Business, Rice's Office of Innovation, and other donors.

“The heart and soul of what we’re doing to really take it to the next level with entrepreneurship here at Rice is this fantastic team,” Peter Rodriguez, dean of Rice Business, adds. “And they’re doing an outstanding job every year, reaching further, bringing in more students. My understanding is we had more than 100 teams submit applications. It’s an extraordinarily high number. It tells you a lot about what we have at Rice and what this team has been cooking and making happen here at Rice for a long, long time.”

HEXASpec was founded by Rice Ph.D. candidates Tianshu Zhai and Chen-Yang Lin, who are a part of Lilie’s 2024 Innovation Fellows program. Photo courtesy of Rice