BrainLM is now sufficiently trained to be fine-tuned for specific tasks and to answer questions in other studies. Photo via Getty Images

Houston researchers are part of a team that has created an AI model intended to understand how brain activity relates to behavior and illness.

Scientists from Baylor College of Medicine worked with peers from Yale University, the University of Southern California and Idaho State University to create the Brain Language Model, or BrainLM. Their research was published as a conference paper at ICLR 2024, a meeting of some of deep learning’s greatest minds.

“For a long time we’ve known that brain activity is related to a person’s behavior and to a lot of illnesses like seizures or Parkinson’s,” Dr. Chadi Abdallah, associate professor in the Menninger Department of Psychiatry and Behavioral Sciences at Baylor and co-corresponding author of the paper, says in a press release. “Functional brain imaging or functional MRIs allow us to look at brain activity throughout the brain, but we previously couldn’t fully capture the dynamic of these activities in time and space using traditional data analytical tools.

"More recently, people started using machine learning to capture the brain complexity and how it relates it to specific illnesses, but that turned out to require enrolling and fully examining thousands of patients with a particular behavior or illness, a very expensive process,” Abdallah continues.

Using 80,000 brain scans, the team trained its model to learn how brain activities relate to one another. The result is BrainLM, a foundation model of brain activity. BrainLM is now sufficiently trained to be fine-tuned for specific tasks and to answer questions in other studies.
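To make that concrete, here is a minimal, hypothetical sketch of the self-supervised idea behind a foundation model of this kind: hide stretches of the brain-activity time series and train a network to reconstruct them, so it learns how regions’ activities relate. The dimensions, architecture, and masking scheme below are illustrative stand-ins in PyTorch, not BrainLM’s actual design.

```python
# Hypothetical sketch of masked self-supervised pretraining on fMRI time
# series; none of these numbers or layers come from the BrainLM paper.
import torch
import torch.nn as nn

regions, timesteps = 424, 200                 # illustrative parcels x time points
batch = torch.randn(8, regions, timesteps)    # stand-in for preprocessed scans

class MaskedReconstructor(nn.Module):
    """Encode each region's time series and reconstruct the hidden points."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(timesteps, 128), nn.ReLU())
        self.decoder = nn.Linear(128, timesteps)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = MaskedReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

mask = torch.rand(batch.shape) < 0.2          # hide 20% of time points
masked_input = batch.masked_fill(mask, 0.0)

reconstruction = model(masked_input)
loss = ((reconstruction - batch)[mask] ** 2).mean()  # score only hidden points
loss.backward()
optimizer.step()
# After pretraining at scale, the encoder could be fine-tuned to predict
# clinical measures (e.g., symptom severity) from new scans.
```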

Abdallah said that using BrainLM will cut costs significantly for scientists developing treatments for brain disorders. In clinical trials, it can cost “hundreds of millions of dollars,” he said, to enroll numerous patients and treat them over a significant time period. By using BrainLM, researchers can enroll half the subjects because the AI can select the individuals most likely to benefit.

The team found that BrainLM performed successfully in many different samples. That included predicting depression, anxiety and PTSD severity better than other machine learning tools that do not use generative AI.

“We found that BrainLM is performing very well. It is predicting brain activity in a new sample that was hidden from it during the training as well as doing well with data from new scanners and new population,” Abdallah says. “These impressive results were achieved with scans from 40,000 subjects. We are now working on considerably increasing the training dataset. The stronger the model we can build, the more we can do to assist with patient care, such as developing new treatment for mental illnesses or guiding neurosurgery for seizures or DBS.”

For those suffering from neurological and mental health disorders, BrainLM could be a key to unlocking treatments that will make a life-changing difference.

Researchers at Baylor College of Medicine’s Human Genome Sequencing Center have trained an AI assistant to explain genetic test results to patients. Photo via Getty Images

Houston researchers tap into GenAI for communicating genetic test results

hi, tech

Artificial intelligence in the health care setting has a lot of potential, and one Houston institution is looking into one particular use.

Researchers at Baylor College of Medicine’s Human Genome Sequencing Center have trained an AI assistant to explain genetic test results to patients. According to findings published in the Journal of the American Medical Informatics Association (JAMIA), the team has developed generative AI to understand and interpret genetic tests. They have also tested its accuracy against OpenAI’s ChatGPT 3.5.

“We created a chatbot that can provide guidance on general pharmacogenomic testing, dosage implications, and the side effects of therapeutics, and address patient concerns,” explains first author Mullai Murugan in a press release. Murugan is director of software engineering and programming at the Human Genome Sequencing Center. “We see this tool as a superpowered assistant that can increase accessibility and help both physicians and patients answer questions about genetic test results.”

The initial chatbot training specifically targeted pharmacogenomic testing for statins, meaning a patient’s potential response to cholesterol-lowering drugs, as dictated by genetics.

Murugan explains why the team decided to create its own chatbot: the key publication on statin pharmacogenomics was published in May 2022, four months after ChatGPT 3.5’s training cutoff of January 2022. Her team’s technology, by contrast, uses Retrieval Augmented Generation (RAG) and was trained on the most recent guidelines.
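To make the approach concrete, the sketch below shows the general RAG pattern the team describes: guideline passages are indexed, the passage most relevant to a question is retrieved, and that passage grounds the prompt sent to a generative model. The guideline snippets and function names are hypothetical illustrations, not the team’s implementation.

```python
# Minimal, hypothetical sketch of Retrieval Augmented Generation (RAG):
# retrieve the most relevant guideline passage, then ground the prompt in it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in snippets; a real system would index the 2022 statin guidelines.
guideline_passages = [
    "SLCO1B1 decreased-function genotypes raise simvastatin myopathy risk.",
    "CYP2C19 genotype guides clopidogrel selection and dosing.",
    "Patients with two SLCO1B1 no-function alleles may need an alternative statin.",
]

vectorizer = TfidfVectorizer().fit(guideline_passages)
passage_vectors = vectorizer.transform(guideline_passages)

def retrieve(question: str) -> str:
    """Return the guideline passage most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), passage_vectors)[0]
    return guideline_passages[scores.argmax()]

def build_prompt(question: str) -> str:
    """Prepend the retrieved passage so the model answers from the guideline."""
    return (
        f"Using only this guideline excerpt:\n{retrieve(question)}\n\n"
        f"Answer the patient's question: {question}"
    )

print(build_prompt("Does my SLCO1B1 result change my statin dose?"))
# The assembled prompt would then be sent to a generative model for the answer.
```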

How did the two AI assistants compare? Four experts in cardiology and pharmacogenomics rated both chatbots on accuracy, relevancy, risk management, and language clarity, among other factors. The two scored similarly on language clarity, but when asked questions from healthcare providers, Baylor’s chatbot scored 85 percent in accuracy and 81 percent in relevancy, compared with ChatGPT’s 58 percent in accuracy and 62 percent in relevancy.

“We are working to fine-tune the chatbot to better respond to certain questions, and we want to get feedback from real patients,” Murugan says. “Based on this study, it is very clear that there is a lot of potential here.” Nonetheless, Murugan emphasized that there is much work still to be done before the program is ready for clinical applications. That includes training the chatbot to explain results in the language used by genetic counselors. Funds from the NIH’s All of Us Research Program helped to make the research possible.

There are three topics in particular that business owners should refresh and/or make sure they include in their HR policies and employee handbook. Photo via Getty Images

3 things Houston companies need to freshen up when it comes to their HR practices

guest column

Just as we typically look to freshen up our homes this time of year, the same needs to be done for employee handbooks. Employee handbooks streamline HR operations, mitigate risks and set expectations to protect a business from negative workplace behavior by outlining employee policies and procedures.

There are three topics in particular that business owners should refresh and/or make sure they include in their HR policies and employee handbook: in-office attendance, social media and artificial intelligence (AI).

In-office attendance

When taking a closer look at hybrid workplace policies, make sure in-office attendance requirements align with your organizational goals. Whether you decide to implement hybrid work permanently or eventually return to being in the office completely, the return-to-office (RTO) policies should reflect those goals.

Clear expectations are especially important when defining office attendance rules. When attendance policies are set, employees respond best when they are fair, accessible and easily understood. Detailed policies outlining the nuances and consequences can help reduce noncompliance while supporting accountability.

Policies need consistent enforcement for them to be effective. Hybrid policies set prior to or during the pandemic may now be loosely enforced. The policies may state for employees to be in the office three days a week, but there may be no accountability for not meeting the mandate. Not enforcing attendance policies can give the impression that it is okay to violate other policies, too. Reviewing your policies allows you to course correct and write a policy reflecting your corporate culture and goals. You’ll then be able to reintroduce the attendance policy and enforce it across the board as intended.

Social media

You are hard pressed to find an employee without a social media account, whether it is TikTok or LinkedIn. If your business does not have a social media policy with guidelines surrounding employees’ online behaviors, now is the time to put one in place. If you do have a policy, social media changes quickly enough to warrant an annual review.

Social media policies should set boundaries between personal and professional use of social media. Employee activity on social media outside of work can influence business, as employees are often seen as reflecting the company. It is also important to note that social media policies should be based on input from senior management, HR, legal and IT, not just marketing.

The social media policy should delineate between an employee’s personal and professional use, establish a code of conduct and outline its use as part of crisis communications. Social media can just as easily elevate your brand, and you can potentially ask employees to share positive work experiences online.

Cybersecurity should also be addressed in social media policies. As it has become more common for hackers to infiltrate personal emails and social media accounts, policies can prohibit employees from storing company documents in their personal social media and email accounts for security purposes.

Artificial Intelligence (AI)

AI seems to be changing the way we do business daily. However, policies governing company use of AI are lacking at many organizations. Research from McKinsey finds that only one in five employers has established policies governing their employees’ use of AI.

AI technology has already streamlined many business practices, but it can also present major risks. Inaccuracy can threaten your business if employees use generative AI for assistance in completing writing tasks, for instance, and the system may not generate accurate or original information.

As we learn the evolving and complex nuances of AI, creating a policy needs careful attention. You may consider developing an AI team to write a comprehensive, well-researched AI policy tailored to your organization. This working group should gather insights from leaders within the organization, including frontline managers, to fully understand how employees use, or might use, AI. This team should be charged with considering the ethical aspects of AI’s use and ensuring the policy aligns with company values.

One of the most critical elements of the policy is an accountability process or system. The policy should clearly outline any corrective action or disciplinary steps associated with using AI in a manner that harms the business and/or its clients. Just as important, the policy should outline how to use AI and how to avoid misusing it. Since AI continues to evolve month to month, this is a policy that will require more attention and revision throughout the year.

Keeping a critical eye on HR policies is an important part of business success. Setting aside time to review, update and even create new policies now – before being faced with an issue – can potentially mitigate costly challenges down the road.

------

Karen Leal is a performance specialist with Houston-based Insperity, a provider of human resources services offering a suite of scalable HR solutions available in the marketplace.

"ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.” Graphic byMiguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

houston voices

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office Kirstin Holzschuh puts it this way: “Scientists are out-of-the box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has been likened to auto-completion as well.

However, practical use cases are increasingly emerging, blurring the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies, even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, but informed consent is not valid if the participant can’t understand the risks and other details. “There is readability software, but it only rates the grade level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
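As a rough illustration of that workflow, the sketch below scores a consent passage’s reading grade level with the textstat readability library and, if it is above the preferred sixth-to-eighth-grade range, assembles a rewrite prompt for a model. The consent text is invented, and any model output would still need human review.

```python
# Hypothetical sketch: measure readability, then draft a simplification prompt.
import textstat  # readability library; rates text but does not rewrite it

consent_text = (
    "Participation in this randomized, double-blind protocol entails "
    "longitudinal biospecimen collection and genomic sequencing."
)

grade = textstat.flesch_kincaid_grade(consent_text)
print(f"Estimated reading grade level: {grade:.1f}")

if grade > 8:  # above the range IRBs prefer
    prompt = (
        "Rewrite the following informed-consent text at a sixth to eighth "
        f"grade reading level without changing its meaning:\n\n{consent_text}"
    )
    # `prompt` would be sent to ChatGPT or another model; a human would review
    # the rewritten text before it reaches prospective participants.
    print(prompt)
```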

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators on how to strike an effective balance between the use of these innovative technologies and the university’s academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Allganize recently closed a $20 million series B round of funding, bringing its total amount raised to $35 million. Graphic via allganize.ai

Houston AI company raises $35M, plans for Japanese IPO

fresh funding

A Houston tech startup with artificial intelligence technology has announced two rounds of funding as it plans to continue developing its product and pursue an IPO in Japan.

Allganize recently closed a $20 million series B round of funding, bringing its total amount raised to $35 million, according to the company. Allganize developed Alli, an all-in-one platform for enabling large language models that’s used by over 200 enterprise and public companies globally, including Sumitomo Mitsui Banking Corporation, Nomura Securities, Hitachi, Fujitsu, and KB Securities.

The funding will go toward expanding corporate-specific LLM app markets and growing enterprise automation AI in the United States, Korea, and Japan. The company has a goal of listing on the Japanese stock exchange by 2025.

"This investment accelerates our journey towards global expansion and achieving a milestone of listing on the Japanese stock exchange by 2025. Our focus is on leveraging LLMs to revolutionize work productivity. We are dedicated to empowering companies to develop custom LLM applications, enabling practical tasks execution and work automation,” Changsu Lee, CEO of Allganize, says in a news release.

In the latest round, InterVest and Murex Partners joined existing investors ATINUM Investment and Stonebridge Ventures.

"Allganize's generative AI-based services have garnered acclaim for their technological excellence and practicality among global financial firms. We foresee substantial revenue growth following this investment," Kang Dong-min, vice president of Murex, says in the release.

Allganize was founded in 2017 in California and has offices in Houston, Seoul, and Tokyo. The company's customers range from insurance and financial services to oil and gas, construction, and more.

WellnessWits, founded by Kike Oduba to enhance patient-physician interaction, has integrated AI with the help of IBM. Photo via WellnessWits.com

Houston health tech startup taps into IBM tech for AI integration

teaming up

A Houston startup aimed at transforming healthcare with solutions for chronic disease and its prevention has teamed up with IBM technology.

WellnessWits has embedded IBM watsonx Assistant into its app for both iOS and Android. By making generative AI part of the app, WellnessWits now boasts an AI-based chat functionality.

That cutting-edge aspect of the platform allows patients to get information on chronic disease more quickly than ever, even before meeting with their physician. It helps with that, too, making it easier to schedule appointments with doctors who specialize in a host of chronic maladies.
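WellnessWits’ exact integration isn’t public, but IBM’s documented pattern for calling an assistant from an app backend looks roughly like this sketch using the ibm-watson Python SDK; the credentials, assistant ID, and patient question below are placeholders.

```python
# Rough sketch of a backend call to IBM's assistant service with the
# ibm-watson SDK; all identifiers below are placeholders, not WellnessWits'.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
assistant = AssistantV2(version="2023-06-15", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

ASSISTANT_ID = "YOUR_ASSISTANT_ID"  # placeholder

# Each patient conversation runs inside its own session.
session = assistant.create_session(assistant_id=ASSISTANT_ID).get_result()

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session["session_id"],
    input={"message_type": "text",
           "text": "What should I know about managing type 2 diabetes?"},
).get_result()

# Text replies arrive in the "generic" list of the output payload.
print(response["output"]["generic"][0].get("text", ""))

assistant.delete_session(assistant_id=ASSISTANT_ID,
                         session_id=session["session_id"])
```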

“I founded WellnessWits as a platform for shared medical appointments where doctors with large patient loads can see them in groups and offer collective shared medical experiences to people suffering from chronic conditions. The goal is to bridge this divide, leveraging the strength of digital communities to enhance the overall well-being and healthcare experiences of individuals everywhere,” WellnessWits Founder and CEO Dr. Kike Oduba, a physician and informatician, writes in a blog post.

Oduba founded the company in 2018. In its early years, she participated in IBM’s First AI Cohort for Underrepresented Founders Program. She believes that by using watsonx Assistant in her technology arsenal, WellnessWits will be a means to “digitize, standardize and automate care plans for patients dealing with chronic diseases.”

Diabetes management is a cornerstone of the company, but Oduba says that WellnessWits is similarly committed to impacting health issues including cancer, chronic pain, mental health, and obesity.

“By partnering with IBM, and academic leaders in institutions like The University of Texas, we aim to extend our reach and impact,” Oduba says.

In addition to the IBM program, Oduba and WellnessWits were selected for MassChallenge's recent cohort.



Houston engineers develop breakthrough device to advance spinal cord treatment

future of health

A team of Rice University engineers has developed an implantable probe over a hundred times smaller than the width of a hair that aims to help develop better treatments for spinal cord disease and injury.

Detailed in a recent study published in Cell Reports, the probe or sensor, known as spinalNET, is used to explore how neurons in the spinal cord process sensation and control movement, according to a statement from Rice. The research was supported by the National Institutes of Health, Rice, the California-based Salk Institute for Biological Studies, and the philanthropic Mary K. Chapman Foundation based in Oklahoma.

The soft and flexible sensor was used to record neuronal activity in freely moving mice with high resolution for multiple days. Historically, tracking this level of activity has been difficult for researchers because the spinal cord and its neurons move so much during normal activity, according to the team.

“We developed a tiny sensor, spinalNET, that records the electrical activity of spinal neurons as the subject performs normal activity without any restraint,” Yu Wu, a research scientist at Rice and lead author of the study, said in a statement. “Being able to extract such knowledge is a first but important step to develop cures for millions of people suffering from spinal cord diseases.”

The team says that before now the spinal cord has been considered a "black box." But the device has already helped the team uncover new findings about the body's rhythmic motor patterns, which drive walking, breathing and chewing.

Lan Luan (from left), Yu Wu, and Chong Xie are working on the breakthrough device. Photo by Jeff Fitlow/Rice University

"Some (spinal neurons) are strongly correlated with leg movement, but surprisingly, a lot of neurons have no obvious correlation with movement,” Wu said in the statement. “This indicates that the spinal circuit controlling rhythmic movement is more complicated than we thought.”

The team said they hope to explore these findings further and aim to use the technology for additional medical purposes.

“In addition to scientific insight, we believe that as the technology evolves, it has great potential as a medical device for people with spinal cord neurological disorders and injury,” Lan Luan, an associate professor of electrical and computer engineering at Rice and a corresponding author on the study, added in the statement.

Rice researchers have developed several implantable, minimally invasive devices to address health and mental health issues.

In the spring, the university announced that the United States Department of Defense had awarded a four-year, $7.8 million grant to the Texas Heart Institute and a Rice team led by co-investigator Yaxin Wang to continue to break ground on a novel left ventricular assist device (LVAD) that could be an alternative to current devices that prevent heart transplantation.

That same month, the university shared news that Professor Jacob Robinson had published findings on minimally invasive bioelectronics for treating psychiatric conditions. The 9-millimeter device can deliver precise and programmable stimulation to the brain to help treat depression, obsessive-compulsive disorder and post-traumatic stress disorder.

Houston clean hydrogen startup to pilot tech with O&G co.

stay gold

Gold H2, a Houston-based producer of clean hydrogen, is teaming up with a major U.S.-based oil and gas company as the first step in launching a 12-month series of pilot projects.

The tentative agreement with the unnamed oil and gas company kicks off the availability of the startup’s Black 2 Gold microbial technology. The technology underpins the startup’s biotech process for converting crude oil into proprietary Gold Hydrogen.

The cleantech startup plans to sign up several oil and gas companies for the pilot program. Gold H2 says it’s been in discussions with companies in North America, Latin America, India, Eastern Europe and the Middle East.

The pilot program is aimed at demonstrating how Gold H2’s technology can transform old oil wells into hydrogen-generating assets. Gold H2, a spinout of Houston-based biotech company Cemvita, says the technology is capable of producing hydrogen that’s cheaper and cleaner than ever before.

“This business model will reshape the traditional oil and gas industry landscape by further accelerating the clean energy transition and creating new economic opportunities in areas that were previously dismissed as unviable,” Gold H2 says in a news release.

The start of the Black 2 Gold demonstrations follows the recent hiring of oil and gas industry veteran Prabhdeep Singh Sekhon as CEO.

“With the proliferation of AI, growth of data centers, and a national boom in industrial manufacturing underway, affordable … carbon-free energy is more paramount than ever,” says Rayyan Islam, co-founder and general partner at venture capital firm 8090 Industries, an investor in Gold H2. “We’re investing in Gold H2, as we know they’ll play a pivotal role in unleashing a new dawn for energy abundance in partnership with the oil industry.”

------

This article originally ran on EnergyCapital.

3 Houston innovators to know this week

who's who

Editor's note: Every week, I introduce you to a handful of Houston innovators to know who are making headlines with news of innovative technology, investment activity, and more. This week's batch includes an e-commerce startup founder, an industrial biologist, and a cellular scientist.

Omair Tariq, co-founder and CEO of Cart.com

Omair Tariq of Cart.com joins the Houston Innovators Podcast to share his confidence in Houston as the right place to scale his unicorn. Photo via Cart.com

Houston-based Cart.com, which operates a multichannel commerce platform, has secured $105 million in debt refinancing from investment manager BlackRock.

The debt refinancing follows a recent $25 million series C extension round, bringing Cart.com’s series C total to $85 million. The scaleup’s valuation now stands at $1.2 billion, making it one of the few $1 billion-plus “unicorns” in the Houston area.

Cart.com was co-founded by CEO Omair Tariq in October 2020. Read more.

Nádia Skorupa Parachin, vice president of industrial biotechnology at Cemvita

Nádia Skorupa Parachin joined Cemvita as vice president of industrial biotechnology. Photo courtesy of Cemvita

Houston-based biotech company Cemvita recently tapped two executives to help commercialize its sustainable fuel made from carbon waste.

Nádia Skorupa Parachin came aboard as vice president of industrial biotechnology, and Phil Garcia was promoted to vice president of commercialization.

Parachin most recently oversaw several projects at Boston-based biotech company Ginkgo Bioworks. She previously co-founded Brazilian biotech startup Integra Bioprocessos. Read more.

Han Xiao, associate professor of chemistry at Rice University

The funds were awarded to Han Xiao, a chemist at Rice University.

A Rice University chemist has landed a $2 million grant from the National Institutes of Health for his work that aims to reprogram the genetic code and explore the role certain cells play in causing diseases like cancer and neurological disorders.

The funds were awarded to Han Xiao, the Norman Hackerman-Welch Young Investigator and associate professor of chemistry, through the NIH's Maximizing Investigators’ Research Award (MIRA) program, which supports medically focused laboratories. Xiao will use the five-year grant to advance his work on noncanonical amino acids.

“This innovative approach could revolutionize how we understand and control cellular functions,” Xiao said in a statement. Read more.