BrainLM is now well-trained enough to be fine-tuned for specific tasks and used to answer questions in other studies. Photo via Getty Images

Houston researchers are part of a team that has created an AI model intended to understand how brain activity relates to behavior and illness.

Scientists from Baylor College of Medicine worked with peers from Yale University, University of Southern California and Idaho State University to make Brain Language Model, or BrainLM. Their research was published as a conference paper at ICLR 2024, a meeting of some of deep learning’s greatest minds.

“For a long time we’ve known that brain activity is related to a person’s behavior and to a lot of illnesses like seizures or Parkinson’s,” Dr. Chadi Abdallah, associate professor in the Menninger Department of Psychiatry and Behavioral Sciences at Baylor and co-corresponding author of the paper, says in a press release. “Functional brain imaging or functional MRIs allow us to look at brain activity throughout the brain, but we previously couldn’t fully capture the dynamic of these activities in time and space using traditional data analytical tools.

"More recently, people started using machine learning to capture the brain complexity and how it relates it to specific illnesses, but that turned out to require enrolling and fully examining thousands of patients with a particular behavior or illness, a very expensive process,” Abdallah continues.

Using 80,000 brain scans, the team trained its model to figure out how brain activities relate to one another. Over time, this work produced BrainLM, a foundation model of brain activity. BrainLM is now well-trained enough to be fine-tuned for specific tasks and used to answer questions in other studies.

Abdallah said that using BrainLM will cut costs significantly for scientists developing treatments for brain disorders. In clinical trials, it can cost “hundreds of millions of dollars,” he said, to enroll numerous patients and treat them over a significant time period. By using BrainLM, researchers can enroll half the subjects because the AI can select the individuals most likely to benefit.

The team found that BrainLM performed successfully in many different samples. That included predicting depression, anxiety and PTSD severity better than other machine learning tools that do not use generative AI.

“We found that BrainLM is performing very well. It is predicting brain activity in a new sample that was hidden from it during the training as well as doing well with data from new scanners and new population,” Abdallah says. “These impressive results were achieved with scans from 40,000 subjects. We are now working on considerably increasing the training dataset. The stronger the model we can build, the more we can do to assist with patient care, such as developing new treatment for mental illnesses or guiding neurosurgery for seizures or DBS.”

For those suffering from neurological and mental health disorders, BrainLM could be a key to unlocking treatments that will make a life-changing difference.

Researchers at Baylor College of Medicine’s Human Genome Sequencing Center have trained an AI assistant to explain genetic test results to patients. Photo via Getty Images

Houston researchers tap into GenAI for communicating genetic test results

hi, tech

Artificial intelligence in the health care setting has a lot of potential, and one Houston institution is looking into one particular use.

Researchers at Baylor College of Medicine’s Human Genome Sequencing Center have trained an AI assistant to explain genetic test results to patients. According to findings published in the Journal of the American Medical Informatics Association (JAMIA), the team has developed generative AI to understand and interpret genetic tests. They have also tested its accuracy against OpenAI’s ChatGPT 3.5.

“We created a chatbot that can provide guidance on general pharmacogenomic testing, dosage implications, and the side effects of therapeutics, and address patient concerns,” explains first author Mullai Murugan in a press release. Murugan is director of software engineering and programming at the Human Genome Sequencing Center. “We see this tool as a superpowered assistant that can increase accessibility and help both physicians and patients answer questions about genetic test results.”

The initial chatbot training specifically targeted pharmacogenomic testing for statins, meaning a patient’s potential response to cholesterol-lowering drugs, as dictated by genetics.

Murugan explains why they decided to create their own chatbot: the key publication on statin pharmacogenomics was published in May 2022, four months after ChatGPT 3.5’s training cutoff date of January 2022. Her team’s technology, by contrast, uses Retrieval Augmented Generation (RAG) and was trained on the most recent guidelines.
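The idea behind Retrieval Augmented Generation is that, instead of relying only on what a model memorized before its training cutoff, the system first retrieves the guideline passages most relevant to a question and hands them to the model as context. A minimal sketch of that retrieval step is below; the passage text, function names, and the toy word-overlap scoring are illustrative assumptions, not Baylor's actual implementation (production systems typically use embedding similarity instead):

```python
import re

# Toy illustration of the retrieval step in Retrieval Augmented Generation
# (RAG): rank stored guideline passages against a question, then build a
# prompt that grounds the language model in the top passages.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the question.
    Real RAG systems use embedding similarity, but the flow is the same."""
    scored = sorted(
        passages,
        key=lambda p: len(tokenize(p) & tokenize(question)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble the context-grounded prompt sent to the language model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these guideline excerpts:\n{context}\nQuestion: {question}"

# Hypothetical guideline snippets standing in for up-to-date literature.
guidelines = [
    "SLCO1B1 decreased function variants raise simvastatin myopathy risk.",
    "Grapefruit juice interacts with some statins.",
    "Patients should report unexplained muscle pain while on statins.",
]

question = "Which gene variant affects simvastatin risk?"
prompt = build_prompt(question, retrieve(question, guidelines))
print(prompt)
```

Because the retrieved passages come from a maintained document store rather than the model's frozen training data, the chatbot's answers can track guidelines published after the model's cutoff, which is the limitation the team cited with ChatGPT 3.5.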

How did the two AI assistants compare? Four experts in cardiology and pharmacogenomics rated both chatbots on accuracy, relevancy, risk management, and language clarity, among other factors. Though the two scored similarly on language clarity, Baylor’s chatbot scored 85 percent in accuracy and 81 percent in relevancy, compared with ChatGPT’s 58 percent in accuracy and 62 percent in relevancy, when asked questions from healthcare providers.

“We are working to fine-tune the chatbot to better respond to certain questions, and we want to get feedback from real patients,” Murugan says. “Based on this study, it is very clear that there is a lot of potential here.” Nonetheless, Murugan emphasized that there is much work still to be done before the program is ready for clinical applications. That includes training the chatbot to explain results in the language used by genetic counselors. Funds from the NIH’s All of Us Research Program helped to make the research possible.

There are three topics in particular that business owners should refresh and/or make sure they include in their HR policies and employee handbook. Photo via Getty Images

3 things Houston companies need to freshen up when it comes to their HR practices

guest column

Just as we typically look to freshen up our homes this time of year, the same needs to be done for employee handbooks. Employee handbooks streamline HR operations, mitigate risks and set expectations to protect a business from negative workplace behavior by outlining employee policies and procedures.

There are three topics in particular that business owners should refresh and/or make sure they include in their HR policies and employee handbook: in-office attendance, social media and artificial intelligence (AI).

In-office attendance

When taking a closer look at hybrid workplace policies, make sure in-office attendance requirements align with your organizational goals. Whether you decide to implement hybrid work permanently or eventually return to the office completely, your return-to-office (RTO) policies should reflect those goals.

Clear expectations are especially important when defining office attendance rules. When attendance policies are set, employees respond best when they are fair, accessible and easily understood. Detailed policies outlining the nuances and consequences can help reduce noncompliance while supporting accountability.

Policies need consistent enforcement to be effective. Hybrid policies set prior to or during the pandemic may now be loosely enforced. A policy may require employees to be in the office three days a week, yet carry no accountability for missing the mandate. Not enforcing attendance policies can give the impression that it is okay to violate other policies, too. Reviewing your policies allows you to course correct and write a policy reflecting your corporate culture and goals. You’ll then be able to reintroduce the attendance policy and enforce it across the board as intended.

Social media

You would be hard-pressed to find an employee without a social media account, whether on TikTok or LinkedIn. If your business does not have a social media policy with guidelines for employees’ online behavior, now is the time to put one in place. If you do have a policy, social media changes quickly enough to warrant an annual review.

Social media policies should set boundaries between personal and professional use of social media. Employee activity on social media outside of work can influence business, as employees are often seen as reflecting the company. It is also important to note that social media policies should be based on input from senior management, HR, legal and IT, not just marketing.

The social media policy should delineate between an employee’s personal and professional use, establish a code of conduct and outline its use as part of crisis communications. Social media can just as easily elevate your brand, and you can potentially ask employees to share positive work experiences online.

Cybersecurity should also be addressed in social media policies. As it has become more common for hackers to infiltrate personal emails and social media accounts, policies can prohibit employees from storing company documents in their personal social media and email accounts for security purposes.

Artificial Intelligence (AI)

AI seems to be changing the way we do business daily. However, policies governing company use of AI are lacking at many organizations. Research from McKinsey finds that only one in five employers has established policies governing employees’ use of AI.

AI technology has already streamlined many business practices, but it can also present major risks. Inaccuracy is one: if employees use generative AI for assistance with writing tasks, for instance, the system may not generate accurate or original information.

As we learn the evolving and complex nuances of AI, creating a policy needs careful attention. You may consider developing an AI team to write a comprehensive, well-researched AI policy tailored to your organization. This working group should gather insights from leaders within the organization, including frontline managers, to fully understand how employees use, or might use, AI. This team should be charged with considering the ethical aspects of AI’s use and ensuring the policy aligns with company values.

One of the most critical elements of the policy is an accountability process or system. The policy should clearly outline any corrective action or disciplinary steps associated with using AI in a manner that harms the business and/or its clients. Just as important, the policy should outline how to use AI and how to avoid misusing it. Since AI continues to evolve month to month, this is a policy that will require more attention and revision throughout the year.

Keeping a critical eye on HR policies is an important part of business success. Setting aside time to review, update and even create new policies now – before being faced with an issue – can potentially mitigate costly challenges down the road.

------

Karen Leal is a performance specialist with Houston-based Insperity, a provider of scalable human resources solutions.

"ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.” Graphic byMiguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

houston voices

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office Kirstin Holzschuh puts it this way: “Scientists are out-of-the box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind, “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has been likened to auto-completion, as well.

However, practical use cases are increasingly emerging, blurring the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT does have a formidable array of competencies, even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in the development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research materials often involve high-level vocabulary, but informed consent is not valid if the participant cannot understand the risks involved. “There is readability software, but it only rates the grade-level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
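To make the distinction concrete: readability software typically computes a score such as the Flesch-Kincaid grade level, which rates text purely by sentence and word length. A rough sketch is below (the choice of Flesch-Kincaid is our illustrative assumption since the source does not name a specific metric, and the vowel-run syllable counter is a common approximation, not an exact count):

```python
import re

# Sketch of what grade-level raters compute: the Flesch-Kincaid grade,
# 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59.
# Note it only scores text; it cannot rewrite it, which is the gap
# ChatGPT-style rewriting could fill.

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (a common heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level of a passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Short, plain sentences score at a low grade level; dense consent-form
# prose scores much higher.
print(round(fk_grade("The study has risks. You may stop at any time."), 1))
```

A consent document scoring above the sixth-to-eighth grade range flagged by such a formula would be a candidate for the kind of AI-assisted simplification Nebeker describes.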

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators as to how to strike an effective balance between the use of these innovative technologies and its academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Allganize recently closed a $20 million series B round of funding, bringing its total amount raised to $35 million. Graphic via allganize.ai

Houston AI company raises $35M, plans for Japanese IPO

fresh funding

A Houston tech startup with artificial intelligence technology has announced that it's raised two rounds of funding as it plans to continue developing its product and pursue an IPO in Japan.

Allganize recently closed a $20 million series B round of funding, bringing its total amount raised to $35 million, according to the company. Allganize developed Alli, an all-in-one platform for enabling large language models that's used by more than 200 enterprise and public companies globally, including Sumitomo Mitsui Banking Corporation, Nomura Securities, Hitachi, Fujitsu, and KB Securities.

The funding will go toward expanding corporate-specific LLM app markets and enterprise automation AI in the United States, Korea, and Japan. The company aims to list on the Japanese stock exchange by 2025.

"This investment accelerates our journey towards global expansion and achieving a milestone of listing on the Japanese stock exchange by 2025. Our focus is on leveraging LLMs to revolutionize work productivity. We are dedicated to empowering companies to develop custom LLM applications, enabling practical tasks execution and work automation,” Changsu Lee, CEO of Allganize, says in a news release.

In the latest round, InterVest and Murex Partners joined existing investors ATINUM Investment and Stonebridge Ventures.

"Allganize's generative AI-based services have garnered acclaim for their technological excellence and practicality among global financial firms. We foresee substantial revenue growth following this investment," Kang Dong-min, vice president of Murex, says in the release.

Allganize was founded in 2017 in California and has offices in Houston, Seoul, and Tokyo. The company's customers range from insurance and financial services to oil and gas, construction, and more.

WellnessWits, founded by Kike Oduba to enhance patient-physician interaction, has integrated AI with the help of IBM. Photo via WellnessWits.com

Houston health tech startup taps into IBM tech for AI integration

teaming up

A Houston startup aimed at transforming healthcare with solutions for chronic disease and its prevention is tapping technology from IBM.

WellnessWits has embedded IBM watsonx Assistant into its app for both iOS and Android. By making generative AI part of the app, WellnessWits now boasts an AI-based chat functionality.

That cutting-edge aspect of the platform allows patients to get information on chronic disease more quickly than ever, even before meeting with their physician. The chat function also makes it easier to schedule appointments with doctors who specialize in a host of chronic maladies.

“I founded WellnessWits as a platform for shared medical appointments where doctors with large patient loads can see them in groups and offer collective shared medical experiences to people suffering from chronic conditions. The goal is to bridge this divide, leveraging the strength of digital communities to enhance the overall well-being and healthcare experiences of individuals everywhere,” WellnessWits Founder and CEO Dr. Kike Oduba, a physician and informatician, writes in a blog post.

Oduba founded the company in 2018. In its early years, she participated in IBM’s First AI Cohort for Underrepresented Founders Program. She believes that by using watsonx Assistant in her technology arsenal, WellnessWits will be a means to “digitize, standardize and automate care plans for patients dealing with chronic diseases.”

Diabetes management is a cornerstone of the company, but Oduba says that WellnessWits is similarly committed to impacting health issues including cancer, chronic pain, mental health, and obesity.

“By partnering with IBM, and academic leaders in institutions like The University of Texas, we aim to extend our reach and impact,” Oduba says.

In addition to the IBM program, Oduba and WellnessWits were selected for MassChallenge's recent cohort.



16 Houston-based companies hailed best places to work by U.S. News

the standouts

More than a dozen Houston-based companies are sharing the spotlight in U.S. News and World Report's collection of the "Best Companies to Work For" in 2024-2025.

The annual report examines publicly traded companies around the world to determine the best employers based on six metrics including work-life balance and flexibility; quality of pay and benefits; job and company stability; career opportunities and professional development; and more. The companies were not ranked, but were included based on reader surveys and publicly available data about each workplace.

New for the 2024-2025 report, U.S. News analyzed 549 companies across 29 different lists, including the overall best companies list — which includes the best 300 companies across the U.S., the United Kingdom, Ireland, Switzerland, Luxembourg, and Bermuda — 24 industry-specific lists, and four regional lists.

There were 16 total companies based around Houston that made the lists, with the majority based in the city itself and the rest in surrounding communities such as Kingwood, Spring, and The Woodlands.

Leading the pack in Houston is construction company Comfort Systems USA, which provides HVAC, plumbing, and electrical services. Comfort Systems employs 15,800 people, brings in $5.57 billion in annual revenue, and has a market cap of $11.21 billion. The company earned high ratings for its job stability, "belongingness," and professional development opportunities, according to U.S. News.

Comfort Systems also made appearances on U.S. News' industry-specific "Best in Construction" list, and the "Best Companies in the South" list.

Independent energy company Marathon Oil was another top-rated Houston employer, with nearly 1,700 employees, an annual revenue stream of $6.38 billion, and a $15.4 billion market cap. The company was specifically highlighted with a "Top Quality of Pay" label, but also boasts high ratings for its employees' work-life balance, job stability, and belongingness.

In addition to being included in the overall "Best Companies" list, Marathon Oil earned recognition in the industry-specific "Best in Energy" list and the "Best Companies in the South" list.

A second Houston-based energy company earning a spot among the top employers is Occidental (also known as Oxy). The petroleum corporation, which has been in operation since 1920, has nearly 12,600 employees and brings in $27.43 billion in revenue every year.

According to U.S. News, Occidental offers many financial, health and wellness, and workplace benefits including 401k matching, tuition assistance, an employee assistance program, flexible work arrangements, and much more. The company was also given a "Top Quality of Pay" designation.

Occidental appeared in U.S. News' "Best in Mining and Raw Materials," the overall "Best Companies," and "Best Companies in the South" lists.

Other top companies to work for in Houston include:

  • Insperity, Kingwood – Best in Professional Services; Best Companies (overall); Best Companies in the South
  • Southwestern Energy Company, Spring – Best in Energy; Best Companies (overall); Best Companies in the South
  • PROS – Best in IT, Software and Services; Best Companies (overall); Best Companies in the South
  • Powell Industries – Best in Manufacturing; Best Companies (overall); Best Companies in the South
  • Stewart – Best in Insurance; Best Companies (overall); Best Companies in the South
  • ConocoPhillips – Best in Energy, Best Companies in the South
  • LGI Homes, The Woodlands – Best in Construction; Best Companies in the South
  • Service Corporation International – Best in Consumer Products and Services; Best Companies in the South
  • Skyward Specialty Insurance – Best Companies in the South
  • Camden Property Trust – Best in Real Estate; Best Companies in the South
  • Cheniere – Best in Energy
  • EOG Resources – Best in Energy
  • Murphy Oil Corporation – Best in Energy

"Prospective and current employees understand the significant impact their employer has on their quality of life," said Carly Chase, vice president of careers at U.S. News and World Report, in a release. "Whether a new grad seeking a company to launch their career, an established professional looking for a change or an HR professional researching the strengths of their company and others, Best Companies to Work For provides a central space to see which companies are meeting their employees' needs best.

Top workplaces around Texas
In all, 42 different employers headquartered in the Lone Star State made it onto U.S. News' 2024-2025 "Best Companies to Work For" lists. The Houston metro area tied with Dallas-Fort Worth for the highest number of top-rated employers, at 16 each. Only one company from West Texas made it onto the list: Diamondback Energy in Midland.

The top companies to work for in Austin are:

  • Cirrus Logic
  • CrowdStrike
  • Digital Realty
  • Silicon Labs
  • E2open
  • Q2

The top companies to work for in San Antonio are:

  • Frost Bank
  • iHeartMedia
  • Rush Enterprises, Inc., New Braunfels

The best places to work for across Dallas-Fort Worth are:

  • Thryv Holdings, Inc., Dallas
  • Comerica, Dallas
  • Veritex Community Bank, Dallas
  • Charles Schwab, Westlake
  • Southwest Airlines, Dallas
  • CMC, Irving
  • Sabre, Southlake
  • Texas Instruments, Dallas
  • Omnicell, Fort Worth
  • Enhabit, Dallas
  • Builders FirstSource, Irving
  • Invitation Homes, Dallas
  • Celanese, Irving
  • Atmos Energy, Dallas
  • Lennox, Richardson
  • Caterpillar, Irving
The full list of the best companies to work for can be found at usnews.com.

------

This article originally ran on CultureMap.

$1M donation to Rice establishes pioneering neuro-policy center in Houston

brainy support

A big donation to Rice University will soon help researchers better understand the workings of the human brain.

Harry Yan and Weiman Gao have bestowed $1 million on the Baker Institute for Public Policy to establish the interdisciplinary Neuro-Policy Program.

Neuro-policy is a newer field that explores how brain health and function can help to fuel economic growth.

“The Neuro-Policy Program is at the forefront of pioneering data analysis, empirical research and policy application,” says Harris Eyre, the lead for the program, as well as a senior fellow in brain health at the Baker Institute, in a news release. “Investing in evidence-based strategies for prevention, diagnosis and treatment can reduce brain and mental health disparities, optimize cognitive development and performance and foster innovation to build more resilient communities.”

Eyre describes the collective value of the human brain as “brain capital.” That’s because neurodegenerative and mental health disorders (including depression, anxiety, brain injury and Alzheimer’s disease) take a real toll on the U.S. economy, Eyre explains.

The Neuro-Policy Program seeks to improve brain performance and, consequently, enhance economic growth, national security, and the nation’s overall standing in brain health. The program’s primary projects include establishing a task force to advise Texas “brain and mind” legislative efforts, building a Texas Brain Capital Dashboard, and collaborating with the UT Dallas Center for BrainHealth on Texas Forward, the Texas brain health plan, all in service of advancing U.S. brain capital policy and investment. These projects are expected to yield deliverables as early as 2026.

“The Neuro-Policy Program aims to leverage the university’s proximity to the Texas Medical Center and the institute’s strong connections to state and federal policymakers. This is an important yet underrepresented area of research that Houston is poised to lead,” says David Satterfield, the director of the Baker Institute.

Yan and Gao said in a press release that they were inspired to gift the grant funds to Eyre and his research after attending a March 28 Baker Institute event on brain health that featured U.S. Rep. Morgan Luttrell, a co-chair of the Congressional Neuroscience Caucus.

"We are honored to support Dr. Harris Eyre and the Neuro-Policy program he leads. Dr. Eyre’s work has greatly impressed us, highlighting the critical importance of brain health in our society today,” say Yan and Gao. “We hope our contribution can inspire further support and advocacy in the field, helping individuals lead healthier lives through a comprehensive approach to prevention.”