
Houston researchers are part of a team that has created an AI model intended to understand how brain activity relates to behavior and illness.

Scientists from Baylor College of Medicine worked with peers from Yale University, University of Southern California and Idaho State University to make Brain Language Model, or BrainLM. Their research was published as a conference paper at ICLR 2024, a meeting of some of deep learning’s greatest minds.

“For a long time we’ve known that brain activity is related to a person’s behavior and to a lot of illnesses like seizures or Parkinson’s,” Dr. Chadi Abdallah, associate professor in the Menninger Department of Psychiatry and Behavioral Sciences at Baylor and co-corresponding author of the paper, says in a press release. “Functional brain imaging or functional MRIs allow us to look at brain activity throughout the brain, but we previously couldn’t fully capture the dynamic of these activities in time and space using traditional data analytical tools.

"More recently, people started using machine learning to capture the brain complexity and how it relates it to specific illnesses, but that turned out to require enrolling and fully examining thousands of patients with a particular behavior or illness, a very expensive process,” Abdallah continues.

Using 80,000 brain scans, the team trained its model to learn how brain activities relate to one another. Over time, this produced BrainLM, a foundation model of brain activity. BrainLM is now well-trained enough to be fine-tuned for specific tasks and used to ask new questions in other studies.
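BrainLM was trained in the self-supervised style typical of foundation models: portions of a recording are hidden and the network learns to predict them from the rest. As a rough, hypothetical sketch of that pattern (the parcel counts, layer sizes and all identifiers below are illustrative assumptions, not the paper's actual configuration), a masked-prediction training step over fMRI parcel time series might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Illustrative dimensions only: brain parcels, timepoints, embedding width.
N_PARCELS, N_TIME, D_MODEL = 424, 200, 256

class MaskedBrainModel(nn.Module):
    """Toy masked-prediction model over fMRI parcel time series."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(N_PARCELS, D_MODEL)   # one token per timepoint
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.decode = nn.Linear(D_MODEL, N_PARCELS)  # reconstruct parcel signals

    def forward(self, x, mask):
        # x: (batch, time, parcels); mask: (batch, time), True = hidden
        x = x.masked_fill(mask.unsqueeze(-1), 0.0)   # blank out masked timepoints
        return self.decode(self.encoder(self.embed(x)))

model = MaskedBrainModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

scan = torch.randn(8, N_TIME, N_PARCELS)               # stand-in for a real fMRI batch
mask = torch.rand(8, N_TIME) < 0.2                     # hide 20% of timepoints
loss = (model(scan, mask) - scan)[mask].pow(2).mean()  # score only the hidden parts
loss.backward()
opt.step()
```

Trained this way at scale, the encoder's internal representations become reusable features for downstream questions, which is what makes the model foundational.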

Abdallah says that using BrainLM will cut costs significantly for scientists developing treatments for brain disorders. In clinical trials, enrolling numerous patients and treating them over a significant period can cost “hundreds of millions of dollars,” he says. By using BrainLM, researchers could enroll half as many subjects, because the AI can select the individuals most likely to benefit.
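That halving is consistent with standard trial power arithmetic: the required sample size scales roughly with the inverse square of the standardized effect size, so a screening step that boosts the effect by a factor of √2 cuts enrollment in half. A back-of-the-envelope check (the effect sizes here are invented for illustration, not taken from the study):

```python
from math import ceil

def n_per_arm(effect_size, z_alpha=1.96, z_power=0.84):
    """Approximate per-arm sample size for a two-arm trial
    (normal approximation, 5% two-sided alpha, 80% power)."""
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# If selecting likely responders raised a standardized effect
# of 0.30 to 0.30 * sqrt(2) ~= 0.42:
print(n_per_arm(0.30))  # 175 subjects per arm without enrichment
print(n_per_arm(0.42))  # 89 per arm with enrichment -- about half
```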

The team found that BrainLM performed well across many different samples, including predicting depression, anxiety and PTSD severity better than other machine learning tools that do not use generative AI.

“We found that BrainLM is performing very well. It is predicting brain activity in a new sample that was hidden from it during the training, as well as doing well with data from new scanners and new populations,” Abdallah says. “These impressive results were achieved with scans from 40,000 subjects. We are now working on considerably increasing the training dataset. The stronger the model we can build, the more we can do to assist with patient care, such as developing new treatments for mental illnesses or guiding neurosurgery for seizures or DBS.”
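Adapting a pretrained model to a clinical prediction task like the severity scoring described above typically means attaching a small prediction head and fine-tuning on labeled scans. A minimal, hypothetical sketch of that pattern, reusing the toy MaskedBrainModel from the earlier snippet (the symptom scale and all shapes are assumptions for illustration):

```python
# Fine-tuning sketch: pool the pretrained encoder's outputs and
# regress a clinical severity score. Reuses model, N_TIME, N_PARCELS
# and D_MODEL from the masked-prediction sketch above.
head = nn.Linear(D_MODEL, 1)                 # scalar severity output
opt = torch.optim.Adam(
    list(model.parameters()) + list(head.parameters()),
    lr=1e-5,                                 # small lr to preserve pretraining
)

scans = torch.randn(8, N_TIME, N_PARCELS)    # stand-in labeled fMRI batch
severity = torch.rand(8, 1) * 30             # e.g. a 0-30 symptom scale (made up)

features = model.encoder(model.embed(scans)) # (batch, time, d_model)
pred = head(features.mean(dim=1))            # mean-pool over time, then regress
loss = nn.functional.mse_loss(pred, severity)
loss.backward()
opt.step()
```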

For those suffering from neurological and mental health disorders, BrainLM could be a key to unlocking treatments that will make a life-changing difference.


Houston researcher builds radar to make self-driving cars safer

eyes on the road

A Rice University researcher is giving autonomous vehicles an “extra set of eyes.”

Current autonomous vehicles (AVs) can have an incomplete view of their surroundings, and challenges like pedestrian movement, low-light conditions and adverse weather only compound these visibility limitations.

Kun Woo Cho, a postdoctoral researcher in the lab of Rice professor of electrical and computer engineering Ashutosh Sabharwal, has developed EyeDAR to help address such issues and enhance the vehicles’ sensing accuracy. Her research was supported in part by the National Science Foundation.

EyeDAR is an orange-sized, low-power, millimeter-wave radar that could be placed at streetlights and intersections. Its design was inspired by that of the human eye. Researchers envision that the low-cost sensors could help ensure that AVs always pick up on emergent obstacles, even when those obstacles fall outside the range of the vehicles’ onboard sensors or visibility is limited.

“Current automotive sensor systems like cameras and lidar struggle with poor visibility such as you would encounter due to rain or fog or in low-lighting conditions,” Cho said in a news release. “Radar, on the other hand, operates reliably in all weather and lighting conditions and can even see through obstacles.”

Signals from a typical radar system scatter when they encounter an obstacle. Some of the signal is reflected back to the source, but most of it is often lost. In the case of AVs, this means that "pedestrians emerging from behind large vehicles, cars creeping forward at intersections or cyclists approaching at odd angles can easily go unnoticed," according to Rice.

EyeDAR, however, captures those otherwise-lost radar reflections, determines their direction and reports the result back to the AV as a sequence of 0s and 1s.

“Like blinking Morse code,” Cho added. “EyeDAR is a talking sensor: it is a first instance of integrating radar sensing and communication functionality in a single design.”
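The release doesn't spell out the encoding, but the “blinking Morse code” description matches simple on-off keying, where the sensor toggles its reflection between two states so the bit pattern itself carries the measurement. A toy illustration of packing a direction estimate into such a bitstream (the preamble, field width and framing are invented for the example; EyeDAR's actual protocol is not public):

```python
N_BITS = 9  # invented resolution: 512 direction bins, ~0.7 degrees each

def encode_direction(angle_deg: float) -> str:
    """Quantize a direction of arrival (0-360 degrees) into an
    on-off bit pattern: '1' = reflect, '0' = absorb."""
    code = int(angle_deg % 360.0 / 360.0 * 2**N_BITS) % 2**N_BITS
    return "101" + format(code, f"0{N_BITS}b")  # fixed preamble, then the angle

def decode_direction(bits: str) -> float:
    assert bits.startswith("101"), "missing preamble"
    return int(bits[3:], 2) / 2**N_BITS * 360.0

frame = encode_direction(73.4)  # e.g. a cyclist approaching at 73 degrees
print(frame, "->", round(decode_direction(frame), 1))  # 101001101000 -> 73.1
```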

In testing, EyeDAR resolved target directions 200 times faster than conventional radar designs.

While EyeDAR currently targets risks associated with AVs, particularly in high-traffic urban areas, researchers also believe the technology behind it could complement artificial intelligence efforts and be integrated into robots, drones and wearable platforms.

“EyeDAR is an example of what I like to call ‘analog computing,’” Cho added in the release. “Over the past two decades, people have been focusing on the digital and software side of computation, and the analog, hardware side has been lagging behind. I want to explore this overlooked analog design space.”

12 winners named at CERAWeek clean tech pitch competition in Houston

top teams

Twelve teams from around the country, including several from Houston, took home top honors at this year's Energy Venture Day and Pitch Competition at CERAWeek.

The fast-paced event, held March 25 and put on by the Rice Alliance, the Houston Energy Transition Initiative and TEX-E, invited 36 industry startups and five Texas-based student teams focused on driving efficiency and advancements in the energy transition to present 3.5-minute pitches before investors and industry partners during CERAWeek’s Agora program.

The competition is a qualifying event for the Startup World Cup, where teams compete for a $1 million investment prize.

PolyJoule, a Boston-based MIT spinout developing conductive polymer battery technology for energy storage, won the Track C competition and was named the overall winner of the pitch event. The company will go on to compete in the Startup World Cup, held this fall in San Francisco.

Rice University's Resonant Thermal Systems won the second-place prize and $15,000 in the student track, known as TEX-E. The team's STREED solution converts high-salinity water into fresh water while recovering valuable minerals.

Teams from the University of Texas won first and third place in the TEX-E competition, bringing home $25,000 and $10,000, respectively.

Companies that pitched in the three industry tracks competed for non-monetary awards. Here are the companies named “most promising” by the judges:

Track A | Industrial Efficiency & Decarbonization

Track B | Advanced Manufacturing, Materials, & Other Advanced Technologies

  • First: Licube, based in Houston
  • Second: ZettaJoule, based in Houston and Maryland
  • Third: Oleo

Track C | Innovations for Traditional Energy, Electricity, & the Grid

  • First: PolyJoule, based in Boston

The teams at this year's Energy Venture Day have collectively raised $707 million in funding, according to Rice. They represent six countries and 12 states. See the full list of companies and investor groups that participated here.

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.