Rice University's new Bachelor of Science in AI will be one of only a few in the country. Photo via Getty Images.

Rice University announced this month that it plans to introduce a Bachelor of Science in AI in the fall 2025 semester.

The new degree program will be part of the university's Department of Computer Science in the George R. Brown School of Engineering and Computing and is one of only a few like it in the country. It aims to focus on "responsible and interdisciplinary approaches to AI," according to a news release from the university.

“We are in a moment of rapid transformation driven by AI, and Rice is committed to preparing students not just to participate in that future but to shape it responsibly,” Amy Dittmar, the Howard R. Hughes Provost and executive vice president for academic affairs, said in the release. “This new major builds on our strengths in computing and education and is a vital part of our broader vision to lead in ethical AI and deliver real-world solutions across health, sustainability and resilient communities.”

John Greiner, an assistant teaching professor of computer science in Rice's online Master of Computer Science program, will serve as the new program's director. Vicente Ordóñez-Román, an associate professor of computer science, was also instrumental in developing and approving the new major.

Until now, Rice students could study AI through elective courses and an advanced degree. The new bachelor's degree program opens up deeper learning opportunities to undergrads by blending traditional engineering and math requirements with other courses on ethics and philosophy as they relate to AI.

“With the major, we’re really setting out a curriculum that makes sense as a whole,” Greiner said in the release. “We are not simply taking a collection of courses that have been created already and putting a new wrapper around them. We’re actually creating a brand new curriculum. Most of the required courses are brand new courses designed for this major.”

Students in the program will also benefit from resources through Rice’s growing AI ecosystem, like the Ken Kennedy Institute, which focuses on AI solutions and ethical AI. The university also opened its new AI-focused "innovation factory," Rice Nexus, earlier this year.

“We have been building expertise in artificial intelligence,” Ordóñez-Román added in the release. “There are people working here on natural language processing, information retrieval systems for machine learning, more theoretical machine learning, quantum machine learning. We have a lot of expertise in these areas, and I think we’re trying to leverage that strength we’re building.”

The new Rice Nexus is partnering with Google Public Sector and Non Sibi Ventures to support high-potential AI-focused startups. Image via Rice University.

Google teams up with Rice University to launch AI-focused accelerator

eyes on AI

Google Public Sector is teaming up with Rice University to drive early-stage artificial intelligence innovation and commercialization via the new Rice AI Venture Accelerator, or RAVA.

RAVA will use Google Cloud technology and work with venture capital firm Non Sibi Ventures to connect high-potential AI-focused startups with public and private sector organizations. The incubator will be led by Rice Nexus, which launched earlier this year in the Ion District as an AI-focused "innovation factory."

“Google Public Sector is proud to partner with a leading institution like Rice University to launch the Rice AI Venture Accelerator,” Reymund Dumlao, director of state and local government and education at Google Public Sector, said in a news release. “By providing access to Google Cloud’s cutting-edge AI, secure cloud infrastructure and expertise, we’re enabling the next generation of AI pioneers to develop solutions that address critical challenges across industries and within the public sector. This unique partnership between education and industry will give participants access to cutting-edge research, leading technologists, specialized resources and a collaborative academic ecosystem, fostering an environment for rapid innovation and growth.”

Participants will have access to Google Public Sector’s AI leadership as well as experts from Rice’s Ken Kennedy Institute, which focuses on AI and computing research. RAVA will be led by Sanjoy Paul, Rice Nexus’ inaugural executive director. Paul previously worked at Accenture LLC as a managing director of technology and is a lecturer in Rice's Department of Computer Science.

Rice Nexus will serve as the physical hub for RAVA, but the program will support AI startups from across the U.S., as part of Rice’s Momentous strategic plan, according to the university.

“This hub enables AI startups to go beyond building minimum viable products that meet industry privacy standards by utilizing the latest AI technologies from Google Cloud,” Paul said in the news release. “Our goal is to maximize the return on investment for our corporate partners, driving meaningful innovation that will have lasting impact on their industries.”

The 10,000-square-foot Rice Nexus space currently serves as home base for several startups with ties to Rice, including Solidec and BeOne Sports.

OpenSafe.AI, a new platform that utilizes AI, data, and hazard and resilience models to support storm response decision makers, has secured an NSF grant. Photo by Eric Turnquist.

Houston-area researchers score $1.5M grant to develop storm response tech platform

fresh funding

Researchers from Rice University have secured a $1.5 million grant from the National Science Foundation to continue their work on improving the safety and resiliency of coastal communities plagued by flooding and hazardous weather.

The Rice team of engineers and collaborators includes Jamie Padgett, Ben Hu, and Avantika Gori along with David Retchless at Texas A&M University at Galveston. The researchers are working in collaboration with the Severe Storm Prediction, Education and Evacuation from Disasters (SSPEED) Center and the Ken Kennedy Institute at Rice and A&M-Galveston’s Institute for a Disaster Resilient Texas.

Together, the team is developing and hopes to deploy “Open-Source Situational Awareness Framework for Equitable Multi-Hazard Impact Sensing using Responsible AI,” or OpenSafe.AI, a new platform that utilizes AI, data, and hazard and resilience models "to provide timely, reliable and equitable insights to emergency response organizations and communities before, during and after tropical cyclones and coastal storm events," reads a news release from Rice.

“Our goal with this project is to enable communities to better prepare for and navigate severe weather by providing better estimates of what is actually happening or might happen within the next hours or days,” Padgett, Rice’s Stanley C. Moore Professor in Engineering and chair of the Department of Civil and Environmental Engineering, says in the release. “OpenSafe.AI will take into account multiple hazards such as high-speed winds, storm surge and compound flooding and forecast their potential impact on the built environment such as transportation infrastructure performance or hazardous material spills triggered by severe storms.”

The OpenSafe.AI platform will be developed to support decision makers before, during and after a storm.

“By combining cutting-edge AI with a deep understanding of the needs of emergency responders, we aim to provide accurate, real-time information that will enable better decision-making in the face of disasters,” adds Hu, associate professor of computer science at Rice.

In the long term, the team hopes to explore how OpenSafe.AI can be applied to and scaled in other regions in need of equitable resilience to climate-driven hazards.

“Our goal is not only to develop a powerful tool for emergency response agencies along the coast but to ensure that all communities, especially the ones most vulnerable to storm-induced damage, can rely on this technology to better respond to and recover from the devastating effects of coastal storms,” adds Gori, assistant professor of civil and environmental engineering at Rice.

------

This article originally ran on EnergyCapital.

Angela Wilkins joins the Houston Innovators Podcast to discuss the intersection of data and health care. Photo courtesy

Houston data scientist joins medical device startup amid AI evolution in the sector

HOUSTON INNOVATORS PODCAST EPISODE 241

When most people hear about Houston startup Starling Medical, they might think about how much potential the medical device company has in the field of urinalysis diagnostics. But that's not quite where Angela Wilkins's head went.

Wilkins explains on the Houston Innovators Podcast that when she met the company's co-founders, Hannah McKenney and Drew Hendricks, she recognized them as very promising startup leaders taking action on a real health care problem. Starling's device can collect urine and run diagnostics right from a patient's toilet.

"It was one of those things where I just thought, 'They're going to get a bunch of data soon,'" Wilkins says. "The opportunity is just there, and I was really excited to come on and build their AI platform and the way they are going to look at data."

For about a year, Wilkins supported the startup as an adviser. Now, she's working hands-on as chief data officer as the company grows.



Wilkins, who serves as a mentor and adviser for several startups, has spent a 20-year career in Houston on all sides of the innovation equation, working first at Baylor College of Medicine before co-founding Mercury Data Science — now OmniScience. Most recently, she served as executive director of the Ken Kennedy Institute at Rice University.

This variety in her resume makes her exceptionally well connected — a benefit to all the startups she works with, she explains. The decision to join a startup team means she gets to work hands-on in building a technology while bringing in her experience from other institutions.

"I think I've really learned how to partner with those institutions," she says on the show. "I've really learned how to make those bridges, and that's a big challenge that startups face."

"When we talk about the Houston innovation ecosystem, it's something we should be doing better at because we have so many startups and so many places that would like to use better technology to solve problems," she continues.

Wilkins has data and artificial intelligence on the mind in everything she does, and she even serves on a committee at the state level to learn and provide feedback on how Texas should be regulating AI.

“At the end of the day, the mission is to put together a report and strategy on how we think Texas should think about AI,” she explains. “It’s beyond just using an algorithm; they need infrastructure.”

Colorado is the first state to pass legislation surrounding AI, and Wilkins says all eyes are on how execution of that new law will go.

"We should have technology that can be double checked to make sure we're applying it in a way that's fair across all demographics. It's obvious that we should do that — it's just very hard," she says.

"Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers." Photo via UH.edu

Houston expert explains health care's inequity problem

guest column

We are currently in the midst of what some have called the "wild west" of AI. Though healthcare is one of the most heavily regulated sectors, the regulation of AI in this space is still in its infancy. The rules are being written as we speak, and we are playing catch-up: learning how to reap the benefits these technologies offer while minimizing their potential harms only after they have already been deployed.

AI systems in healthcare exacerbate existing inequities. We've seen this play out in real-world consequences, from racial bias in the American justice system and credit scoring to gender bias in resume-screening applications. Programs that are designed to bring machine "objectivity" and ease to our systems end up reproducing and upholding biases with no means of accountability.

The algorithm itself is seldom the problem; it is often the data used to train the technology that merits concern. But this is about far more than ethics and fairness. Building AI tools that account for the whole picture of healthcare is fundamental to creating solutions that work.

The Algorithm is Only as Good as the Data

By nature of our own human systems, datasets are almost always partial and rarely ever fair. As Linda Nordling comments in the Nature article "A fairer way forward for AI in healthcare," "this revolution hinges on the data that are available for these tools to learn from, and those data mirror the unequal health system we see today."

Take, for example, the finding that Black people in US emergency rooms are 40 percent less likely to receive pain medication than are white people, and Hispanic patients are 25 percent less likely. Now, imagine the dataset these findings are based on is used to train an algorithm for an AI tool that would be used to help nurses determine if they should administer pain relief medication. These racial disparities would be reproduced and the implicit biases that uphold them would remain unquestioned, and worse, become automated.

We can attempt to mitigate these biases by removing from the training data what we believe causes the bias, but hidden patterns that correlate with demographic data will remain. An algorithm cannot take in the nuances of the full picture; it can only learn from patterns in the data it is presented with.
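To make this concrete, here is a minimal sketch (synthetic data and invented variables, not any real clinical dataset) of how a model trained on historically biased treatment labels can reproduce the disparity even when the demographic column itself is withheld from training:

```python
# Hypothetical illustration: labels encode a historical disparity, and a proxy
# feature correlated with group membership lets the model reconstruct it even
# though the group column is never used as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)              # 1 = historically undertreated group
pain = rng.normal(5, 2, n)                 # reported pain score
proxy = group + rng.normal(0, 0.5, n)      # e.g., a geographic proxy feature

# Historical labels: at equal pain levels, one group was treated less often.
p_treat = 1 / (1 + np.exp(-(pain - 5))) * np.where(group == 1, 0.6, 1.0)
treated = rng.random(n) < p_treat

# "Fairness through unawareness": train without the group column.
X = np.column_stack([pain, proxy])
pred = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

print(f"predicted treatment rate, group 0: {pred[group == 0].mean():.2f}")
print(f"predicted treatment rate, group 1: {pred[group == 1].mean():.2f}")
# The gap persists: the proxy feature carries the demographic signal.
```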

Bias Creep

Data bias creeps into healthcare in unexpected ways. Consider the fact that animal models used in laboratories across the world to discover and test new pain medications are almost entirely male. As a result, many medications, including pain medication, are not optimized for females. So, it makes sense that even common pain medications like ibuprofen and naproxen have been proven to be more effective in men than in women, and that women tend to experience worse side effects from pain medication than men do.

In reality, male rodents aren't perfect test subjects either. Studies have also shown that both female and male rodents' responses to pain levels differ depending on the sex of the human researcher present. The stress response elicited in rodents to the olfactory presence of a sole male researcher is enough to alter their responses to pain.

While this example may seem to be a departure from AI, it is in fact deeply connected — the current treatment choices we have access to were implicitly biased before the treatments ever made it to clinical trials. The challenge of AI equity is not a purely technical problem, but a very human one that begins with the choices that we make as scientists.

Unequal Data Leads to Unequal Benefits

In order for all of society to enjoy the many benefits that AI systems can bring to healthcare, all of society must be equally represented in the data used to train these systems. While this may sound straightforward, it's a tall order to fill.

Data from some populations don't always make it into training datasets. This can happen for a number of reasons: some data may be less accessible, or it may not be collected at all because of systemic barriers such as a lack of access to digital technology, or simply because it is deemed unimportant. Predictive models are created by categorizing data in a meaningful way, but because there is generally less of it, "minority" data tends to appear as an outlier in datasets and is often wiped out as spurious in order to create a cleaner model.
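A small synthetic sketch of that cleaning step (invented numbers, for illustration only) shows how a routine z-score outlier filter can discard a disproportionate share of a small subgroup whose values are normal for that subgroup but rare in the pooled data:

```python
# Hypothetical illustration: a standard z-score "outlier cleaning" pass
# disproportionately removes a 5% subgroup whose distribution differs
# from the majority's.
import numpy as np

rng = np.random.default_rng(1)
majority = rng.normal(0.0, 1.0, 9_500)    # 95% of samples
minority = rng.normal(2.5, 1.0, 500)      # 5% subgroup, shifted distribution
values = np.concatenate([majority, minority])
is_minority = np.r_[np.zeros(9_500, bool), np.ones(500, bool)]

z = (values - values.mean()) / values.std()
kept = np.abs(z) < 2.0                     # the usual cleaning threshold

print(f"majority samples kept: {kept[~is_minority].mean():.1%}")
print(f"minority samples kept: {kept[is_minority].mean():.1%}")
# Values that are typical for the subgroup are read as noise and dropped.
```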

The source of the data matters because it unquestionably affects the outcome and interpretation of healthcare models. In sub-Saharan Africa, young women are diagnosed with breast cancer at a significantly higher rate than in the Global North. This reveals the need for AI tools and healthcare models tailored to this demographic group, as opposed to AI tools used to detect breast cancer that are trained only on mammograms from the Global North. Likewise, a growing body of work suggests that algorithms used to detect skin cancer tend to be less accurate for Black patients because they are trained mostly on images of light-skinned patients. The list goes on.

We are creating tools and systems that have the potential to revolutionize the healthcare sector, but the benefits of these developments will only reach those represented in the data.

So, what can be done?

Part of the challenge in getting bias out of data is that high-volume, diverse, representative datasets are not easy to access. Training datasets that are publicly available tend to be extremely narrow, low-volume and homogeneous: they capture only a partial picture of society. At the same time, a wealth of diverse health data is captured every day in many healthcare settings, but data privacy laws make accessing these more voluminous and diverse datasets difficult.

Data protection is, of course, vital. Big Tech and governments do not have the best track record when it comes to the responsible use of data. However, if transparency, education and consent for the sharing of medical data were more purposefully regulated, far more diverse and high-volume datasets could contribute to fairer representation across AI systems and yield better, more accurate AI-driven healthcare tools.

But data sharing and access are not a complete fix to healthcare's AI problem. Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers. At the end of the day, we want to teach our algorithms to make good choices, but we are still figuring out what good choices should look like for ourselves.

AI presents the opportunity to bring greater personalization to healthcare, but it equally presents the risk of entrenching existing inequalities. We have the opportunity in front of us to take a considered approach to data collection, regulation, and use that will provide a fuller and fairer picture and enable the next steps for AI in healthcare.

------

Angela Wilkins is the executive director of the Ken Kennedy Institute at Rice University.


Houston research team lands $1.2M grant for ovarian cancer research

cancer funding

A team from the University of Houston and MD Anderson Cancer Center is working to find early markers for ovarian cancer.

Backed by a $1.2 million Department of Defense grant, a team led by Tianfu Wu, associate professor of biomedical engineering at UH, is studying autoantibodies that target a tumor suppressor gene that's often mutated in cancers and serves as an early marker of ovarian cancer development.

According to UH, 70 to 75 percent of women with ovarian cancer are diagnosed once the cancer has already spread, when the chances of survival fall below 32 percent. Computational models estimate that detecting ovarian cancer earlier could reduce mortality by 10 percent to 30 percent.

Doctors generally screen for ovarian cancer by measuring rising levels of a protein known as Cancer Antigen 125 (CA125). However, additional biomarkers are needed to improve sensitivity and to detect cancer cases that CA125 testing misses.

“Advancing early detection methodologies is essential to improving patient prognosis and survival outcomes,” Wu said in a news release. “The technological challenges in the early detection of ovarian cancer are multifaceted, primarily due to limited sensitivity of currently available biomarkers and the absence of highly accurate biomarkers that can detect the disease well before clinical diagnosis.”

Wu’s team developed a test that detects thousands of immune reactions simultaneously by searching for immune complexes in an effort to identify new autoantibodies. They found more than 100 significantly upregulated immune complexes in ovarian cancer patients compared to healthy patients.

The team will test 10 to 20 of the biomarker candidates to assess their performance in the early detection of ovarian cancer and will use machine learning to develop algorithms for data analysis and disease prediction.
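The release does not specify the team's modeling approach. As a rough, hypothetical sketch only, a workflow like the one described (a small panel of candidate markers feeding a cross-validated classifier) might look like this, with all data, feature counts and model choices invented for illustration:

```python
# Hypothetical sketch of biomarker-panel modeling: a classifier over a small
# set of candidate markers, scored with cross-validated AUC. All numbers and
# data here are synthetic placeholders, not the team's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_patients, n_markers = 200, 15            # e.g., 10-20 biomarker candidates
X = rng.normal(size=(n_patients, n_markers))
y = rng.integers(0, 2, n_patients)         # 1 = early-stage case, 0 = control
X[y == 1, :3] += 0.8                       # pretend 3 markers are upregulated

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```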

Dr. Robert C. Bast of MD Anderson Cancer Center, who pioneered the early detection of ovarian cancer, is Wu’s partner on the project. Ying Lin, associate professor of industrial and systems engineering at UH, and Dr. Zhen Lu of MD Anderson Cancer Center are also working on the project.

UH researchers make breakthrough in cutting carbon capture costs

carbon breakthrough

A team of researchers at the University of Houston has made two breakthroughs in addressing climate change and potentially reducing the cost of capturing harmful emissions from power plants.

Led by Professor Mim Rahimi at UH’s Cullen College of Engineering, the team released two publications that mark significant strides in carbon capture. The first, published in Nature Communications, introduced a membraneless electrochemical process that cuts the energy requirements and costs of amine-based carbon dioxide capture during the acid gas sweetening process. The second, featured on the cover of ES&T Engineering, demonstrated a vanadium redox flow system capable of both capturing carbon and storing renewable energy.

“These publications reflect our group’s commitment to fundamental electrochemical innovation and real-world applicability,” Rahimi said in a news release. “From membraneless systems to scalable flow systems, we’re charting pathways to decarbonize hard-to-abate sectors and support the transition to a low-carbon economy.”

According to the researchers, the paper “A Membraneless Electrochemically Mediated Amine Regeneration for Carbon Capture” marked the team’s first focus: replacing costly ion-exchange membranes with gas diffusion electrodes. The membranes were the most expensive part of the system as well as a major cause of performance issues and high maintenance costs.

By engineering the gas diffusion electrodes, the researchers achieved more than 90 percent CO2 removal (nearly 50 percent more than traditional approaches). According to PhD student and paper co-author Ahmad Hassan, the process captures CO2 at approximately $70 per metric ton, which is competitive with other innovative scrubbing techniques.

“By removing the membrane and the associated hardware, we’ve streamlined the EMAR workflow and dramatically cut energy use,” Hassan said in the news release. “This opens the door to retrofitting existing industrial exhaust systems with a compact, low-cost carbon capture module.”

The second breakthrough, published by PhD student Mohsen Afshari, demonstrated a reversible flow battery architecture that absorbs CO2 during charging and releases it upon discharge. The results suggest the technology could provide both carbon removal and grid balancing when paired with intermittent renewables such as solar or wind power.

“Integrating carbon capture directly into a redox flow battery lets us tackle two challenges in one device,” Afshari said in the release. “Our front-cover feature highlights its potential to smooth out renewable generation while sequestering CO2.”

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.

Houston cancer diagnostics company enters new phase with patient testing

fighting cancer

A Houston-based company is beginning a push to bring its proven test for central nervous system (CNS) cancers to patients.

“We're going to start rolling out just in Texas and doing patient testing in the state of Texas, first with a few accounts where we've established a relationship, and then we'll continue our rollout through the United States in the next year or so,” says Russell Bradley, president and general manager of CNSide Diagnostics.

Bradley had retired from multinational diagnostics company Abbott Laboratories when he met Marc Hendrick, the CEO of Austin’s Plus Therapeutics, last year. When Hendrick told him about the recent acquisition of CNSide, a company formerly based in San Diego, Bradley says he felt compelled to join in its mission.

CNSide’s assay tests cerebrospinal fluid (CSF) for cancers that have metastasized to the spine or brain, primarily carcinomas and melanomas.

“Typically, they do an MRI, and that won't always show anything. If it's early stage, they do cytology, which is not very sensitive at finding cancer cells in the cerebrospinal fluid. By the time they're diagnosed, it can be very late-stage, and oftentimes, in fact, the studies show that half of these patients don't get treated,” Bradley says.

CNSide, then, is a ray of hope for patients who are often consigned to palliative care. By diagnosing the metastasis sooner, physicians have more treatment options to stop the CNS cancer before it has wreaked havoc. Bradley also points out that once a treatment regimen is underway, doctors can continue to measure the cancer’s progress or lack thereof. He says that, of the roughly 300 neuro-oncologists in the United States, about 200 have already used the test.

Moving from California to Houston briefly slowed progress for CNSide, but now, matters are moving ahead at a steady clip.

“It takes a little bit of time to establish the test in a new location, move the apparatus and establish the processes,” Bradley says. “You have to get the lab accredited, which we just did. So we're now accredited to run patient samples, and we've really just been doing our research samples as part of the clinical studies.”

Texas institutions such as UT Southwestern, MD Anderson Cancer Center, Mays Cancer Center, Baylor Scott & White Health and Texas Oncology are beginning to use the technology.

Bradley, who lives in Austin but spends much of his time in Houston, says that the city has been nothing less than an ideal fit for the needs of his growing company and a lab that’s currently hiring. He praises the logistics potential of being close to a major hub, which will eventually be a key factor for getting lumbar puncture samples from around the country to the lab for quick testing.

“I think the business environment in Texas, generally, and in Houston, specifically, for us and the access to talent with a lot of institutions here around the Houston area that graduate the type of people that we want to employ is remarkable. And I'd say the cherry on top is really just access to world-class institutions like MD Anderson. I think from a holistic and comprehensive point of view, Houston has a lot to offer a company like us,” Bradley says.

And ultimately, what brought Bradley and CNSide to Texas is the quest to prolong the lives of people living with cancer. As he puts it, “It's a true privilege—and I know I speak on behalf of the team at CNSide and Plus—to be able to impact these patients and have the tools at this time in the history of cancer diagnostics to be able to really make a difference.”