Just like any workplace, labs can get toxic. Graphic by Miguel Tovar/University of Houston

There are many types of toxic bosses. The Micromanager. The Narcissist. The Incompetent Boss. The list goes on. But labs led by toxic PIs not only make for an abysmal workplace; they can actually encourage research misconduct.

According to Charles Wood, author of “When lab leaders take too much control,” there are two types of toxic labs most at risk for this type of behavior: the executive model and the competition model.

Executive model

Wood described the executive approach to lab management as one where the mentor sets expectations for trainees, often with a particular goal in mind. In its negative form, this includes specifying experimental outcomes and instructing trainees on particular experiments to achieve a desired result.

It comes as no surprise that experimenting with the answer already in mind goes against scientific principles. Spiking biological samples and manipulating instruments are among the behaviors that have been suspected in labs, according to the U.S. government's Office of Research Integrity. The first line of defense is having the investigators replicate their experiment under close supervision. If the allegations are found to be credible, the consequences of misconduct can include debarment from further federal funding and sequestration of data.

Competition model

The competition model pits graduate students or postdocs against one another: whoever gets the result first is rewarded, while the others are punished. This is a perfect breeding ground for misconduct. Imagine a foreign student whose citizenship status depends on whether they can produce the results their PI wants. Of the competition model, Wood said that what students and postdocs learn can be catastrophic: "competition over collaboration and conformity over creativity." He posits that researchers emerging from a toxic PI's lab may be pushed out of science completely or go on to run their own labs in the same toxic way.

A correlation between mentors and ethical decision-making

Michael D. Mumford, et al. in “Environmental influences on ethical decision making: Climate and environmental predictors of research integrity” (Ethics & Behavior journal) found that for first-year doctoral students, “environmental experiences (including professional leadership) exert stronger effects on ethical decision making than the climate of the work group.”

Wood also noted that, regardless of the management style, certain scientists may be more prone to cheating. However, active involvement and openness by the principal investigator can serve as a preventive measure against this.

What can you do about it?

Chris Sowers, in the "Toxic Boss Syndrome: How To Recover and Get Your Mojo Back" episode of his Better Humans podcast, shared how a few toxic bosses affected his job performance, self-esteem and even his interpersonal relationships. His first piece of advice is to get out quickly, even if you have to take a pay cut; a few thousand dollars, he says, are not worth the hit to your mental and physical health.

Vetting your lab’s PI will help enormously. Does the PI have a good track record of being a fair and kind mentor?

“If your principal investigator starts to exhibit toxic behavior, address this with him or her,” said Wood. He goes on to advise that “if you find yourself in a truly toxic environment, seek guidance from a graduate coordinator, assistant dean or other authority figure who oversees the pre- or postdoctoral training programs — and ask for help in finding another mentor.”

The Big Idea

No one has the time or energy to spend on a toxic workplace, and the costs of manipulating data are far too high. For one, all authors on a paper will be held responsible for the misconduct, not to mention the physical and mental stress a toxic lab will invite into your life.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

"ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.” Graphic by Miguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

houston voices

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and, most importantly, the content that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office Kirstin Holzschuh puts it this way: “Scientists are out-of-the box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The program’s algorithm has also been likened to auto-completion.

However, practical use cases are increasingly emerging, blurring the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies, even if just how competent it is remains to be seen. As researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, told Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, but informed consent is not valid if the participant can’t understand the risks involved. “There is readability software, but it only rates the grade-level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
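As a rough, hypothetical sketch of the workflow Nebeker describes (not a procedure endorsed in the article), the Python snippet below scores a consent paragraph with the third-party textstat package and then asks a chat model to rewrite it at a sixth-to-eighth-grade level. The sample text, prompt wording, and model name are illustrative assumptions; any rewritten output would still need human and IRB review.

```python
# Illustrative sketch only -- sample text, prompt, and model name are placeholders.
# Assumes the `textstat` package and OpenAI Python SDK are installed and
# OPENAI_API_KEY is set in the environment.
import textstat
from openai import OpenAI

consent_text = (
    "Participation in this study involves the longitudinal acquisition of "
    "actigraphy data via a wrist-worn accelerometer to quantify sleep efficiency."
)

# Readability software "only rates the grade level" -- it does not rewrite anything.
print("Estimated grade level:", textstat.flesch_kincaid_grade(consent_text))

# A language model can then be asked to do the rewriting.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Rewrite the following informed-consent text at a sixth-to-eighth "
            "grade reading level without changing its meaning:\n\n" + consent_text
        ),
    }],
)
print(response.choices[0].message.content)  # reviewed by a human before use
```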

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. “We need the use of this technology to be ethical, equitable, and accountable,” Lee says.

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators on how to strike an effective balance between the use of these innovative technologies and the university’s academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Understanding the Fly America Act is important for all researchers planning government-funded travel. Graphic by Miguel Tovar/University of Houston

What Houston researchers should know about the Fly America Act

houston voices

Commercial aviation witnessed a transformative shift following World War II. Initially reserved largely for military purposes, aviation opened to civilians, and commercial air travel began to flourish as passengers embraced its convenience. This surge in air travel highlighted the need to regulate the industry.

In response, the Federal Aviation Administration, or FAA, emerged from the Federal Aviation Act of 1958, later becoming a component of the Department of Transportation under the Department of Transportation Act in 1967.

The evolution of air travel regulation continued in 1974 with the enactment of the Fly America Act. Designed to safeguard U.S. interests in international air travel funded by the government, this act prioritizes U.S. airline carriers. This initiative serves both to support domestic airlines and promote the U.S. aviation industry on a global stage when passengers travel on federal funds.

What some might not know is this legislation can impact researchers and their organizations.

Importance for researchers

The Fly America Act applies not only to federal government employees but also to their dependents, grantees, and other travelers funded by federal resources. Even foreign researchers visiting the U.S. under federally funded grants must choose U.S. flag air carriers for their travel.

A U.S. flag air carrier should not be confused with a traditional flag carrier. These are airlines that have historically been government-owned or are otherwise closely tied to the identity of a particular country, like British Airways or Aeroméxico. U.S. flag air carriers encompass a wide range of airlines, including smaller entities like Air Wisconsin Airlines and Avelo, a new carrier based in Houston. You can find a comprehensive list of U.S. flag air carriers here.

Navigating exceptions

While the Fly America Act carries strict guidelines, exceptions do exist. Instances where no U.S. flag air carriers serve the destination or where such carriers would extend the trip by over 24 hours warrant special consideration. In these cases, maintaining meticulous records is essential in order to validate the use of non-U.S. airlines. A list of exceptions can be found here.

Open Skies Agreements introduce another facet to the Fly America Act. These agreements between the U.S. Government and other countries enable travelers, including researchers, to use foreign air carriers for government-funded international travel. Several countries, including those in the European Union, Australia, Switzerland, and Japan, maintain Open Skies Agreements. Flights on British Airways are no longer permitted under an Open Skies Agreement due to the United Kingdom’s exit from the European Union. Proper documentation is essential when claiming a Fly America Act exception, even if covered under an Open Skies Agreement. Detailed travel itineraries, internal agency forms, and evidence of a Fly America exception must be included in travel receipts.
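The compliance logic described above reads roughly like a decision tree. The sketch below is only a simplified illustration of that logic under stated assumptions (the carrier names, Open Skies partners, and the 24-hour threshold come from the article; the function, field names, and example itinerary are hypothetical), not an official determination tool.

```python
# Simplified, hypothetical illustration of the Fly America decision logic above.
# Not an official compliance tool; carrier and country sets are placeholders.
from dataclasses import dataclass

US_FLAG_CARRIERS = {"United", "Delta", "American", "Air Wisconsin", "Avelo"}   # partial list
OPEN_SKIES_PARTNERS = {"European Union", "Australia", "Switzerland", "Japan"}  # per the article

@dataclass
class Itinerary:
    carrier: str
    carrier_country: str            # e.g. "European Union", "Japan"
    us_carrier_serves_route: bool
    extra_hours_on_us_carrier: int  # added travel time if forced onto a U.S. carrier

def fly_america_allows(trip: Itinerary) -> tuple[bool, str]:
    """Return (allowed, reason) for a federally funded ticket, per the rules above."""
    if trip.carrier in US_FLAG_CARRIERS:
        return True, "U.S. flag air carrier"
    if trip.carrier_country in OPEN_SKIES_PARTNERS:
        return True, "Open Skies Agreement exception (document it in travel receipts)"
    if not trip.us_carrier_serves_route:
        return True, "No U.S. flag carrier serves this destination (keep records)"
    if trip.extra_hours_on_us_carrier > 24:
        return True, "U.S. carrier would extend the trip by more than 24 hours (keep records)"
    return False, "Book a U.S. flag air carrier; cost and convenience are not exceptions"

print(fly_america_allows(Itinerary("Lufthansa", "European Union", True, 3)))
```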

The Big Idea

Compliance with the Fly America Act ensures your travel expenses are reimbursable on government grants. It’s important to remember that cost and convenience are not exceptions to the act. A thorough understanding of the Fly America Act’s provisions and exceptions is a must before you book your next flight.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Absolutism has no bearing on the scientific process. Graphic by Miguel Tovar/University of Houston

Why absolutism has no place in research, according to University of Houston

Houston voices

Science, like politics, can elicit polarizing opinions. But with an ever-expanding body of knowledge — and the especially dizzying flurry of findings during the pandemic — is it fair to say that views on science are becoming more extreme?

Measuring the polarization

“A standard way of measuring polarization in the U.S. is asking Democrats and Republicans how warmly they feel toward members of their own group and members of their outgroup on a feeling thermometer from 0 to 100,” said Jessica Gottlieb, professor at the UH Hobby School of Public Affairs. “The difference in ingroup-outgroup warmth is then considered a measure of polarization. This has been measured by the American National Elections Studies systematically over the past several decades, and indeed the level of affective polarization has been increasing in the U.S.”

“Absolutism is the culprit.”

In an article in Foreign Affairs entitled “How Extremism Went Mainstream,” the author notes that “the tools that authorities use to combat extremists become less useful when the line between the fringe and the center starts to blur.”

Science has traditionally been one such tool. However, this extremism — where everything is black and white — has made its unfortunate way from politics into academia. John Lienhard is a professor at the University of Houston and host of “Engines of Our Ingenuity,” a national radio program that has been telling stories of how creativity has shaped our culture since 1988. According to Lienhard, extremism — as seen within the scientific enterprise — goes by a different name.

“Absolutism is the culprit – the need on the part of so many of us to know The Right Answer. The absolutists in the world will glom onto whatever vehicle suits them – religion, politics, education, and ultimately, science itself,” said Lienhard. In other words, good scientists amend and revise, while “the absolutist finds the honest practice of science hateful,” he says, “because science is a way of life where everything lies open to question.”

A series of approximations

In an article entitled “If You Say Science Is Right You’re Wrong,” professor Naomi Oreskes introduces this quote by Nobel Prize–winning physicist Steven Weinberg:

“Even though a scientific theory is in a sense a social consensus, it is unlike any other sort of consensus in that it is culture-free and permanent.”

Oreskes’ answer is: well, no. Even a modest familiarity with the history of science offers many examples of matters that scientists thought they had resolved, only to discover that they needed to be reconsidered.

Some familiar examples are Earth as the center of the universe, the absolute nature of time and space, the stability of continents and the cause of infectious disease.

Absolutism in science is dangerous. Good scientists know how important it is to ask probing questions. In his book Science versus Absolutism: Science Approaches Truth by a Series of Approximations, the chemist T. Swann Harding asks the question: “What are scientific laws?” He goes on to answer:

“Most people appear to regard them as singularly exact and unalterable things … to violate them brings swift retribution. They are unchanging and eternal in character. Yet the so-called laws of science are really rules pieced together by man on a basis of much observation and experiment.”

In the past, so much of science was just plain wrong – until another researcher came around and amended the original belief (think Galileo). How are our modern times any different? There are still many situations where scientific thought has needed to be amended. Even as recently as the COVID crisis, researchers were revising their thoughts about the spread and contagiousness of the disease.

Allowing for dissent

In a Scientific American blog post entitled “Dissent in Science Is Essential–up to a Point,” Matt Nolan writes, “It is the public who pay the price when marginalized science informs policy. History reminds us this is unsafe territory.” However, Lienhard adds that Einstein set limits on the validity of Newton’s laws, just as nuclear fission provided an amendment to the conservation of energy law. Wherever experimentation is being conducted, there is always a new question to formalize.

Another predicament, referred to as the “file drawer effect,” occurs when a researcher does not get the answer they were expecting and therefore decides not to publish the negative findings. Every answer is meaningful. And sometimes a negative answer — or no answer — is an answer.

Dissent, and perhaps a certain measure of disappointment, is a critical part of scientific inquiry.

The Big Idea

Science can be thought of as the best we know to the degree we understand a given problem at a given place and time. Absolutism has no bearing on the scientific process and in some cases actively obscures and colors that understanding. And that’s not black and white at all; that’s about as gray as it gets.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

If there are fewer grant proposals, does that mean innovation has slowed? UH gets to the bottom of the question. Graphic by Miguel Tovar/University of Houston

University of Houston: What a drop in NSF proposals means for the country's rate of innovation

houston voices

A 17 percent drop in proposals to the National Science Foundation over the past decade may be a mixed blessing.

A consistently rising budget – and this is in billions of dollars – is the preferred way to keep the number of funded proposals climbing. But a dip in the number of proposals submitted can have a similar effect on funding rates: with a much smaller pool of submissions, a larger share of proposals gets funded.

In an article for Science Magazine, author Jeffrey Mervis poses the question: Has there been a decline in grant-worthy ideas? In NSF’s biology sector, Mervis notes that “demand has tumbled by 50 percent over the decade and the chances of winning a grant have doubled, from 18 percent in 2011 to 36 percent in 2020.” NSF’s leadership suggests two possible reasons for this phenomenon.
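The arithmetic behind those figures is straightforward: if the number of awards a directorate can fund stays roughly flat while submissions fall by half, the success rate roughly doubles. The round numbers below are hypothetical, chosen only to mirror the percentages Mervis cites, not actual NSF counts.

```python
# Hypothetical round numbers chosen to mirror the percentages in the article,
# not actual NSF award or proposal counts.
proposals_2011, awards_2011 = 1000, 180   # 18% success rate
proposals_2020 = proposals_2011 * 0.5     # "demand has tumbled by 50 percent"
awards_2020 = awards_2011                 # assume a roughly flat number of awards

rate_2011 = awards_2011 / proposals_2011
rate_2020 = awards_2020 / proposals_2020
print(f"2011 success rate: {rate_2011:.0%}")  # 18%
print(f"2020 success rate: {rate_2020:.0%}")  # 36% -- the rate doubles
```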

Eliminating fixed deadlines

“Dear Colleague” letters went out to numerous directorates within the NSF notifying PIs that fixed deadlines for small projects ($500,000 and less) would be taken out of the equation. For instance, the Directorate for Computer and Information Science and Engineering’s letter read: “in order to allow principal investigators (PIs) more flexibility and to better facilitate interdisciplinary research across disciplines” deadlines would be eliminated. The letter goes on to state that by eliminating fixed deadlines, PIs will be free to think more creatively and collaboratively – without the added stress of a deadline.

Wouldn’t less stress mean more applications? This doesn’t seem to be the case. In one instance, according to another article in Science, proposals dropped when the program ceased annual deadlines and replaced them with rolling deadlines.

Reducing stress for grant reviewers

That article goes on to say that these changes alleviate the strain on grant reviewers without lowering standards. James Olds, assistant director for the Directorate for Biological Sciences, anticipated that NSF program managers would get somewhat of a break and that the new policy would keep university administrators who process the applications from being overwhelmed.

Other factors at play

“It is highly unlikely there was one specific reason for the decrease,” said David Schultz, assistant vice president for Sponsored Projects in the Office of Contracts and Grants at the University of Houston, “but rather multiple factors contributing over time. One potential cause is that many major research institutions are diversifying their funding sources away from NSF and into other federal agencies more aligned with their strategic areas of research interest, such as NIH, DOD, and DOE. The NIH has seen an 11 percent increase in proposals over the same period, from 49,592 in 2011 to 55,038 in 2020.”

Tenure

“Another component is the documented decrease in the number of tenured faculty across the nation. Generally tenured faculty are more research-focused, as their ability to obtain externally funded research is a major criterion for promotion and tenure,” said Schultz. “While this may lead to fewer proposals, it does encourage new tenure track faculty to focus more efforts on the higher likelihood of being awarded an NSF grant.”

The Big Idea

Some people work better and more efficiently when presented with a deadline. Could that be the reason fewer proposals are being turned in? In his article, Mervis deliberates over whether the drop in proposals means that the nation is innovating more slowly than before. But how could that be?

The National Science Board, NSF’s presidentially appointed oversight committee, is trying to get to the bottom of the issue so as to mitigate it. Olds stands by the decision to remove deadlines, pointing out that it should be the strength of the proposal not the threat of a deadline which motivates the research project.

Schultz sees a silver lining. “With fewer proposals being submitted to the NSF, the shift creates an opportunity for smaller, emerging universities to increase their proposal submission and success rates.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Every situation is unique and deserves a one-of-a-kind data management plan, not a one-size-fits-all solution. Graphic by Miguel Tovar/University of Houston

Houston research: Why you need a data management plan

Houston voices

Why do you need a data management plan? It mitigates error, increases research integrity and allows your research to be replicated – despite the “replication crisis” that the research enterprise has been wrestling with for some time.

Error

There are many horror stories of researchers losing their data. You can simply lose your laptop or an external hard drive. Sometimes devices are confiscated when you travel to another country — and you may not get them back. Other errors are more nuanced. For instance, a COVID-19 repository of contact-traced individuals was missing 16,000 results because an Excel spreadsheet cannot hold more than roughly 1 million rows.

Do you think a hard drive is the best repository? Keep in mind that 20 percent of hard drives fail within the first four years. Some researchers merely email their data back and forth and feel like it is “secure” in their inbox.

The human and machine error margins are wide. Continually backing up your results, while good practice, can’t ensure that you won’t lose invaluable research material.

Repositories

According to Reid Boehm, Ph.D., Research Data Management Librarian at the University of Houston Libraries, your best bet is to utilize research data repositories. “The systems and the administrators are focused on file integrity and preservation actions to mitigate loss and they often employ specific metadata fields and documentation with the content,” Boehm says of the repositories. “They usually provide a digital object identifier or other unique ID for a persistent record and access point to these data. It’s just so much less time and worry.”

Integrity

Losing data or being hacked can challenge data integrity. Data breaches not only compromise research integrity; they can also be extremely expensive. According to Security Intelligence, the global average cost of a data breach in a 2019 study was $3.92 million, a 1.5 percent increase from the previous year’s study.

Sample size — how large or small a study was — is another example of how data integrity can affect a study. Retraction Watch reports that approximately 1,500 articles are retracted from prestigious journals annually for “sloppy science.” One of the main reasons papers end up being retracted is that the sample size was too small to be representative.

Replication

Another metric for measuring data integrity is whether or not the experiment can be replicated. The ability to recreate an experiment is paramount to the scientific enterprise. In a Nature article entitled “1,500 scientists lift the lid on reproducibility,” the survey found that “73 percent said that they think that at least half of the papers can be trusted, with physicists and chemists generally showing the most confidence.”

However, according to Kelsey Piper at Vox, “an attempt to replicate studies from top journals Nature and Science found that 13 of the 21 results looked at could be reproduced.”

That's so meta

The archivist Jason Scott said, “Metadata is a love note to the future.” Learning how to keep data about data is a critical part of reproducing an experiment.

“While this will always be determined by a combination of project specifics and disciplinary considerations, descriptive metadata should include as much information about the process as possible,” said Boehm. Details of workflows, any standard operating procedures and parameters of measurement, clear definitions of variables, code and software specifications and versions, and many other signifiers ensure the data will be of use to colleagues in the future.

In other words, making data accessible, useable and reproducible is of the utmost importance. You make reproducing experiments that much easier if you are doing a good job of capturing metadata in a consistent way.
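As a concrete but entirely hypothetical illustration of what “data about data” can look like in practice, the snippet below writes a minimal descriptive-metadata record alongside a dataset. The field names simply echo the kinds of details Boehm lists (workflow, variable definitions, software versions, a persistent identifier) and are not a formal metadata standard.

```python
# Minimal, hypothetical descriptive-metadata record -- field names echo the
# details listed above; this is not a formal standard such as Dublin Core or DataCite.
import json
from datetime import date

metadata = {
    "title": "Example sleep-study actigraphy dataset",
    "created": date.today().isoformat(),
    "creator": "Lastname, Firstname (ORCID: 0000-0000-0000-0000)",
    "workflow": "Raw counts exported nightly, cleaned with clean_actigraphy.py v1.2",
    "variables": {
        "sleep_efficiency": "Percentage of time in bed spent asleep (0-100)",
        "onset_latency_min": "Minutes from lights-out to sleep onset",
    },
    "software": {"python": "3.11", "pandas": "2.2"},
    "persistent_id": "DOI to be assigned by the repository",
}

# Store the record next to the data file so the two travel together.
with open("actigraphy_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```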

The Big Idea

A data management plan includes storage, curation, archiving and dissemination of research data. Your university’s digital librarian is an invaluable resource. They can answer other tricky questions as well, such as: who does data belong to? And when a postdoctoral researcher in your lab leaves the institution, can they take their data with them? Every situation is unique and deserves a one-of-a-kind data management plan, not a one-size-fits-all solution.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


Houston climbs to top 10 spot on North American tech hubs index

tech report

Houston already is the Energy Capital of the World, and now it’s gaining ground as a tech hub.

On Site Selection magazine’s 2026 North American Tech Hub Index, Houston jumped to No. 10 from No. 16 last year. The index relies on data from Site Selection as well as data from CBRE, CompTIA and TeleGeography to rank the continent’s tech hotspots. The index incorporates factors such as internet connectivity, tech talent and facility projects for tech companies.

In 2023, the Greater Houston Partnership noted the region had “begun to receive its due as a prominent emerging tech hub, joining the likes of San Francisco and Austin as a major player in the sector, and as a center of activity for the next generation of innovators and entrepreneurs.”

The Houston-area tech sector employs more than 230,000 people, according to the partnership, and generates an economic impact of $21.2 billion.

Elsewhere in Texas, two other metros fared well on the Site Selection index:

  • Dallas-Fort Worth nabbed the No. 1 spot, up from No. 2 last year.
  • Austin rose from No. 8 last year to No. 7 this year.

San Antonio slid from No. 18 in 2025 to No. 22 in 2026, however.

Two economic development officials in DFW chimed in about the region’s No. 1 ranking on the index:

  • “This ranking affirms what we’ve long seen on the ground — Dallas-Fort Worth is a top-tier technology and innovation center,” said Duane Dankesreiter, senior vice president of research and innovation at the Dallas Regional Chamber. “Our region’s scale, talent base, and diverse strengths … continue to set DFW apart as a national leader.”
  • “Being recognized as the top North American tech hub underscores the strength of the entire Dallas-Fort Worth region as a center of innovation and next-generation technology,” said Robert Allen, president and CEO of the Fort Worth Economic Development Partnership.

While not directly addressing Austin’s Site Selection ranking, Thom Singer, CEO of the Austin Technology Council, recently pondered whether Silicon Hills will grow “into the kind of community that other cities study for the right reasons.”

“Austin tech is not a club. It is not a scene. It is not a hashtag, a happy hour, or any one place or person,” Singer wrote on the council’s blog. “Austin tech is an economic engine and a global brand, built by thousands of people who decided to take a risk, build something, hire others, and be part of a community that is still young enough to reinvent itself.”

South of Austin, Port San Antonio is driving much of that region’s tech activity. Occupied by more than 80 employers, the 1,900-acre tech and innovation campus was home to 18,400 workers in 2024 and created a local economic impact of $7.9 billion, according to a study by Zenith Economics.

“Port San Antonio is a prime example of how innovation and infrastructure come together to strengthen [Texas’] economy, support thousands of good jobs, and keep Texas competitive on the global stage,” said Kelly Hancock, the acting state comptroller.

14 Houston startups starting 2026 with fresh funding

cha-ching

Houston startups closed out the last half of 2025 with major funding news.

Here are 14 Houston companies—from groundbreaking energy leaders to growing space startups—that secured funding in the last six months of the year, according to reporting by InnovationMap and our sister site, EnergyCapitalHTX.com.

Did we miss a funding round? Let us know by emailing innoeditor@innovationmap.com.

Fervo Energy

Fervo Energy has closed an oversubscribed Series E. Photo via Fervo Energy

Houston-based geothermal energy company Fervo Energy closed an oversubscribed $462 million series E funding round, led by new investor B Capital, in December.

The company also secured $205.6 million from three sources in June.

“Fervo is setting the pace for the next era of clean, affordable, and reliable power in the U.S.,” Jeff Johnson, general partner at B Capital, said in a news release.

The funding will support the continued buildout of Fervo’s Utah-based Cape Station development, which is slated to start delivering 100 MW of clean power to the grid beginning in 2026. Cape Station is expected to be the world's largest next-generation geothermal development, according to Fervo. The development of several other projects will also be included in the new round of funding. Continue reading.

Square Robot


Square Robot's technology eliminates the need for humans to enter dangerous and toxic environments. Photo courtesy of Square Robot

Houston- and Boston-based Square Robot Inc. announced a partnership with downstream and midstream energy giant Marathon Petroleum Corp. (NYSE: MPC) last month.

The partnership came with an undisclosed amount of funding from Marathon, which Square Robot says will help "shape the design and development" of its submersible robotics platform and scale its fleet for nationwide tank inspections. Continue reading.

Eclipse Energy

Eclipse Energy and Weatherford International are expected to launch joint projects early this year. Photo courtesy of Eclipse Energy.

Oil and gas giant Weatherford International (NASDAQ: WFRD) made a capital investment for an undisclosed amount in Eclipse Energy in December as part of a collaborative partnership aimed at scaling and commercializing Eclipse's clean fuel technology.

According to a release, joint projects from the two Houston-based companies are expected to launch as soon as this month. The partnership aims to leverage Weatherford's global operations with Eclipse Energy's pioneering subsurface biotechnology that converts end-of-life oil fields into low-cost, sustainable hydrogen sources. Continue reading.

Venus Aerospace 

Lockheed Martin Ventures says it's committed to helping Houston-based Venus Aerospace scale its technology. Photo courtesy Venus Aerospace

Venus Aerospace, a Houston-based startup specializing in next-generation rocket engine propulsion, has received funding from Lockheed Martin Ventures, the investment arm of aerospace and defense contractor Lockheed Martin, for an undisclosed amount, the company announced in November. The product lineup at Lockheed Martin includes rockets.

The investment follows Venus’ successful high-thrust test flight of its rotating detonation rocket engine (RDRE) in May. Venus says it’s the only company in the world that makes a flight-proven, high-thrust RDRE with a “clear path to scaled production.”

Venus says the Lockheed Martin Ventures investment reflects the potential of Venus’ dual-use technology for defense and commercial uses. Continue reading.

Koda Health

Tatiana Fofanova and Dr. Desh Mohan, founders of Koda Health, which recently closed a $7 million series A. Photo courtesy Koda Health.

Houston-based digital advance care planning company Koda Health closed an oversubscribed $7 million series A funding round in October.

The round, led by Evidenced, with participation from Mudita Venture Partners, Techstars and Texas Medical Center, will allow the company to scale operations and expand engineering, clinical strategy and customer success, according to a news release.

The company shared that the series A "marks a pivotal moment," as it has secured investments from influential leaders in the healthcare and venture capital space. Continue reading.

Hertha Metals

U.S. Rep. Morgan Luttrell, a Magnolia Republican, and Hertha Metals founder and CEO Laureen Meroueh toured Hertha’s Conroe plant in August. Photo courtesy Hertha Metals/Business Wire.

Conroe-based Hertha Metals, a producer of sustainable steel, hauled in more than $17 million in venture capital from Khosla Ventures, Breakthrough Energy Fellows, Pear VC, Clean Energy Ventures and other investors.

The money was put toward the construction and launch of its 1-metric-ton-per-day pilot plant in Conroe, where its breakthrough in steelmaking has been undergoing tests. The company uses a single-step process that it claims is cheaper, more energy-efficient and just as scalable as conventional steelmaking methods. The plant is fueled by natural gas or hydrogen.

The company, founded in 2022, plans to break ground early this year on a new plant. The facility will be able to produce more than 9,000 metric tons of steel per year. Continue reading.

Helix Earth Technologies, Resilitix Intelligence and Fluxworks Inc.

Helix Earth's technology is estimated to save up to half of the net energy used in commercial air conditioning, reducing both emissions and costs for operators. Photo via Getty Images

Houston-based Helix Earth Technologies, Resilitix Intelligence and Fluxworks Inc. each secured $1.2 million in federal funding through the Small Business Innovation Research (SBIR) Phase II grant program this fall.

The three grants from the National Science Foundation officially rolled out in early September 2025 and are expected to run through August 2027, according to the NSF. The SBIR Phase II grants support in-depth research and development of ideas that showed potential for commercialization after receiving Phase I grants from government agencies.

However, congressional authority for the program, often called "America's seed fund," expired on Sept. 30, 2025, and has stalled since the recent government shutdown. Continue reading.

Solidec Inc. (pre-seed)


Houston-based Solidec was founded around innovations developed by Rice University associate professor Haotian Wang (far left). Photo courtesy Greentown Labs.

Solidec, a Houston startup that specializes in manufacturing “clean” chemicals, raised more than $2 million in pre-seed funding in August.

Houston-based New Climate Ventures led the oversubscribed pre-seed round, with participation from Plug and Play Ventures, Ecosphere Ventures, the Collaborative Fund, Safar Partners, Echo River Capital and Semilla Climate Capital, among other investors. Continue reading.

Molecule

Sameer Soleja is the founder and CEO of Molecule, which just closed its series B round. Photo courtesy of Molecule Software.

Houston-based energy trading risk management (ETRM) software company Molecule completed a successful series B round for an undisclosed amount, according to a July 16 release from the company.

The raise was led by Sundance Growth, a California-based software growth equity firm. Sameer Soleja, founder and CEO of Molecule, said in the release that the funding will allow the company to "double down on product innovation, grow our team, and reach even more markets." Continue reading.

Rarefied Studios, Solidec Inc. and Affekta

Houston startups were named among the nearly 300 recipients that received a portion of $44.85 million from NASA to develop space technology this fall. Photo via NASA/Ben Smegelsky

Houston-based Rarefied Studios, Solidec Inc. and Affekta were granted awards from NASA this summer to develop new technologies for the space agency.

The companies are among nearly 300 recipients that received a total agency investment of $44.85 million through the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) Phase I grant programs, according to NASA.

Each selected company received $150,000 and, based on its progress, will be eligible to submit proposals for up to $850,000 in Phase II funding to develop prototypes. The SBIR Phase I contracts with small businesses run for six months.

Intuitive Machines 

Intuitive Machines expects to begin manufacturing and flight integration on its orbital transfer vehicle as soon as 2026. Photo courtesy Intuitive Machines.

Houston-based Intuitive Machines secured a $9.8 million Phase II government contract for its orbital transfer vehicle in July.

The contract was expected to push the project through its Critical Design Review phase, which is the final engineering milestone before manufacturing can begin, according to a news release from the company. Intuitive Machines reported that it expected to begin manufacturing and flight integration for its orbital transfer vehicle as soon as this year, once the design review is completed.

The non-NASA contract is for an undisclosed government customer, which Intuitive Machines says reinforces its "strategic move to diversify its customer base and deliver orbital capabilities that span commercial, civil, and national security space operations." Continue reading.

NRG inks new virtual power plant partnership to meet surging energy demands

Powering Up

Houston-based NRG Energy recently announced a new long-term partnership with San Francisco-based Sunrun that aims to meet Texas’ surging energy demands and accelerate the adoption of home battery storage in Texas. The partnership also aligns with NRG’s goal of developing a 1-gigawatt virtual power plant by connecting thousands of decentralized energy sources by 2035.

Through the partnership, the companies will offer Texas residents home energy solutions that pair Sunrun’s solar-plus-storage systems with optimized rate plans and smart battery programming through Reliant, NRG’s retail electricity provider. As new customers enroll, their stored energy can be aggregated and dispatched to the ERCOT grid, according to a news release.

Additionally, Sunrun and NRG will work to create customer plans that aggregate and dispatch distributed power and provide electricity to Texas’ grid during peak periods.

“Texas is growing fast, and our electricity supply must keep pace,” Brad Bentley, executive vice president and president of NRG Consumer, said in the release. “By teaming up with Sunrun, we’re unlocking a new source of dispatchable, flexible energy while giving customers the opportunity to unlock value from their homes and contribute to a more resilient grid.”

Participating Reliant customers will be paid for sharing their stored solar energy through the partnership. Sunrun will be compensated for aggregating the stored capacity.

“This partnership demonstrates the scale and strength of Sunrun’s storage and solar distributed power plant assets,” Sunrun CEO Mary Powell added in the release. “We are delivering critical energy infrastructure that gives Texas families affordable, resilient power and builds a reliable, flexible power plant for the grid.”

In December, Reliant also teamed up with San Francisco tech company GoodLeap to bolster residential battery participation and accelerate the growth of NRG’s virtual power plant network in Texas.

In 2024, NRG partnered with California-based Renew Home to distribute hundreds of thousands of VPP-enabled smart thermostats by 2035 to help households manage and lower their energy costs. At the time, the company reported that its 1-gigawatt VPP would be able to provide energy to 200,000 homes during peak demand.

------

This article originally appeared on EnergyCapitalHTX.com.