As a researcher, what is more important to you than a record of your research and scholarship? A Digital Persistent Identifier, or DPI, distinguishes you and your work from that of your peers. Graphic by Miguel Tovar/University of Houston

Every researcher needs a Digital Persistent Identifier.

As a researcher, what is more important to you than a record of your research and scholarship? A Digital Persistent Identifier, or DPI, distinguishes you and your work from that of your peers – and having one will be mandated for those receiving federal funding. Let’s take a deeper look at why this number is so important. We’ll also compare the different platforms – ORCID, Web of Science, Scopus and Google Scholar – so that you can be sure your publications, presentations, peer reviews and even information about who is citing you are being properly stored and accessed.

ORCID

There are many types of profiles and DPIs that can meet your needs, but there’s no silver bullet. Placing your work on multiple platforms is necessary, according to Andrea Malone, Research Visibility and Impact Coordinator at UH Libraries. She cautions researchers to “be realistic about how many identifiers you can maintain.”

The most popular is ORCID, which stands for Open Researcher and Contributor ID. It’s free to set up, and there is no chance of ending up with multiple ORCID accounts, accidentally or on purpose – it’s assigned to you like a social security number and follows you, the researcher. This comes in especially handy for researchers with common names.
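An ORCID iD is a 16-character identifier (for example, 0000-0002-1825-0097, the sample iD used in ORCID’s own documentation) whose final character is a check digit computed with the ISO 7064 mod 11-2 algorithm. As an illustrative sketch of how that check digit works – not an official ORCID implementation – a validator might look like this:

```python
def is_valid_orcid(orcid: str) -> bool:
    """Check an ORCID iD's format and its ISO 7064 mod 11-2 check digit."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    # Accumulate the first 15 digits per ISO 7064 mod 11-2
    total = 0
    for ch in digits[:15]:
        total = (total + int(ch)) * 2
    remainder = total % 11
    check = (12 - remainder) % 11
    expected = "X" if check == 10 else str(check)
    return digits[15] == expected

print(is_valid_orcid("0000-0002-1825-0097"))  # True
```

A single transposed or mistyped digit fails the check, which is how the registry guards against look-alike iDs.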

An identifier is federally mandated for those receiving governmental funds. It is not specified that ORCID must be that identifier. For example, according to Malone: “a Web of Science profile also assigns an identifier, which would also satisfy the mandate.” But most researchers choose ORCID because it’s publicly available with no access restrictions.

While an ORCID number is free for researchers, there is a subscription fee for an institution to be associated with ORCID. Information will not pre-populate in an ORCID profile and it doesn’t track citation counts – it only shows what you put in. There are, however, linking wizards that allow you to link from Web of Science and Scopus to your ORCID account. If you choose this option, citations will automatically populate in your ORCID profile. It’s up to the researcher to double-check that the information has transferred correctly, however.

Google Scholar

Google Scholar is a profile, not an identifier, so it does not comply with federal funding requirements. It is free, however, and it pulls from the open web. You can choose to have your list of articles updated automatically, review the updates yourself or manually update your articles at any time. Google Scholar also specifies which articles are open access. A PDF or HTML icon will appear on the right-hand side of each citation so you can download the article.

Web of Science vs. Scopus

Scopus is known for covering more journals and a wider range of metrics to evaluate research impact than Web of Science. Different platforms are a go-to for certain disciplines – for example, Web of Science is usually associated with hard sciences, although investigators in the social sciences and humanities also place their work on this platform from time to time. It’s a good idea to check out which platforms others in your discipline are using for their profiles.

Staying up-to-date

Of course, DPIs don’t work as intended unless researchers keep their profiles current. That means you need to check your profile after every publication and every time you switch to a new institution. Just as you would update your CV, you must update your ORCID or other DPI profile.

One tactic Malone suggests is setting a schedule, either biweekly or monthly, to check all your profiles. “One thing that’s helpful is that with all of them, you can set up alerts and create an alert as often as you want,” Malone goes on. “At that time, the program will crawl the content within the source and alert you anytime any of your publications appear in its database.”

The Big Idea

No one tool can paint a complete picture of all your scholarship. Be strategic and intentional about which platforms you use. Consider your audience and the platforms others in your discipline use, and make sure you have an ORCID profile to comply with the federal mandate. But be careful not to sign up for more than you can feasibly maintain and keep current.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


Just like any workplace, labs can get toxic. Graphic by Miguel Tovar/University of Houston

How to navigate toxic lab workplaces, according to UH research

Houston voices

There are many types of toxic bosses. The Micromanager. The Narcissist. The Incompetent Boss. The list goes on. But labs led by toxic PIs not only make for an abysmal workplace; they can actually encourage research misconduct.

According to Charles Wood, author of “When lab leaders take too much control,” there are two types of toxic labs most at risk for this type of behavior: the executive model and the competition model.

Executive model

Wood described the executive approach to lab management as one where the mentor sets expectations for trainees, often with a particular goal in mind. In its negative form, this includes specifying experimental outcomes and instructing trainees on particular experiments to achieve a desired result.

It comes as no surprise that experimenting with the answer already in mind goes against scientific principles. Spiking biological samples, manipulating instruments – all these things have been suspected in labs according to the U.S. government’s Office of Research Integrity. The first line of defense is having the investigators replicate their experiment while being closely supervised. The consequences of misconduct, if the allegations are found to be credible, can include being debarred from further federal funding and having data sequestered.

Competition model

The competition model pits graduate students or postdocs against one another. In this case, whoever gets the result first is rewarded, while the others are punished. This makes a perfect breeding ground for misconduct. Imagine if a foreign student’s citizenship status is affected by whether or not they can produce the results their PI wants them to obtain. Of the competition model, Wood said that what students and postdocs learn can be catastrophic: “competition over collaboration and conformity over creativity.” He posits that researchers graduating from the PI’s toxic lab may be influenced to drop out of science completely or go on to run their own labs in a toxic way.

A correlation between mentors and ethical decision-making

Michael D. Mumford, et al. in “Environmental influences on ethical decision making: Climate and environmental predictors of research integrity” (Ethics & Behavior journal) found that for first-year doctoral students, “environmental experiences (including professional leadership) exert stronger effects on ethical decision making than the climate of the work group.”

Wood also noted that, regardless of the management style, certain scientists may be more prone to cheating. However, active involvement and openness by the principal investigator can serve as a preventive measure against this.

What can you do about it?

Chris Sowers, in the “Toxic Boss Syndrome: How To Recover and Get Your Mojo Back” episode of his Better Humans podcast, shared how a few toxic bosses affected his job performance, self-esteem and even interpersonal relationships. His first piece of advice is to get out quickly, even if you need to take a pay cut – he says a few thousand dollars is not worth the hit to your mental and physical health.

Vetting your lab’s PI will help enormously. Does the PI have a good track record of being a fair and kind mentor?

“If your principal investigator starts to exhibit toxic behavior, address this with him or her,” said Wood. He goes on to advise that “if you find yourself in a truly toxic environment, seek guidance from a graduate coordinator, assistant dean or other authority figure who oversees the pre- or postdoctoral training programs — and ask for help in finding another mentor.”

The Big Idea

No one has time or energy to dedicate to a toxic workplace. The costs are way too high to risk manipulating data. For one, all authors on a paper will be held responsible for the misconduct – not to mention the physical and mental stress a toxic lab will invite into your life.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

“ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.” Graphic by Miguel Tovar/University of Houston

Houston expert: Analyzing the impact of generative AI on research

Houston voices

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office Kirstin Holzschuh puts it this way: “Scientists are out-of-the box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has been likened to auto-completion, as well.

However, practical use cases are increasingly emerging, blurring the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT does have a formidable array of competencies, even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that study information that is conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information could be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, but informed consent is not valid if the participant can’t understand the risks, etc. “There is readability software, but it only rates the grade-level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
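The readability software Nebeker mentions typically relies on formulas like Flesch-Kincaid, which estimate a grade level from sentence length and syllables per word. As a rough illustration of how such a rating works – using a crude vowel-group syllable heuristic, not any particular commercial product – the scoring looks like this:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels; minimum of one
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

plain = "The cat sat on the mat."
dense = "Institutional review boards prioritize comprehensible consent documentation."
print(fk_grade(plain) < fk_grade(dense))  # the plainer sentence rates lower
```

As the quote notes, a formula like this only rates text; rewriting a consent form down to a sixth to eighth grade level is the part a language model could assist with.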

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators as to how to strike an effective balance between the use of these innovative technologies and the university’s academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Understanding the Fly America Act is important for all researchers planning government-funded travel. Graphic by Miguel Tovar/University of Houston

What Houston researchers should know about the Fly America Act

Houston voices

Commercial aviation witnessed a transformative shift following World War II. Initially reserved for military purposes, commercial air travel began to flourish as civilians embraced its convenience. This surge in air travel highlighted the necessity for regulating the industry.

In response, the Federal Aviation Administration, or FAA, emerged from the Federal Aviation Act of 1958, later becoming a component of the Department of Transportation under the Department of Transportation Act in 1967.

The evolution of air travel regulation continued in 1974 with the enactment of the Fly America Act. Designed to safeguard U.S. interests in international air travel funded by the government, this act prioritizes U.S. airline carriers. This initiative serves both to support domestic airlines and promote the U.S. aviation industry on a global stage when passengers travel on federal funds.

What some might not know is this legislation can impact researchers and their organizations.

Importance for researchers

The Fly America Act applies not only to federal government employees but also to their dependents, grantees, and other travelers funded by federal resources. Even foreign researchers visiting the U.S. under federally funded grants must choose U.S. flag air carriers for their travel.

A U.S. flag air carrier should not be confused with a traditional flag carrier. These are airlines that have historically been government-owned or are otherwise closely tied to the identity of a particular country, like British Airways or Aeroméxico. U.S. flag air carriers encompass a wide range of airlines, including smaller entities like Air Wisconsin Airlines and Avelo, a new carrier based in Houston. You can find a comprehensive list of U.S. flag air carriers here.

Navigating exceptions

While the Fly America Act carries strict guidelines, exceptions do exist. Instances where no U.S. flag air carriers serve the destination or where such carriers would extend the trip by over 24 hours warrant special consideration. In these cases, maintaining meticulous records is essential in order to validate the use of non-U.S. airlines. A list of exceptions can be found here.
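The exception rules above reduce to a short decision chain. The sketch below is a simplified illustration of the logic described in this article only – not travel-policy guidance, and the parameter names are hypothetical – so always confirm the details with your grants office before booking:

```python
def fly_america_ok(us_flag_carrier: bool,
                   us_carrier_serves_route: bool = True,
                   added_travel_hours: float = 0.0,
                   open_skies_applies: bool = False) -> bool:
    """Simplified sketch of Fly America Act compliance logic.

    Note what is deliberately absent: cost and convenience,
    which are not exceptions under the act.
    """
    if us_flag_carrier:
        return True   # flying a U.S. flag carrier is always compliant
    if open_skies_applies:
        return True   # covered by an Open Skies Agreement
    if not us_carrier_serves_route:
        return True   # no U.S. flag carrier serves the destination
    if added_travel_hours > 24:
        return True   # U.S. routing would extend the trip by over 24 hours
    return False      # otherwise a foreign carrier is non-compliant
```

Any branch that permits a non-U.S. carrier still requires the meticulous documentation the article describes.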

Open Skies Agreements introduce another facet to the Fly America Act. These agreements between the U.S. Government and other countries enable travelers, including researchers, to use foreign air carriers for government-funded international travel. Several countries, including those in the European Union, Australia, Switzerland, and Japan, maintain Open Skies Agreements. Flights on British Airways are no longer permitted under an Open Skies Agreement due to the United Kingdom’s exit from the European Union. Proper documentation is essential when claiming a Fly America Act exception, even if covered under an Open Skies Agreement. Detailed travel itineraries, internal agency forms, and evidence of a Fly America exception must be included in travel receipts.

The Big Idea

Compliance with the Fly America Act ensures your travel expenses are reimbursable on government grants. It’s important to remember that cost and convenience are not exceptions to the act. A thorough understanding of the Fly America Act’s provisions and exceptions is a must before you book your next flight.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

How this UH pharmacologist tapped into humanities to improve health outcomes in her study. Graphic by Miguel Tovar/University of Houston

Integrating humanities can improve health outcomes, Houston researcher finds

Houston voices

The humanities, encompassing language, literature, arts, and history, are often seen as separate from scientific research in universities.

However, they hold the potential to contribute to improved health outcomes by fostering empathy, understanding of cultural context, and enhancing human connections. Meghana Trivedi, an associate professor of Pharmacy Practice and Translational Research at the University of Houston, has embarked on a research project to explore this transformative potential.

Using media to improve medicine adherence

Trivedi, a pharmacologist focusing on developing new drugs for breast cancer treatment, noticed a common issue among breast cancer patients — non-adherence to medication instructions, particularly among minority patients with low socioeconomic status. This non-adherence increases the risk of recurrence and poses a significant national problem.

Trivedi sought to understand the reasons behind this non-adherence and discovered that traditional educational pamphlets were ineffective in improving medication adherence. Instead, she decided to explore a theater-based approach.

Trivedi’s research involves developing a culturally sensitive educational video for breast cancer patients, written by a local African American playwright in collaboration with the clinical team and input from African American survivors. Additionally, her team is testing the effectiveness of motivational interviewing, where pharmacy students personally engage with patients to address barriers to adherence and suggest solutions. Trivedi believes that these approaches, rooted in the humanities, will demonstrate their impact on health outcomes.

The role of humanities in team science

Recognizing the value of interdisciplinary collaboration, Trivedi incorporated the arts and humanities into her research by partnering with professors from the School of Theatre and Dance and the Valenti School of Communication at UH. This collaboration has highlighted the importance of incorporating arts into STEM fields, leading Trivedi to advocate for STEAM (Science, Technology, Engineering, Arts, and Mathematics).

Trivedi emphasizes that effective team science requires shared interest in the project, transparency, and honesty among team members. “We are a very efficient team working to achieve a common and important goal – to improve treatment outcomes and survival in patients.”

The Big Idea

Trivedi’s research underscores the role of social and cultural factors in medication adherence among specific patient groups. By posing her research question outside of her discipline, Trivedi engaged new collaborators who became invested in — and contributed directly to — positive health outcomes. This interdisciplinary approach, combining the insights of the humanities and the collaborative nature of team science, facilitated the development of novel solutions to enhance medication adherence.

------

This article originally appeared on the University of Houston's The Big Idea. Cory Thaxton, the author of this piece, is the communications coordinator for The Division of Research.

The process of breaking up research is a dangerous one, according to UH's Big Idea. Graphic by Miguel Tovar/University of Houston

University of Houston: Why breaking up research studies does more harm than good

Houston voices

Salami slicing, breaking a paper on a single study up into smaller “slices” and publishing them in more than one journal, is broadly discouraged and considered unethical. Why does the practice persist? What do PIs believe are the benefits of doing it?

Two problems

Breaking up research into smaller slices can have serious consequences for scientific integrity. Researchers, especially younger researchers, may get used to looking at data in smaller pieces and not as a whole. This is dangerous from an academic perspective, as valuable conclusions that could have been derived if the data were presented as a whole are overlooked.

Further, salami slicing of data may do more harm than good to a researcher’s career over time because it significantly reduces their chances of publishing in high impact journals, thereby lessening the weight of their accrued body of work.

One reason salami slicing persists is that there is a veritable avalanche of papers vying for publication. And the number seems to be steadily increasing.

“The academic market became more competitive after the nation’s economic downturn, in 2008,” said Rodica Damian, UH associate professor of psychology. “We saw a lot of competition between those with Ph.D.s and those who were conducting postdoc research. Before, you needed a postdoc if you were in Biology, for instance – but you didn’t need one if you had a doctorate in Psychology. That is no longer the case.”

Another reason salami slicing might persist is that advisors may suggest to a graduate student that they write a series of simpler papers as opposed to a more complex paper consisting of multiple measurements. A researcher might get these “single-lens papers” published much more quickly than their multi-faceted counterparts, due to the amount of background research the journal’s editors need to do on the more complicated papers.

How to avoid self-plagiarism

Salami slicing is not necessarily self-plagiarism, but the practice often does feature a large amount of “text overlap,” according to Miguel Roig, Ph.D., on the website of the Office of Research Integrity for the U.S. Department of Health and Human Services. One example Roig gives is as follows:

“Several months ago, for example, we received a manuscript describing a controlled intervention in a birthing center. The authors sent the results on the mothers to us, and the results on the infants to another journal. The two outcomes would have more appropriately been reported together…The important point is that readers need to be made aware that the data being reported were collected in the context of a larger study.”

The Big Idea

An article published by the NIH suggests this rule of thumb: “If the ‘slice’ of the study in question tests a different hypothesis than the larger study, or has a distinct methodology or population being studied, then it is acceptable to publish it separately.”

However, when a colleague is trying to do a meta-analysis, they need to know what your study actually measured. “One thing you can do to avoid salami slicing,” said Damian, “is to pre-register all the projects you’re planning to do from a specific data set. Then ask yourself, do they use different hypotheses, measures, literatures, etc.”

After all is said and done, are they substantively and methodologically different research papers? If so, they can be sent to different, separate journals.

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.


Early-stage accelerator names 9th Houston cohort

ready to grow

For the ninth time, gBETA is incubating five early-stage Houston startups providing innovative solutions across skincare, human resources, and more.

Global organization gener8tor, along with Downtown Launchpad, started its ninth gBETA Houston cohort last month. The free seven-week, no-equity accelerator program selected five Houston-based founders, providing them with programming, support, and connections to mentors, customers, corporate partners, and investors.

"We're thrilled to continue fostering innovation in Houston and are thankful for our collaboration with Downtown Launchpad as we launch the ninth cohort of gBETA Houston,” says Vanessa Huerta, vice president of gBETA at gener8tor, in a statement.

The program has accelerated 40 Houston companies since its launch in Houston a few years ago. The companies have gone on to raise over $8.6 million in funding and created more than 70 jobs.

“With each new cohort, we witness the power of innovation unleashed,” Muriel Foster, gBETA Houston director, says in the release. “The Spring 2024 gBETA Houston cohort embodies the spirit of relentless creativity and boundless ambition.”

The gBETA Houston Spring 2024 Cohort includes:

  • Cosnetix is innovating within personalized skincare, leveraging genetic and microbial skin profiling to offer users custom skincare product recommendations. The platform has been developed through over 100 customer discovery interviews and is headed for beta-testing.
  • Kannect has created an innovative community engagement platform — already used by 20 organizations — to streamline communication, foster collaboration, and enhance member engagement. The tools can be used by nonprofits, associations, religious institutions, and beyond as a digital dashboard to manage memberships, organize events, and facilitate meaningful interactions.
  • Targeting college grads and career pivoters, No Experience Jobs helps users find entry-level jobs that don’t require experience. In its first three months of launching, NoExperienceJobs.io received more than 72,000 unique monthly visitors, gained over 1,300 newsletter subscribers, generated more than 700,000 social media engagements, and is already revenue-generating.
  • The Roo App partners with bars and restaurants to connect designated drivers with those who need designated driver services. The company currently operates on a web-based platform with over 1,500 current visitors, but plans to launch its mobile application later this year.
  • Yuyo.love is changing the fitness game by providing bilingual fitness classes spanning yoga, pilates, dance, fitness, nutrition, and meditation. The company's hybrid classes draw over 150 participants per class, and it plans to launch the platform this quarter.

Houston organization introduces inaugural cancer-fighting cohort of data science experts

new to hou

The University of Texas MD Anderson Cancer Center is one step closer to ending cancer thanks to its new institute that's focused on data science.

MD Anderson’s goal with the new Institute for Data Science in Oncology (IDSO) is to advance collaborative projects that will bring the power of data science to every decision made at the hospital. And now, the IDSO has announced its inaugural cohort of 33 scientists, clinicians, and staff that will bring it to life, joining the already appointed leadership and focus area co-leads.

“By engaging diverse expertise across all of our mission areas, we will enhance the rich and productive data science ecosystem at MD Anderson to deliver transformational impact for patients,” David Jaffray, Ph.D., director of IDSO and chief technology and digital officer at MD Anderson, says in a press release.

The focus areas for the IDSO are quantitative pathology and medical imaging; single-cell analytics; computational modeling for precision medicine; decision analytics for health; and safety, quality, and access.

The IDSO Affiliates, as they are known, are a mix of existing contributors to the IDSO and team members who were recruited specifically for their expertise in data science. The affiliates were chosen to fulfill a two-year term, during which they will focus on IDSO projects related to the focus areas above. The roster includes a diverse mix of professionals from across the institution.

“Our affiliates bring expertise, perspectives and commitment from across the institution to foster impactful data science in order to tackle the most urgent needs of our patients and their families,” said Caroline Chung, M.D., director of Data Science Development and Implementation for IDSO and chief data officer at MD Anderson. “People and community are at the heart of our efforts, and establishing the IDSO Affiliates is an exciting step in growing the most impactful ecosystem for data science in the world.”

Houston initiative selected for DOE program developing hubs for clean energy innovation

seeing green

Houston has been selected as one of the hubs backed by a new program from the United States Department of Energy that's developing communities for clean energy innovation.

The DOE's Office of Technology Transitions announced the first phase of winners of the Energy Program for Innovation Clusters, or EPIC, Round 3. The local initiative is one of 23 incubators and accelerators that were awarded $150,000 to support programming for energy startups and entrepreneurs.

The Houston-based participant is called "Texas Innovates: Carbon and Hydrogen Innovation and Learning Incubator," or CHILI, and it's a program meant to feed startups into the DOE-recognized HyVelocity program and other regional decarbonization efforts.

EPIC was launched to drive innovation at a local level and to inspire commercial success of energy startups. This is the competition's third year; it wraps up with the winning participant negotiating a three-year cooperative agreement with OTT worth up to $1 million.

“Incubators and Accelerators are uniquely positioned to provide startups things they can't get anywhere else -- mentorship, technology validation, and other critical business development support," DOE Chief Commercialization Officer and Director of OTT Vanessa Z. Chan says in a news release. “The EPIC program allows us to provide consistent funding to organizations who are developing robust programming, resources, and support for innovative energy startups and entrepreneurs.”

CHILI, the only participant in Texas, now moves on to the second phase of the competition, where it will design a project continuation plan and programming over the next seven months, to be submitted in September.

Phase 2 also includes two national pitch competitions with a total of $165,000 in cash prizes up for grabs for startups. The first EPIC pitch event for 2024 will be in June at the 2024 Small Business Forum & Expo in Minneapolis, Minnesota.

Last fall, the DOE selected the Gulf Coast's project, HyVelocity Hydrogen Hub, as one of the seven regions to receive a part of the $7 billion in Bipartisan Infrastructure Law funding. The hub was announced to receive up to $1.2 billion — the most any hub will get.

The DOE's OTT selections are nationwide. Photo via energy.gov

------

This article originally ran on EnergyCapital.