Texans appear to be OK with embracing AI. Image via Getty Images

When new technology emerges, many of us approach it with a certain amount of skepticism. That’s certainly true with artificial intelligence, which is creeping into practically every part of our existence. Pew Research Center surveys show that more than half (52 percent) of Americans are increasingly cautious about the growing presence of AI in their everyday lives.

So, how wary are Texans of AI? A new ranking from ZapCap, whose AI generates captions for videos, gives us a clue. Texas ranks 15th among the states that are most trusting of AI, with a trust score of 85 out of 100.

Translation: Texans appear to be OK with embracing AI.

To develop the ranking, ZapCap collected search volume data for AI-related queries, including terms such as “best AI tools,” “best AI assistants,” “how to use AI” and “ChatGPT.” ZapCap then calculated a trust score based on each state’s search activity and population.
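ZapCap's exact formula is unpublished, but the description above amounts to normalizing AI-related search volume by population and rescaling the result into an index. The short sketch below shows how such a score could be computed; the numbers and the choice to pin the average at 100 are illustrative assumptions, not ZapCap's methodology.

```python
# Minimal sketch of a per-capita "trust score" in the spirit of ZapCap's ranking.
# ZapCap's exact formula is unpublished; this assumes search volume is divided
# by population and rescaled so the average lands at 100 (hypothetical).

state_data = {
    # state: (AI-related searches, population) -- illustrative numbers only
    "California": (44_000_000, 39_000_000),
    "Texas": (25_000_000, 30_500_000),
    "Minnesota": (1_200_000, 5_700_000),
}

def trust_scores(data):
    per_capita = {state: searches / pop for state, (searches, pop) in data.items()}
    average = sum(per_capita.values()) / len(per_capita)
    # Index each state against the average so 100 means "typical" engagement
    return {state: round(100 * value / average, 1) for state, value in per_capita.items()}

print(trust_scores(state_data))
```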

“This research provides an insightful look into AI engagement patterns across the U.S., highlighting the states where AI is most actively explored and potentially trusted,” says ZapCap.

With an off-the-charts score of 116, California tops the list. California “demonstrates extraordinary AI engagement with over 44 million ChatGPT searches and 77,910 Claude.AI queries, marking the highest AI tool adoption rates across all metrics,” says ZapCap.

Here’s the rest of the top five, including their AI trust scores:

  • New York — 108
  • Massachusetts — 106
  • Virginia — 102
  • New Jersey — 99

The state with the least amount of AI trust is Minnesota, which received a ZapCap score of 22.

“What’s fascinating is that innovation is blooming far beyond the usual tech hotspots,” ZapCap’s Jessica Bui said in a release. “While California and New York continue to lead, states like Massachusetts and Virginia are rising as innovation powerhouses. Their rapid adoption of new technology proves that it's not about market size — it’s about fostering a culture where businesses and everyday people feel confident exploring what's next.”

See the full findings here.

ChatGPT enhances creativity and problem-solving in ways that traditional search tools can’t match. Photo courtesy of Rice Business Wisdom

Houston researchers find AI provides fresh perspectives to everyday problems

houston voices

We all know ChatGPT has forever changed how we do business. It’s modified how we access information, compose content and analyze data. It’s revolutionized the future of work and education. And it has transformed the way we interact with technology.

Now, thanks to a recent paper by Jaeyeon (Jae) Chung (Rice Business), we also know it’s making us better problem solvers.

Key findings:

  • A recent study finds ChatGPT-generated ideas are deemed an average of 15% more creative than ideas produced through traditional methods.
  • ChatGPT enhances “incremental,” but not “radical,” innovation.
  • ChatGPT boosts creativity in tasks normally associated with human traits, like empathy-based challenges.

According to the study published in Nature Human Behaviour by Chung and Byung Cheol Lee (University of Houston), ChatGPT enhances our problem-solving abilities, especially with everyday challenges. Whether coming up with gifts for your teenage niece or pondering what to do with an old tennis racquet, ChatGPT has a unique ability to generate creative ideas.

“Creative problem-solving often requires connecting different concepts in a cohesive way,” Chung says. “ChatGPT excels at this because it pulls from a vast range of data, enabling it to generate new combinations of ideas.”

Can ChatGPT Really Make Us More Creative?

Chung and Lee sought to answer a central question: Can ChatGPT help people think more creatively than traditional search engines? To answer this, they conducted five experiments.

Each experiment asked participants to generate ideas for solving challenges, such as how to repurpose household items. Depending on the experiment, participants were assigned to one of two or three groups: one that used ChatGPT; one that used conventional web search tools (e.g., Google); and one that used no external tool at all. The resulting ideas were evaluated by both laypeople and business experts based on two critical aspects of creativity: originality and appropriateness (i.e., practicality).

In one standout experiment, participants were asked to come up with an idea for a dining table that doesn’t exist on the market. The ChatGPT group came up with suggestions like a “rotating table,” a “floating table” and even “a table that adjusts its height based on the dining experience.” According to both judges and experts, the ChatGPT group consistently delivered the most creative solutions.

On average, across all experiments, ideas generated with ChatGPT were rated 15% more creative than those produced by traditional methods. This was true even when tasks were specifically designed to require empathy or involved multiple constraints — tasks we typically assume humans might be better at performing.

However, Chung and Lee also found a caveat: While ChatGPT excels at generating ideas that are “incrementally” new — i.e., building on existing concepts — it struggles to produce “radically” new ideas that break from established patterns. “ChatGPT is an incredible tool for tweaking and improving existing ideas, but when it comes to disruptive innovation, humans still hold the upper hand,” Chung notes.

Charting the Next Steps in AI and Creativity

Chung and Lee’s paper opens the door to many exciting avenues for future study. For example, researchers could explore whether ChatGPT’s creative abilities extend to more complex, high-stakes problem-solving environments. Could AI be harnessed to develop groundbreaking solutions in fields like medicine, engineering or social policy? Understanding the nuances of the collaboration between humans and AI could shape the future of education, work and even (as many people fear) art.

For professionals in creative fields like product design or marketing, the study holds especially significant implications. The ability to rapidly generate fresh ideas can be a game-changer in industries where staying ahead of trends is vital. For now, take a second before you throw out that old tennis racquet. Ask ChatGPT for inspiration — you’ll be surprised at how many ideas it comes up with, and how quickly.

-----

This article originally appeared on Rice Business Wisdom. Based on research by Jaeyeon (Jae) Chung and Byung Cheol Lee (University of Houston). Lee and Chung, “An empirical investigation of the impact of ChatGPT on creativity.” Nature Human Behaviour (2024): https://doi.org/10.1038/s41562-024-01953-1.


Konect.ai is using AI and natural language processing within the automotive retail industry. Image via Getty Images

Houston AI SaaS startup secures $5.5M seed funding from Austin VC

money moves

A Houston startup that's using artificial intelligence and natural language processing to disrupt the retail automotive industry has raised seed funding.

Konect.ai announced a $5.5 million seed investment from Austin-based Silverton Partners. The funding will support the company's development of its software, which aims to improve communications among auto dealerships, auto tech companies and customers.

"This investment from Silverton Partners is a strong validation of our vision and the hard work of our talented team. With this support, we are poised to accelerate our growth and continue to innovate, bringing the most advanced conversational AI products to the automotive retail industry," Cole Kutschinski, president and CEO of Konect.ai, says in a news release.

The Konect.ai platform features tools such as acquisition lead management, SMS campaigns, and after-hours support — all with the goal of enhancing the customer experience and helping dealerships hit their sales goals. With the investment, the company plans to expand its product offerings, grow research and development, and increase its presence in key markets.

"We at Silverton are excited to partner with and support the Konect.ai team as they accelerate their growth," Rob Taylor, operating partner at Silverton Partners, says in the release. "We were drawn not just to the incredible business they have already built but also to their forward vision of utilizing machine intelligence innovation to create delightful experiences for all parties across the automotive retail ecosystem.”

Konect.ai's tech is enhancing communications in the automotive retail industry. Konect.ai

ThirdAI's new PocketLLM app is free to use and completely secure. Photo via Getty Images

Houston startup launches innovative chat tool on its mission to democratize AI

smart tech

Artificial intelligence has big potential to disrupt the technology industry, and one Houston company, founded by a computer science professor at Rice University, is fast on its way to helping lead that future in a convenient and affordable way.

Founded by Anshumali Shrivastava and Tharun Medini, a recent Ph.D. graduate who studied under Shrivastava in Rice's Department of Electrical and Computer Engineering, ThirdAI is building AI deep learning tools that aim to be sustainable and scalable enough to fit the changing needs of the industry. The company is on a mission to democratize AI, Shrivastava tells InnovationMap.

Shrivastava likes to use the word “efficient” when describing what makes ThirdAI different and how its programs can train AI, via multiple avenues, to be what he refers to as “1,000 times more efficient.”

“The carbon footprint of these models are off the charts, and so expensive,” Shrivastava says. “We believe this could be made efficient. … We use the same ideas that were developed, but we do it on a massive scale.”

ThirdAI's latest tool is PocketLLM, a free, multilingual, ChatGPT-like AI training app. Announced earlier this month, the tool gives users access to a personalized chatbot that understands what they are searching for within their documents and can be fine-tuned through neural search to help elaborate their thoughts, according to the company.

ThirdAI's PocketLLM app is free to use. Image courtesy of ThirdAI

The app is private and secure and runs on deep-learning algorithms, according to Vinod Iyengar, head of product at ThirdAI, and no one — not even ThirdAI — has access to the documents except the user.

“Tools exist to help people search text files, but that requires sharing your data with third parties,” says Iyengar in a news release. “Our solution is private and secure, powered by deep learning algorithms. And it returns results lightning fast.”

The process is simple: the user installs the app, uploads any text document files, and clicks "train." Minutes later, they have an AI tool that can process the information in those documents.

“The neural search encourages you to elaborate on your thoughts with details in the discover window and see the difference in results,” says Shrivastava in the release. “It can also be fine-tuned to your tastes by selecting the relevant option and hitting the update button to re-train."
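ThirdAI has not published PocketLLM's internals, but the upload-train-search workflow it describes maps onto a familiar pattern: embed each document once, then rank documents by similarity to the query. The sketch below illustrates that general pattern with the open-source sentence-transformers library; the model name and the "my_documents" folder are assumptions for illustration, not ThirdAI's actual stack.

```python
# Generic embedding-based "neural search" over local text files. This is NOT
# ThirdAI's implementation -- just an illustration of the upload/train/search
# pattern using the open-source sentence-transformers library.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, locally run embedding model

# "Train" step: embed every uploaded document once and keep the vectors.
docs = [p.read_text() for p in Path("my_documents").glob("*.txt")]  # assumed folder
doc_vectors = model.encode(docs, normalize_embeddings=True)

def search(query, top_k=3):
    """Return the documents whose embeddings are closest to the query."""
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vec  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [docs[i] for i in best]

print(search("what should I do with an old tennis racquet?"))
```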

In September 2021, ThirdAI — pronounced "third eye" — raised $6 million in seed funding. The round was backed by three California-based VCs: Neotribe Ventures and Cervin Ventures, which co-led the round, with support from Firebolt Ventures. ThirdAI's technology builds on 10 years of deep learning research and innovation and has the potential to make computing 15 times faster, the company reports.

Anshumali Shrivastava is an associate professor of computer science at Rice University. Photo via rice.edu

Here's how AI-based chat will affect research. Graphic by Miguel Tovar/University of Houston

University of Houston: Here's what intuitive AI and ChatGPT mean for research

HOUSTON VOICES

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office, Kirstin Holzschuh, puts it this way: “Scientists are out-of-the-box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind, “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has been likened to auto-completion, as well.

However, practical use cases are increasingly emerging, which blur the lines between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT does have a formidable array of competencies. Even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, but informed consent is not valid if the participant can’t understand the risks and other details. “There is readability software, but it only rates the grade level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
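As a concrete illustration of Nebeker's suggestion, the snippet below asks a chat model to rewrite consent language at a sixth-to-eighth-grade reading level using the OpenAI Python SDK. The model name and file path are placeholders, and any rewritten text would still need human and IRB review before reaching a participant.

```python
# Sketch of Nebeker's idea: ask a chat model to rewrite consent language at a
# sixth-to-eighth-grade reading level. Uses the OpenAI Python SDK; the model
# name and file path are placeholders, and the output still needs IRB review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

consent_text = open("consent_form.txt").read()  # assumed input file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever is available
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite research consent language at a 6th-8th grade reading "
                "level without changing its meaning or omitting any risks."
            ),
        },
        {"role": "user", "content": consent_text},
    ],
)

print(response.choices[0].message.content)
```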

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators on how to strike an effective balance between the use of these innovative technologies and the university’s academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Here's what you should think about before rolling this hot new technology into your business. Photo via Getty Images

Houston expert: How ChatGPT affects business skills in 2023

guest column

The world has been captivated by ChatGPT, an artificial intelligence program that can understand and respond to questions and statements using natural language, just like humans. It has been trained on a large amount of text data and uses this knowledge to generate helpful and informative responses to users.

As great and resourceful as this can be, there are some major aspects of it that can be harmful in a business setting, such as the inability to make personal connections. A sales manager using AI to write sales scripts cannot incorporate the emotional intelligence needed to form a connection. With the switch to AI and the loss of this personal touch, the company’s close rate drops significantly, and the sales manager’s effort to find a solution may just be to run more numbers in terms of contacts and sales attempts, which usually exacerbates the problem.

Another example of how ChatGPT can hurt your business is relying on it to generate website and social media content. A business owner who believes ChatGPT will do the “heavy lifting” and grow his or her business is overlooking the importance of creating real and experiential marketing experiences for customers. Business owners can inadvertently spend entire budgets on AI-driven social media ads that deliver low returns on investment in many industries, leaving them without the sales numbers to cover those costs and keep the business in operation.

The overarching theme, or danger, behind ChatGPT is that people are relying heavily on it to produce their work. After all, relying on technology is part of our human nature. When great technology is introduced, such as email, teleconferencing or AI-assisted searching, we rarely ask ‘how can this technology assist me?’; instead, we ask ‘how can this technology do things for me?’ The greater the technology, the greater the likelihood humans will take the easiest path.

ChatGPT not only affects businesses, but it also applies to education. Teachers are already seeing a drop in math skills as kids carry around calculators. Just wait until next semester when educators are reading thousands of essays written by ChatGPT.

Just as we would hate to see our children deprive themselves of actual skills, the same can be said for our business people. Some of the main issues that arise from the use of ChatGPT are:

Diminishing Rates of Return

When we embrace technology to the point that we no longer put forth effort from a practiced skill set, we can expect to see declining engagement rates, click-through rates and customer loyalty. As of 2023, the online engagement rate has fallen from 5% to 0.06%. Click-through rates are not faring much better, at a measly 6.3%. As these numbers continue to fall (which they have every year for the past couple of decades now), we continue to just brush it off as the way business is done.

Aversion in the Marketplace

People are becoming so displeased with technology driven processes (as opposed to technology assisted processes) that they have a strong aversion to companies using it. How many social media ads have prompted you to make a purchase? How many times do you provide a bogus email to a website form? When doing a search, how many times do you scroll to the bottom without looking and hit page two because you know you are not getting any real results on the first page anymore?

Yes, ChatGPT is cool and yes, there are some amazing uses you can implement into your business; however, do not look at it as the answer to any and all business problems. Embrace your craft as a leader and avoid subbing the work out to tech - doing so could cost you everything.

As a business owner myself, I am not opposed to technology. I am all in favor of what technology can do. However, there is no denying that the more we look to technology to do the work for us instead of with us, the more we see a drastic decline in the overall skill set of business people without an increase in business success rates.

------

Josh Tolley is the founder of Kingsbridge LLC and is based in Houston.



Trailblazing Houston entrepreneur brings big ideas to new Yahoo Finance show

tune in

Elizabeth Gore, co-founder and president of Houston's Hello Alice, debuted the first episode of her new video podcast series with Yahoo Finance on Thursday, April 24.

The weekly series, known as "The Big Idea with Elizabeth Gore," will focus on providing information and resources to small business owners and sharing stories of entrepreneurship, according to a news release from Yahoo Finance.

“Entrepreneurs and small business owners drive our country’s economy forward. With a record number of small businesses launching in our communities, my goal is to help every citizen live the American Dream. On the Big Idea, we will break down barriers for entrepreneurs and lift up opportunities for every person wanting to be their own boss,” Gore said in the release.

“By hosting the 'Big Idea' on Yahoo Finance, I’m looking forward to elevating business owners’ stories and providing actionable insights to small business owners at a scale like never before. I am blown away to be joining the number one finance news source that is already trusted by so many.”

Gore was joined by Hello Alice co-founder and CEO Carolyn Rodz in the premiere episode, titled "Got a big idea for a small business? Here's your first step," to discuss the steps they took when launching the business.

Gore and Rodz founded Hello Alice in 2017. The fintech platform supports over 1.5 million small businesses across the nation. It has helped owners access affordable capital and credit and distributed over $57 million in grants to businesses across various industries. The company raised a series C round backed by Mastercard last year for an undisclosed amount and reported that the funding brought the company's valuation up to $130 million at the time.

According to Yahoo Finance, Gore's experience and expertise build on its "mission to be the trusted guide of financial information to all investors, and democratize access to quality content."

“Over the past year, we invested in expanding our programming lineup with the launch of new shows and podcasts, and welcomed new financial creators and influencers into our newsroom,” Anthony Galloway, head of content at Yahoo Finance, added in the release. “By diversifying our programming and talent roster, Yahoo Finance is introducing unique points-of-view that make financial topics more engaging, actionable, and personalized. Small business owners are a vital part of our audience, so we’re excited to welcome Elizabeth Gore from Hello Alice, whose insights and expertise will help us serve and connect with this important cohort in meaningful ways.”

The show is available to listen to on Spotify, Apple Podcasts, iHeart, Pandora, and Amazon Music. Streamers can view it on yahoofinance.com, Amazon Prime Video, Samsung TV, Fire TV, Vizio, Haystack, DirecTV and other streaming platforms. Watch the premiere here:

7 top Houston researchers join Rice innovation cohort for 2025

top of class

The Liu Idea Lab for Innovation and Entrepreneurship (Lilie) has announced its 2025 Rice Innovation Fellows cohort, which includes students developing cutting-edge thermal management solutions for artificial intelligence, biomaterial cell therapy for treating lymphedema, and other innovative projects.

The program aims to support Rice Ph.D. students and postdocs in turning their research into real-world solutions and startups.

“Our fourth cohort of fellows spans multiple industries addressing the most pressing challenges of humanity,” Kyle Judah, Lilie’s executive director, said in a news release. “We see seven Innovation Fellows and their professors with the passion and a path to change the world.”

The seven 2025 Innovation Fellows are:

Chen-Yang Lin, Materials Science and Nanoengineering, Ph.D. 2025

Professor Jun Lou’s Laboratory

Lin is a co-founder of HEXAspec, a startup that focuses on creating thermal management solutions for artificial intelligence chips and high-performance semiconductor devices. The startup won the prestigious H. Albert Napier Rice Launch Challenge (NRLC) competition last year and also won this year's Energy Venture Day and Pitch Competition during CERAWeek in the TEX-E student track.

Sarah Jimenez, Bioengineering, Ph.D. 2027

Professor Camila Hochman-Mendez Laboratory

Jimenez is working to make transplantable hearts out of decellularized animal heart scaffolds in the lab and is creating an automated cell delivery system to “re-cellularize” hearts with patient-derived stem cells.

Alexander Lathem, Applied Physics and Chemistry, Ph.D. 2026

Professor James M. Tour Laboratory

Lathem’s research is focused on bringing laser-induced graphene technology from “academia into industry,” according to the university.

Dilrasbonu Vohidova, Bioengineering, Ph.D. 2027

Professor Omid Veiseh Laboratory

Vohidova’s research focuses on engineering therapeutic cells to secrete immunomodulators, aiming to prevent the onset of autoimmunity in Type 1 diabetes.

Alexandria Carter, Bioengineering, Ph.D. 2027

Professor Michael King Laboratory

Carter is developing a device that offers personalized patient disease diagnostics by using 3D culturing and superhydrophobicity.

Alvaro Moreno Lozano, Bioengineering, Ph.D. 2027

Professor Omid Veiseh Lab

Lozano is using novel biomaterials and cell engineering to develop new technologies for patients with Type 1 Diabetes. The work aims to fabricate a bioartificial pancreas that can control blood glucose levels.

Lucas Eddy, Applied Physics and Chemistry, Ph.D. 2025

Professor James M. Tour Laboratory

Eddy specializes in building and using electrothermal reaction systems for nanomaterial synthesis, waste material upcycling and per- and polyfluoroalkyl substances (PFAS) destruction.

This year, the Liu Lab also introduced its first cohort of five commercialization fellows. See the full list here.

The Rice Innovation Fellows program assists doctoral students and postdoctoral researchers with training and support to turn their ideas into ventures. Alumni have raised over $20 million in funding and grants, according to Lilie. Last year's group included 10 doctoral and postdoctoral students working in fields such as computer science, mechanical engineering and materials science.

“The Innovation Fellows program helps scientist-led startups accelerate growth by leveraging campus resources — from One Small Step grants to the Summer Venture Studio accelerator — before launching into hubs like Greentown Labs, Helix Park and Rice’s new Nexus at The Ion,” Yael Hochberg, head of the Rice Entrepreneurship Initiative and the Ralph S. O’Connor Professor in Entrepreneurship, said in the release. “These ventures are shaping Houston’s next generation of pillar companies, keeping our city, state and country at the forefront of innovation in mission critical industries.”

Houston startup Collide secures $5M to grow energy-focused AI platform

Fresh Funds

Houston-based Collide, a provider of generative artificial intelligence for the energy sector, has raised $5 million in seed funding led by Houston’s Mercury Fund.

Other investors in the seed round include Bryan Sheffield, founder of Austin-based Parsley Energy, which was acquired by Dallas-based Pioneer Natural Resources in 2021; Billy Quinn, founder and managing partner of Dallas-based private equity firm Pearl Energy Investments; and David Albin, co-founder and former managing partner of Dallas-based private equity firm NGP Capital Partners.

“(Collide) co-founders Collin McLelland and Chuck Yates bring a unique understanding of the oil and gas industry,” Blair Garrou, managing partner at Mercury, said in a news release. “Their backgrounds, combined with Collide’s proprietary knowledge base, create a significant and strategic moat for the platform.”

Collide, founded in 2022, says the funding will enable the company to accelerate the development of its GenAI platform. GenAI creates digital content such as images, videos, text, and music.

Originally launched by Houston media organization Digital Wildcatters as “a professional network and digital community for technical discussions and knowledge sharing,” the company says it will now shift its focus to rolling out its enterprise-level, AI-enabled solution.

Collide explains that its platform gathers and synthesizes data from trusted sources to deliver industry insights for oil and gas professionals. Unlike platforms such as OpenAI, Perplexity, and Microsoft Copilot, Collide’s platform “uniquely accesses a comprehensive, industry-specific knowledge base, including technical papers, internal processes, and a curated Q&A database tailored to energy professionals,” the company said.

Collide says its approximately 6,000 platform users span 122 countries.

---

This story originally appeared on our sister site, EnergyCapitalHTX.com.