Konect.ai is using AI and natural language processing within the automotive retail industry. Image via Getty Images

A Houston startup that's using artificial intelligence and natural language processing to disrupt the retail automotive industry has raised seed funding.

Konect.ai announced a $5.5 million seed investment from Austin-based Silverton Partners. The funding will support the company's development of its software, which aims to improve communications among auto dealerships, auto tech companies, and customers.

"This investment from Silverton Partners is a strong validation of our vision and the hard work of our talented team. With this support, we are poised to accelerate our growth and continue to innovate, bringing the most advanced conversational AI products to the automotive retail industry," Cole Kutschinski, president and CEO of Konect.ai, says in a news release.

The Konect.ai platform features tools such as acquisition lead management, SMS campaigns, and after-hours support — all with a goal of enhancing the customer experience and helping achieve sales goals. With the investment, the company plans to expand its product offerings, grow research and development, and increase its presence in key markets.

"We at Silverton are excited to partner with and support the Konect.ai team as they accelerate their growth," Rob Taylor, operating partner at Silverton Partners, says in the release. "We were drawn not just to the incredible business they have already built but also to their forward vision of utilizing machine intelligence innovation to create delightful experiences for all parties across the automotive retail ecosystem.”

Konect.ai's tech is enhancing communications in the automotive retail industry. Konect.ai

ThirdAI's new PocketLLM app is free to use and completely secure. Photo via Getty Images

Houston startup launches innovative chat tool on its mission to democratize AI

smart tech

Artificial intelligence has big potential to disrupt the technology industry, and one Houston company, founded by a Rice University computer science professor, is fast on its way to helping lead that future in a convenient and affordable way.

Founded by Anshumali Shrivastava and Tharun Medini, a recent Ph.D. who graduated under Shrivastava from Rice's Department of Electrical and Computer Engineering, ThirdAI is building AI deep learning tools that aim to be sustainable and scalable to fit the changing needs of the industry. The company is on a mission to democratize AI, Shrivastava tells InnovationMap.

Shrivastava likes to use the word “efficient” when describing what makes ThirdAI different and how its programs can train AI, via multiple avenues, to be what he refers to as “1,000 times more efficient.”

“The carbon footprint of these models are off the charts, and so expensive,” Shrivastava says. “We believe this could be made efficient. … We use the same ideas that were developed, but we do it on a massive scale.”

ThirdAI's latest tool is PocketLLM, a multilingual, ChatGPT-like AI training app. Announced earlier this month, the tool is free. According to the company, users get access to a personalized chatbot that understands what they are searching for within their documents and can be fine-tuned, through a neural search, to help elaborate on their thoughts.

ThirdAI's PocketLLM app is free to use. Image courtesy of ThirdAI

The app is private and secure and runs on deep-learning algorithms, according to Vinod Iyengar, head of product at ThirdAI. No one — not even ThirdAI — has access to the documents except the user.

“Tools exist to help people search text files, but that requires sharing your data with third parties,” says Iyengar in a news release. “Our solution is private and secure, powered by deep learning algorithms. And it returns results lightning fast.”

The process involves installing the app, uploading text document files, and clicking "train." Minutes later, the user has an AI tool that can process the information in those documents.

“The neural search encourages you to elaborate on your thoughts with details in the discover window and see the difference in results,” says Shrivastava in the release. “It can also be fine-tuned to your tastes by selecting the relevant option and hitting the update button to re-train."
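PocketLLM's internals aren't detailed in this article, but the workflow it describes (index your documents, search them semantically, then refine results with feedback) can be sketched with open-source tools. The snippet below is a rough illustration using the sentence-transformers library and simple Rocchio-style relevance feedback; it is not ThirdAI's code, and the embedding model, documents, and weights are placeholder choices.

```python
# Illustrative sketch of the "upload documents, train, search, refine" loop
# described above. This is NOT ThirdAI's PocketLLM implementation; it uses
# the open-source sentence-transformers library and Rocchio-style feedback
# as stand-ins for the app's neural search.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

documents = [
    "Quarterly revenue grew 12 percent on strong subscription sales.",
    "The deep learning model was trained entirely on commodity CPUs.",
    "Employee onboarding checklist and HR contact information.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)  # the "train" step

def search(query_embedding, top_k=2):
    """Return (score, document) pairs ranked by cosine similarity."""
    scores = util.cos_sim(query_embedding, doc_embeddings)[0]
    return sorted(zip(scores.tolist(), documents), reverse=True)[:top_k]

query = model.encode("How was the model trained?", convert_to_tensor=True)
print(search(query))

# Feedback: if the user marks documents[1] as relevant, nudge the query
# toward it and re-rank -- a crude analogue of selecting the relevant
# result and hitting "update" to re-train.
refined_query = 0.7 * query + 0.3 * doc_embeddings[1]
print(search(refined_query))
```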

In September of 2021, ThirdAI — pronounced "third eye" — raised $6 million in seed funding. The round was backed by three California-based VCs: Neotribe Ventures and Cervin Ventures co-led it, with support from Firebolt Ventures. ThirdAI's technology builds on 10 years of deep learning research and innovation, and the company reports it has the potential to make computing 15 times faster.

Anshumali Shrivastava is an associate professor of computer science at Rice University. Photo via rice.edu

Here's how AI-based chat will affect research. Graphic by Miguel Tovar/University of Houston

University of Houston: Here's what intuitive AI and ChatGPT mean for research

HOUSTON VOICES

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office, Kirstin Holzschuh, puts it this way: “Scientists are out-of-the box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind, “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has been likened to auto-completion, as well.

However, practical use cases are increasingly emerging that blur the line between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies, even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses but also potential misuses of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when being as transparent as possible about how AI might be used in the course of research or in the development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary and comprehension, but informed consent is not valid if the participant can’t understand the risks, etc. “There is readability software, but it only rates the grade-level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth- to eighth-grade level (which is the range that Institutional Review Boards prefer).
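As a rough illustration of that suggestion, the sketch below sends a consent passage to a chat model and asks for a sixth- to eighth-grade rewrite. It assumes the official OpenAI Python client and an API key in the environment; the model name, prompt wording, and sample consent text are hypothetical and not drawn from Nebeker's work.

```python
# Minimal sketch of the workflow described above: ask a chat model to
# rewrite consent language at a 6th-8th grade reading level. Assumes
# "pip install openai" and OPENAI_API_KEY set in the environment; the
# model name and sample text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

consent_text = (
    "You are invited to participate in a research study evaluating a "
    "wearable photoplethysmography sensor. Participation is voluntary, "
    "and you may withdraw at any time without penalty."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would work here
    messages=[
        {"role": "system",
         "content": "Rewrite research consent language at a 6th-8th grade "
                    "reading level without changing its meaning."},
        {"role": "user", "content": consent_text},
    ],
)
print(response.choices[0].message.content)
```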

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators as to how to strike an effective balance between the use of these innovative technologies and its academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

Here's what you should think about before rolling this hot new technology into your business. Photo via Getty Images

Houston expert: How ChatGPT affects business skills in 2023

guest column

The world has been captivated by ChatGPT, an artificial intelligence program that can understand and respond to questions and statements using natural language, just like humans. It has been trained on a large amount of text data and uses this knowledge to generate helpful and informative responses to users.

As great and resourceful as this can be, there are some major aspects of it that can be harmful in a business setting, such as the inability to make personal connections. A sales manager using AI to write sales scripts cannot incorporate the emotional intelligence needed to form a connection. With the switch to AI and the loss of this personal touch, the company’s close rate drops significantly, and the sales manager’s effort to find a solution may just be to run more numbers in terms of contacts and sales attempts, which usually exacerbates the problem.

Another example of how ChatGPT can hurt your business is by relying on it to generate website and social media content. A business owner who believes ChatGPT will do the “heavy lifting” and grow his or her business is overlooking the importance of creating real, experiential marketing experiences for customers. Business owners can inadvertently spend entire budgets on AI-driven social media ads and, because of the low returns on investment many industries see from them, not have the sales numbers to cover these costs and keep the business in operation.

The overarching theme, or danger, behind ChatGPT is that people are relying heavily on it to produce their work. After all, relying on technology is part of human nature. When great technology is introduced, such as email, teleconferencing, or AI-assisted searching, we rarely ask ‘how can this technology assist me?’ and instead ask ‘how can this technology do things for me?’ The greater the technology, the greater the likelihood humans will take the easiest path.

ChatGPT not only affects businesses, but it also applies to education. Teachers are already seeing a drop in math skills as kids carry around calculators. Just wait until next semester when educators are reading thousands of essays written by ChatGPT.

Just as we would hate to see our children deprive themselves of actual skills, the same can be said for our business people. Some of the main issues that arise from the use of ChatGPT are:

Diminishing Rates of Return

When we embrace technology to the point that we no longer put forth effort from a practiced skill set, we can expect to see declining engagement rates, click-through rates, and customer loyalty. As of 2023, the online engagement rate has fallen from 5% to 0.06%. Click-through rates are not faring much better, at a measly 6.3%. As these numbers continue to fall (as they have every year for the past couple of decades), we continue to brush it off as just how business is done.

Aversion in the Marketplace

People are becoming so displeased with technology-driven processes (as opposed to technology-assisted processes) that they have a strong aversion to companies using them. How many social media ads have prompted you to make a purchase? How many times do you provide a bogus email to a website form? When doing a search, how many times do you scroll to the bottom without looking and hit page two because you know you are not getting any real results on the first page anymore?

Yes, ChatGPT is cool, and yes, there are some amazing uses you can implement into your business; however, do not look at it as the answer to any and all business problems. Embrace your craft as a leader and avoid subbing the work out to tech; doing so could cost you everything.

As a business owner myself, I am not opposed to technology. I am all in favor of what technology can do. However, there is no denying that the more we look to technology to do the work for us instead of with us, the more we see a drastic decline in the overall skill set of business people without an increase in business success rates.

------

Josh Tolley is the founder of Kingsbridge LLC and is based in Houston.

AI and automation are coming for Texas jobs, according to ChatGPT. Image via Getty Images

Here's how many Texas jobs are at risk of being replaced by AI — according to AI

the tech is coming

ChatGPT seems to have its eyes set on at least 200,000 Texas jobs, according to a new study.

The report, by UK-based affiliate marketing site NetVoucherCodes, compiled data on nearly 200 U.S. job types and then asked ChatGPT for each job’s relative risk from AI and from automation, as well as the likelihood of AI increasing that job’s productivity.
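The study's exact prompts weren't published, but the kind of per-job query it describes might look like the hypothetical sketch below, which assumes the official OpenAI Python client; the job list, model name, and prompt wording are illustrative only, not NetVoucherCodes' methodology.

```python
# Hypothetical sketch of querying a chat model for each job title's risk,
# in the spirit of the study described above. The jobs, model, and prompt
# are illustrative; they are not the study's actual inputs.
from openai import OpenAI

client = OpenAI()
jobs = ["Cashier", "Customer Service Representative", "Registered Nurse",
        "Fashion Designer", "Stocker and Order Filler"]

risk_counts = {"high": 0, "medium": 0, "low": 0}
for job in jobs:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Rate the risk of the job '{job}' being replaced "
                              "by AI. Answer with exactly one word: high, medium, or low."}],
    )
    label = reply.choices[0].message.content.strip().lower()
    if label not in risk_counts:
        label = "medium"  # fall back if the model answers off-script
    risk_counts[label] += 1

print(risk_counts)  # tally of high/medium/low risk labels across job types
```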

The query found that 237,000 jobs in Texas have a high risk of being replaced by AI, and 1.07 million jobs are considered at a medium risk of being replaced by AI.

Similarly, 1.8 million Texas jobs were considered at a high risk of being replaced by automation. More than 3 million jobs were considered at a medium risk of being replaced by automation.

In both the AI and automation queries, Texas had the second-most at-risk jobs among the states. California had the most jobs at risk in both categories. New York was No. 3 for jobs at risk of being replaced by AI, and Florida had the third-most jobs at risk of being replaced by automation.

Maine had the highest number of low-risk jobs, followed by Hawaii and Montana.

Texas jobs that were both at high risk and made up a large portion of Texas's workforce include fast food and counter workers, cashiers, customer service representatives, and stockers and order fillers.

“The data we’ve pulled together shows that a lot of lower income jobs are the ones that are at a higher risk of being replaced. When we break this down, we realize it’s because these jobs often involve a lot of the same repetitive tasks. This can be inputted into a computer and can be done automatically," John Strizaker from NetVoucherCodes said in a statement.

“The jobs that aren’t at risk are ones that really require human thinking and decision making, like nurses, doctors along with creative roles such as fashion designers and hairdressers," he continued.

At the same time, the study found that more than 1.4 million Texas jobs have a high likelihood of being able to use AI and automation to boost productivity. This was the second highest number of jobs in this category behind California.

“Although this might be a concern in the near future, there’s a lot of benefits to AI and automation," Strizaker said in the statement. "It will increase productivity, especially in creative fields and it will be able to take over repetitive tasks and free up workers for other things that require thinking.”

Some Houston companies are already using GPT, or generative pre-trained transformer, technology to their advantage. About 16 months ago, femtech company ema pivoted to use the powerful tool to provide its users with 24/7 access to women's health information.

And at the start of this year, the University of Houston and Intel launched a new artificial intelligence program to help bridge the AI knowledge gap and fill hundreds of AI-related jobs that are open in the Houston area.

A Houston founder is introducing you to ema — a GPT-based chat platform and your new best friend in women's health. Photo via Canva

Exclusive: Houston startup rebrands to provide AI chat tool focused on women’s health

meet ema

Amanda Ducach set out to create a platform where mothers could connect with each other socially, but when she launched SocialMama just ahead of a global pandemic, she soon learned there was a bigger market need for access to information surrounding women's health — from fertility to menopause.

After pivoting her femtech platform to include women's health experts, she realized her technology wasn't able to completely support its growing user base. The platform, then called SocialMama, saw users engaging with experts in similar ways, and as Ducach looked into growing the platform's users, she realized that 24/7 access to experts was going to be hard to scale.

"We noticed that most of these conversations were repetitive," Ducach tells InnovationMap. "You had women asking an expert about tracking ovulation a hundred times a day. Having an OBGYN answer that question a hundred times a day was crazy and just not scalable."

Ducach says that about 16 months ago, her team took a step back to recreate the platform incorporating GPT technology. GPT stands for generative pre-trained transformer, a family of artificial intelligence language models most recently made popular by ChatGPT, developed by OpenAI.

Now, after building out the platform, Ducach's company has rebranded to ema. The AI-based chat tool — named from the three letters in the middle of "female" — is meant to feel like texting "your childhood best friend who became an OBGYN physician," Ducach says. Not only can the chat provide crucial medical information, but it has a memory and can pick up conversations where they left off to be a constant resource to users.
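Ema's implementation isn't public, but the "memory" behavior described above is commonly built by persisting the running message history and replaying it with each request. The sketch below shows that generic pattern, assuming the official OpenAI Python client; the file path, model name, and system prompt are hypothetical and not ema's actual design.

```python
# Generic sketch of a chat that "remembers": save the message history to
# disk and replay it on every request, so a new session picks up where the
# last one left off. Not ema's actual code; paths and prompts are made up.
import json
import os

from openai import OpenAI

HISTORY_FILE = "ema_history.json"  # hypothetical per-user conversation store
client = OpenAI()

def load_history():
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            return json.load(f)
    return [{"role": "system",
             "content": "You are a warm, clinically careful women's health assistant."}]

def chat(user_message):
    history = load_history()
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    with open(HISTORY_FILE, "w") as f:
        json.dump(history, f)  # persisted, so the next session resumes here
    return answer

print(chat("How do I improve my baby's latch?"))
```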

The new platform, deemed ema, operates as an AI-based chat for women to engage with. Screenshot courtesy of ema

"Ema can answer everything from, 'how do I improve my baby's latch,' to 'how do I get a diabetic-friendly brownie recipe,' to 'give me an affirmation that's spoken like Snoop Dogg because I'm feeling sad today,'" Ducach says.

Ducach first described the evolution of the company to AI-based communication last summer on the Houston Innovators Podcast. Now the platform is gearing up for its launch next month, and Ducach plans to raise seed funding this year to double her current team of 10 people to support the company's growth. Ducach, who was accepted into the Techstars Austin program in 2021, also says she's looking for more beta users in the meantime, and those interested should reach out to her or her team.

Ultimately, Ducach says the mission of ema is to democratize access to women's health care so that women feel supported and just a few taps away from important information.

"Barriers to care for women who face socioeconomic disparities is where you see the need for change," Ducach says. "For us, it's reducing those barriers of care. Ema is always in your pocket. You have access to her 24/7. The way that ema is really structured and her purpose is to catch red flags so that we can then help the female user get to positive health outcomes."

Amanda Ducach founded the company in 2019. Photo via Twitter

FDA greenlights Houston surgery robotics company's unique technology

headed to clinical trials

A Houston surgical robotics company has gotten an Investigational Device Exemption from the FDA to go forward with human trials.

This news allows EndoQuest Robotics to begin its Prospective Assessment of a Robotic-Assisted Device in Gastrointestinal Medicine (PARADIGM) study, which will be conducted at leading United States health care facilities, including Brigham and Women’s Hospital (Boston), Mayo Clinic (Scottsdale), Cleveland Clinic (Cleveland), AdventHealth (Orlando), and HCA Healthcare (Houston). The study will include surgeries on 50 subjects, with enrollment expected to begin in January.

“The foundational thesis is we're trying to make sure that the world's largest medical center is also the world's largest med tech innovation center,” Eduardo Fonseca, interim CEO of EndoQuest Robotics, tells InnovationMap.

His company is well on its way to helping ensure that, by making history of its own. EndoQuest is behind the world's first Flexible Robotic Surgical System, a technology that may one day transform surgery as we know it.

The idea to use these novel robots for surgery came from Dr. Todd Wilson, a surgeon at UTHealth Houston, who spent his medical education, residency, and fellowship at the institution.

“I had really focused in my practice on trying to do everything possible to improve outcomes for patients,” Wilson explains. “And there seemed to be a pretty good correlation that the smaller the incisions or the fewer incisions, the better patients would do.”

The stumbling block? The necessary small incisions are difficult for human surgeons to make with current technology. But UTHealth was part of the solution.

“Right there in the University of Texas was a microsurgical lab where they were focusing on trying to develop robotics, but the application was still a little bit fuzzy,” Wilson says.

Using their innovations to solve Wilson’s problem turned out to be the start of the company now known as EndoQuest Robotics.

The first indication for the system is colon lesions, but in the future it could be used for practically any minimally invasive surgery (MIS). That means the robots could help perform anything from a tonsillectomy to a cholecystectomy (gallbladder removal) to non-invasive colorectal procedures, should those lesions prove to be cancerous.

According to Fonseca, last year was the first on record in which more MIS procedures, including laparoscopic and robotic surgeries, were performed in the U.S. than conventional ones. The time is right to forge ahead with the flexible robotic surgical system. Days ago, the EndoQuest team announced that its Investigational Device Exemption (IDE) application for its pivotal colorectal clinical study was approved by the FDA.

“Our end point is a device that can be mass-manufactured and very safe for patients and has a short learning curve, so therefore, we intend to learn a lot during these trials that will inform our ultimate design,” says Fonseca.

He adds that it’s a “brilliant” group of engineers that has set EndoQuest apart, including teams in both Houston and South Korea.

“We can move twice as fast as anyone else,” jokes engineer Jiwon Choi.

Despite the extra brain power provided by the South Korea engineers, Fonseca says that EndoQuest’s beginnings are “as much of a Houston story as you could find.”

Founder bets on Houston to grow innovative corrosion detection technology

HOUSTON INNOVATORS PODCAST EPISODE 265

Despite having success in taking his technology from lab to commercialization, Anwar Sadek made the strategic decision to move his company, Corrolytics, from Ohio, where it was founded, to Houston.

"Houston is the energy capital of the world. For the technology we are developing, it is the most strategic move for us to be in this ecosystem and in this city where all the energy companies are, where all the investors in the energy space are — and things are moving really fast in Houston in terms of energy transition and developing the current infrastructure," Sadek, co-founder and CEO of Corrolytics, says on the Houston Innovators Podcast.

And as big a move as it was, it was worth it, Sadek says.

"It's been only a year that we've been here, but we've made the most developments, the most outreach to clients in this one last year."



The technology Sadek and his team have created is a tool to detect microbial corrosion — a major problem for industrial businesses, especially within the energy sector. Sadek describes the product as being similar to a testing kit a patient would use at home or in a clinic setting to diagnose their current ailments.

Users of the Corrolytics test kit can input their pipeline sample in the field and receive results via Corrolytics' software platform.

"This technology, most importantly, is noninvasive. It does not have to be installed into any pipelines or assets that the company currently has," Sadek explains. "To actually use it, you don't have to introduce new techniques or new processes in the current operations. It's a stand-alone, portable device."

Corrolytics' approach is to help revolutionize and digitize microbial corrosion detection, both to improve efficiency and reduce operational costs for industrial companies and to move the needle on a cleaner future for the energy industry.

"We are having an energy transition — that is a given. As we are bringing new energy, there will be growth of infrastructure to them. Every single path for the energy transition, corrosion will play a primary role as well," Sadek says.

Corrolytics hopes to work with new energy companies from the beginning, using the data it has collected to prevent corrosion in new facilities. However, the company's technology is already making an impact.

"Every year, there is about 1.2 gigaton of carbon footprint a year that is released into the environment that is associated with replacing corroded steel in general industries," Sadek says. "With Corrolytics, (industrial companies) have the ability to extend the life of their current infrastructure."

Sadek says his move to Houston has already paid off, and he cites the 2024 Houston Innovation Awards, where Corrolytics won two awards, as one of the company's big wins.

UH researchers secure $3.3M for AI-powered subsurface sensing system to revolutionize underground power lines

going under

Researchers from the University of Houston — along with a Hawaiian company — have received $3.3 million in funding to explore an artificial intelligence-backed subsurface sensing system for safe and efficient underground power line installation.

Houston's power lines are above ground, but studies show underground power is more reliable. Installing underground power lines is costly and disruptive, but the U.S. Department of Energy, in an effort to find a solution, has put $34 million into its new GOPHURRS program, which stands for Grid Overhaul with Proactive, High-speed Undergrounding for Reliability, Resilience, and Security. The funding has been distributed across 12 projects in 11 states.

“Modernizing our nation’s power grid is essential to building a clean energy future that lowers energy costs for working Americans and strengthens our national security,” U.S. Secretary of Energy Jennifer M. Granholm says in a DOE press release.

UH and Hawaii-based Oceanit are behind one of the funded projects, entitled “Artificial Intelligence and Unmanned Aerial Vehicle Real-Time Advanced Look-Ahead Subsurface Sensor.”

The researchers are looking at developing a subsurface sensing system for underground power line installation, potentially using machine learning, electromagnetic resistivity well logging, and drone technology to predict and sense obstacles to installation.

Jiefu Chen, associate professor of electrical and computer engineering at UH, is a key collaborator on the project, focused on the electromagnetic antennas installed on the UAV and the horizontal directional drilling (HDD) string. He's working with Yueqin Huang, assistant professor of information science technology, who leads the geophysical signal processing, and Xuqing Wu, associate professor of computer information systems, who is responsible for integrating machine learning.

“Advanced subsurface sensing and characterization technologies are essential for the undergrounding of power lines,” says Chen in the release. “This initiative can enhance the grid's resilience against natural hazards such as wildfires and hurricanes.”

“If proven successful, our proposed look-ahead subsurface sensing system could significantly reduce the costs of horizontal directional drilling for installing underground utilities,” Chen continues. “Promoting HDD offers environmental advantages over traditional trenching methods and enhances the power grid’s resilience.”

------

This article originally ran on EnergyCapital.