short stories

Mobile ordering rolls out at resort, Houston VC's latest investment, and more local innovation news

Here's your latest roundup of Houston startup and innovation news you may have missed. Photo via Getty Images

We're on the other side of the hill that is Houston's summer, but the Bayou City is still hot, especially when it comes to innovation news.

In this roundup of short stories from Houston's startup and tech scene, a local venture capital fund has made its latest investment, Houston startups share big updates, and more.

Rivalry Technology rolls out mobile ordering at hot summer spot

You can now order poolside at this Houston-area resort. Image courtesy of Rivalry Tech

Lounging at Margaritaville Lake Resort at Lake Conroe was just made easier by Rivalry Tech, a Houston-based mobile ordering platform company, which upgraded poolside ordering with its myEATz platform. According to a news release, customers can now order food and drinks from the 5 o’Clock Somewhere Bar and Lone Palm Bar via a custom QR code assigned to each lounge chair and table, a system designed to increase operational efficiency for the Margaritaville Lake Resort staff.

“We wanted to be sure the rollout of the myEATz mobile ordering platform was helpful to the Margaritaville staff, not a hindrance to their existing process. We created custom QR codes and a color coded map to easily identify where the mobile orders are going,” says Charles Willis, COO of Rivalry Tech, in the release.
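The per-seat routing Willis describes can be sketched as follows. This is a minimal illustration only: the base URL, zone names, and colors are hypothetical, since the actual myEATz QR scheme and color-coded map are not public.

```python
from urllib.parse import urlencode

# Hypothetical zones and ordering URL -- stand-ins, not the real myEATz values.
ZONE_COLORS = {"pool-north": "blue", "pool-south": "green", "cabanas": "yellow"}
BASE_URL = "https://order.example.com/margaritaville"

def seat_order_url(zone: str, seat: int) -> str:
    """Build the per-seat ordering URL a printed QR code would encode.

    Embedding the zone and seat in the URL is what lets staff see exactly
    where an order should be delivered, matching the color-coded map.
    """
    params = urlencode({"zone": zone, "seat": seat, "color": ZONE_COLORS[zone]})
    return f"{BASE_URL}?{params}"
```

Each lounge chair gets a unique code, so the order ticket arrives already tagged with its delivery location.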

Rivalry, which provides mobile ordering at numerous sports stadiums and venues with sEATz, expanded into hospitality this year.

“The Rivalry Tech team helped us to seamlessly implement mobile ordering at Margaritaville Lake Resort. They created the marketing materials, established custom QR codes, uploaded menus, and trained our staff onsite. The whole process has been easy and collaborative,” says Amit Sen, director of food and beverage for Margaritaville Lake Resort, in the release.

Mercury Fund invests in ReturnLogic's latest round

Mercury has led the latest fundraising round for a SaaS company. Image via Getty Images

Houston-based venture capital firm Mercury led Philadelphia-based SaaS company ReturnLogic's $8.5 million series A funding round, which also saw participation from Revolution’s Rise of the Rest Fund, White Rose Ventures, and Ben Franklin Technology Partners. The fresh funding will help the company double its workforce, accelerate product development, and expand its application programming interface (API) capabilities, according to a news release.

ReturnLogic, founded by CEO Peter Sobotta, offers a SaaS platform that plugs into existing e-commerce platforms to help retailers manage returns and mitigate their financial impact.

“While retailers have largely mastered forward logistics to get products into customers’ hands, the returns process remains an under-addressed, resource-draining problem that eats away at brands’ profits,” says Blair Garrou, managing director of Mercury, in a news release. “ReturnLogic is something entirely new to this market and uniquely built on Peter Sobotta’s deep operational experience in reverse logistics and supply chain management.

"While serving in the U.S. Navy, Peter specialized in reverse logistics and gained extensive expertise in ecommerce operations," Garrou continues. "With Peter at the helm, ReturnLogic’s innovative API-first returns solution is well-positioned to tackle the ever-growing operational returns problem facing retailers. We are excited to partner with Peter and his team as they continue to solve this massive problem for online retailers.”

Fluence Analytics named a top advanced manufacturing startup

Fluence Analytics was selected as one of 50 startups recognized. Graphic courtesy

Fluence Analytics, an analytics and process control solutions platform for the polymer and biopharmaceutical industries, was named a Top 50 global advanced manufacturing startup by CB Insights. The inaugural list breaks down 16 different cohorts, narrowed from more than 6,000 companies that either submitted an application or were nominated. Fluence Analytics was one of three companies featured in the R&D Optimization category.

"Our team is very excited that our real-time process analytics, optimization and control products for the polymer and biopharma industries are included among such elite startups," says Jay Manouchehri, CEO of Fluence Analytics, in a statement to InnovationMap. "We wish to thank CB Insights for including Fluence Analytics in its inaugural list of the Top 50 global advanced manufacturing startups, as well as our customers and investors for supporting the development and roll-out of our transformative technology solutions."

Fluence Analytics moved to the Houston area from New Orleans last year. The company's technology platform delivers optimization and control products to polymer and biopharmaceutical customers worldwide.

HTX Labs secures $1.7M contract to expand within United States Air Force

HTX Labs' EMPACT product will be further developed to support the Air Force. Image courtesy of HTX Labs

HTX Labs, a Houston-based company that designs extended reality training for military and business purposes, announced it has been awarded a $1.7 million Small Business Innovation Research Phase II Tactical Funding Increase from the US Air Force to enhance and operationalize its product, the EMPACT Immersive Learning Platform, in support of training modernization.

“We are very thankful to AFWERX and AFDT for this great opportunity to play an increasingly important role in helping the USAF accelerate training modernization," says Chris Verret, president of HTX Labs, in a news release. "This TACFI award shows continued confidence in HTX Labs, with a strong commitment to accelerate usage and adoption of EMPACT.”

HTX Labs will leverage this contract to expand EMPACT's ability to rapidly create and distribute interactive, immersive training, collaborating closely with Advanced Force Development Technologies, per the release.

OpenStax to publish free edition of updated science textbook

OpenStax is growing its access to free online textbooks. Image via openstax.org

OpenStax, a tech initiative from Rice University that uploads free learning resources, has announced it will publish the 10th edition of an organic chemistry textbook by Cornell University professor emeritus John McMurry.

“This is a watershed moment for OpenStax and the open educational resources (OER) movement,” says Richard Baraniuk, founder and director of OpenStax, in a news release. “This publication will quickly provide a free, openly licensed, high-quality resource to hundreds of thousands of students in the U.S. taking organic chemistry, removing what can be a considerable cost and access barrier.”

Organic chemistry textbooks are usually a big expense for students. With the support of publisher Cengage, McMurry made the decision to offer the latest edition online for free as a tribute to his son, Peter McMurry, who died in 2019 after a long struggle with cystic fibrosis.

“If Peter were still alive, I have no doubt that he would want me to work on this 10th edition with a publisher that made the book free to students,” McMurry says in the release. “To make this possible, I am not receiving any payment for this book, and generous supporters have covered not only the production costs but have also made a donation of $500,000 to the Cystic Fibrosis Foundation to help find a cure for this terrible disease.”


Building Houston

Here's how AI-based chat will affect research. Graphic by Miguel Tovar/University of Houston

Researchers have to write extremely specific papers that require higher-order thinking — will an intuitive AI program like OpenAI’s ChatGPT be able to imitate the vocabulary, grammar and most importantly, content, that a scientist or researcher would want to publish? And should it be able to?

University of Houston’s Executive Director of the Research Integrity and Oversight (RIO) Office, Kirstin Holzschuh, puts it this way: “Scientists are out-of-the box thinkers – which is why they are so important to advancements in so many areas. ChatGPT, even with improved filters or as it continues to evolve, will never be able to replace the critical and creative thinking we need in these disciplines.”

“A toy, not a tool”

The Atlantic published “ChatGPT Is Dumber Than You Think,” with a subtitle advising readers to “Treat it like a toy, not a tool.” The author, Ian Bogost, indulged in the already tired trope of asking ChatGPT to write about “ChatGPT in the style of Ian Bogost.” The unimaginative but overall passable introduction to his article was proof that “any responses it generates are likely to be shallow and lacking in depth and insight.”

Bogost expressed qualms similar to those of Ezra Klein, the podcaster behind “A Skeptical Take on the AI Revolution.” Klein and his guest, NYU psychology and neural science professor Gary Marcus, mostly questioned the reliability and truthfulness of the chatbot. Marcus calls the synthesizing of its databases and the “original” text it produces nothing more than “cut and paste” and “pastiche.” The algorithm used by the program has also been likened to auto-completion.

However, practical use cases are increasingly emerging, blurring the lines between technological novelty and professional utility. Whether writing working programming code or spitting out a rough draft of an essay, ChatGPT has a formidable array of competencies, even if just how competent it is remains to be seen. All this means that as researchers look for efficiencies in their work, ChatGPT and other AI tools will become increasingly appealing as they mature.

Pseudo-science and reproducibility

The Big Idea reached out to experts across the country to determine what might be the most pressing problems and what might be potential successes for research now that ChatGPT is readily accessible.

Holzschuh stated that there are potential uses, but also potential misuses, of ChatGPT in research: “AI’s usefulness in compiling research proposals or manuscripts is currently limited by the strength of its ability to differentiate true science from pseudo-science. From where does the bot pull its conclusions – peer-reviewed journals or internet ‘science’ with no basis in reproducibility?” It’s “likely a combination of both,” she says. Without clear attribution, ChatGPT is problematic as an information source.

Camille Nebeker is the Director of Research Ethics at University of California, San Diego, and a professor who specializes in human research ethics applied to emerging technologies. Nebeker agrees that because there is no way of citing the original sources that the chatbot is trained on, researchers need to be cautious about accepting the results it produces. That said, ChatGPT could help to avoid self-plagiarism, which could be a benefit to researchers. “With any use of technologies in research, whether they be chatbots or social media platforms or wearable sensors, researchers need to be aware of both the benefits and risks.”

Nebeker’s research team at UC San Diego is conducting research to examine the ethical, legal and social implications of digital health research, including studies that are using machine learning and artificial intelligence to advance human health and wellbeing.

Co-authorship

The conventional wisdom in academia is “when in doubt, cite your source.” ChatGPT even provides some language authors can use when acknowledging their use of the tool in their work: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” A short catchall statement in your paper will likely not pass muster.

Even when researchers are as transparent as possible about how AI might be used in the course of research or in the development of a manuscript, the question of authorship is still fraught. Holden Thorp, editor-in-chief of Science, writes in Nature that “we would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism.” Thorp went on to say that a co-author of an experiment must both consent to being a co-author and take responsibility for a study. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” Thorp said.

Informed consent

On NBC News, Camille Nebeker stated that she was concerned there was no informed consent given by the participants of a study that evaluated the use of ChatGPT to support responses given to people using Koko, a mental health wellness program. ChatGPT wrote responses either in whole or in part to the participants seeking advice. “Informed consent is incredibly important for traditional research,” she said. If the company is not receiving federal money for the research, there isn’t a requirement to obtain informed consent. “[Consent] is a cornerstone of ethical practices, but when you don’t have the requirement to do that, people could be involved in research without their consent, and that may compromise public trust in research.”

Nebeker went on to say that the study information conveyed to a prospective research participant via the informed consent process may be improved with ChatGPT. For instance, understanding complex study information can be a barrier to informed consent and make voluntary participation in research more challenging. Research projects involve high-level vocabulary, but informed consent is not valid if the participant can’t understand the risks involved. “There is readability software, but it only rates the grade-level of the narrative, it does not rewrite any text for you,” Nebeker said. She believes that one could input an informed consent communication into ChatGPT and ask for it to be rewritten at a sixth to eighth grade level (the range that Institutional Review Boards prefer).
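The grade-level rating Nebeker mentions typically comes from a readability formula such as Flesch-Kincaid, which scores text from sentence length and syllables per word. A rough sketch, using only the standard library and a crude vowel-group syllable heuristic (real readability tools count syllables more carefully):

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels, dropping a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

As Nebeker notes, a score like this only diagnoses the problem; rewriting dense consent language down to a sixth to eighth grade level is the part a chatbot could help with.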

Can it be used equitably?

Faculty from the Stanford Accelerator for Learning, like Victor Lee, are already strategizing ways for intuitive AI to be used. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”

Stanford’s approach will involve scheduling listening sessions and other opportunities to gather expertise directly from educators as to how to strike an effective balance between the use of these innovative technologies and its academic mission.

The Big Idea

Perhaps to sum it up best, Holzschuh concluded her take on the matter with this thought: “I believe we must proceed with significant caution in any but the most basic endeavors related to research proposals and manuscripts at this point until bot filters significantly mature.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.
