Houstonian Joe Schurman's latest venture PhenomAInon is aimed at tapping into AI and data analytics for space domain awareness and threat detection. Photo via Getty Images

As artificial intelligence continues to expand its sphere of influence, Spring-based expert Joe Schurman is looking to take this technology to an out-of-this-world space.

With a background that includes advising defense and aerospace organizations like NASA, Schurman’s latest venture, PhenomAInon, is perfectly aligned with what he’s been working toward since 2019. The company aims to be a multi-tiered subscription service and application that will be the world’s first cloud-native data and AI platform for phenomenon-based data analysis, able to analyze data from any source for space domain awareness and threat detection, according to Schurman.

The platform aims to provide end-to-end data and AI analysis, publish insights, build community, and offer cloud, data, and software consulting. PhenomAInon deploys data and AI services alongside modern data and AI engineering, per the website, to surface insights to explorers, researchers, organizations, publications, and communities. Schurman has worked with the U.S. government's task force for unidentified anomalous phenomena — any perceived aerial phenomena that cannot be immediately identified or explained — known as the UAPTF. The tool will process sensitive information and return custom video analysis. The public version of the tool will let users view videos and cases and form their own analyses.

“We are working together with multiple teams both public and private to continue to curate the data sets, clear documents for public review, and provide advanced analytics and AI capabilities never seen before to the public,” Schurman tells InnovationMap. “From a data and analytics perspective, we are applying machine learning and advanced analytics to find correlations and anomalies in the incident reports across multiple data sets.

"Some of these are public, some are private, and some we are clearing for public review," he continues. "The analytics will go far beyond incident reporting and showcase heat maps, correlative incident maps to key private and public sector facilities, and trends analysis never reported — e.g. incident reporting correlated with time, weather, FAA, and drone flight data, etc. We also have a new content analysis platform where users will be able to eventually run their own AI and ML analysis on their own videos.”
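The kind of correlative analysis Schurman describes — cross-referencing incident reports against time, weather, and flight data to find anomalies — can be sketched in a few lines. This is a minimal illustrative example; the incident records, fields, and threshold below are invented for the sketch and are not PhenomAInon's actual data or methods.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical incident records: (hour_of_day, weather, source) tuples.
incidents = [
    (21, "clear", "pilot"), (22, "clear", "radar"), (21, "clear", "pilot"),
    (3, "fog", "civilian"), (21, "overcast", "radar"), (22, "clear", "pilot"),
    (21, "clear", "civilian"), (14, "clear", "pilot"),
]

# Correlate reports with time of day: count incidents per hour.
by_hour = Counter(hour for hour, _, _ in incidents)

# Flag hours whose counts stand out from the overall distribution
# (a crude anomaly test: more than one standard deviation above the mean).
counts = list(by_hour.values())
threshold = mean(counts) + stdev(counts)
anomalous_hours = sorted(h for h, c in by_hour.items() if c > threshold)

print(anomalous_hours)
```

A production platform would join many more dimensions (weather, FAA, and drone-flight data) and use far more robust statistics, but the core pattern — bucket, count, and flag outliers — is the same.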

Schurman first showed this to the world in 2019, when he served as an adviser for To The Stars Academy of Arts and Science, or TTSA. He also appeared on History Channel’s “Unidentified: Inside America's UFO Investigation” to show Luis Elizondo, the Pentagon’s former Advanced Aerospace Threat Identification Program head and TTSA director of special programs, how the AI platform could help track data related to unidentified aerial phenomena.

Now, PhenomAInon's app is a work in progress. While it soft launched in May 2022, Schurman says several data sets are still awaiting clearance from the U.S. government, and the content analysis tool is in development, with a possible launch by summer. Schurman also hopes to curate the largest library of incident videos, images, and audio recordings.

The subject of UAP continues to attract new discussions from government officials and industry professionals across aerospace, academia, and more. In Houston, Rice University's Woodson Research Center and its humanities department host one of the largest archives of UAP and paranormal data, notes, and research, including documents from CIA programs on remote viewing.

Schurman says he's looking to provide even more data and information in this space.

“This phenomenon, its implications to multiple aspects of our lives and possible security threats, all come down to a data problem, and the organizations that have been in place to date just have not had the level of cloud, data, and AI engineering capabilities we take for granted and have access to in the private sector,” says Schurman. “My goal is to bring this all together, starting with PhenomAInon.”

Sieve Health is an AI cloud-based SaaS platform designed to automate and accelerate matching patients with clinical trials. Photo via Getty Images

Houston-based health tech startup is revolutionizing patient selection for clinical trials

working smarter

On many occasions in her early career, Dr. Arti Bhosale, co-founder and CEO of Sieve Health, found herself frustrated with having to manually sift through thousands of digital files.

The documents, each containing the medical records of a patient seeking advanced treatment through a clinical trial, were always there to review — and there were always more to read.

Despite the tediousness of prescreening, which could take years, the idea of missing a patient and not giving them the opportunity to go through a potentially life-altering trial is what kept her going. The one file she didn't read could have slipped through the cracks, leaving someone without the care they needed.

“Those stories have stayed with me,” she says. “That’s why we developed Sieve.”

When standard health care is not an option, advances in medical treatment may be offered through clinical trials. But matching patients to those trials is one of the longest-standing problems in the health care industry. Now, with technology that has emerged since 2018, the solution to the bottleneck may be an automated approach.

“Across the globe, more than 30 percent of clinical trials shut down as a result of not enrolling enough patients,” says Bhosale. “Of the remaining trials, 80 percent never end up reaching their target enrollment and are shut down by the FDA.”

In 2020, Bhosale and her team developed Sieve Health, an AI cloud-based SaaS platform designed to automate and accelerate matching patients with clinical trials and increase access to clinical trials.

Sieve’s main goal is to reduce the administrative burden involved in matching patients to trials, which in turn accelerates trial execution. The platform provides matching for physicians, study sponsors, and research sites, streamlining operations for faster trial enrollment.

The technology mimics the traditional enrollment process but automates it — reading medical notes and reviewing them the same way a human would.
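In spirit, automated prescreening is a rules-plus-text-matching problem: check each chart against a trial's inclusion and exclusion criteria. The sketch below is a hypothetical toy, not Sieve's proprietary matching logic; the criteria, field names, and patient records are all invented for illustration.

```python
# A minimal sketch of automated prescreening: a chart is eligible if it
# satisfies every inclusion criterion and no exclusion criterion,
# mirroring what a human reviewer checks manually.

def eligible(patient, criteria):
    notes = patient["notes"].lower()
    meets_inclusion = all(term in notes for term in criteria["include"])
    hits_exclusion = any(term in notes for term in criteria["exclude"])
    return meets_inclusion and not hits_exclusion

trial = {"include": ["type 2 diabetes"], "exclude": ["pregnant"]}
charts = [
    {"id": 1, "notes": "Type 2 diabetes, on metformin."},
    {"id": 2, "notes": "Type 2 diabetes, currently pregnant."},
    {"id": 3, "notes": "Type 1 diabetes."},
]

matches = [p["id"] for p in charts if eligible(p, trial)]
print(matches)  # chart 1 is the only candidate surfaced for review
```

Real systems apply natural language processing to unstructured clinical notes rather than literal substring checks, but the eligible/ineligible filtering loop is the part being automated.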

“I would have loved to use something like this when I was on the front lines,” says Bhosale, who worked in clinical research for over 12 years. “Can you imagine going through 10,000 records manually? Some of the bigger hospitals have upwards of 100,000 records and you still have to manually review those charts to make sure that the patient is eligible for the trial. That process is called prescreening. It is painful.”

Because physicians wear many hats and have many clinical efforts on their plates, research tends to fall to the bottom of the to-do list. Finding 10 to 20 patients can take a research team 15 to 20 months on average — and five of those patients may end up unenrolling, she says.

“We have designed the platform so that the magic can happen in the background, and it allows the physician and research team to get a jumpstart,” she says. “They don’t have to worry about reviewing 10,000 records — they know what their efforts are going to be and will ensure that the entire database has been scanned.”

With Sieve, the team helped some commercial pilot programs build a curated data pool for their trials — cutting the administrative burden and time spent searching to less than a week.

Sieve is in early-stage startup mode, and its commercial platform has been rolled out. Currently, the team is conducting commercial projects with various research sites and hospitals.

“Our focus now is seeing how many providers we can connect into this,” she says. “There’s a bigger pool out there who want to participate in research but don’t know where to start. That’s where Sieve is stepping in and enabling them to do this — partnering with those and other groups in the ecosystem to bring trials to wherever the physicians and the patients are.”

Arti Bhosale is the co-founder and CEO of Sieve Health. Photo courtesy of Sieve

Let's talk about dark data — what it means and how to navigate it. Graphic by Miguel Tovar/University of Houston

Houston expert: Navigating dark data within research and innovation

houston voices

Is it necessary to share ALL your data? Is transparency a good thing or does it make researchers “vulnerable,” as author Nathan Schneider suggests in the Chronicle of Higher Education article, “Why Researchers Shouldn’t Share All Their Data.”

Dark Data Defined

Dark data is defined as the universe of information an organization collects, processes and stores – oftentimes for compliance reasons. Dark data never makes it into the official publication of a project. According to the Gartner Glossary, “storing and securing data typically incurs more expense (and sometimes greater risk) than value.”

This topic is reminiscent of the file drawer effect, a phenomenon in which a study's results influence whether the study is published. Negative results can be just as important as hypotheses that are proven.

Publication bias, the pressure to publish only positive research that supports the PI's hypothesis, is arguably not good science. In an article in the Indian Journal of Anaesthesia, authors Priscilla Joys Nagarajan, et al., wrote: “It is speculated that every significant result in the published world has 19 non-significant counterparts in file drawers.” That’s one definition of dark data.

Total Transparency

But what to do with all your excess information that did not make it to publication, most likely because of various constraints? Should everything, meaning every little tidbit, be readily available to the research community?

Schneider doesn’t think it should be. In his article, he writes that he hides some findings in a paper notebook or behind a password, and he keeps interviews and transcripts offline altogether to protect his sources.

Open-source

Open-source software communities tend to regard total transparency as inherently good. What are the advantages of total transparency? You may make connections between projects that you wouldn’t have otherwise. You can easily reproduce a peer’s experiment. You can even become more meticulous in your note-taking and experimental methods since you know it’s not private information. Similarly, journalists will recognize this thought pattern as the recent, popular call to engage in “open journalism.” Essentially, an author’s entire writing and editing process can be recorded, step by step.

TMI

This trend has led researchers to open-source programs like Jupyter and GitHub, which detail every change that occurs along a project’s timeline. But are unorganized, excessive amounts of unpublishable data really what transparency means? Or do they confuse those looking for meaningful research that is meticulously curated?

The Big Idea

And what about the “vulnerability” claim? Sharing every edit and every new direction taken opens a scientist up to scoffers and even harassment. In industry, total transparency can extend to publishing salaries, which can feel unfair to underrepresented, marginalized populations.

In Model View Culture, Ellen Marie Dash wrote: “Let’s give safety and consent the absolute highest priority, with openness and transparency prioritized explicitly below those. This means digging deep, properly articulating in detail what problems you are trying to solve with openness and transparency, and handling them individually or in smaller groups.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

"Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers." Photo via UH.edu

Houston expert explains health care's inequity problem

guest column

We are currently in the midst of what some have called the "wild west" of AI. Though healthcare is one of the most heavily regulated sectors, the regulation of AI in this space is still in its infancy. The rules are being written as we speak. We are playing catch-up by learning how to reap the benefits these technologies offer while minimizing any potential harms once they've already been deployed.

AI systems in healthcare exacerbate existing inequities. We've seen this play out in real-world consequences, from racial bias in the American justice system and credit scoring to gender bias in resume screening applications. Programs that are designed to bring machine "objectivity" and ease to our systems end up reproducing and upholding biases with no means of accountability.

The algorithm itself is seldom the problem. It is often the data used to program the technology that merits concern. But this is about far more than ethics and fairness. Building AI tools that take account of the whole picture of healthcare is fundamental to creating solutions that work.

The Algorithm is Only as Good as the Data

By nature of our own human systems, datasets are almost always partial and rarely ever fair. As Linda Nordling comments in a Nature article, A fairer way forward for AI in healthcare, "this revolution hinges on the data that are available for these tools to learn from, and those data mirror the unequal health system we see today."

Take, for example, the finding that Black people in US emergency rooms are 40 percent less likely to receive pain medication than are white people, and Hispanic patients are 25 percent less likely. Now, imagine the dataset these findings are based on is used to train an algorithm for an AI tool that would be used to help nurses determine if they should administer pain relief medication. These racial disparities would be reproduced and the implicit biases that uphold them would remain unquestioned, and worse, become automated.

We can attempt to reduce these biases by removing the data we believe causes the bias in training, but there will still be hidden patterns that correlate with demographic data. An algorithm cannot take in the nuances of the full picture; it can only learn from patterns in the data it is presented with.

Bias Creep

Data bias creeps into healthcare in unexpected ways. Consider the fact that animal models used in laboratories across the world to discover and test new pain medications are almost entirely male. As a result, many medications, including pain medication, are not optimized for females. So, it makes sense that even common pain medications like ibuprofen and naproxen have been proven to be more effective in men than women and that women tend to experience worse side effects from pain medication than men do.

In reality, male rodents aren't perfect test subjects either. Studies have also shown that both female and male rodents' responses to pain levels differ depending on the sex of the human researcher present. The stress response elicited in rodents to the olfactory presence of a sole male researcher is enough to alter their responses to pain.

While this example may seem to be a departure from AI, it is in fact deeply connected — the current treatment choices we have access to were implicitly biased before the treatments ever made it to clinical trials. The challenge of AI equity is not a purely technical problem, but a very human one that begins with the choices that we make as scientists.

Unequal Data Leads to Unequal Benefits

In order for all of society to enjoy the many benefits that AI systems can bring to healthcare, all of society must be equally represented in the data used to train these systems. While this may sound straightforward, it's a tall order to fill.

Data from some populations don't always make it into training datasets. This can happen for a number of reasons. Some data may not be as accessible or it may not even be collected at all due to existing systemic challenges, such as a lack of access to digital technology or simply being deemed unimportant. Predictive models are created by categorizing data in a meaningful way. But because there's generally less of it, "minority" data tends to be an outlier in datasets and is often wiped out as spurious in order to create a cleaner model.
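The outlier-cleaning effect described above is easy to demonstrate. In this hypothetical sketch (the records, groups, and cutoff are all invented for illustration), a standard z-score filter silently drops every record from the rarer group, because its valid measurements sit far from the majority's mean:

```python
from statistics import mean, stdev

# Illustrative records: (group, measurement). The minority group's values
# are valid but rare, so a naive outlier filter treats them as noise.
records = [("majority", v) for v in (10, 11, 9, 10, 12, 11, 10, 9, 10, 11)]
records += [("minority", 25), ("minority", 27)]

values = [v for _, v in records]
mu, sigma = mean(values), stdev(values)

# Standard z-score cleaning: keep points within 1.5 standard deviations.
cleaned = [(g, v) for g, v in records if abs(v - mu) <= 1.5 * sigma]

remaining_groups = {g for g, _ in cleaned}
print(remaining_groups)  # the minority group has been wiped from the model's view
```

Any model trained on `cleaned` never sees the minority group at all — exactly the "wiped out as spurious" failure mode the paragraph describes.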

Data source matters because this detail unquestionably affects the outcome and interpretation of healthcare models. In sub-Saharan Africa, young women are diagnosed with breast cancer at a significantly higher rate. This reveals the need for AI tools and healthcare models tailored to this demographic group, as opposed to AI tools used to detect breast cancer that are only trained on mammograms from the Global North. Likewise, a growing body of work suggests that algorithms used to detect skin cancer tend to be less accurate for Black patients because they are trained mostly on images of light-skinned patients. The list goes on.

We are creating tools and systems that have the potential to revolutionize the healthcare sector, but the benefits of these developments will only reach those represented in the data.

So, what can be done?

Part of the challenge in getting bias out of data is that high-volume, diverse, and representative datasets are not easy to access. Training datasets that are publicly available tend to be extremely narrow, low-volume, and homogeneous — they capture only a partial picture of society. At the same time, a wealth of diverse health data is captured every day in many healthcare settings, but data privacy laws make accessing these more voluminous and diverse datasets difficult.

Data protection is of course vital. Big Tech and governments do not have the best track record when it comes to the responsible use of data. However, if transparency, education, and consent for the sharing of medical data were more purposefully regulated, far more diverse and high-volume datasets could contribute to fairer representation across AI systems and result in better, more accurate results for AI-driven healthcare tools.

But data sharing and access is not a complete fix to healthcare's AI problem. Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers. At the end of the day, we want to teach our algorithms to make good choices but we are still figuring out what good choices should look like for ourselves.

AI presents the opportunity to bring greater personalization to healthcare, but it equally presents the risk of entrenching existing inequalities. We have the opportunity in front of us to take a considered approach to data collection, regulation, and use that will provide a fuller and fairer picture and enable the next steps for AI in healthcare.

------

Angela Wilkins is the executive director of the Ken Kennedy Institute at Rice University.

In a guest column, these lawyers explain the pros and cons of using AI for hiring. Photo via Getty Images

Here's what Houston employers need to know about using artificial intelligence in the hiring process

guest column

Workplace automation has entered the human resource department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed to save time, improve the quality of a workforce, and eliminate unlawful hiring biases. But is AI incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination and federal law holds companies accountable for their hiring decisions, even if those decisions were made in a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve the employers that use them from liability. This is because the law contemplates both intentional discrimination (disparate treatment) as well as unintentional discrimination (disparate impact). The larger risk for AI lies with disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as on one's race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from certain zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from other zip codes of different racial composition. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is largely comprised of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.
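The feedback loop described above can be made concrete with a toy screener. This hypothetical sketch (all resumes, tokens, and the zip-code field are fabricated; real products use far richer models) scores candidates by overlap with incumbent employees' resumes — and an incidental shared trait like a zip code tips the ranking:

```python
# A toy resume screener: score candidates by token overlap with the
# resumes of existing high performers. Incidental traits shared with
# incumbents (here, the zip code "77005") inflate scores.

def tokens(text):
    return set(text.lower().split())

existing_hires = [
    "python analytics 77005 state-u",
    "python sql 77005 state-u",
]
hire_profile = set.union(*(tokens(r) for r in existing_hires))

candidates = {
    "A": "python sql analytics 77005",  # matches incumbents' zip code
    "B": "python sql analytics 77089",  # same skills, different zip code
}

scores = {name: len(tokens(r) & hire_profile) for name, r in candidates.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

Candidates A and B have identical skills, yet A outranks B purely because of the zip-code token — the kind of proxy discrimination that is invisible unless the inputs are audited.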

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reticent to disclose details about proprietary information, but employers will ultimately be responsible for discriminatory impacts. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
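One widely used audit is the EEOC's "four-fifths rule": a selection rate for any protected group that is less than 80 percent of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below shows the arithmetic; the group names and pass/fail counts are hypothetical.

```python
# Four-fifths (80%) rule check on a screening tool's outcomes.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups selected at less than 80% of the top group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return sorted(g for g, r in rates.items() if r / top < threshold)

# Hypothetical audit data: 50 of 100 group_a applicants advanced,
# versus 30 of 100 group_b applicants.
screening_results = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(screening_results))  # group_b's rate is 60% of group_a's
```

A flag is not legal proof of discrimination, but it tells an employer which outcomes warrant a closer look before the EEOC does.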

Third, employers should stay abreast of developments in the law. This is an emerging field and state legislators have taken notice. Illinois recently passed legislation governing the use of AI in the workplace, and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.

Artificial intelligence is changing Houston — one industry at a time. Photo via Getty Images

3 ways artificial intelligence is changing Houston's future

Guest column

Artificial intelligence is the buzzword of the decade. From grocery shopping assistance to personal therapy apps, AI has sunk its teeth into every single industry. Houston is no exception to the AI boom. Enterprise-level companies and startups are already flocking to H-town to make their mark in AI and machine learning.

Since the world is generating more data every minute — 1,736 terabytes, to be exact — Houston-based companies are already thinking ahead about how to make sense of all of that information in real time. That's where AI comes in. By 2021, 80 percent of emerging technologies will have AI foundations — Houston is already ninth on the list of AI-ready cities in the world.

AI and machine learning can process large amounts of data quickly and use that data to inform decisions much like a human would. Here are three ways Houston-based companies are using these emerging technologies to revolutionize the city's future.

Health care

The health care industry is primed for AI's personalization capabilities. Each patient a doctor or nurse encounters has different symptoms, health backgrounds, and prescriptions to keep track of. Managing that amount of information incorrectly can be dangerous. With AI, diseases are diagnosed more quickly, medications are administered more accurately, and nurses get help monitoring patients.

Decisio Health Inc., a Houston-based health tech startup, has already made its mark in the health care industry with AI software helping to tackle the COVID-19 pandemic. The software, in collaboration with GE Healthcare Inc., allows health care providers to remotely monitor patients. By looking at data from ventilators, patient monitoring systems, health records, and other sources, doctors can make better decisions about patients from a safe distance.

Climate change

Climate change won't be solved overnight. It's an issue that covers water salinity, deforestation, and even declining bee populations. With a problem as large as climate change, huge amounts of data are collected and need to be analyzed. AI can interpret all of that information, model possible future outcomes, track current weather patterns, and find solutions to environmental destruction.

One Houston-based company in the energy tech industry, Enovate Upstream, has created a new AI platform that will help digitize the oil and gas sector. Its AI-powered platform looks at data from digital drilling, digital completions, and digital production to give oil companies real-time production forecasting. The work aims to make oil production more efficient and reduce its carbon emissions. Since drilling and fracking are major concerns in the climate conversation, efficiency gains like these could help slow climate change and make the industry as a whole more climate-conscious.

Energy

Energy is an industry rich with data opportunities — and as Houston's energy sector grows, AI has become a core part of its work. Houston's large influence in the energy sector has primed it for AI integration from startups like Adapt2 Solutions Inc. By using AI and machine learning in its software, the startup hopes to help energy companies make strategic predictions about how to serve energy to the public efficiently. This work has become especially important in the wake of COVID-19 and the resulting shifts in energy needs.

Another Houston-based company using AI to influence the energy industry is the retail energy startup Evolve Energy. Its AI and machine learning system helps customers find better prices on fluctuating renewable resources — helping them save money on electricity and reduce emissions. The positive public feedback on its AI model shows how energy companies can use emerging technologies like AI in ways that benefit their communities.

The bottom line

Houston is more primed than most cities to integrate AI and machine learning into every industry. While there are valid concerns as to how much we should lean on technology for necessary daily tasks, it's clear that AI isn't going anywhere. And it's clear that Houston is currently taking the right steps to continue its lead in this emerging AI market.

------

Natasha Ramirez is a Utah-based tech writer.


Houston innovator on the importance of commercial missions for the future of space health research

HOUSTON INNOVATORS PODCAST EPISODE 189

With the rise of commercial space flight, researchers have increased access to space health data that's key to the future of the industry as a whole. The organization that's conducting this valuable research is based right in Houston's Texas Medical Center.

TRISH, or the Translational Research Institute for Space Health, is an organization based out of Baylor College of Medicine and partnered with NASA's Human Spaceflight group. As commercial space companies have emerged, TRISH has strategically aligned with these businesses to bring back health data from the civilian trips.

“Most of the research that’s done at NASA and other government agencies usually takes decades to get something that could be implemented in space or terrestrially," Dr. Emmanuel Urquieta, chief medical officer for TRISH, says on the Houston Innovators Podcast. "What we do at TRISH is something different.

"On the one hand, we look at really new technologies that are just an idea, but could be really game changing," he continues. "Then on the other hand, we look at technologies already in the market that could be tweaked to work in spaceflight.”

Since 2021, TRISH has conducted its research on four missions — Inspiration4, the first all-civilian mission to space; Axiom Mission 1, the first all-civilian mission to the International Space Station; MS20, which flew two Japanese civilians to the ISS; and, most recently, Axiom Mission 2, which included the first all-private crew commanded by a woman and two members of the Kingdom of Saudi Arabia's national astronaut program.

“We really saw the value of implementing research in civilians because they are different from your traditional government astronaut,” Urquieta says. “In civilians, you see a more diverse population.”

Urquieta says TRISH's experiments on these missions all fall within a few pillars of space health, including space's effects on sensorimotor skills, like balance and motion sickness, as well as mental health, environmental data from the vehicles, vitals monitoring, and more.

“We’ve developed a capability to collect high-priority, high-value data from these space flight participants without having to train them for long periods of time — which is a challenge, because they don’t train for years like traditional astronauts,” he explains.

The plan, Urquieta says, is to be able to share TRISH's space health data in order to more safely send humans into space. He shares more about TRISH's program and the challenges the organization faces on the show. Listen to the interview below — or wherever you stream your podcasts — and subscribe for weekly episodes.

Massive sports festival reveals plans to take over downtown Houston next spring

pokatok prep

A Houston team has announced plans to bring the “world’s fair for sports” to downtown Houston in April 2024.

Pokatok, a four-day festival, will feature a sports tech expo, a film festival, speakers and panels, live music, pitch competitions, and more. The venues will be the George R. Brown Convention Center, Discovery Green, and various nearby hotels, according to the release.

Gow Companies, founded by Lawson Gow (son of David Gow, CEO of InnovationMap's parent company), announced that the team has secured support from Houston First, the Greater Houston Partnership, and the Harris County-Houston Sports Authority to put on the event, which is slated to take place April 4-7, 2024. The company also owns Houston Exponential and a sports accelerator called Pokatok Labs.

“Pokatok will not only be the largest gathering of the entire sports tech ecosystem, it will also be a true fan festival for sports enthusiasts,” says Gow in the news release. “Everyone speaks the language of sport, it’s an incredibly powerful unifier of our society, and this festival will bring together people from around the world to experience hundreds of events revolving around the new and the next in sport.”

The festival will take place in downtown Houston in April 2024. Rendering courtesy of Pokatok

The festival will feature two tracks — one focused on sports innovation and the other surrounding a fan experience. Pokatok X will include an expo and showcase focused on sports innovation, bringing together startups, investors, accelerators, athletes, and industry experts to dive into sports tech.

The Pokatok Fan Festival track will include product releases, demos for sports technology, sporting events, competitions, tournaments, and more.

Houston is no stranger to hosting major sports events, Harris County-Houston Sports Authority CEO Janis Burke points out in the news release, including the 2023 NCAA Men’s Final Four and the upcoming 2024 College Football National Championship, the 2024 Cricket World Cup, and the 2026 FIFA World Cup.

"Houston is known as one of the best sports destinations in the world," Burke continues. "As an organization, we are consistently looking for ways to innovate and grow in the sports sector. Events like Pokatok are great for advancing sports within the region and providing unique opportunities for our community!"

Tickets are expected to go on sale in the fall, and the organization is looking for potential speakers and partners. The festival's name derives from the sport of pok-a-tok, played throughout Mesoamerica and dating back thousands of years as the world’s first team sport.

“The City of Houston is a sports town to its core and has been host to some of the greatest events and moments in sports,” says Mayor Sylvester Turner in the release. “Pokatok will help further Houston’s vision of being a destination city for global sporting events and innovations. The business community also supports this venture, and I thank them for their involvement and support. This project is an excellent example of local business leaders joining forces to expand the attractions the City has to offer to both residents and visitors.”

Pokatok will take place in and around the George R. Brown Convention Center. Rendering courtesy of Pokatok

Greentown announces startup accelerator with multinational manufacturer

go make

A climatetech incubator with locations in Houston and Somerville, Massachusetts, has announced an accelerator program with a corporate partner.

Greentown Labs and Saint-Gobain, a multinational manufacturer and distributor of high-performance materials, have opened applications for Greentown Go Build 2023. The program intends to support and accelerate startup-corporate partnerships to advance climatetech, specifically focused on circularity and decarbonizing the built environment, per a news release from Greentown.

It's the third Greentown Go Build program the incubator has hosted. Applications, which are open online, are due by August 31.

“The Greentown Go Build program is an opportunity for innovative startups to share how they are disrupting the construction market with innovative and sustainable solutions that address the need for circularity and sustainability and that align with our mission of making the world a better home,” says Minas Apelian, vice president of external and internal venturing at Saint-Gobain. “Through this program, we are eager to identify companies dedicated to reducing our reliance on raw materials and associated supply chain risk to ensure circular solutions result in profitable, sustainable growth for business and sustainable construction solutions for our industries.”

During the program's six months, selected startups will have access to mentorship, networking opportunities, and workshops. Program benefits for the participating startups, according to Greentown, include:

  • Access to a structured platform to engage leadership from Saint-Gobain and explore potential partnerships
  • A $25,000 stipend per startup
  • Access to Greentown's community of mentors, partners, and climatetech startup experts
  • Access to the Saint-Gobain network
  • Desk space and membership within Greentown for the duration of the program

“We are thrilled to be building on our successful track record of Greentown Go programs with Saint-Gobain and look forward to driving decarbonization of the sector through startup-corporate partnerships,” says Kevin T. Taylor, CFO and interim CEO at Greentown Labs. “Saint-Gobain has been an exemplary partner for our Greentown Go programs and for Greentown more broadly—working collaboratively with our startups and deploying many of their technologies. We are eager to meet the world-class building tech startups that apply for the program.”