InformAI has three AI-based products geared at improving health care. Photo via Getty Images

In Houston, we’re lucky to have top-tier doctors in the Texas Medical Center, ready to treat us with the newest technology. But what about our family members who have to rely on rural hospitals? Thanks to one Houston company, doctors in smaller community hospitals may soon have new tools at their disposal that could improve outcomes for patients around the world.

Since InnovationMap last caught up with Jim Havelka, CEO of InformAI, two years ago, that hope has come far closer to reality. InformAI is a VC-backed digital health company. Part of the JLABS @ TMC innovation facility, the company uses artificial intelligence to develop both diagnostic tools and clinical outcome predictors. Two of the company's products will undergo FDA regulatory testing this year.

SinusAI, which helps to detect sinus-related diseases in CT scans, received its CE Mark — the European equivalent of FDA approval — last year and is being sold across the Atlantic today, says Havelka. He adds that in the United States alone, there are roughly 700,000 sinus surgeries that the product is positioned to support.

Another product, RadOnc-AI, is designed to help doctors prescribe radiation dose plans for head and neck cancers.

“Ideally the perfect plan would be to provide radiation to the tumor and nothing around it,” says Havelka. “We’ve built a product, RadOnc-AI, which autogenerates the dose treatment plan based on medical images of that patient.”

It can be an hours-long process for doctors to figure out the path and dose of radiation themselves, but the new product “can build that initial pass in about five minutes,” Havelka says.

That in itself is an exciting development, but because this technology was developed using the expertise of some of the world's top oncologists, "the first pass plan is in line with what [patients would] get at tier-one institutions," explains Havelka. This creates "tremendous equity" between patients who can afford to travel to major facilities and those who can't.

To that end, RadOnc-AI was recently awarded a $1.55 million grant from the Cancer Prevention and Research Institute of Texas, or CPRIT, a state agency that funds cancer research. The Radiological Society of North America announced late last year that InformAI was named an Aunt Minnie Best of Radiology Finalist.

“It’s quite prestigious for our company,” says Havelka. Other recent laurels include InformAI being named one of the 10 most promising companies by the Texas Life Science Forum in November.

And InformAI is only gaining steam. A third product, TransplantAI, is at an earlier stage of development; it will optimize matches between donor organs and patient recipients.

“A lot of organs are harvested and discarded,” Havelka says.

His AI product has been trained on a million donor transplants to help determine who is the best recipient for an organ. It even takes urgency into account, based on a patient’s expected mortality within 90 days. The product is currently a fully functional prototype and will soon move through its initial regulatory clearances.
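In heavily simplified form, a matching engine like the one described can be sketched as a score that blends donor-recipient compatibility with clinical urgency. The fields, weights, and formula below are invented for illustration only; they are not InformAI's actual model.

```python
# Hypothetical sketch: rank candidate recipients for a donor organ by
# blending a compatibility score with a clinical-urgency term.
# All fields and weights are invented for illustration.

def match_score(compatibility, mortality_risk_90d, urgency_weight=0.5):
    """Blend biological compatibility (0-1) with 90-day mortality risk (0-1)."""
    return (1 - urgency_weight) * compatibility + urgency_weight * mortality_risk_90d

def rank_recipients(candidates):
    """Return candidates sorted from strongest match to weakest."""
    return sorted(
        candidates,
        key=lambda c: match_score(c["compatibility"], c["mortality_risk_90d"]),
        reverse=True,
    )

candidates = [
    {"id": "A", "compatibility": 0.95, "mortality_risk_90d": 0.10},
    {"id": "B", "compatibility": 0.80, "mortality_risk_90d": 0.70},
    {"id": "C", "compatibility": 0.60, "mortality_risk_90d": 0.20},
]

ranked = rank_recipients(candidates)
```

In this toy data, the urgency weight pulls the high-risk candidate B ahead of the better biological match A, which is the trade-off the article describes.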

The company — currently backed by three VC funds, including DEFTA Partners, Delight Ventures, and Joyance Partners — is planning to do another seed round in Q2 of 2023.

“We’ve been able to get recognized for digital health products that can be taken to market globally,” says Havelka.

But what he says he’s most excited about is the social impact of his products. With more money raised, InformAI will be able to speed up development of additional products, including expanding the cancers that the company will be targeting. And with that, more and more patients will one day be treated with the highest level of care.

Let's talk about dark data — what it means and how to navigate it. Graphic by Miguel Tovar/University of Houston

Houston expert: Navigating dark data within research and innovation

houston voices

Is it necessary to share ALL your data? Is transparency a good thing, or does it make researchers "vulnerable," as author Nathan Schneider suggests in the Chronicle of Higher Education article "Why Researchers Shouldn't Share All Their Data"?

Dark Data Defined

Dark data is defined as the universe of information an organization collects, processes, and stores, oftentimes for compliance reasons, that never makes it into the official publication of a project. According to the Gartner Glossary, "storing and securing data typically incurs more expense (and sometimes greater risk) than value."

This topic is reminiscent of the file drawer effect, the phenomenon in which a study's results influence whether the study gets published at all. Negative results can be just as important as confirmed hypotheses.

Publication bias, the pressure to publish only positive research that supports the PI's hypothesis, is arguably not good science. In an article in the Indian Journal of Anaesthesia, Priscilla Joys Nagarajan et al. write: "It is speculated that every significant result in the published world has 19 non-significant counterparts in file drawers." That's one definition of dark data.

Total Transparency

But what to do with all your excess information that did not make it to publication, most likely because of various constraints? Should everything, meaning every little tidbit, be readily available to the research community?

Schneider doesn’t think it should be. In his article, he writes that he hides some findings in a paper notebook or behind a password, and he keeps interviews and transcripts offline altogether to protect his sources.

Open-source

Open-source software communities tend to regard total transparency as inherently good. What are the advantages of total transparency? You may make connections between projects that you wouldn’t have otherwise. You can easily reproduce a peer’s experiment. You can even become more meticulous in your note-taking and experimental methods since you know it’s not private information. Similarly, journalists will recognize this thought pattern as the recent, popular call to engage in “open journalism.” Essentially, an author’s entire writing and editing process can be recorded, step by step.

TMI

This trend has led researchers to open-source platforms like Jupyter and GitHub, which record every change that occurs along a project's timeline. But are unorganized, excessive amounts of unpublishable data really what transparency means? Or do they confuse those looking for meaningful research that is meticulously curated?

The Big Idea

And what about the "vulnerability" claim? Sharing every edit and every new direction taken opens a scientist up to scoffers, and even harassment. Radical transparency in industry can even extend to publishing salaries, which can feel unfair to underrepresented, marginalized populations.

In Model View Culture, Ellen Marie Dash wrote: “Let’s give safety and consent the absolute highest priority, with openness and transparency prioritized explicitly below those. This means digging deep, properly articulating in detail what problems you are trying to solve with openness and transparency, and handling them individually or in smaller groups.”

------

This article originally appeared on the University of Houston's The Big Idea. Sarah Hill, the author of this piece, is the communications manager for the UH Division of Research.

"Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers." Photo via UH.edu

Houston expert explains health care's inequity problem

guest column

We are currently in the midst of what some have called the "wild west" of AI. Though healthcare is one of the most heavily regulated sectors, the regulation of AI in this space is still in its infancy. The rules are being written as we speak. We are playing catch-up by learning how to reap the benefits these technologies offer while minimizing any potential harms once they've already been deployed.

AI systems in healthcare exacerbate existing inequities. We've seen this play out with real-world consequences, from racial bias in the American justice system and credit scoring to gender bias in resume screening applications. Programs that are designed to bring machine "objectivity" and ease to our systems end up reproducing and upholding biases with no means of accountability.

The algorithm itself is seldom the problem. It is often the data used to program the technology that merits concern. But this is about far more than ethics and fairness. Building AI tools that take account of the whole picture of healthcare is fundamental to creating solutions that work.

The Algorithm is Only as Good as the Data

By nature of our own human systems, datasets are almost always partial and rarely ever fair. As Linda Nordling comments in a Nature article, A fairer way forward for AI in healthcare, "this revolution hinges on the data that are available for these tools to learn from, and those data mirror the unequal health system we see today."

Take, for example, the finding that Black people in US emergency rooms are 40 percent less likely to receive pain medication than are white people, and Hispanic patients are 25 percent less likely. Now, imagine the dataset these findings are based on is used to train an algorithm for an AI tool that would be used to help nurses determine if they should administer pain relief medication. These racial disparities would be reproduced and the implicit biases that uphold them would remain unquestioned, and worse, become automated.

We can attempt to reduce these biases by removing the data we believe causes the bias from training, but there will still be hidden patterns that correlate with demographic data. An algorithm cannot take in the nuances of the full picture; it can only learn from patterns in the data it is presented with.
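The proxy problem described above can be made concrete with a toy sketch: a model trained on biased historical decisions still reproduces the disparity even after the protected attribute is removed, because another feature (here, zip code) stands in for it. All numbers are invented for illustration.

```python
# Illustrative sketch (invented numbers): a model trained on biased
# historical treatment decisions reproduces the bias even when the
# protected attribute is removed, because zip code acts as a proxy.

from collections import defaultdict

# Historical records: (zip_code, group, was_given_pain_medication).
# The group label is used only to *measure* the disparity afterward.
records = (
    [("77001", "white", True)] * 80 + [("77001", "white", False)] * 20 +
    [("77002", "black", True)] * 48 + [("77002", "black", False)] * 52
)

# "Train" without the group column: a per-zip majority vote.
votes = defaultdict(lambda: [0, 0])  # zip -> [not treated, treated]
for zip_code, _group, treated in records:
    votes[zip_code][1 if treated else 0] += 1

def predict(zip_code):
    """Recommend medication if most past patients in this zip received it."""
    not_treated, treated = votes[zip_code]
    return treated > not_treated

# The model never saw race, yet its recommendations split cleanly by zip,
# and in this toy data zip correlates perfectly with group.
```

The point is not the model (a majority vote is deliberately crude) but that dropping the sensitive column did nothing: the historical disparity survives intact through the proxy feature.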

Bias Creep

Data bias creeps into healthcare in unexpected ways. Consider the fact that animal models used in laboratories across the world to discover and test new pain medications are almost entirely male. As a result, many medications, including pain medication, are not optimized for females. So, it makes sense that even common pain medications like ibuprofen and naproxen have been proven to be more effective in men than women and that women tend to experience worse side effects from pain medication than men do.

In reality, male rodents aren't perfect test subjects either. Studies have also shown that both female and male rodents' responses to pain levels differ depending on the sex of the human researcher present. The stress response elicited in rodents to the olfactory presence of a sole male researcher is enough to alter their responses to pain.

While this example may seem to be a departure from AI, it is in fact deeply connected — the current treatment choices we have access to were implicitly biased before the treatments ever made it to clinical trials. The challenge of AI equity is not a purely technical problem, but a very human one that begins with the choices that we make as scientists.

Unequal Data Leads to Unequal Benefits

In order for all of society to enjoy the many benefits that AI systems can bring to healthcare, all of society must be equally represented in the data used to train these systems. While this may sound straightforward, it's a tall order to fill.

Data from some populations don't always make it into training datasets. This can happen for a number of reasons. Some data may not be as accessible or it may not even be collected at all due to existing systemic challenges, such as a lack of access to digital technology or simply being deemed unimportant. Predictive models are created by categorizing data in a meaningful way. But because there's generally less of it, "minority" data tends to be an outlier in datasets and is often wiped out as spurious in order to create a cleaner model.
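The outlier-trimming effect is easy to demonstrate: a routine "drop everything beyond two standard deviations" cleaning step can silently erase a small, distinct subgroup. The numbers below are invented for illustration.

```python
# Toy sketch: trimming "outliers" beyond two standard deviations of the
# mean removes an entire small subgroup whose values cluster away from
# the majority. Numbers are invented for illustration.

import statistics

majority = [100.0 + (i % 5) for i in range(95)]   # tight cluster, 100-104
minority = [130.0, 131.0, 132.0, 133.0, 134.0]    # small, distinct subgroup
data = majority + minority

mean = statistics.fmean(data)
sd = statistics.pstdev(data)
cleaned = [x for x in data if abs(x - mean) <= 2 * sd]
```

After "cleaning," every minority point is gone and every majority point survives: the resulting model is tidier, and blind to 5 percent of the population.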

Data source matters because this detail unquestionably affects the outcome and interpretation of healthcare models. In sub-Saharan Africa, young women are diagnosed with breast cancer at a significantly higher rate. This reveals the need for AI tools and healthcare models tailored to this demographic group, as opposed to AI tools used to detect breast cancer that are only trained on mammograms from the Global North. Likewise, a growing body of work suggests that algorithms used to detect skin cancer tend to be less accurate for Black patients because they are trained mostly on images of light-skinned patients. The list goes on.

We are creating tools and systems that have the potential to revolutionize the healthcare sector, but the benefits of these developments will only reach those represented in the data.

So, what can be done?

Part of the challenge in getting bias out of data is that high-volume, diverse, and representative datasets are not easy to access. Training datasets that are publicly available tend to be extremely narrow, low-volume, and homogeneous; they capture only a partial picture of society. At the same time, a wealth of diverse health data is captured every day in many healthcare settings, but data privacy laws make accessing these more voluminous and diverse datasets difficult.

Data protection is of course vital. Big Tech and governments do not have the best track record when it comes to the responsible use of data. However, if transparency, education, and consent for the sharing of medical data were more purposefully regulated, far more diverse and high-volume datasets could contribute to fairer representation across AI systems and result in better, more accurate AI-driven healthcare tools.

But data sharing and access is not a complete fix to healthcare's AI problem. Better and personalized healthcare through AI is still a hugely challenging problem that will take an army of scientists and engineers. At the end of the day, we want to teach our algorithms to make good choices but we are still figuring out what good choices should look like for ourselves.

AI presents the opportunity to bring greater personalization to healthcare, but it equally presents the risk of entrenching existing inequalities. We have the opportunity in front of us to take a considered approach to data collection, regulation, and use that will provide a fuller and fairer picture and enable the next steps for AI in healthcare.

------

Angela Wilkins is the executive director of the Ken Kennedy Institute at Rice University.

UH is officially part of an initiative to diversify machine learning research. Photo courtesy of University of Houston

University of Houston joins $50M initiative to expand and diversify AI and machine learning research

money moves

A $50 million grant from the National Institutes of Health is expanding research in machine learning and artificial intelligence, and the University of Houston now has a seat at the table.

UH has joined in on a national initiative to increase the diversity of artificial intelligence researchers, according to a news release from the school. Thanks to a $50 million grant from the National Institutes of Health, the University of North Texas Health Science Center will lead the coordinating center of the AIM-AHEAD program, which stands for Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity.

"Beyond health care, AI has been used in areas from facial recognition to self-driving cars and beyond, but there is an extreme lack of diversity among the developers of AI/ML tools. Many studies have shown that flawed AI systems and algorithms perpetuate gender and racial biases and have resulted in untoward outcomes," says Bettina Beech, chief population health officer at the University of Houston and newly named AIM-AHEAD coordinating center team member.

The initiative will bring together collaborators and experts across AI and machine learning, health equity research, data science training, data infrastructure and more. The other universities involved include: University of Colorado-Anschutz Medical Center in Aurora; University of California, Los Angeles; Meharry Medical College in Nashville; Morehouse School of Medicine in Atlanta; Johns Hopkins University, and Vanderbilt University Medical Center.

"This network will be foundational to achieving the goals of the AIM-AHEAD program, which include providing more inclusive data for health disparities research, and enhancing the diversity of AI/ML leadership," says Susan Gregurick, NIH associate director for data science, in the release.

Unfortunately, AI — designed by humans — mimics human decision making through its choice of algorithms. This means that the same biases humans deal with have made it into AI decision making too. These gaps can lead to continued disparities and inequities for underrepresented communities, especially in regard to health care, job hiring, and more.

"AI solutions need to be implemented in a responsible manner and are now guided by AI ethical FAIR (findable, accessible, interoperable, reusable) principles," says Beech in the release. "The AIM-AHEAD project directly connects with the University of Houston's plan to train and diversify the future workforce in population health, increase the use of digital tools for chronic disease self-management, and to advance population health research."

Bettina Beech is the chief population health officer at the University of Houston and newly named AIM-AHEAD coordinating center team member. Photo via UH.edu

This Houston startup has a game-changing technology for deep learning. Photo via Getty Images

Houston artificial intelligence startup raises $6M in seed funding

money moves

A computer science professor at Rice University raised seed funding last month to grow his company, which is focused on democratizing artificial intelligence tools.

ThirdAI, founded by Anshumali Shrivastava in April, raised $6 million in a seed funding round from three California-based VCs — Neotribe Ventures and Cervin Ventures, which co-led the round with support from Firebolt Ventures.

Shrivastava, CEO, co-founded the company with Tharun Medini, a recent Ph.D. graduate who studied under Shrivastava in Rice's Department of Electrical and Computer Engineering. Medini serves as the CTO of ThirdAI — pronounced "third eye." The startup is building the next generation of scalable and sustainable AI tools and deep learning systems.

"We are democratizing artificial intelligence through software innovations," says Shrivastava in a news release from Rice. "Our innovation would not only benefit current AI training by shifting to lower-cost CPUs, but it should also allow the 'unlocking' of AI training workloads on GPUs that were not previously feasible."

The technology ThirdAI is working with comes from 10 years of deep learning research and innovation. The company's technology has the potential to make computing 15 times faster.

"ThirdAI has developed a breakthrough approach to train deep learning models with a large number of parameters that run efficiently on general purpose CPUs. This technology has the potential to result in a gigantic leap forward in the accuracy of deep learning models," per an announcement from Cervin Ventures. "Our investment in ThirdAI was a no-brainer and we are fortunate to have had the opportunity to invest."

Anshumali Shrivastava is an associate professor of computer science at Rice University. Photo via rice.edu

In a guest column, these lawyers explain the pros and cons of using AI for hiring. Photo via Getty Images

Here's what Houston employers need to know about using artificial intelligence in the hiring process

guest column

Workplace automation has entered the human resources department. Companies rely increasingly on artificial intelligence to source, interview, and hire job applicants. These AI tools are marketed to save time, improve the quality of a workforce, and eliminate unlawful hiring biases. But is AI incapable of hiring discrimination? Can a company escape liability for discriminatory hiring because "the computer did it"?

Ultimately, whether AI is a solution or a landmine depends on how carefully companies implement the technology. AI is not immune from discrimination and federal law holds companies accountable for their hiring decisions, even if those decisions were made in a black server cabinet. The technology can mitigate bias, but only if used properly and monitored closely.

Available AI tools

The landscape of AI technology is continually growing and covers all portions of the hiring process — recruiting, interviewing, selection, and onboarding. Some companies use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates' resumes best match the requirements of open positions. And some employers use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits.

Federal anti-discrimination law

Although AI tools likely have no intent to unlawfully discriminate, that does not absolve employers of liability. This is because the law contemplates both intentional discrimination (disparate treatment) as well as unintentional discrimination (disparate impact). The larger risk for AI lies with disparate impact claims. In such lawsuits, intent is irrelevant. The question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as on one's race, color, national origin, gender, or religion.

The Equal Employment Opportunity Commission, the federal agency in charge of enforcing workplace anti-discrimination laws, has demonstrated an interest in AI and has indicated that such technology is not an excuse for discriminatory impacts.

Discrimination associated with AI tools

The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender, and the tool is instructed to find comparable resumes, then the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. Sample resumes may include employees from certain zip codes that are home to predominately one race or color. An AI tool may favor those zip codes, disfavoring applicants from other zip codes of different racial composition. Older candidates may be disfavored by an algorithm's preference for ".edu" email addresses. In short, if a workforce is largely comprised of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.

Steps to mitigate risk

There are a handful of steps that employers can take to use these technologies and remain compliant with anti-discrimination laws.

First, companies should demand that AI vendors disclose as much as possible about how their products work. Vendors may be reluctant to share details about proprietary technology, but employers will ultimately be responsible for discriminatory impacts. Thus, as part of contract negotiations, a company should consider seeking indemnification from the vendor for discrimination claims.

Second, companies should consider auditing the tool to ensure it does not yield a disparate impact on protected individuals. Along the same lines, companies should be careful in selecting input data. If the inputs reflect a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
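One common way to run such an audit is the "four-fifths" rule of thumb from the EEOC's Uniform Guidelines: flag the tool if any group's selection rate falls below 80 percent of the highest group's rate. A minimal sketch, with invented counts (this is a screening heuristic, not legal advice):

```python
# Minimal disparate-impact audit using the EEOC "four-fifths" rule of
# thumb: a group is flagged if its selection rate is below 80% of the
# most-selected group's rate. Counts are invented for illustration.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: True if the group passes the four-fifths test}."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top) >= threshold for g, rate in rates.items()}

audit = four_fifths_check({
    "group_a": (50, 100),   # 50% selected
    "group_b": (30, 100),   # 30% selected -> ratio 0.6, flagged
})
```

Running the audit periodically against the tool's actual pass-through rates, rather than once at procurement, is what catches drift as the candidate pool changes.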

Third, employers should stay abreast of developments in the law. This is an emerging field and state legislators have taken notice. Illinois recently passed regulation governing the use of AI in the workplace and other states, including New York, have introduced similar bills.

AI can solve many hiring challenges and help cultivate a more diverse and qualified workforce. But the tools are often only as unbiased as the creators and users of that technology. Careful implementation will ensure AI becomes a discrimination solution — not a landmine.

------

Kevin White is a partner and Dan Butler is an associate with Hunton Andrews Kurth LLP, which has an office in Houston.


Houston-based creator economy platform goes live nationally

so clutch

An app that originally launched on Houston college campuses has announced it's now live nationwide.

Clutch founders Madison Long and Simone May set out to make it easier for the younger generation to earn money with their skill sets. After launching a beta at local universities last fall, Clutch's digital marketplace is now live for others to join in.

The platform connects brands to its network of creators for reliable and authentic work — everything from social media management, video creation, video editing, content creation, graphic design projects, and more. With weekly payments to creators and an inclusive platform for users on both sides of the equation, Clutch aims to make digital collaboration easier and more reliable for everyone.

“We’re thrilled to bring our product to market to make sustainable, authentic lifestyles available to everyone through the creator economy," says May, CTO and co-founder of Clutch. "We’re honored to be part of the thriving innovation community here in Houston and get to bring more on-your-own-terms work opportunities to all creators and businesses through our platform.”

In its beta, Clutch facilitated collaborations for over 200 student creators and 50 brands — such as DIGITS and nama. The company is founded with a mission of "democratizing access to information and technology and elevating the next generation for all people," according to a news release from Clutch. In the beta, 75 percent of the creators were people of color and around half of the businesses were owned by women and people of color.

“As a Clutch Creator, I set my own pricing, schedule and services when collaborating on projects for brands,” says Cathy Syfert, a creator through Clutch. “Clutch Creators embrace the benefits of being a brand ambassador as we create content about the products we love, but do it on behalf of the brands to help the brands grow authentically."

The newly launched product has the following features:

  • Creator profile, where users can share their services, pricing, and skills and review inquiries from brands.
  • Curated matching from the Clutch admin team.
  • Collab initiation, where users can accept or reject incoming collab requests with brands.
  • Collab management — communication, timing, review cycles — all within the platform.
  • In-app payments with a weekly amount selected by the creators themselves.
  • Seamless cancellation for both brands and creators.
Clutch raised $1.2 million in seed funding from Precursor Ventures, Capital Factory, HearstLab, and more. Clutch was originally founded as Campus Concierge in 2021 and has gone through the DivInc Houston program at the Ion.

Madison Long, left, and Simone May co-founded Clutch. Photo courtesy of Clutch

2 Houston suburbs roll onto top-15 spots on U-Haul’s list of growing cities

on the move

More movers hauled their belongings to Texas than any other state last year. And those headed to the Greater Houston area were mostly pointed toward Missouri City and Conroe, according to a new study.

In its recently released annual growth report, U-Haul ranks Missouri City and Conroe at No. 13 and No. 19, respectively, among U.S. cities with the most inbound moves via U-Haul trucks in 2022. Richardson was the only other Texas city to make the list, coming in at No. 15.

Texas ranks No. 1 overall as the state with the most inbound moves using U-Haul trucks. This is the second year in a row, and the fifth year since 2016, that Texas has earned the distinction.

“The 2022 trends in migration followed very similar patterns to 2021 with Texas, Florida, the Carolinas and the Southwest continuing to see solid growth,” U-Haul international president John Taylor says in a news release. “We still have areas with strong demand for one-way rentals. While overall migration in 2021 was record-breaking, we continue to experience significant customer demand to move out of some geographic areas to destinations at the top of our growth list.”

U-Haul determines the top 25 cities by analyzing more than 2 million one-way U-Haul transactions over the calendar year, then calculating the net gain of one-way trucks entering a specific area versus departing from it. The top U-Haul growth states are determined the same way.
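The net-gain calculation the report describes can be sketched as follows. The transaction list here is invented; U-Haul's actual methodology covers millions of records.

```python
# Sketch of a net-gain ranking: for each city, count one-way arrivals
# minus one-way departures, then sort. The transaction list is invented
# for illustration.

from collections import Counter

# Each transaction is (origin_city, destination_city).
transactions = [
    ("Dallas", "Missouri City"), ("Austin", "Missouri City"),
    ("Houston", "Conroe"), ("Missouri City", "Dallas"),
    ("Chicago", "Conroe"), ("Tulsa", "Conroe"),
]

arrivals = Counter(dest for _origin, dest in transactions)
departures = Counter(origin for origin, _dest in transactions)
cities = set(arrivals) | set(departures)

net_gain = {city: arrivals[city] - departures[city] for city in cities}
ranked = sorted(net_gain, key=net_gain.get, reverse=True)
```

Note that, as the article says, a positive net gain measures truck flow, not population growth directly.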

The studies note that U-Haul migration trends do not directly correlate to population or economic growth — but they are an “effective gauge” of how well cities and states are attracting and maintaining residents.

Missouri City is known for its convenient location only minutes from downtown Houston. The city’s proximity to major freeways, rail lines, the Port of Houston, and Bush and Hobby Airports links its businesses with customers “around the nation and the world,” per its website.

The No. 19-ranked city of Conroe is “the perfect blend of starry nights and city lights,” according to the Visit Conroe website. Conroe offers plenty of outdoor activities, as it is bordered by Lake Conroe, Sam Houston National Forest and W. Goodrich Jones State Forest. But it also has a busy downtown area with breweries, theaters, shopping and live music.

To view U-Haul’s full growth cities report, click here.

------

This article originally ran on CultureMap.

Houston expert: Space tourism is the future — do we have the workforce to run it?

guest column

Throughout history, humans have been fascinated by exploration, traveling the world to exotic places far away. By the same token, now that multiple private companies are launching rockets into space, making space travel public and accessible to all has become part of the agenda. We believe that space travel is the next frontier for tourism, for recreation and adventure, just as world travel to faraway places was the next frontier for our forefathers.

In a world racing ahead on technology, we can picture flying cars, invisible doors, and international cuisine in space. With this rapid expansion, the idea of space tourism has stirred the space industry to think about running businesses, starting trade, and setting up operations beyond the bounds of the Earth. It is no longer science fiction but our immediate future. However, the true question remains: who will be responsible for all of it? Are we training the workforce needed to build and run it?

Space tourism is an exciting idea in theory: traveling to extraterrestrial destinations and exploring new planets, all in a microgravity environment. Through these diminishing borders and rapid advancements, we'll soon be living the space life, with virtual, metaverse concepts becoming reality. But before that, let's explore space tourism and how the solar system will welcome humans.

What is space tourism?

Ever since the Apollo program opened the gateway to space travel in the late 1960s, technological progress has continued to climb. Like conventional tourism, space tourism is human space travel for leisure or for the pleasurable adventure of the unknown, now being commercialized. Space travel comes in different tiers: orbital flight requires speeds of about 17,400 mph so that a spacecraft can circle the Earth without falling back to the ground, while suborbital tourism flies at lower speeds, briefly reaching space before bringing people back.
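The 17,400 mph figure can be sanity-checked from first principles: circular orbital speed is v = sqrt(GM/r), and at a roughly 200 km low Earth orbit that works out to about 17,400 mph.

```python
# Back-of-the-envelope check of the ~17,400 mph orbital speed figure:
# circular orbital speed is v = sqrt(GM / r), evaluated at a ~200 km
# low Earth orbit.

import math

GM_EARTH = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
altitude = 200e3           # low-Earth-orbit altitude, m

v_ms = math.sqrt(GM_EARTH / (R_EARTH + altitude))  # speed in m/s
v_mph = v_ms * 2.23694                             # convert m/s to mph
```

The result is roughly 7.8 km/s, i.e. about 17,400 mph, matching the figure cited above; suborbital flights never reach this speed, which is why they fall back to Earth.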

Studies suggest that in the coming years, commercial space exploration will boost the economy, generating more revenue than expected. On these grounds, space tourism won't be limited to suborbital flights but will move on to orbital flights; this revolutionary venture will change the future.

Everything aligns when the right team works together tirelessly to reach the stars. Space exploration will only happen with enthusiastic, empowered individuals committed to their roles.

Astronomers, space scientists, meteorologists, plasma physicists, aerospace engineers, avionics technicians, technical writers, space producers, and more will work in the field to make this space dream come true.

The attraction of space exploration

Curiosity is the gateway to the wonders of the world. Humans are born novelty-seekers, driven to explore the unknown and push boundaries. That exploration has benefited society in countless ways, from light bulbs to jets.

The attraction of exploring space stems from that same desire for novelty. We want to answer the most difficult questions about the universe: is there only darkness beyond that sky? Could we live on another planet if ours dies? To address the challenges of space and the world, we have created new technologies, new industries, and worldwide cooperation. This shows how vital space exploration is to humans. Many astronauts dwell on the sight of the iconic thin blue outline of our planet, a quintessential experience that keeps drawing them back. However, are we entering this dimension with the right skills? Is our future workforce ready?

Who will lead the path?

The main question that still goes unanswered is who will run space tourism. When it comes to the future, there are infinite options. One decision and you will fly into an endless sky.

This endeavor has opened multiple career opportunities for the future workforce in the diversification and exploration of space. We cannot yet predict how people will find meaning and improve their lives through space tourism, but it promises to be a soul-awakening experience. According to experts, travelers would prefer a livelihood in space, so companies are working day and night to figure out accommodations and properties. The ideas include space hotels, offices, research labs, and tents for operations.

Lastly, space tourism is just a start; we are moving into new dimensions of physics and astronomy to create opportunities and groundbreaking inventions that explore what was once untouchable. A new era of more refined, carefully charted careers is on the rise; let's see how the world evolves in the next 10 years.

------

Ghazal Qureshi is the founder and CEO of UpBrainery, a Houston-based immersive educational technology platform that taps into neuroscience research-based programs to provide adaptive learning and individualized pathways for students at home or in the classroom.