ChatGPT enhances creativity and problem-solving in ways that traditional search tools can’t match. Photo courtesy of Rice Business Wisdom

We all know ChatGPT has forever changed how we do business. It’s modified how we access information, compose content and analyze data. It’s revolutionized the future of work and education. And it has transformed the way we interact with technology.

Now, thanks to a recent paper by Jaeyeon (Jae) Chung (Rice Business), we also know it’s making us better problem solvers.

Key findings:

  • A recent study finds ChatGPT-generated ideas are rated an average of 15% more creative than ideas produced by traditional methods.
  • ChatGPT enhances “incremental,” but not “radical,” innovation.
  • ChatGPT boosts creativity in tasks normally associated with human traits, like empathy-based challenges.

According to the study published in Nature Human Behaviour by Chung and Byung Cheol Lee (University of Houston), ChatGPT enhances our problem-solving abilities, especially with everyday challenges. Whether coming up with gifts for your teenage niece or pondering what to do with an old tennis racquet, ChatGPT has a unique ability to generate creative ideas.

“Creative problem-solving often requires connecting different concepts in a cohesive way,” Chung says. “ChatGPT excels at this because it pulls from a vast range of data, enabling it to generate new combinations of ideas.”

Can ChatGPT Really Make Us More Creative?

Chung and Lee sought to answer a central question: Can ChatGPT help people think more creatively than traditional search engines? To answer this, they conducted five experiments.

Each experiment asked participants to generate ideas for solving challenges, such as how to repurpose household items. Depending on the experiment, participants were assigned to one of two or three groups: one that used ChatGPT; one that used conventional web search tools (e.g., Google); and one that used no external tool at all. The resulting ideas were evaluated by both laypeople and business experts based on two critical aspects of creativity: originality and appropriateness (i.e., practicality).

In one standout experiment, participants were asked to come up with an idea for a dining table that doesn’t exist on the market. The ChatGPT group came up with suggestions like a “rotating table,” a “floating table” and even “a table that adjusts its height based on the dining experience.” According to both judges and experts, the ChatGPT group consistently delivered the most creative solutions.

On average, across all experiments, ideas generated with ChatGPT were rated 15% more creative than those produced by traditional methods. This was true even when tasks were specifically designed to require empathy or involved multiple constraints — tasks we typically assume humans might be better at performing.

However, Chung and Lee also found a caveat: While ChatGPT excels at generating ideas that are “incrementally” new — i.e., building on existing concepts — it struggles to produce “radically” new ideas that break from established patterns. “ChatGPT is an incredible tool for tweaking and improving existing ideas, but when it comes to disruptive innovation, humans still hold the upper hand,” Chung notes.

Charting the Next Steps in AI and Creativity

Chung and Lee’s paper opens the door to many exciting avenues for future study. For example, researchers could explore whether ChatGPT’s creative abilities extend to more complex, high-stakes problem-solving environments. Could AI be harnessed to develop groundbreaking solutions in fields like medicine, engineering or social policy? Understanding the nuances of the collaboration between humans and AI could shape the future of education, work and even (as many people fear) art.

For professionals in creative fields like product design or marketing, the study holds especially significant implications. The ability to rapidly generate fresh ideas can be a game-changer in industries where staying ahead of trends is vital. For now, take a second before you throw out that old tennis racquet. Ask ChatGPT for inspiration — you’ll be surprised at how many ideas it comes up with, and how quickly.

-----

This article originally appeared on Rice Business Wisdom. Based on research by Jaeyeon (Jae) Chung and Byung Cheol Lee (University of Houston). Lee and Chung, “An empirical investigation of the impact of ChatGPT on creativity.” Nature Human Behaviour (2024): https://doi.org/10.1038/s41562-024-01953-1.


Grocery purchase data can accurately predict credit risk for individuals without traditional credit scores, potentially broadening the pool of qualified loan applicants. Photo via Unsplash

Houston researchers find alternative data for loan qualification

houston voices

Millions of consumers who apply for a loan to buy a house or car or start a business can’t qualify — even if they’re likely to pay it back. That’s because many lack a key piece of financial information: a credit score.

The problem isn’t isolated to emerging economies. Exclusion from the financial system is a major issue in the United States, too, where some 45 million adults may be denied access to loans because they don’t have a credit history and are “credit invisible.”

To improve access to loans and people’s economic mobility, lenders have started looking into alternative data sources to assess a loan applicant’s risk of defaulting. These include bank account transactions and on-time rental, utility and mobile phone payments.

A new article by Rice Business assistant professor of marketing Jung Youn Lee and colleagues from Notre Dame and Northwestern identifies an even more widespread data source that could broaden the pool of qualified applicants: grocery store receipts.

The researchers found that the types of food, drinks and other products consumers buy, and how they buy them, predict credit risk just as well as a traditional credit score.

“There could be privacy concerns when you think about it in practice,” Lee says, “so the consumer should really have the option and be empowered to do it.” One approach could be to let consumers opt in to a lender looking at their grocery data as a second chance at approval rather than automatically enrolling them and offering an opt-out.

To arrive at their findings, the researchers analyzed grocery transaction data from a multinational conglomerate headquartered in a Middle Eastern country that owns both a credit card issuer and a large-scale supermarket chain. Many people in the country are unbanked. The researchers merged the supermarket’s loyalty card data with the issuer’s credit card spending and payment histories, yielding data on 30,089 consumers from January 2017 to June 2019. About half had a credit score; 81% always paid their credit card bills on time, 12% missed payments periodically, and 7% defaulted.

The researchers first created a model to establish a connection between grocery purchasing behavior and credit risk. They found that people who bought healthy foods like fresh milk, yogurt and fruits and vegetables were more likely to pay their bills on time, while shoppers who purchased cigarettes, energy drinks and canned meat tended to miss payments. This held true for “observationally equivalent” individuals — those with similar income, occupation, employment status and number of dependents. In other words, even when two people look demographically identical, their grocery purchases reveal differences in credit risk.

How people shop, not just what they buy, also predicts whether they pay their bills on time. For example, cardholders who consistently paid their credit card bill on time were more likely to shop on the same day of the week, spend similar amounts across months and buy the same brands and product categories.

The researchers then built two credit-scoring algorithms to simulate a lender’s decision of whether to approve a credit card applicant. One excludes grocery data inputs, and the other includes them (in addition to standard data). Incorporating grocery data into the decision-making process improved the accuracy of an applicant’s risk assessment by 3.11% to 7.66%.
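As a toy illustration of that comparison (all applicants, scores and thresholds below are invented; this is not the paper's model), a decision rule that blends a grocery-based signal into a standard score can classify more applicants correctly:

```python
# Toy illustration only: invented applicants and thresholds, not the
# paper's actual credit-scoring model.
# Each tuple: (standard_score, grocery_score, defaulted)
applicants = [
    (0.9, 0.8, False),
    (0.9, 0.2, True),   # strong standard score, risky grocery signal
    (0.5, 0.9, False),  # borderline standard score, healthy grocery signal
    (0.5, 0.1, True),
    (0.2, 0.9, True),
    (0.2, 0.2, True),
]

def correct_decisions(use_grocery: bool) -> int:
    """Count applicants classified correctly by a simple approval rule:
    approve when the (optionally blended) score clears 0.6."""
    right = 0
    for standard, grocery, defaulted in applicants:
        score = 0.5 * standard + 0.5 * grocery if use_grocery else standard
        approve = score >= 0.6
        # correct = approve non-defaulters, reject defaulters
        if approve == (not defaulted):
            right += 1
    return right

print(correct_decisions(use_grocery=False))  # standard data only
print(correct_decisions(use_grocery=True))   # standard + grocery data
```

In this contrived example the blended rule gets six of six applicants right versus four of six for the standard score alone; the study's 3.11% to 7.66% figure comes from far richer models and real data.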

Furthermore, the lender in the simulation experienced a 1.46% profit increase when the researchers implemented a two-stage decision-making process — first, screening applicants using only standard data, then adding grocery data as an additional layer.

One caveat to these findings, Lee and her colleagues warn, is that the benefit of grocery data falls sharply as traditional credit scores or relationship-specific credit histories become available. This suggests the data could be most helpful for consumers new to credit.

Overall, however, this could be a win-win scenario for both consumers and lenders. “People excluded from the traditional credit system gain access to loans,” Lee says, “and lenders become more profitable by approving more creditworthy people.”

------

This article originally ran on Rice Business Wisdom based on research by Rice University's Jung Youn Lee, Joonhyuk Yang (Notre Dame) and Eric Anderson (Northwestern). “Using Grocery Data for Credit Decisions.” Forthcoming in Management Science. 2024: https://doi.org/10.1287/mnsc.2022.02364.


Using biased statistics in hiring makes it more difficult to predict job performance. Photo via Getty Images

Houston research finds race, gender ineffective predictors of employee productivity


The Latin phrase scientia potentia est translates to “knowledge is power.”

In the world of business, there’s a school of thought that takes “knowledge is power” to an extreme. It’s called statistical discrimination theory. This framework suggests that companies should use all available information to make decisions and maximize profits, including the group characteristics of potential hires — such as race and gender — that correlate with (but do not cause) productivity.

Statistical discrimination theory suggests that if there's a choice between equally qualified candidates — let's say, a man and a woman — the hiring manager should use gender-based statistics to the company's benefit. If there's data showing that male employees typically have larger networks and more access to professional development opportunities, the hiring manager should select the male candidate, believing such information points to a more productive employee.

Recent research suggests otherwise.

A peer-reviewed study out of Rice Business and Michigan Ross undercuts the premise of statistical discrimination theory. According to researchers Diana Jue-Rajasingh (Rice Business), Felipe A. Csaszar (Michigan) and Michael Jensen (Michigan), hiring outcomes actually improve when decision-makers ignore statistics that correlate employee productivity with characteristics like race and gender.

Here's Why “Less is More”

Statistical discrimination theory assumes a correlation between individual productivity and group characteristics (e.g., race and gender). But Jue-Rajasingh and her colleagues highlight three factors that undercut that assumption:

  • Environmental uncertainty
  • Biased interpretations of productivity
  • Decision-maker inconsistency

This third factor plays the biggest role in the researchers' model. “For statistical discrimination theory to work,” Jue-Rajasingh says, “it must assume that managers are infallible and decision-making conditions are optimal.”

Indeed, when accounting for uncertainty, inconsistency and interpretive bias, the researchers found that using information about group characteristics actually reduces the accuracy of job performance predictions.

That’s because the more information you include in the decision-making process, the more complex that process becomes. Complex processes make it more difficult to navigate uncertain environments and create more space for managers to make mistakes. It seems counterintuitive, but when firms use less information and keep their processes simple, they are more accurate in predicting the productivity of their hires.

The less-is-more strategy is known as a “heuristic.” Heuristics are simple, efficient rules or mental shortcuts that help decision-makers navigate complex environments and make judgments more quickly and with less information. In the context of this study, published in Organization Science, the heuristic approach suggests that by focusing on fewer, more relevant cues, managers can make better hiring decisions.

Two Types of Information “Cues”

The “less is more” heuristic works better than statistical discrimination theory largely because decision makers are inconsistent in how they weight the available information. To account for this inconsistency, Jue-Rajasingh and her colleagues created a model that reflects the “noise” of external factors, such as a decision maker’s mood or the ambiguity of certain information.

The model breaks the decision-making process into two main components: the environment and the decision maker.

In the environment component, there are two types of information, or “cues,” about job candidates. First, there’s the unobservable, causal cue (e.g., programming ability), which directly relates to job performance. Second, there's the observable, discriminatory cue (e.g., race or gender), which doesn't affect how well someone can do the job but, because of how society has historically worked, might statistically seem connected to job skills.

Even if the decision maker knows they shouldn't rely too much on information like race or gender, they might still use it to predict productivity. But job descriptions change, contexts are unstable, and people don’t consistently consider all variables. Between the inconsistency of decision-makers and the environmental noise created by discriminatory cues, it’s ultimately counterproductive to consider this information.
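This dynamic can be sketched in a toy simulation. Everything here is invented (the distributions, the weights, the "wobble" term standing in for decision-maker inconsistency); it is not the authors' model, only an illustration of how adding a noisy, merely-correlated cue can raise prediction error:

```python
import random

def simulate(use_discriminatory_cue: bool, n: int = 20000) -> float:
    """Mean squared error of productivity predictions made by an
    inconsistent decision-maker. All parameters are invented."""
    rng = random.Random(42)  # fixed seed for reproducibility
    total_sq_err = 0.0
    for _ in range(n):
        skill = rng.gauss(0, 1)                   # unobservable causal driver
        productivity = skill + rng.gauss(0, 0.5)  # true outcome
        interview = skill + rng.gauss(0, 0.5)     # noisy signal of skill
        discrim = skill + rng.gauss(0, 2.0)       # merely correlated cue
        wobble = rng.gauss(0, 0.3)                # inconsistent weighting
        if use_discriminatory_cue:
            pred = (0.5 + wobble) * interview + (0.5 + wobble) * discrim
        else:
            pred = (1.0 + wobble) * interview
        total_sq_err += (pred - productivity) ** 2
    return total_sq_err / n

mse_less = simulate(use_discriminatory_cue=False)
mse_more = simulate(use_discriminatory_cue=True)
print(mse_less < mse_more)  # fewer cues, lower prediction error
```

With these invented parameters, mean squared error rises substantially once the correlated cue enters the prediction; the direction of the effect, not the magnitude, is the point.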

The Bottom Line

Jue-Rajasingh and her colleagues find that avoiding gender- and race-based statistics improves the accuracy of job performance predictions. The fewer discriminatory cues decision-makers rely on, the less likely their process will lead to errors.

That said: With the advent of AI, it could become easier to justify statistical discrimination theory. The element of human inconsistency would be removed from the equation. But because AI is often rooted in biased data, its use in hiring must be carefully examined to prevent worsening inequity.

------

This article originally ran on Rice Business Wisdom based on research by Rice University's Diana Jue-Rajasingh, Felipe A. Csaszar (Michigan) and Michael Jensen (Michigan). For more, see Csaszar, et al. “When Less is More: How Statistical Discrimination Can Decrease Predictive Accuracy.” Organization Science.

As corporate debt markets continue to grow in importance, it will become crucial for investors and regulators to understand the nuanced factors influencing their liquidity. Photo via Getty Images

Rice research on bond and stock market differences, earnings variations


At the end of every quarter, publicly traded companies announce their profits and losses in an earnings report. These updates provide insight into a company’s performance and, in theory, give investors and shareholders clarity on whether to buy, sell or hold. If earnings are good, the stock price may soar. If they’re down, the price might plunge.

However, the implications for the stock price may not be immediately clear to all investors. In the face of this uncertainty, sellers will ask for high prices, and buyers will offer low ones, creating a significant “bid-ask spread.” When this happens, it becomes more costly to trade, and the stock becomes less liquid.

This is a well-documented effect on equity stock markets. However, according to research by Stefan Huber (Rice Business), Chongho Kim (Seoul National University) and Edward M. Watts (Yale SOM), the corporate bond market responds differently to earnings news. This is because bond markets differ from stock markets in a significant way.

Stocks v. Bonds: What Happens When Earnings Are Announced?

Equities are usually traded on centralized exchanges (e.g., New York Stock Exchange). The exchange automatically queues up buyers and sellers according to the quote they’ve entered. Trades are executed electronically, and the parties involved are typically anonymous. A prospective buyer might purchase Microsoft shares from someone drawing down their 401(k) — or they could be buying from Bill Gates himself.

Corporate bond markets work differently. They are “over-the-counter” (OTC) markets, meaning a buyer or seller needs to find a counterparty to trade with. This involves getting quotes from and negotiating with potential counterparties. This is an inherent friction in bond trading that results in much higher costs of trading in the form of wider bid-ask spreads.

Here’s what Huber and his colleagues learned from the research: Earnings announcements prompt many investors to trade. And on OTC markets, potential buyers and sellers become easier to find and negotiate with.

A Stronger Bargaining Position for Bonds

According to Huber, “When earnings information comes out, a lot of people want to trade. In bond markets, that makes it much easier to find someone to trade with. The more options you have to trade, the stronger your bargaining position becomes, and the lower your trading costs go.”

He compares the process to shopping in a market with a flexible approach to pricing.

“Let's say you're at a farmers market and you want to buy an apple,” Huber says. “If there is only one seller, you buy the apple from that person. They can ask for whatever price they want. But if there are multiple sellers, you can ask around, and there is potential to get a better price. The price you get depends on the number of options you have in trading partners.”
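Huber's farmers-market analogy reduces to a simple fact about minimums: contacting more dealers can only improve, or leave unchanged, the best available quote. A sketch with invented quotes:

```python
# Invented dealer quotes; the point is only that the minimum over more
# quotes can never be worse than the minimum over fewer.
ask_quotes = [101.2, 100.8, 100.5, 101.0, 100.3]  # hypothetical asks

def best_ask(dealers_contacted: int) -> float:
    """Best (lowest) ask among the first k dealers contacted."""
    return min(ask_quotes[:dealers_contacted])

prices = [best_ask(k) for k in range(1, len(ask_quotes) + 1)]
print(prices)  # weakly decreasing as trading options grow
```

The same logic runs in reverse for sellers seeking the highest bid, which is why more counterparties around earnings announcements narrows the bid-ask spread.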

What’s at Stake?

Although bonds receive less attention than equities, the stakes are high. There is about $10 trillion in outstanding corporate debt in the U.S., and more than $34 billion in average daily trading volume.

A detailed record of bond trades is available from the Financial Industry Regulatory Authority (FINRA), which requires that trades be reported via their Trade Reporting and Compliance Engine (TRACE).

The study from Huber and co-authors uses an enhanced version of TRACE to examine trades executed between 2002 and 2020. The team analyzed the thirty-day periods before and after earnings announcements to gather data about volume, bid-ask spreads and other measures of liquidity.

They find that, as on the stock market, more investors and broker-dealers trade bonds around earnings announcements. Unlike on the stock market, however, transaction costs for bonds, measured by bid-ask spreads, decrease by 6 to 7 percent.

What Sets This Research Apart?

“Taking a purely information asymmetry-based view would predict that what happens to stock liquidity would also happen to bonds,” Huber says. “A piece of information drops, and some people are better able to work with it, so others price protect, and bid-ask spreads and the cost of trading go up.”

“But if you consider the search and bargaining frictions in bond markets, you get a more nuanced picture. While information asymmetry increases, like it does on stock markets, the information prompts more investors into bond trading, which makes it easier to find counterparties and get better transaction prices. Consequently, bid-ask spreads go down. This search and bargaining friction does not really exist on equities exchanges. But we cannot ignore it in OTC markets.”

As corporate debt markets continue to grow in importance, it will become crucial for investors and regulators to understand the nuanced factors influencing their liquidity. This study provides a solid foundation for future research.

------

This article originally ran on Rice Business Wisdom. For more, see “Earnings News and Over-the-Counter Markets.” Journal of Accounting Research 62.2 (2024): 701-35.

Putting students and families at the center of strategy will optimize resources and improve academic outcomes. Photo via Getty Images

Houston expert: How adopting business strategies in the education sector can improve results


It’s no secret: K-12 public schools in the U.S. face major challenges. Resources are shrinking. Costs are climbing. Teachers are battling burnout. Student outcomes are declining.

There are many areas of concern.

Some difficulties are intangible, inescapable and made worse by crises like the COVID-19 pandemic. Some can be fixed or alleviated by wisely allocating resources. And others — like a lack of strategic focus — can be avoided altogether.

It’s this final area, strategic focus, that researchers Vikas Mittal (Rice Business) and Jihye Jung (UT-San Antonio) address in a groundbreaking study. According to Mittal and Jung, superintendents and principals misallocate vast amounts of time and resources trying to appease their many stakeholders — students, parents, teachers, board trustees, community leaders, state evaluators, college recruiters, potential employers, etc.

Instead, Mittal and Jung show, administrators need to put their entire focus on one key stakeholder — the “customer,” i.e. students and families.

It may sound strange to call students and families “customers” in the context of public education. After all, 5th-period Spanish isn’t like buying an iPhone or fast food. The classroom is not transactional. Students and caregivers are part of a broader relational context that most directly involves teachers and peers. And students are expected to contribute to that context.

But K-12 public funds are tied to enrollment and attendance numbers. This means the success or failure of a school or school district ultimately comes down to “customer” satisfaction.

Beware the Stakeholder Appeasement Trap

Here’s what happens when students and families become dissatisfied with their school:

As conditions deteriorate, families (who can afford to) may choose to homeschool or move their children to private or better-performing public schools. Enrollment revenue then decreases, forcing administrators to cut costs. Cost-cutting worsens performance and lowers satisfaction among students and families, which drives further enrollment loss and still more cost-cutting. And so on. (Schools need about 500-600 students to break even.)

It’s a vicious downward spiral, and it’s not unusual for schools to become trapped in it. To avoid this vortex, administrators end up adopting a “spray and pray” or “adopt and hope” approach, pursuing various stakeholder agendas in hopes that one of them will be the key to institutional success. Group A wants stronger security. Group B wants improved internet access. Group C wants better facilities. Group D wants to expand athletics.

It’s an understandable impulse to make everyone happy. However, Mittal and Jung find that the “stakeholder appeasement” approach dilutes strategic focus, wastes resources and creates a bloat of ineffective initiatives.

Initiative bloat isn't a benign problem. The labor of implementing programs inevitably falls on teachers and frontline staff, which can result in mediocre performance and burnout. As initiatives multiply over time, communication lines become strained and, distracted by the administration's efforts to please everyone, teachers and frontline staff fail to satisfy students and families.

Pay Attention to Lift Potential

Using data from administrator interviews and more than 10,000 parent surveys, Mittal and Jung find that students and families only value a few strategic areas. By far the most important is family and community engagement, followed by academics and teachers. The least important, somewhat surprisingly, is extracurriculars like athletics programs.

The mistaken assumption that athletics would be high on the list of student and family priorities raises a crucial point in the study. Mittal and Jung note that it’s a serious error to assume that the more a strategic area is mentioned, the more it drives customer value.

“Conflating the two — salience and lift potential — is the single biggest factor that can mislead strategy planning,” the researchers say.

A customer-focused strategy prioritizes lift potential — meaning it allocates budgets, people and time to the areas that have the highest capacity to increase customer value, as measured by customer satisfaction. If family and community engagement is the most important strategic area, then savvy administrators will invest in the “execution levers” that improve it.

For instance, Mittal and Jung find that allowing input on school policies is the most effective lever for demonstrating family and community engagement. Another important strategic area is improving the quality of teachers, and one of the most effective ways of doing this is to emphasize their academic qualifications.
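The salience-versus-lift distinction can be made concrete with a toy calculation (all survey numbers below are invented): an area can be mentioned often yet barely move overall satisfaction, while a less-mentioned area moves it a lot.

```python
# Invented mini-survey: per-respondent ratings of two strategic areas
# plus overall satisfaction, on a 1-5 scale.
# Each tuple: (engagement_rating, athletics_rating, overall_satisfaction)
surveys = [
    (5, 2, 5),
    (4, 5, 4),
    (2, 5, 2),
    (5, 4, 5),
    (1, 5, 1),
    (3, 1, 3),
]

mentions = {"engagement": 10, "athletics": 40}  # salience: how often raised

def lift(area_index: int) -> float:
    """Crude lift proxy: mean overall satisfaction among high raters
    (>= 4) of an area minus the mean among low raters (< 4)."""
    high = [s[2] for s in surveys if s[area_index] >= 4]
    low = [s[2] for s in surveys if s[area_index] < 4]
    return sum(high) / len(high) - sum(low) / len(low)

print(mentions)          # athletics is mentioned most (salient)
print(lift(0), lift(1))  # but engagement has far higher lift
```

In this invented data, athletics is raised four times as often as engagement, yet high athletics ratings coincide with lower overall satisfaction, while engagement tracks it closely. Budgeting by mention counts would chase the wrong area.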

Just as important as instituting effective customer-focused initiatives is de-emphasizing ineffective ones. Winding down initiatives can be difficult, but doing so frees teachers and frontline staff to concentrate on the execution levers that matter.

This strategic transformation can’t happen overnight. Developing the framework will take a school district 18 to 24 months, Mittal and Jung estimate. Embedding it into practice can take an additional 12 to 18 months. For example, it would involve changing the way senior administrators, school principals and teachers are held accountable. Instead of emphasizing standardized test scores, which do not add to customer satisfaction, it’s more effective to concentrate on input factors that directly impact the quality of academics and learning.

To help schools develop and implement a customer-focused strategy, future research can focus on frameworks for guiding schools to maximize the areas of value that students and families care about most.

------

This article originally ran on Rice Business Wisdom. For more, see Mittal and Jung, “Revitalizing educational institutions through customer focus.” Journal of the Academy of Marketing Science (2024): https://doi.org/10.1007/s11747-024-01007-y.

Temporary gatherings — like conferences and hackathons — are essential to attracting third-party developers. Photo via Getty Images

Rice expert: Why tech companies should sponsor hackathons


Companies like Amazon, Microsoft and Apple depend on third-party developers to create applications that improve the user experience on their platforms. However, given the many options available, developers face a daunting task in deciding which platform to focus their efforts on.

“Developers are faced with imperfect information,” says Rice Business assistant professor Tommy Pan Fang. “They don’t have an overview of the entire technology landscape.”

A team of researchers, consisting of Fang, Andy Wu (Harvard University) and David Clough (University of British Columbia), set out to investigate how temporary gatherings like “hackathons” — in-person software development competitions — might influence a developer’s choice of software platform.

Hackathons like Rice University’s annual HackRice draw developers looking to pick up new skills and create applications with teammates. Many of these events are sponsored by software platform companies.

The research team conjectured that hackathon attendees are more likely to adopt a particular platform if any of the following conditions are true:

  • A high number of fellow attendees have already embraced it.
  • A fellow attendee has built an award-winning hackathon project on it.
  • The platform that sponsors the hackathon is already popular.

To test their theories, the researchers followed 1,302 software developers participating in 167 hackathons from January 2014 to May 2017. Twenty-nine different platforms sponsored the hackathons. Fang and his colleagues tracked developers’ platform choices before and after the in-person events.

The researchers found that temporary gatherings — like hackathons, conferences and trade fairs — make a difference.

Developers with greater technical expertise were more likely to use a platform widely embraced by fellow hackathon attendees. And with every 10% increase in the number of hackathon attendees already using a given platform, other attendees were 1.2% more likely to try out that platform themselves the following year.

They also found that platforms benefit from sponsoring temporary gatherings, like hackathons.

Developers who attended a hackathon sponsored by a particular platform were 20.4% more likely to adopt that platform in the following year, compared to developers who either did not attend any hackathon or attended one without a sponsor.

Part of the reason for the findings is that developers at hackathons exert social influence on each other, both during organized hackathon events, like competitions and workshops, and during informal ones, like ping pong tournaments or nights playing video games.

“The social interaction and seeing their peers be successful with the tools and what’s fashionable impacts the tools they decide to adopt,” says Fang. “For developers trying to figure out what technology to adopt in a world with imperfect information and uncertainty, having a gathering can be a beacon.”

Interviews with hackathon organizers, sponsors and developers in the U.S. and Canada backed up the researchers’ findings. Interviewees shared how they learned from their interactions with fellow developers during hackathons.

“When I’m walking around, it becomes noticeable what technologies people are using,” said a veteran of 15 hackathons. Another noted that if more people use a certain application programming interface, “it’s lower risk because it will be usable.” They added, “Most people just follow others.”

The study has implications for both developers and software platform companies alike. Results suggest hackathons can be a valuable venue for developers, not only to pick up new skills, but also to help them identify which platforms to use in the first place. For software companies, the lesson is simple: Sponsoring hackathons can be good for business.

Future research could look at how other types of events like conferences, tournaments and world’s fairs might impact how people end up adopting technologies, especially emerging ones, Fang says. For example, a company like OpenAI could use these types of in-person events to garner support and build momentum for its products.

“Companies that may have taken a step back during Covid should reevaluate in-person events to get people excited and regain momentum for their platforms,” Fang says. “The take-home message is, go out there and sponsor these events.”

This article originally ran on Rice Business Wisdom. For more, see Fang, et al. “Platform diffusion at temporary gatherings: Social coordination and ecosystem emergence.” Strategic Management Journal 42.2 (2021): 233-272. https://doi.org/10.1002/smj.3230.


Houston lab develops AI tool to improve neurodevelopmental diagnoses

developing news

One of the hardest parts of any medical condition is waiting for answers. Speeding up an accurate diagnosis can be a doctor’s greatest mercy to a family. A team at Baylor College of Medicine has created technology that may do exactly that.

Led by Dr. Ryan S. Dhindsa, assistant professor of pathology and immunology at Baylor and principal investigator at the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, the scientists have developed an artificial intelligence-based approach that helps doctors identify genes tied to neurodevelopmental disorders. Their research was recently published in The American Journal of Human Genetics.

According to its website, Dhindsa Lab uses “human genomics, human stem cell models, and computational biology to advance precision medicine.” The diagnoses that stem from the new computational tool could include specific types of autism spectrum disorder, epilepsy and developmental delay, disorders that often don’t come with a genetic diagnosis.

“Although researchers have made major strides identifying different genes associated with neurodevelopmental disorders, many patients with these conditions still do not receive a genetic diagnosis, indicating that there are many more genes waiting to be discovered,” Dhindsa said in a news release.

Typically, scientists must sequence the genes of many people with a diagnosis, as well as people not affected by the disorder, to find new genes associated with a particular disease or disorder. That takes time, money, and a little bit of luck. AI minimizes the need for all three, explains Dhindsa: “We used AI to find patterns among genes already linked to neurodevelopmental diseases and predict additional genes that might also be involved in these disorders.”

The models, built from gene expression patterns measured at the single-cell level, are augmented with more than 300 additional biological features, including data on how intolerant genes are to mutations, whether they interact with other known disease-associated genes, and their functional roles in different biological pathways.
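The general idea, scoring candidate genes by how closely their biological features resemble those of known disease genes, can be illustrated with a purely hypothetical sketch. The gene names, features, and weights below are invented for illustration; the Dhindsa Lab's actual model uses single-cell expression patterns, over 300 features, and learned parameters rather than fixed weights.

```python
# Hypothetical sketch: rank genes by a weighted combination of biological
# features, so genes resembling known disease genes rise to the top.
# Each gene maps to (mutation intolerance, interaction with known disease
# genes, pathway relevance), all normalized to 0-1. Values are invented.
gene_features = {
    "GENE_A": (0.9, 0.8, 0.7),  # resembles known disorder-linked genes
    "GENE_B": (0.2, 0.1, 0.3),  # tolerant to mutation, few interactions
    "GENE_C": (0.8, 0.9, 0.6),  # strong candidate for follow-up study
}

# Illustrative fixed weights; a real machine-learning model learns these.
weights = (0.5, 0.3, 0.2)

def score(features):
    """Weighted sum of features: higher means more disease-gene-like."""
    return sum(w * f for w, f in zip(weights, features))

# Rank candidates from most to least disease-like.
ranked = sorted(gene_features, key=lambda g: score(gene_features[g]), reverse=True)
print(ranked)  # ['GENE_A', 'GENE_C', 'GENE_B']
```

The payoff of this kind of ranking is prioritization: instead of sequencing large cohorts and hoping to stumble on a new gene, researchers can concentrate follow-up work on the highest-scoring candidates.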

Dhindsa says that these models have exceptionally high predictive value.

“Top-ranked genes were up to two-fold or six-fold, depending on the mode of inheritance, more enriched for high-confidence neurodevelopmental disorder risk genes compared to genic intolerance metrics alone,” he said in the release. “Additionally, some top-ranking genes were 45 to 500 times more likely to be supported by the literature than lower-ranking genes.”

That means that the models may actually validate genes that haven’t yet been proven to be involved in neurodevelopmental conditions. Gene discovery done with the help of AI could possibly become the new normal for families seeking answers beyond umbrella terms like “autism spectrum disorder.”

“We hope that our models will accelerate gene discovery and patient diagnoses, and future studies will assess this possibility,” Dhindsa added.

Texas robotics co. begins new search for missing Malaysia Airlines flight 370

International News

Malaysia’s government has given final approval for a Texas-based marine robotics company to renew the search for Malaysia Airlines Flight 370, which is believed to have crashed in the southern Indian Ocean more than a decade ago.

Cabinet ministers agreed to terms and conditions for a “no-find, no-fee” contract with Texas-based Ocean Infinity to resume the seabed search operation at a new 5,800-square-mile site in the ocean, Transport Minister Anthony Loke said in a statement Wednesday. Ocean Infinity will be paid $70 million only if wreckage is discovered.

The Boeing 777 plane vanished from radar shortly after taking off on March 8, 2014, carrying 239 people, mostly Chinese nationals, on a flight from Malaysia’s capital, Kuala Lumpur, to Beijing. Satellite data showed the plane turned from its flight path and headed south to the far-southern Indian Ocean, where it is believed to have crashed.

An expensive multinational search failed to turn up any clues to its location, although debris washed ashore on the east African coast and Indian Ocean islands. A private search in 2018 by Ocean Infinity also found nothing.

The final approval for a new search came three months after Malaysia gave the nod in principle to plans for a fresh search.

Ocean Infinity CEO Oliver Plunkett reportedly said earlier this year that the company had improved its technology since 2018. He has said the firm is working with many experts to analyze data and had narrowed the search area to the most likely site.

Loke said his ministry will ink a contract with Ocean Infinity soon but didn’t provide details on the terms. The firm has reportedly sent a search vessel to the site and indicated that January-April is the best period for the search.

“The government is committed to continuing the search operation and providing closure for the families of the passengers of flight MH370,” he said in a statement.

Harris County booms with 3rd biggest population in U.S.

Boomtown

Newly released U.S. Census Bureau data has revealed Harris County became the third most populous county nationwide in 2024, and it posted the largest year-over-year numeric gain of any U.S. county from 2023.

The new population report, published this month, estimated year-over-year population data from 2023 to 2024 across all 3,144 U.S. counties, and 387 metro areas.

Harris County's numeric growth outpaced that of every other U.S. county from July 1, 2023 to July 1, 2024, the report found. The Census Bureau estimated Harris County's population grew by 105,852 people year-over-year, bringing the total population to 5,009,302 residents. That's around a 2.16 percent growth rate.
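As a quick check of the arithmetic (a minimal sketch, not Census Bureau methodology), the reported rate follows from dividing the year-over-year gain by the prior year's population:

```python
# Verify the growth-rate figure quoted above from the two reported numbers.
total_2024 = 5_009_302            # Harris County population, July 1, 2024
gain = 105_852                    # year-over-year increase

total_2023 = total_2024 - gain    # implied July 1, 2023 population: 4,903,450
rate = gain / total_2023 * 100    # growth rate as a percentage

print(f"{rate:.2f}%")             # 2.16%, matching the reported figure
```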

Los Angeles County, California (No. 1) and Illinois' Cook County (No. 2) are the only two U.S. counties that have larger populations than Harris County. Los Angeles County now boasts a population of nearly 9.76 million, while Cook County's has increased to more than 5.18 million people.

The top 10 most populous counties in the U.S. are:

  • No. 1 – Los Angeles County, California
  • No. 2 – Cook County, Illinois
  • No. 3 – Harris County, Texas
  • No. 4 – Maricopa County, Arizona
  • No. 5 – San Diego County, California
  • No. 6 – Orange County, California
  • No. 7 – Miami-Dade County, Florida
  • No. 8 – Dallas County, Texas
  • No. 9 – Kings County, New York
  • No. 10 – Riverside County, California

Montgomery County also ranked among the top 10 U.S. counties with the highest numeric growth, ranking 9th nationally after gaining 34,268 residents from 2023 to 2024. Montgomery County's population has now grown to 749,613 people.

In the report's national comparison of counties with the largest population growth by percentage, Montgomery County ranked No. 7 with a year-over-year growth rate of 4.8 percent.

Most populated U.S. metro areas

The U.S. Census Bureau additionally found Houston-Pasadena-The Woodlands nearly led the nation, posting the second-largest numeric growth of any metro area in 2024.

From July 2023 to July 2024, the Houston metro added 198,171 residents to bring the total population to 7,796,182.

New York-Newark-Jersey City was the only metro area to outpace Houston's growth during the one-year period. The New York-New Jersey metro added 213,403 new residents, which brought the total population to over 19.94 million last year.

Kristie Wilder, a demographer in the Census Bureau's Population Division, said in the report that population growth in the nation's major metros was driven largely by international migration rather than by changes in birth rates.

"While births continue to contribute to overall growth, rising net international migration is offsetting the ongoing net domestic outmigration we see in many of these areas," Wilder said.

Dallas-Fort Worth-Arlington was right behind Houston with the third-largest numeric growth among U.S. metros in 2024. The Metroplex gained 177,922 residents last year and now has a total population of more than 8.34 million.

The top 10 U.S. metros with the highest numeric growth from 2023 to 2024 are:

  • No. 1 – New York-Newark-Jersey City, New York-New Jersey
  • No. 2 – Houston-Pasadena-The Woodlands, Texas
  • No. 3 – Dallas-Fort Worth-Arlington, Texas
  • No. 4 – Miami-Fort Lauderdale-West Palm Beach, Florida
  • No. 5 – Washington-Arlington-Alexandria, D.C.-Virginia-Maryland-West Virginia
  • No. 6 – Phoenix-Mesa-Chandler, Arizona
  • No. 7 – Orlando-Kissimmee-Sanford, Florida
  • No. 8 – Atlanta-Sandy Springs-Roswell, Georgia
  • No. 9 – Chicago-Naperville-Elgin, Illinois-Indiana
  • No. 10 – Seattle-Tacoma-Bellevue, Washington
---

This article originally appeared on our sister site, CultureMap.com.