A map of U.S. data centers. Courtesy of Rice Business Wisdom

A new study shows why some facilities cluster in cities for speed and access, while others move to rural regions in search of scale and lower costs. Based on research by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard).

Key findings:

  • Third-party colocation centers are physical facilities in close proximity to firms that use them, while cloud providers operate large data centers from a distance and sell access to virtualized computing resources as on‑demand services over the internet.
  • Hospitals and financial firms often require urban third-party centers for low latency and regulatory compliance, while batch processing and many AI workloads can operate more efficiently from lower-cost cloud hubs.
  • For policymakers trying to attract data centers, access to reliable power, water and high-capacity internet matters more than tax incentives.

Recent outages and the surge in AI-driven computing have made data center siting decisions more consequential than ever, especially as energy and water constraints tighten. Communities invest public dollars on the promise of jobs and growth, while firms weigh long-term commitments to land, power and connectivity.

Against that backdrop, a critical question comes into focus: Where do data centers get built — and what actually drives those decisions?

A new study by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard Business School) provides the first large-scale statistical analysis of data center location strategies across the United States. It offers policymakers and firms a clearer starting point for understanding how different types of data centers respond to economic and strategic incentives.

Forthcoming in the journal Strategy Science, the study examines two major types of infrastructure: third-party colocation centers that lease server space to multiple firms, and hyperscale cloud centers owned by providers like Amazon, Google and Microsoft.

Two Models, Two Location Strategies

The study draws on pre-pandemic data from 2018 and 2019, a period of relative geographic stability in supply and demand. This window gives researchers a clean baseline before remote work, AI demand and new infrastructure pressures began reshaping internet traffic patterns.

The findings show that data centers follow a bifurcated geography. Third-party centers cluster in dense urban markets, where buyers prioritize proximity to customers despite higher land and operating costs. Cloud providers, by contrast, concentrate massive sites in a small number of lower-density regions, where electricity, land and construction are cheaper and economies of scale are easier to achieve.

Third-party data centers, in other words, follow demand. They locate in urban markets where firms in finance, healthcare and IT value low latency, secure storage, and compliance with regulatory standards.

Using county-level data, the researchers modeled how population density, industry mix and operating costs predict where new centers enter. Every U.S. metro with more than 700,000 residents had at least one third-party provider, while many mid-sized cities had none.

This pattern challenges common assumptions. Third-party facilities are more distributed across urban America than prevailing narratives suggest.
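The paper's exact specification isn't reproduced here, but the general shape of such a county-level entry model can be sketched in a few lines. Everything below is a hypothetical stand-in: synthetic data and assumed column names, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for county-level data; every column is a
# hypothetical proxy for the kinds of predictors the study describes.
counties = pd.DataFrame({
    "new_entries":     [0, 0, 1, 1, 2, 3, 5, 8],   # new data centers entering
    "log_pop_density": [2.0, 2.8, 3.5, 4.1, 4.8, 5.5, 6.2, 7.0],
    "log_power_price": [4.5, 4.4, 4.6, 4.5, 4.7, 4.6, 4.8, 4.7],
})

# A count model of entry: how density and operating costs (plus, in the
# study, industry mix) predict where new centers appear.
entry = smf.poisson(
    "new_entries ~ log_pop_density + log_power_price",
    data=counties,
).fit(disp=False)
print(entry.params)
```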

Customer proximity matters because some sectors cannot absorb delay. In critical operations, even slight pauses can have real consequences. For hospital systems, lag can affect performance and risk exposure. And in high-frequency trading, milliseconds can determine whether value is captured or lost in a transaction.
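The physics is unforgiving: light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so distance puts a hard floor under round-trip time no matter how good the hardware is. A quick back-of-the-envelope check:

```python
# Best-case propagation delay over fiber; real round trips are slower
# once routing, queuing and processing overhead are added.
SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~2/3 the speed of light in vacuum

def round_trip_ms(distance_km: float) -> float:
    """Theoretical floor on round-trip time over a fiber path."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for km in (10, 100, 1_000, 3_000):
    print(f"{km:>5} km -> {round_trip_ms(km):5.1f} ms RTT floor")
```

For a trading firm, the jump from 0.1 ms at 10 km to 30 ms at 3,000 km is the difference between competitive and unusable.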

“For industries where speed is everything, being too far from the physical infrastructure can meaningfully affect performance and risk,” Pan Fang says. “Proximity isn’t optional for sectors that can’t absorb delay.”

The Economics of Distance

For cloud providers, the picture looks very different. Their decisions follow a logic shaped primarily by cost and scale. Because cloud services can be delivered from afar, firms tend to build enormous sites in low-density regions where power is cheap and land is abundant.

These facilities can draw hundreds of megawatts of electricity and operate with far fewer employees than urban centers. “The cloud can serve almost anywhere,” Pan Fang says, “so location is a question of cost before geography.”
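At that scale, even modest differences in electricity price compound quickly. A toy comparison with made-up rates (the study's actual cost figures aren't shown here) illustrates the pull toward cheap power:

```python
# Toy annual power-cost comparison; both prices are hypothetical.
HOURS_PER_YEAR = 8_760

def annual_power_cost(mw: float, price_per_mwh: float) -> float:
    """Annual electricity spend for a constant load, in dollars."""
    return mw * HOURS_PER_YEAR * price_per_mwh

urban = annual_power_cost(mw=100, price_per_mwh=90)  # assumed urban rate
rural = annual_power_cost(mw=100, price_per_mwh=45)  # assumed rural rate
print(f"Urban: ${urban/1e6:.0f}M/yr  Rural: ${rural/1e6:.0f}M/yr  "
      f"Savings: ${(urban - rural)/1e6:.0f}M/yr")
```

At a 100 MW load, a $45/MWh discount saves on the order of $40 million a year, which dwarfs most one-time tax incentives.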

The study finds that cloud infrastructure clusters around network backbones and energy economics, not talent pools. Well-known hubs like Ashburn, Virginia — often called “Data Center Alley” — reflect this logic, having benefited from early network infrastructure that made them natural convergence points for digital traffic.

Local governments often try to lure data centers with tax incentives, betting they will create high-tech jobs. But the study suggests other factors matter more to cloud providers, including construction costs, network connectivity and access to reliable, affordable electricity.

When cloud centers need a local presence, distance can sometimes become a constraint. Providers often address this by working alongside third-party operators. “Third-party centers can complement cloud firms when they need a foothold closer to customers,” Pan Fang says.

That hybrid pattern — massive regional hubs complementing strategic colocation — may define the next phase of data center growth.

Looking ahead, shifts in remote work, climate resilience, energy prices and AI-driven computing may reshape where new facilities go. Some workloads may move closer to users, while others may consolidate into large rural hubs. Emerging data-sovereignty rules could also redirect investment beyond the United States.

“The cloud feels weightless,” Pan Fang says, “but it rests on real choices about land, power and proximity.”

---

This article originally appeared on Rice Business Wisdom. Written by Scott Pett.

Pan Fang and Greenstein (2025). “Where the Cloud Rests: The Economic Geography of Data Centers,” forthcoming in Strategy Science.

There's no crystal ball, but this Rice University researcher is testing whether certain metrics can be used for economic forecasting. Photo via Getty Images

Houston researcher tries to crack the code on the Fed's data to determine economic outlook

houston voices

Research by Rice Business Professor K. Ramesh shows that the Fed appears to harvest qualitative information from the accounting disclosures that all public companies must file with the Securities and Exchange Commission.

These SEC filings are typically used by creditors, investors and others to make firm-level investing and financing decisions. And while they include business leaders’ sense of economic trends, they were never intended to guide macro-level policy decisions. But in a recent paper (“Externalities of Accounting Disclosures: Evidence from the Federal Reserve”), Ramesh and his colleagues provide persuasive evidence that the Fed nonetheless uses the qualitative information in SEC filings to help forecast macroeconomic variables like GDP growth and unemployment.

According to Ramesh, the study was made possible thanks to a decision the SEC made several years ago. The commission stores the reports submitted by public companies in an online database called EDGAR and records the IP address of any party that accesses them. More than a decade ago, the SEC began making partially anonymized forms of those IP addresses available to the public. But researchers eventually figured out how to deanonymize the addresses, which is precisely what Ramesh and his colleagues did in this study.
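The team's exact procedure isn't detailed here, but the general idea can be illustrated. In EDGAR's public logs, only part of each address was obscured, leaving a network prefix that can be tested against known institutional IP blocks. A minimal sketch, using a placeholder range rather than any real Federal Reserve block:

```python
# Illustrative only: flag log rows whose visible /24 prefix falls inside
# a known institutional block. The range below is a documentation
# placeholder, not a real Federal Reserve address block.
import ipaddress

KNOWN_BLOCKS = [ipaddress.ip_network("198.51.100.0/24")]

def is_flagged(masked_ip: str) -> bool:
    """masked_ip looks like '198.51.100.xyz', with the last octet masked."""
    prefix = ".".join(masked_ip.split(".")[:3]) + ".0"
    return any(ipaddress.ip_address(prefix) in block for block in KNOWN_BLOCKS)

print(is_flagged("198.51.100.abc"))  # True
```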

"We were able to reverse engineer and identify those IP addresses that belonged to Federal Reserve staff," Ramesh says.

The team ultimately assembled a data set containing more than 169,000 filings accessed by Fed staff between 2005 and 2015. They quickly realized that the Fed was interested only in filings submitted by a select group of industry leaders and financial institutions.

But if Ramesh and his colleagues now had a better idea of precisely which bellwether firms the Fed focused on, they still had no way of knowing exactly what Fed staffers had gleaned from the material they accessed. So the team decided to employ a measure called "tone" that captures the overall sentiment of a piece of text – whether positive, negative, or neutral.

Building on previous research that had identified a set of words associated with negative tone in financial reports, Ramesh and his colleagues examined the tone of all the SEC filings accessed by Fed staff between one meeting of the Federal Open Market Committee (FOMC) and the next. The FOMC sets interest rates and guides monetary policy, and its meetings provide an opportunity for Fed officials to discuss growth forecasts and announce policy decisions.
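As a concrete illustration of the measure, a word-list tone score takes only a few lines. The tiny lexicons below are stand-ins, not the actual research word lists:

```python
import re

# Stand-in lexicons; the underlying research uses much larger word lists.
NEGATIVE = {"loss", "decline", "impairment", "adverse", "litigation"}
POSITIVE = {"growth", "improvement", "gain", "strong", "favorable"}

def tone(text: str) -> float:
    """(positive - negative) word share of a document; > 0 is net positive."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(tone("Strong revenue growth offset the decline in margins."))
```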

The researchers then examined the Fed's growth forecasts to see if there was a relationship between the tone of the documents that Fed staff examined in the period between FOMC meetings and the forecasts they produced in advance of those meetings.
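In sketch form, that test is a regression of each forecast on the average tone of the filings accessed in the preceding inter-meeting window. The data below are synthetic and the column names are assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: one row per FOMC meeting, pairing the average tone
# of filings accessed since the prior meeting with the growth forecast.
fomc = pd.DataFrame({
    "avg_tone":     [-0.020, -0.005, 0.000, 0.010, 0.015, 0.025],
    "gdp_forecast": [  0.5,    1.2,   1.8,   2.4,   2.9,   3.3],
})

model = smf.ols("gdp_forecast ~ avg_tone", data=fomc).fit()
print(model.params["avg_tone"])  # positive: more upbeat filings, higher forecast
```

The same design, with forecast errors on the left-hand side, underlies the comparison with outside forecasters described below.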

The team found close correlations between the tone of the reports accessed by the Fed and the agency’s forecasts of GDP, unemployment, housing starts and industrial production. The more negative the filings accessed prior to an FOMC meeting, for example, the gloomier the GDP forecast; the more positive the filings, the brighter the unemployment forecast.

Ramesh and his colleagues also compared the Fed's forecasts with those of the Survey of Professional Forecasters (SPF), whose respondents span academia and industry. Intriguingly, the researchers found that while the errors in the SPF's forecasts could be attributed to the absence of the tonal information culled from the SEC filings, the errors in the Fed’s forecasts could not. This suggests both that the Fed was collecting qualitative information that the SPF was not, and that the agency was making remarkably efficient use of it.

"They weren’t leaving anything on the table," Ramesh says.

Having solved one mystery, Ramesh would like to focus on another; namely, how does the Fed identify bellwether firms in the first place?

Unfortunately, the SEC no longer makes IP address data publicly available, which means that Ramesh and his colleagues can no longer study which companies the Fed is most interested in. Nonetheless, Ramesh hopes to use the data they have already collected to build a model that can accurately predict which firms the Fed is most likely to follow. That would allow the team to continue studying the same companies that the Fed does, and, he says, “maybe come up with a way to track those firms in order to understand how the economy is going to move.”

---

This article originally ran on Rice Business Wisdom and is based on research by K. Ramesh, the Herbert S. Autrey Professor of Accounting at the Jones Graduate School of Business at Rice University.



Austin company to bring AI-powered school to The Woodlands

AI education

Austin-based Alpha School, which operates AI-powered private schools, is opening its first Houston-area location in The Woodlands.

The 8,000-square-foot school, scheduled to be ready for the 2026-27 academic year, initially will serve students in kindergarten through eighth grade. Alpha says the school will offer “open workshop spaces and innovative classrooms that support personalized instruction, core academics, leadership development, and real-world life skills.”

Alpha sets aside two hours each school day for the AI-driven, self-paced study of core subjects like math, reading and science. The rest of each school day consists of life-skills workshops focusing on topics such as leadership and financial literacy.

Alpha’s school in The Woodlands has begun accepting applications for the 2026-27 school year. Annual tuition costs $40,000.

“The Woodlands is one of the most dynamic, forward-thinking communities in Texas, and Alpha is proud to bring an innovative educational model that complements its strong academic foundation,” says Rachel Goodlad, head of expansion for Alpha.

Founded in 2014, Alpha School combines adaptive technology-driven instruction with immersive life-skills workshops. Its model emphasizes mastery-based learning in core subjects alongside development of communication, critical thinking, financial literacy and leadership skills. It operates more than 15 schools across the country.

Elsewhere in Texas, Alpha operates schools in Austin, Brownsville, Fort Worth and Plano. Alpha also operates 12 Texas Sports Academy campuses in Texas, including locations in Houston, Pearland and Richmond, along with a NextGen Academy esports school in Austin, a school for gifted students in Georgetown, and lower-cost Nova Academy campuses in Austin and Bastrop.

Alpha has fans and critics. While supporters tout students’ high achievement rates, detractors complain about the high tuition and the AI-influenced depersonalization of education.

“Students and our country need to be in relationship with other human beings,” Randi Weingarten, president of the American Federation of Teachers, a teachers union, tells The New York Times. “When you have a school that is strictly AI, it is violating that core precept of the human endeavor and of education.”

Alpha co-founder MacKenzie Price, a podcaster and social media influencer, doesn’t share Weingarten’s views.

“Parents and teachers: We need to embrace this change,” Price wrote after President Trump signed an executive order promoting AI in schools.

The Times notes that Alpha doesn’t employ AI as a tutor or a supplement. Rather, the newspaper says, AI is “the school’s primary educational driver to move students through academic content.”

Houston researcher secures $1.7M to develop drug for aggressive form of breast cancer

cancer research

A University of Houston researcher has joined a $3.2 million effort to develop a new drug designed to attack a cancer-driving protein commonly found in triple-negative breast cancer.

Triple-negative breast cancer (TNBC) is one of the most difficult-to-treat forms of cancer and accounts for 10 percent to 15 percent of all breast cancer cases. The disease gets its name because tumors associated with it test negative for estrogen receptors, progesterone receptors and excess HER2 protein, making it difficult to target. Due to this, TNBC is often treated with general chemotherapy, which can come with negative side effects and drug resistance, according to UH.

UH College of Pharmacy research associate professor Wei Wang is developing a drug that can target the disease more specifically. The drug will target MDM2, a protein often overproduced in TNBC that also contributes to faster tumor growth.

Wang is working on a team led by Wei Li, director of the University of Tennessee Health Science Center College of Pharmacy’s Drug Discovery Center. Wang has received $1.7 million of that funding to support her portion of the research.

Wang and UH professor of pharmacology and toxicology Ruiwen Zhang have discovered a compound that can break down MDM2. In early laboratory models, the compound has shown the ability to shrink tumors.

Wang and Zhang will focus on understanding how the treatment works and monitoring its effectiveness in models that closely mirror human disease.

“We will study how the drug targets MDM2 and evaluate the most promising drug candidates to determine effective dosing, understand how the drug behaves in the body, compare it with existing treatments and assess early safety,” Wang said in a news release.

Li’s team at the University of Tennessee will be working on the chemistry and drug design end of the project.

“This work could lead to an entirely new class of therapies for triple-negative breast cancer,” Li added in the release. “We’re hopeful that by directly removing the MDM2 protein from cancer cells, we can help more patients respond to treatment regardless of their tumor type.”