A map of U.S. data centers. Courtesy of Rice Business Wisdom

A new study shows why some facilities cluster in cities for speed and access, while others move to rural regions in search of scale and lower costs. Based on research by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard).

Key findings:

  • Third-party colocation centers are physical facilities in close proximity to firms that use them, while cloud providers operate large data centers from a distance and sell access to virtualized computing resources as on‑demand services over the internet.
  • Hospitals and financial firms often require urban third-party centers for low latency and regulatory compliance, while batch processing and many AI workloads can operate more efficiently from lower-cost cloud hubs.
  • For policymakers trying to attract data centers, access to reliable power, water and high-capacity internet matters more than tax incentives.

Recent outages and the surge in AI-driven computing have made data center siting decisions more consequential than ever, especially as energy and water constraints tighten. Communities invest public dollars on the promise of jobs and growth, while firms weigh long-term commitments to land, power and connectivity.

Against that backdrop, a critical question comes into focus: Where do data centers get built — and what actually drives those decisions?

A new study by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard Business School) provides the first large-scale statistical analysis of data center location strategies across the United States. It offers policymakers and firms a clearer starting point for understanding how different types of data centers respond to economic and strategic incentives.

Forthcoming in the journal Strategy Science, the study examines two major types of infrastructure: third-party colocation centers that lease server space to multiple firms, and hyperscale cloud centers owned by providers like Amazon, Google and Microsoft.

Two Models, Two Location Strategies

The study draws on pre-pandemic data from 2018 and 2019, a period of relative geographic stability in supply and demand. This window gives researchers a clean baseline before remote work, AI demand and new infrastructure pressures began reshaping internet traffic patterns.

The findings show that data centers follow a bifurcated geography. Third-party centers cluster in dense urban markets, where buyers prioritize proximity to customers despite higher land and operating costs. Cloud providers, by contrast, concentrate massive sites in a small number of lower-density regions, where electricity, land and construction are cheaper and economies of scale are easier to achieve.

Third-party data centers, in other words, follow demand. They locate in urban markets where firms in finance, healthcare and IT value low latency, secure storage, and compliance with regulatory standards.

Using county-level data, the researchers modeled how population density, industry mix and operating costs predict where new centers enter. Every U.S. metro with more than 700,000 residents had at least one third-party provider, while many mid-sized cities had none.

This pattern challenges common assumptions. Third-party facilities are more distributed across urban America than prevailing narratives suggest.

Customer proximity matters because some sectors cannot absorb delay. In critical operations, even slight pauses can have real consequences. For hospital systems, lag can affect performance and risk exposure. And in high-frequency trading, milliseconds can determine whether value is captured or lost in a transaction.
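The distances involved translate directly into delay. As a rough illustration of why proximity matters for latency-sensitive sectors (the speed figure below is a standard networking rule of thumb, not a number from the study):

```python
# Illustrative only: a lower bound on one-way network latency from distance.
# Light in optical fiber travels at roughly two-thirds the vacuum speed of
# light, i.e. about 200,000 km/s. Real latency is higher due to routing,
# queuing and processing; this sketch captures only propagation delay.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def one_way_latency_ms(distance_km: float) -> float:
    """Propagation delay over fiber, ignoring routing and queuing."""
    return distance_km / FIBER_SPEED_KM_PER_MS

# A firm 5 km from a colocation center vs. 2,000 km from a distant cloud hub
# (hypothetical distances chosen for illustration):
nearby = one_way_latency_ms(5)      # 0.025 ms
distant = one_way_latency_ms(2000)  # 10.0 ms
```

Even this best-case gap of several milliseconds per round trip is material for high-frequency trading, where strategies compete at sub-millisecond timescales.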

“For industries where speed is everything, being too far from the physical infrastructure can meaningfully affect performance and risk,” Pan Fang says. “Proximity isn’t optional for sectors that can’t absorb delay.”

The Economics of Distance

For cloud providers, the picture looks very different. Their decisions follow a logic shaped primarily by cost and scale. Because cloud services can be delivered from afar, firms tend to build enormous sites in low-density regions where power is cheap and land is abundant.

These facilities can draw hundreds of megawatts of electricity and operate with far fewer employees than urban centers. “The cloud can serve almost anywhere,” Pan Fang says, “so location is a question of cost before geography.”

The study finds that cloud infrastructure clusters around network backbones and energy economics, not talent pools. Well-known hubs like Ashburn, Virginia — often called “Data Center Alley” — reflect this logic, having benefited from early network infrastructure that made them natural convergence points for digital traffic.

Local governments often try to lure data centers with tax incentives, betting they will create high-tech jobs. But the study suggests other factors matter more to cloud providers, including construction costs, network connectivity and access to reliable, affordable electricity.

When cloud centers need a local presence, distance can sometimes become a constraint. Providers often address this by working alongside third-party operators. “Third-party centers can complement cloud firms when they need a foothold closer to customers,” Pan Fang says.

That hybrid pattern — massive regional hubs complementing strategic colocation — may define the next phase of data center growth.

Looking ahead, shifts in remote work, climate resilience, energy prices and AI-driven computing may reshape where new facilities go. Some workloads may move closer to users, while others may consolidate into large rural hubs. Emerging data-sovereignty rules could also redirect investment beyond the United States.

“The cloud feels weightless,” Pan Fang says, “but it rests on real choices about land, power and proximity.”

---

This article originally appeared on Rice Business Wisdom. Written by Scott Pett.

Pan Fang and Greenstein (2025). “Where the Cloud Rests: The Economic Geography of Data Centers,” forthcoming in Strategy Science.

There's no crystal ball, but this researcher from Rice University is trying to see if some metrics work for economic forecasting. Photo via Getty Images

Houston researcher tries to crack the code on the Fed's data to determine economic outlook

houston voices

Research by Rice Business Professor K. Ramesh shows that the Fed appears to harvest qualitative information from the accounting disclosures that all public companies must file with the Securities and Exchange Commission.

These SEC filings are typically used by creditors, investors and others to make firm-level investing and financing decisions, and while they include business leaders’ sense of economic trends, they are never intended to guide macro-level policy decisions. But in a recent paper (“Externalities of Accounting Disclosures: Evidence from the Federal Reserve”), Ramesh and his colleagues provide persuasive evidence that the Fed nonetheless uses the qualitative information in SEC filings to help forecast the growth of macroeconomic variables like GDP and unemployment.

According to Ramesh, the study was made possible thanks to a decision the SEC made several years ago. The commission stores the reports submitted by public companies in an online database called EDGAR and records the IP address of any party that accesses them. More than a decade ago, the SEC began making partially anonymized forms of those IP addresses available to the public. But researchers eventually figured out how to deanonymize the addresses, which is precisely what Ramesh and his colleagues did in this study.

"We were able to reverse engineer and identify those IP addresses that belonged to Federal Reserve staff," Ramesh says.

The team ultimately assembled a data set containing more than 169,000 filings accessed by Fed staff between 2005 and 2015. They quickly realized that the Fed was interested only in filings submitted by a select group of industry leaders and financial institutions.

But if Ramesh and his colleagues now had a better idea of precisely which bellwether firms the Fed focused on, they still had no way of knowing exactly what Fed staffers had gleaned from the material they accessed. So the team decided to employ a measure called "tone" that captures the overall sentiment of a piece of text – whether positive, negative, or neutral.

Building on previous research that had identified sets of words associated with negative tone in financial reports, Ramesh and his colleagues examined the tone of all the SEC filings accessed by Fed staff between one meeting of the Federal Open Market Committee (FOMC) and the next. The FOMC sets interest rates and guides monetary policy, and its meetings provide an opportunity for Fed officials to discuss growth forecasts and announce policy decisions.
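A word-list tone measure of this kind can be sketched in a few lines. The tiny dictionaries below are placeholders for illustration only; the word lists used in financial-sentiment research are far larger, and the study's exact scoring formula is not described in this article:

```python
# A minimal sketch of a word-list "tone" measure. The word sets here are
# hypothetical stand-ins; real financial-sentiment dictionaries contain
# thousands of entries.

NEGATIVE = {"loss", "decline", "impairment", "adverse", "litigation"}
POSITIVE = {"growth", "improvement", "gain", "strong", "record"}

def tone(text: str) -> float:
    """(positive hits - negative hits) / total words; > 0 leans positive."""
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(tone("Revenue growth was strong despite litigation costs."))
```

Aggregating such a score across every filing accessed in an inter-meeting window yields a single sentiment signal that can be compared against the forecasts produced for the next meeting.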

The researchers then examined the Fed's growth forecasts to see if there was a relationship between the tone of the documents that Fed staff examined in the period between FOMC meetings and the forecasts they produced in advance of those meetings.

The team found close correlations between the tone of the reports accessed by the Fed and the agency’s forecasts of GDP, unemployment, housing starts and industrial production. The more negative the filings accessed prior to an FOMC meeting, for example, the gloomier the GDP forecast; the more positive the filings, the brighter the unemployment forecast.

Ramesh and his colleagues also compared the Fed's forecasts with those of the Survey of Professional Forecasters (SPF), whose panelists span academia and industry. Intriguingly, the researchers found that while the errors in the SPF's forecasts could be attributed to the absence of the tonal information culled from the SEC filings, the errors in the Fed’s forecasts could not. This suggests both that the Fed was collecting qualitative information that the SPF was not — and that the agency was making remarkably efficient use of it.

"They weren’t leaving anything on the table," Ramesh says.

Having solved one mystery, Ramesh would like to focus on another; namely, how does the Fed identify bellwether firms in the first place?

Unfortunately, the SEC no longer makes IP address data publicly available, which means that Ramesh and his colleagues can no longer study which companies the Fed is most interested in. Nonetheless, Ramesh hopes to use the data they have already collected to build a model that can accurately predict which firms the Fed is most likely to follow. That would allow the team to continue studying the same companies that the Fed does, and, he says, “maybe come up with a way to track those firms in order to understand how the economy is going to move.”

---

This article originally ran on Rice Business Wisdom and is based on research from K. Ramesh, the Herbert S. Autrey Professor of Accounting at the Jones Graduate School of Business at Rice University.


Houston researcher builds radar to make self-driving cars safer

eyes on the road

A Rice University researcher is giving autonomous vehicles an “extra set of eyes.”

Current autonomous vehicles (AVs) can have an incomplete view of their surroundings, and challenges like pedestrian movement, low-light conditions and adverse weather only compound these visibility limitations.

Kun Woo Cho, a postdoctoral researcher in the lab of Rice professor of electrical and computer engineering Ashutosh Sabharwal, has developed EyeDAR to help address such issues and enhance the vehicles’ sensing accuracy. Her research was supported in part by the National Science Foundation.

The EyeDAR is an orange-sized, low-power, millimeter-wave radar that could be placed at streetlights and intersections. Its design was inspired by that of the human eye. Researchers envision that the low-cost sensors could help ensure that AVs always pick up on emergent obstacles, even when the vehicles are not within proper range for their onboard sensors and when visibility is limited.

“Current automotive sensor systems like cameras and lidar struggle with poor visibility such as you would encounter due to rain or fog or in low-lighting conditions,” Cho said in a news release. “Radar, on the other hand, operates reliably in all weather and lighting conditions and can even see through obstacles.”

Signals from a typical radar system scatter when they encounter an obstacle. Some of the signal is reflected back to the source, but most of it is often lost. In the case of AVs, this means that "pedestrians emerging from behind large vehicles, cars creeping forward at intersections or cyclists approaching at odd angles can easily go unnoticed," according to Rice.

EyeDAR, however, works to capture lost radar reflections, determine their direction and report them back to the AV in a sequence of 0s and 1s.

“Like blinking Morse code,” Cho added. “EyeDAR is a talking sensor: it is a first instance of integrating radar sensing and communication functionality in a single design.”
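The "Morse code" description suggests encoding a detected direction as a short binary message. The article does not specify EyeDAR's actual encoding, so the quantization scheme below is purely a hypothetical sketch of the general idea:

```python
# Hypothetical sketch: quantizing a detected bearing into an n-bit binary
# code that a roadside sensor could transmit to a vehicle. This scheme is an
# assumption for illustration; EyeDAR's real protocol is not described here.

N_BITS = 8  # 360 degrees / 2**8 bins, i.e. about 1.4 degrees per step

def angle_to_bits(angle_deg: float, n_bits: int = N_BITS) -> str:
    """Quantize a bearing in [0, 360) into an n-bit binary code."""
    step = 360.0 / (1 << n_bits)
    code = int((angle_deg % 360.0) / step) % (1 << n_bits)
    return format(code, f"0{n_bits}b")

def bits_to_angle(bits: str) -> float:
    """Recover the center of the quantization bin on the receiving end."""
    step = 360.0 / (1 << len(bits))
    return (int(bits, 2) + 0.5) * step
```

With 8 bits per reading, the receiver recovers the bearing to within about 1.4 degrees, a trade-off between message length and angular resolution.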

In testing, EyeDAR resolved target directions 200 times faster than conventional radar designs.

While EyeDAR currently targets risks associated with AVs, particularly in high-traffic urban areas, researchers also believe the technology behind it could complement artificial intelligence efforts and be integrated into robots, drones and wearable platforms.

“EyeDAR is an example of what I like to call ‘analog computing,’” Cho added in the release. “Over the past two decades, people have been focusing on the digital and software side of computation, and the analog, hardware side has been lagging behind. I want to explore this overlooked analog design space.”

12 winners named at CERAWeek clean tech pitch competition in Houston

top teams

Twelve teams from around the country, including several from Houston, took home top honors at this year's Energy Venture Day and Pitch Competition at CERAWeek.

The fast-paced event, held March 25 and put on by the Rice Alliance, the Houston Energy Transition Initiative and TEX-E, invited 36 industry startups and five Texas-based student teams focused on driving efficiency and advancements in the energy transition to present 3.5-minute pitches before investors and industry partners during CERAWeek's Agora program.

The competition is a qualifying event for the Startup World Cup, where teams compete for a $1 million investment prize.

PolyJoule won in the Track C competition and was named the overall winner of the pitch event. The Boston-based company will go on to compete in the Startup World Cup held this fall in San Francisco.

PolyJoule was spun out of MIT and is developing conductive polymer battery technology for energy storage.

Rice University's Resonant Thermal Systems won the second-place prize and $15,000 in the student track, known as TEX-E. The team's STREED solution converts high-salinity water into fresh water while recovering valuable minerals.

Teams from the University of Texas won first and second place in the TEX-E competition, bringing home $25,000 and $10,000, respectively. The student winners were:

Companies that pitched in the three industry tracks competed for non-monetary awards. Here are the companies named "most promising" by the judges:

Track A | Industrial Efficiency & Decarbonization

Track B | Advanced Manufacturing, Materials, & Other Advanced Technologies

  • First: Licube, based in Houston
  • Second: ZettaJoule, based in Houston and Maryland
  • Third: Oleo

Track C | Innovations for Traditional Energy, Electricity, & the Grid

The teams at this year's Energy Venture Day have collectively raised $707 million in funding, according to Rice. They represent six countries and 12 states.

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.