Elon Musk announced that both SpaceX and X will relocate headquarters to two Texas cities. Photo via Getty Images

Elon Musk vowed this week to upend another industry just as he did with cars and rockets — and once again he's taking on long odds.

The world's richest man said he wants to put as many as a million satellites into orbit to form vast, solar-powered data centers in space — a move to allow expanded use of artificial intelligence and chatbots without triggering blackouts and sending utility bills soaring.

To finance that effort, Musk combined SpaceX with his AI business on Monday, February 2, and plans a big initial public offering of the combined company.

“Space-based AI is obviously the only way to scale,” Musk wrote on SpaceX’s website, adding about his solar ambitions, “It’s always sunny in space!”

But scientists and industry experts say even Musk — who outsmarted Detroit to turn Tesla into the world’s most valuable automaker — faces formidable technical, financial and environmental obstacles.

Feeling the heat

Capturing the sun’s energy from space to run chatbots and other AI tools would ease pressure on power grids and cut demand for sprawling computing warehouses, which are swallowing farms and forests and consuming vast amounts of water for cooling.

But space presents its own set of problems.

Data centers generate enormous heat. Space seems to offer a solution because it is cold. But it is also a vacuum, which blocks the conduction and convection that carry heat away on Earth, in the same way that a Thermos keeps coffee hot using double walls with no air between them.

“An uncooled computer chip in space would overheat and melt much faster than one on Earth,” said Josep Jornet, a computer and electrical engineering professor at Northeastern University.

One fix is to build giant radiator panels that glow in infrared light to push the heat “out into the dark void,” says Jornet, noting that the technology has worked on a small scale, including on the International Space Station. But for Musk's data centers, he says, it would require an array of “massive, fragile structures that have never been built before.”
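The scale Jornet describes can be roughed out with the Stefan-Boltzmann law, which sets how much heat a radiator can shed into the vacuum. A back-of-envelope sketch, where the 1 MW heat load, 300 K panel temperature and 0.9 emissivity are illustrative assumptions rather than figures from the article:

```python
# Back-of-envelope radiator sizing using the Stefan-Boltzmann law.
# All inputs below are illustrative assumptions, not published specs.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts: float, panel_temp_k: float, emissivity: float) -> float:
    """Panel area needed to radiate `heat_watts` into deep space.

    Ignores absorbed sunlight and view-factor losses; a real design
    would need considerably more area than this ideal estimate.
    """
    return heat_watts / (emissivity * SIGMA * panel_temp_k ** 4)

# A hypothetical 1 MW compute satellite with 300 K panels at 0.9 emissivity:
area = radiator_area_m2(1e6, 300.0, 0.9)
print(f"~{area:,.0f} m^2 of radiator")  # roughly 2,400 m^2
```

Even under these generous assumptions, a single megawatt-class satellite needs thousands of square meters of radiator, which is why Jornet calls the required structures massive and fragile.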

Floating debris

Then there is space junk.

A single satellite breaking down or losing orbit could trigger a cascade of collisions, potentially disrupting emergency communications, weather forecasting and other services.

Musk noted in a recent regulatory filing that he has had only one “low-velocity debris generating event” in seven years running Starlink, his satellite communications network. Starlink has operated about 10,000 satellites — but that's a fraction of the million or so he now plans to put in space.

“We could reach a tipping point where the chance of collision is going to be too great,” said University at Buffalo's John Crassidis, a former NASA engineer. “And these objects are going fast — 17,500 miles per hour. There could be very violent collisions.”

No repair crews

Even without collisions, satellites fail, chips degrade, parts break.

The specialized graphics processing units, or GPUs, used by AI companies, for instance, can become damaged and need to be replaced.

“On Earth, what you would do is send someone down to the data center,” said Baiju Bhatt, CEO of Aetherflux, a space-based solar energy company. “You replace the server, you replace the GPU, you’d do some surgery on that thing and you’d slide it back in.”

But no such repair crew exists in orbit, and GPUs in space face the added risk of damage from exposure to high-energy particles from the sun.

Bhatt says one workaround is to overprovision each satellite with extra chips to replace the ones that fail. But that’s an expensive proposition: the chips are likely to cost tens of thousands of dollars each, and current Starlink satellites have a lifespan of only about five years.

Competition — and leverage

Musk is not alone in trying to solve these problems.

Starcloud, a company in Redmond, Washington, launched a satellite in November carrying a single Nvidia-made AI computer chip to test how it would fare in space. Google is exploring orbital data centers in a venture it calls Project Suncatcher. And Jeff Bezos’ Blue Origin announced plans in January for a constellation of more than 5,000 satellites to start launching late next year, though its focus has been more on communications than AI.

Still, Musk has an edge: He's got rockets.

Starcloud had to use one of his Falcon rockets to put its chip in space last year. Aetherflux plans to send a set of chips it calls a Galactic Brain to space on a SpaceX rocket later this year. And Google may also need to turn to Musk to get its first two planned prototype satellites off the ground by early next year.

Pierre Lionnet, a research director at the trade association Eurospace, says Musk routinely charges rivals far more than he charges himself — as much as $20,000 per kilo of payload versus $2,000 internally.
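The asymmetry Lionnet describes compounds quickly at constellation scale. A rough illustration using his per-kilo figures, where the 500 kg satellite mass is a hypothetical assumption and not a number from the article:

```python
# Illustrative launch-cost arithmetic using the per-kilo figures cited by Lionnet.
# The 500 kg satellite mass is a hypothetical assumption for illustration.

RIVAL_COST_PER_KG = 20_000    # what SpaceX reportedly charges competitors, USD
INTERNAL_COST_PER_KG = 2_000  # what SpaceX reportedly charges itself, USD
SATELLITE_MASS_KG = 500

def launch_cost(n_satellites: int, cost_per_kg: float) -> float:
    """Total launch cost for a constellation at a given price per kilogram."""
    return n_satellites * SATELLITE_MASS_KG * cost_per_kg

# For a 1,000-satellite constellation:
rival = launch_cost(1_000, RIVAL_COST_PER_KG)        # $10 billion
internal = launch_cost(1_000, INTERNAL_COST_PER_KG)  # $1 billion
print(f"rival: ${rival/1e9:.0f}B, internal: ${internal/1e9:.0f}B")
```

At a tenfold price gap, every thousand satellites a rival launches costs it roughly $9 billion more than the same hardware would cost SpaceX itself.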

He said Musk’s announcements this week signal that he plans to use that advantage to win this new space race.

“When he says we are going to put these data centers in space, it’s a way of telling the others we will keep these low launch costs for myself,” said Lionnet. “It’s a kind of power play.”

A map of U.S. data centers. Courtesy of Rice Business Wisdom

Your data center is either closer than you think or much farther away

houston voices

A new study shows why some facilities cluster in cities for speed and access, while others move to rural regions in search of scale and lower costs. Based on research by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard).

Key findings:

  • Third-party colocation centers are physical facilities in close proximity to firms that use them, while cloud providers operate large data centers from a distance and sell access to virtualized computing resources as on‑demand services over the internet.
  • Hospitals and financial firms often require urban third-party centers for low latency and regulatory compliance, while batch processing and many AI workloads can operate more efficiently from lower-cost cloud hubs.
  • For policymakers trying to attract data centers, access to reliable power, water and high-capacity internet matters more than tax incentives.

Recent outages and the surge in AI-driven computing have made data center siting decisions more consequential than ever, especially as energy and water constraints tighten. Communities invest public dollars on the promise of jobs and growth, while firms weigh long-term commitments to land, power and connectivity.

Against that backdrop, a critical question comes into focus: Where do data centers get built — and what actually drives those decisions?

A new study by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard Business School) provides the first large-scale statistical analysis of data center location strategies across the United States. It offers policymakers and firms a clearer starting point for understanding how different types of data centers respond to economic and strategic incentives.

Forthcoming in the journal Strategy Science, the study examines two major types of infrastructure: third-party colocation centers that lease server space to multiple firms, and hyperscale cloud centers owned by providers like Amazon, Google and Microsoft.

Two Models, Two Location Strategies

The study draws on pre-pandemic data from 2018 and 2019, a period of relative geographic stability in supply and demand. This window gives researchers a clean baseline before remote work, AI demand and new infrastructure pressures began reshaping internet traffic patterns.

The findings show that data centers follow a bifurcated geography. Third-party centers cluster in dense urban markets, where buyers prioritize proximity to customers despite higher land and operating costs. Cloud providers, by contrast, concentrate massive sites in a small number of lower-density regions, where electricity, land and construction are cheaper and economies of scale are easier to achieve.

Third-party data centers, in other words, follow demand. They locate in urban markets where firms in finance, healthcare and IT value low latency, secure storage, and compliance with regulatory standards.

Using county-level data, the researchers modeled how population density, industry mix and operating costs predict where new centers enter. Every U.S. metro with more than 700,000 residents had at least one third-party provider, while many mid-sized cities had none.

This pattern challenges common assumptions. Third-party facilities are more distributed across urban America than prevailing narratives suggest.

Customer proximity matters because some sectors cannot absorb delay. In critical operations, even slight pauses can have real consequences. For hospital systems, lag can affect performance and risk exposure. And in high-frequency trading, milliseconds can determine whether value is captured or lost in a transaction.
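The physics behind that latency pressure is easy to sketch: signals in optical fiber travel at roughly two-thirds the speed of light, so distance converts directly into delay. A minimal illustration, where the distances and the fiber refractive index of 1.468 are typical assumptions rather than figures from the study:

```python
# Why proximity matters: fiber distance converts directly into latency.
# Refractive index and distances are typical illustrative assumptions.

C = 299_792_458.0    # speed of light in vacuum, m/s
FIBER_INDEX = 1.468  # typical group index of single-mode fiber

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring routing and switching delays."""
    one_way_s = (distance_km * 1000.0) / (C / FIBER_INDEX)
    return 2 * one_way_s * 1000.0

print(f"{round_trip_ms(10):.2f} ms")    # a data center across town
print(f"{round_trip_ms(1000):.2f} ms")  # a data center a state away
```

A facility 1,000 km away adds nearly 10 milliseconds of round-trip delay before any routing or processing overhead, an eternity for high-frequency trading and a meaningful lag for interactive hospital systems.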

“For industries where speed is everything, being too far from the physical infrastructure can meaningfully affect performance and risk,” Pan Fang says. “Proximity isn’t optional for sectors that can’t absorb delay.”

The Economics of Distance

For cloud providers, the picture looks very different. Their decisions follow a logic shaped primarily by cost and scale. Because cloud services can be delivered from afar, firms tend to build enormous sites in low-density regions where power is cheap and land is abundant.

These facilities can draw hundreds of megawatts of electricity and operate with far fewer employees than urban centers. “The cloud can serve almost anywhere,” Pan Fang says, “so location is a question of cost before geography.”

The study finds that cloud infrastructure clusters around network backbones and energy economics, not talent pools. Well-known hubs like Ashburn, Virginia — often called “Data Center Alley” — reflect this logic, having benefited from early network infrastructure that made them natural convergence points for digital traffic.

Local governments often try to lure data centers with tax incentives, betting they will create high-tech jobs. But the study suggests other factors matter more to cloud providers, including construction costs, network connectivity and access to reliable, affordable electricity.

When cloud centers need a local presence, distance can sometimes become a constraint. Providers often address this by working alongside third-party operators. “Third-party centers can complement cloud firms when they need a foothold closer to customers,” Pan Fang says.

That hybrid pattern — massive regional hubs complementing strategic colocation — may define the next phase of data center growth.

Looking ahead, shifts in remote work, climate resilience, energy prices and AI-driven computing may reshape where new facilities go. Some workloads may move closer to users, while others may consolidate into large rural hubs. Emerging data-sovereignty rules could also redirect investment beyond the United States.

“The cloud feels weightless,” Pan Fang says, “but it rests on real choices about land, power and proximity.”

---

This article originally appeared on Rice Business Wisdom. Written by Scott Pett.

Pan Fang and Greenstein (2025). “Where the Cloud Rests: The Economic Geography of Data Centers,” forthcoming in Strategy Science.

CenterPoint, NVIDIA and Palantir have formed Chain Reaction. Photo via Getty Images

CenterPoint and partners launch AI initiative to stabilize the power grid

AI infrastructure

Houston-based utility company CenterPoint Energy is one of the founding partners of a new AI infrastructure initiative called Chain Reaction.

Software companies NVIDIA and Palantir have joined CenterPoint in forming Chain Reaction, which is aimed at speeding up AI buildouts for energy producers and distributors, data centers and infrastructure builders. Among the initiative’s goals are to stabilize and expand the power grid to meet growing demand from data centers, and to design and develop large data centers that can support AI activity.

“The energy infrastructure buildout is the industrial challenge of our generation,” Tristan Gruska, Palantir’s head of energy and infrastructure, says in a news release. “But the software that the sector relies on was not built for this moment. We have spent years quietly deploying systems that keep power plants running and grids reliable. Chain Reaction is the result of building from the ground up for the demands of AI.”

CenterPoint serves about 7 million customers in Texas, Indiana, Minnesota and Ohio. After Hurricane Beryl struck Houston in July 2024, CenterPoint committed to building a resilient power grid for the region and chose Palantir as its “software backbone.”

“Never before have technology and energy been so intertwined in determining the future course of American innovation, commercial growth, and economic security,” Jason Wells, chairman, president and CEO of CenterPoint, added in the release.

In November, the utility company got the go-ahead from the Public Utility Commission of Texas for a $2.9 billion upgrade of its Houston-area power grid. CenterPoint serves 2.9 million customers in a 12-county territory anchored by Houston.

A month earlier, CenterPoint launched a $65 billion, 10-year capital improvement plan to support rising demand for power across all of its service territories.

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.

HPE will supply distributed hybrid multicloud technology to the DIS Agency. Photo courtesy of HPE

Houston-based HPE wins $931M contract to upgrade military data centers

defense data centers

Hewlett Packard Enterprise (HPE), based in Spring, Texas, which provides AI, cloud, and networking products and services, has received a $931 million contract to modernize data centers run by the federal Defense Information Systems Agency.

HPE says it will supply distributed hybrid multicloud technology to the federal agency, which provides combat support for U.S. troops. The project will feature HPE’s Private Cloud Enterprise and GreenLake offerings. It will allow DISA to scale and accelerate communications, improve AI and data analytics, boost IT efficiencies, reduce costs and more, according to a news release from HPE.

The contract comes after the completion of HPE’s test of distributed hybrid multicloud technology at Defense Information Systems Agency (DISA) data centers in Mechanicsburg, Pennsylvania, and Ogden, Utah. This technology is aimed at managing DISA’s IT infrastructure and resources across public and private clouds through one hybrid multicloud platform, according to Data Center Dynamics.

Fidelma Russo, executive vice president and general manager of hybrid cloud at HPE, said in a news release that the project will enable DISA to “deliver innovative, future-ready managed services to the agencies it supports that are operating across the globe.”

The platform being developed for DISA “is designed to mirror the look and feel of a public cloud, replicating many of the key features” offered by cloud computing businesses such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, according to The Register.

In the 1990s, DISA consolidated 194 data centers into 16. According to The Register, these are the U.S. military’s most sensitive data centers.

More recently, in 2024, the Fort Meade, Maryland-based agency laid out a five-year strategy to “simplify the network globally with large-scale adoption of command IT environments,” according to Data Center Dynamics.

Google is investing in Texas. Photo courtesy of Google

Google to invest $40 billion in AI data centers in Texas

Google is investing a huge chunk of money in Texas: According to a release, the company will invest $40 billion in cloud and artificial intelligence (AI) infrastructure, with the development of new data centers in Armstrong and Haskell counties.

The company announced its intentions at a meeting on November 14 attended by federal, state, and local leaders, including Gov. Greg Abbott, who called it "a Texas-sized investment."

Google will open two new data center campuses in Haskell County and a data center campus in Armstrong County.

Additionally, the first building at the company’s Red Oak campus in Ellis County is now operational. Google is continuing to invest in its existing Midlothian campus and Dallas cloud region, which are part of the company’s global network of 42 cloud regions that deliver high-performance, low-latency services that businesses and organizations use to build and scale their own AI-powered solutions.

Energy demands

Google is committed to responsibly growing its infrastructure by bringing new energy resources onto the grid, paying for costs associated with its operations, and supporting community energy efficiency initiatives.

One of the new Haskell data centers will be co-located with — or built directly alongside — a new solar and battery energy storage plant, creating the first industrial park to be developed through Google’s partnership with Intersect and TPG Rise Climate announced last year.

Google has contracted to add more than 6,200 megawatts (MW) of net new energy generation and capacity to the Texas electricity grid through power purchase agreements (PPAs) with energy developers such as AES Corporation, Enel North America, Intersect, Clearway, ENGIE, SB Energy, Ørsted, and X-Elio.

Water demands

Google’s three new facilities in Armstrong and Haskell counties will use air-cooling technology, limiting water use to site operations like kitchens. The company is also contributing $2.6 million to help Texas Water Trade create and enhance up to 1,000 acres of wetlands along the Trinity-San Jacinto Estuary. Google is also sponsoring a regenerative agriculture program with Indigo Ag in the Dallas-Fort Worth area and an irrigation efficiency project with N-Drip in the Texas High Plains.

In addition to the data centers, Google is committing $7 million in grants to support AI-related initiatives in healthcare, energy, and education across the state. This includes helping CareMessage enhance rural healthcare access, enabling the University of Texas at Austin and Texas Tech University to address energy challenges that will arise with AI, and expanding AI training for Texas educators and students through support to Houston City College.

---

This article originally appeared on CultureMap.com.

An aerial view of Stargate’s AI data center in Abilene. Photo courtesy OpenAI.

Abbott highlights Texas AI boom, with Houston projects on the horizon

AI investments are booming in Texas, Gov. Greg Abbott says. And Houston is poised to benefit from this surge.

At a recent Texas Economic Development Corp. gathering in the Dallas-Fort Worth area, Abbott said AI projects on the horizon in the Lone Star State would be bigger than the $500 billion multistate Project Stargate, according to the Dallas Business Journal. So far, Stargate includes three AI data centers in Texas.

Stargate, a new partnership among OpenAI, Oracle, SoftBank, and the federal government, is building AI infrastructure around the country. The project’s first data center is in Abilene, and the center’s second phase is underway. Once the second phase is finished, the 875-acre site will host eight buildings totaling about 4 million square feet with a power capacity of 1.2 gigawatts. An additional 600 megawatts of capacity might be added later.

On Sept. 23, Stargate announced the development of another five AI data centers in the U.S., including a new facility in Shackelford County, Texas, near Abilene. That facility is likely a roughly $25 billion, 1.4-gigawatt AI data center that Vantage Data Centers is building on a 1,200-acre site in Shackelford County.

Another will be in Milam County, between Waco and Austin. In conjunction with Stargate, OpenAI plans to occupy the more than $3 billion center, which will be situated on a nearly 600-acre site, the Austin Business Journal reported. OpenAI has teamed up with SoftBank-backed SB Energy Global to build the facility.

Abbott said several unannounced AI projects in Texas — namely, data centers — will be larger than Stargate.

“Bottom line is ... when you look at diversification, the hottest thing going on right now is artificial intelligence,” Abbott said.

The Houston area almost certainly stands to attract some of the projects teased by the governor.

In Houston, Taiwanese tech manufacturer Foxconn already is investing $450 million to make AI servers at the 100-acre Fairbanks Logistics Park, which Foxconn recently purchased for a reported $142 million. The park features four industrial buildings totaling one million square feet. It appears Foxconn will manufacture the servers for Apple and Nvidia, both of which have announced they’ll open server factories in Houston.

The Foxconn, Apple, and Nvidia initiatives are high-profile examples of Houston’s ascent in the AI economy. A report released in July by the Brookings Institution identified Houston as one of the country’s 28 “star” hubs for AI.

The Greater Houston Partnership says the Houston area is undergoing an "AI-driven data revolution."

“As Houston rapidly evolves into a hub for AI, cloud computing, and data infrastructure, the city is experiencing a surge in data center investments driven by its unique position at the intersection of energy, technology, and innovation,” the partnership says.


Houston geothermal unicorn Fervo officially files for IPO

going public

Fervo Energy has officially filed for an IPO.

The Houston-based geothermal unicorn filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission on April 17 to list its Class A common stock on the Nasdaq exchange. Fervo intends to be listed under the ticker symbol "FRVO."

The number and price of the shares have not yet been determined, according to a news release from Fervo. J.P. Morgan, BofA Securities, RBC Capital Markets and Barclays are leading the offering.

The highly anticipated filing comes as Fervo readies its flagship Cape Station geothermal project to deliver its first power later this year.

"Today, miles-long lines for gasoline have been replaced by lines for electricity. Tech companies compete for megawatts to claim AI market share. Manufacturers jockey for power to strengthen American industry. Utilities demand clean, firm electricity to stabilize the grid," Fervo CEO Tim Latimer shared in the filing. "Fervo is prepared to serve all of these customers. Not with complex, idiosyncratic projects but with a simplified, standardized product capable of delivering around-the-clock, carbon-free power using proven oil and gas technology."

Fervo has been preparing to file for an IPO for months. Axios Pro first reported that the company "quietly" filed for an IPO in January and estimated it would be valued between $2 billion and $3 billion.

Fervo also closed $421 million in non-recourse debt financing for the first phase of Cape Station last month and raised a $462 million Series E in December. The company also announced the addition of four heavyweights to its board of directors last week, including Meg Whitman, former CEO of eBay, Hewlett-Packard, and Spring-based HPE.

Fervo reported a net loss of $70.5 million for the 2025 fiscal year in the S-1 filing and a loss of $41.1 million in 2024.

Tracxn.com estimates that Fervo has raised $1.12 billion over 12 funding rounds. The company was founded in 2017 by Latimer and CTO Jack Norbeck.

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.

New UT Austin med center, anchored by MD Anderson, gets $1 billion gift

Future of Health

A donation announced Tuesday, April 21, breaks a major record at the University of Texas at Austin. Michael and Susan Dell are now UT Austin's first supporters to give $1 billion. In response, the university will create the UT Dell Campus for Advanced Research and the UT Dell Medical Center to "advance human health," per a press release.

The release also notes "significant support" for undergraduate scholarships, student housing, and the Texas Advanced Computing Center for supercomputing research.

Both the new research campus and the UT Dell Medical Center will integrate advanced computing into their research and practices. At the medical center, the university hopes that will lead to "earlier detection, more precise and personalized care, and better health outcomes." The University of Texas MD Anderson Cancer Center will also be integrated into the new medical center.

That comes with a numeric goal measured in 10s: raise $10 billion and rank among the top 10 medical centers in the U.S., both in the next decade.

In the shorter term, the university will break ground on the medical center with architecture firm Skidmore, Owings & Merrill (SOM) "later this year."

“UT Austin, where Dell Technologies was founded from a dorm room, has always been a place where bold ideas become real-world impact,” said Michael and Susan Dell in a joint statement.

They continued, “What makes this moment so meaningful is the opportunity to build something that brings every part of the journey together — from how students learn, to how discoveries are made, to how care reaches families. By bringing together medicine, science and computing in one campus designed for the AI era, UT can create more opportunity, deliver better outcomes, and build a stronger future for communities across Texas and beyond.”

This is the second major gift this year for the planned multibillion-dollar medical center. In January, Tench Coxe, a former venture capitalist who’s a major shareholder in chipmaking giant Nvidia, and Simone Coxe, co-founder and former CEO of the Blanc & Otus PR firm, contributed $100 million.

Baylor scientist lands $2M grant to explore links between viruses and Alzheimer’s

Alzheimer’s research

A Baylor College of Medicine scientist will begin exploring the possible link between Alzheimer’s disease and viral infections thanks to a $2 million grant awarded in March.

Dr. Ryan S. Dhindsa is an assistant professor of pathology & immunology at Baylor and a principal investigator at Texas Children’s Duncan Neurological Research Institute (Duncan NRI). He hypothesizes that Alzheimer’s may have some link to previous viral infections contracted by the patient. To study this intriguing possibility, the American Brain Foundation has gifted him the Cure One, Cure Many award in neuroinflammation.

“It is an honor to receive this support from the Cure One, Cure Many Award. Viral infections are emerging as a major, underappreciated driver of Alzheimer's disease, and this award will allow our team to conduct the most comprehensive screen of viral exposures and host genetics in Alzheimer's to date, spanning over a million individuals,” Dhindsa said in a news release. “Our goal is to identify which viruses matter most, why some people are more vulnerable than others, and ultimately move the field closer to new therapeutic strategies for patients.”

Roughly 150 million people worldwide will suffer from Alzheimer’s by 2050, making it the most common cause of dementia in the world. Despite this, scientists are still at a loss as to what exactly causes it.

Dhindsa’s research is part of a new range of theories that certain viral infections may trigger Alzheimer’s. His team will take a two-fold approach. First, they will analyze the medical records of more than a million individuals looking for patterns. Second, they will analyze viral DNA in stem cell-derived brain cells to see how the infections could contribute to neurological decay. The scale of the genomic data gathering is unprecedented and may highlight a link that traditional studies have missed.

Also joining the project are Dr. Caleb Lareau of Memorial Sloan Kettering Cancer Center and Dr. Artem Babaian of the University of Toronto. Should a link be found, it would open the door to using anti-virals to prevent or treat Alzheimer’s.