Elon Musk announced that both SpaceX and X will relocate headquarters to two Texas cities. Photo via Getty Images

Elon Musk vowed this week to upend another industry just as he did with cars and rockets — and once again he's taking on long odds.

The world's richest man said he wants to put as many as a million satellites into orbit to form vast, solar-powered data centers in space — a move he says would allow expanded use of artificial intelligence and chatbots without triggering blackouts or sending utility bills soaring.

To finance that effort, Musk combined SpaceX with his AI business on Monday, February 2, and plans a big initial public offering of the combined company.

“Space-based AI is obviously the only way to scale,” Musk wrote on SpaceX’s website, adding about his solar ambitions, “It’s always sunny in space!”

But scientists and industry experts say even Musk — who outsmarted Detroit to turn Tesla into the world’s most valuable automaker — faces formidable technical, financial and environmental obstacles.

Feeling the heat

Capturing the sun’s energy from space to run chatbots and other AI tools would ease pressure on power grids and cut demand for sprawling computing warehouses that are swallowing up farms and forests and consuming vast amounts of water for cooling.

But space presents its own set of problems.

Data centers generate enormous heat. Space seems to offer a solution because it is cold. But space is also a vacuum, which traps heat inside objects much the way a Thermos keeps coffee hot: with no air between its double walls, there is nothing to carry heat away.

“An uncooled computer chip in space would overheat and melt much faster than one on Earth,” said Josep Jornet, a computer and electrical engineering professor at Northeastern University.

One fix is to build giant radiator panels that glow in infrared light to push the heat “out into the dark void,” said Jornet, noting that the technology has worked on a small scale, including on the International Space Station. But for Musk's data centers, he said, it would require an array of “massive, fragile structures that have never been built before.”
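
To give a sense of scale, here is a back-of-the-envelope sketch using the Stefan-Boltzmann law. The heat load, panel temperature, and emissivity below are illustrative assumptions, not figures from any actual design, and a real analysis would also account for sunlight absorbed by the panels:

```python
# Rough radiator sizing for an orbital data center via the
# Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# Illustrative numbers only -- real designs must handle absorbed
# sunlight, temperature gradients, and two-sided radiation.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_watts, temp_kelvin=300.0, emissivity=0.9):
    """Panel area needed to radiate a given heat load to deep space."""
    flux = emissivity * SIGMA * temp_kelvin**4  # W per m^2 of panel
    return heat_watts / flux

# Even a modest 1-megawatt facility (large AI training clusters
# draw far more) needs on the order of 2,400 square meters:
area = radiator_area_m2(1e6)
print(f"{area:,.0f} m^2 of radiator per megawatt")
```

At gigawatt scale the required area grows a thousandfold, which is the arithmetic behind the "massive, fragile structures" concern.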

Floating debris

Then there is space junk.

A single satellite breaking down or losing orbit could trigger a cascade of collisions, potentially disrupting emergency communications, weather forecasting and other services.

Musk noted in a recent regulatory filing that he has had only one “low-velocity debris generating event” in seven years running Starlink, his satellite communications network. Starlink has operated about 10,000 satellites — but that's a fraction of the million or so he now plans to put in space.

“We could reach a tipping point where the chance of collision is going to be too great,” said University at Buffalo’s John Crassidis, a former NASA engineer. “And these objects are going fast, 17,500 miles per hour. There could be very violent collisions.”
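
That speed is consistent with basic orbital mechanics. A quick circular-orbit calculation bears it out; the 550 km altitude here is an assumption, typical of Starlink-class orbits, not a figure from the article:

```python
import math

# Circular orbital speed: v = sqrt(mu / r), where mu is Earth's
# standard gravitational parameter and r the orbital radius.
MU_EARTH = 3.986e14      # m^3 / s^2
EARTH_RADIUS = 6.371e6   # meters

def orbital_speed_mph(altitude_m):
    r = EARTH_RADIUS + altitude_m
    v_ms = math.sqrt(MU_EARTH / r)  # meters per second
    return v_ms * 2.23694           # convert to miles per hour

# A satellite at a 550 km, Starlink-like altitude moves at roughly
# 17,000 mph -- in line with the figure Crassidis cites.
print(f"{orbital_speed_mph(550e3):,.0f} mph")
```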

No repair crews

Even without collisions, satellites fail, chips degrade, parts break.

Specialized GPU chips used by AI companies, for instance, can become damaged and need to be replaced.

“On Earth, what you would do is send someone down to the data center,” said Baiju Bhatt, CEO of Aetherflux, a space-based solar energy company. “You replace the server, you replace the GPU, you’d do some surgery on that thing and you’d slide it back in.”

But no such repair crew exists in orbit, and GPUs in space are vulnerable to damage from high-energy particles streaming off the sun.

Bhatt says one workaround is to overprovision each satellite with extra chips to replace the ones that fail. But that is an expensive proposition: the chips are likely to cost tens of thousands of dollars each, and current Starlink satellites have a lifespan of only about five years.
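
Simple binomial arithmetic shows why overprovisioning gets expensive fast. The chip count, survival rate, and confidence level below are illustrative assumptions, not figures from any company's design:

```python
from math import comb

def chips_needed(working_required, survival_prob, confidence=0.99):
    """Smallest launch count n such that, with each chip independently
    surviving with probability survival_prob, at least working_required
    chips are still alive with the given confidence."""
    n = working_required
    while True:
        # P(at least working_required survivors out of n launched)
        p_ok = sum(
            comb(n, k) * survival_prob**k * (1 - survival_prob)**(n - k)
            for k in range(working_required, n + 1)
        )
        if p_ok >= confidence:
            return n
        n += 1

# Illustrative: a satellite that needs 64 working GPUs, each with an
# assumed 80% chance of surviving a five-year mission, must launch
# dozens of extra chips -- at tens of thousands of dollars apiece.
n = chips_needed(64, 0.80)
print(n, "launched for 64 needed;", n - 64, "spares")
```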

Competition — and leverage

Musk is not alone in trying to solve these problems.

Starcloud, a company in Redmond, Washington, launched a satellite in November carrying a single Nvidia-made AI computer chip to test how it would fare in space. Google is exploring orbital data centers in a venture it calls Project Suncatcher. And Jeff Bezos’ Blue Origin announced plans in January for a constellation of more than 5,000 satellites to start launching late next year, though its focus has been more on communications than AI.

Still, Musk has an edge: He's got rockets.

Starcloud had to use one of his Falcon rockets to put its chip in space last year. Aetherflux plans to send a set of chips it calls a Galactic Brain to space on a SpaceX rocket later this year. And Google may also need to turn to Musk to get its first two planned prototype satellites off the ground by early next year.

Pierre Lionnet, a research director at the trade association Eurospace, says Musk routinely charges rivals far more than he charges himself: as much as $20,000 per kilogram of payload versus $2,000 internally.

He said Musk’s announcements this week signal that he plans to use that advantage to win this new space race.

“When he says we are going to put these data centers in space, it’s a way of telling the others we will keep these low launch costs for myself,” said Lionnet. “It’s a kind of power play.”

A map of U.S. data centers. Courtesy of Rice Business Wisdom

Your data center is either closer than you think or much farther away

houston voices

A new study shows why some facilities cluster in cities for speed and access, while others move to rural regions in search of scale and lower costs. Based on research by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard).

Key findings:

  • Third-party colocation centers are physical facilities in close proximity to firms that use them, while cloud providers operate large data centers from a distance and sell access to virtualized computing resources as on‑demand services over the internet.
  • Hospitals and financial firms often require urban third-party centers for low latency and regulatory compliance, while batch processing and many AI workloads can operate more efficiently from lower-cost cloud hubs.
  • For policymakers trying to attract data centers, access to reliable power, water and high-capacity internet matters more than tax incentives.

Recent outages and the surge in AI-driven computing have made data center siting decisions more consequential than ever, especially as energy and water constraints tighten. Communities spend public dollars on the promise of jobs and growth, while firms weigh long-term commitments to land, power and connectivity.

Against that backdrop, a critical question comes into focus: Where do data centers get built — and what actually drives those decisions?

A new study by Tommy Pan Fang (Rice Business) and Shane Greenstein (Harvard Business School) provides the first large-scale statistical analysis of data center location strategies across the United States. It offers policymakers and firms a clearer starting point for understanding how different types of data centers respond to economic and strategic incentives.

Forthcoming in the journal Strategy Science, the study examines two major types of infrastructure: third-party colocation centers that lease server space to multiple firms, and hyperscale cloud centers owned by providers like Amazon, Google and Microsoft.

Two Models, Two Location Strategies

The study draws on pre-pandemic data from 2018 and 2019, a period of relative geographic stability in supply and demand. This window gives researchers a clean baseline before remote work, AI demand and new infrastructure pressures began reshaping internet traffic patterns.

The findings show that data centers follow a bifurcated geography. Third-party centers cluster in dense urban markets, where buyers prioritize proximity to customers despite higher land and operating costs. Cloud providers, by contrast, concentrate massive sites in a small number of lower-density regions, where electricity, land and construction are cheaper and economies of scale are easier to achieve.

Third-party data centers, in other words, follow demand. They locate in urban markets where firms in finance, healthcare and IT value low latency, secure storage, and compliance with regulatory standards.

Using county-level data, the researchers modeled how population density, industry mix and operating costs predict where new centers enter. Every U.S. metro with more than 700,000 residents had at least one third-party provider, while many mid-sized cities had none.

This pattern challenges common assumptions. Third-party facilities are more distributed across urban America than prevailing narratives suggest.

Customer proximity matters because some sectors cannot absorb delay. In critical operations, even slight pauses can have real consequences. For hospital systems, lag can affect performance and risk exposure. And in high-frequency trading, milliseconds can determine whether value is captured or lost in a transaction.

“For industries where speed is everything, being too far from the physical infrastructure can meaningfully affect performance and risk,” Pan Fang says. “Proximity isn’t optional for sectors that can’t absorb delay.”
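
The speed-of-light arithmetic behind that point is straightforward: signals in optical fiber travel at roughly two-thirds of light's vacuum speed, so distance puts a hard floor on round-trip delay. A minimal sketch, with illustrative distances not drawn from the study:

```python
# Round-trip latency floor imposed by the speed of light in optical
# fiber (~2/3 of c). Real-world latency adds routing, switching, and
# queueing delays on top of this physical minimum.

C_FIBER = 2.0e8  # meters per second, approximate speed of light in fiber

def min_round_trip_ms(distance_km):
    """Physical minimum round-trip time to a facility distance_km away."""
    return 2 * distance_km * 1000 / C_FIBER * 1000  # milliseconds

# A data center across town vs. across the country:
for km in (10, 100, 1000):
    print(f"{km:>5} km away -> at least {min_round_trip_ms(km):.2f} ms round trip")
```

At 1,000 km the floor alone reaches 10 ms, which is an eternity for high-frequency trading and a meaningful delay for hospital systems — hence the urban clustering.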

The Economics of Distance

For cloud providers, the picture looks very different. Their decisions follow a logic shaped primarily by cost and scale. Because cloud services can be delivered from afar, firms tend to build enormous sites in low-density regions where power is cheap and land is abundant.

These facilities can draw hundreds of megawatts of electricity and operate with far fewer employees than urban centers. “The cloud can serve almost anywhere,” Pan Fang says, “so location is a question of cost before geography.”

The study finds that cloud infrastructure clusters around network backbones and energy economics, not talent pools. Well-known hubs like Ashburn, Virginia — often called “Data Center Alley” — reflect this logic, having benefited from early network infrastructure that made them natural convergence points for digital traffic.

Local governments often try to lure data centers with tax incentives, betting they will create high-tech jobs. But the study suggests other factors matter more to cloud providers, including construction costs, network connectivity and access to reliable, affordable electricity.

When cloud centers need a local presence, distance can sometimes become a constraint. Providers often address this by working alongside third-party operators. “Third-party centers can complement cloud firms when they need a foothold closer to customers,” Pan Fang says.

That hybrid pattern — massive regional hubs complementing strategic colocation — may define the next phase of data center growth.

Looking ahead, shifts in remote work, climate resilience, energy prices and AI-driven computing may reshape where new facilities go. Some workloads may move closer to users, while others may consolidate into large rural hubs. Emerging data-sovereignty rules could also redirect investment beyond the United States.

“The cloud feels weightless,” Pan Fang says, “but it rests on real choices about land, power and proximity.”

---

This article originally appeared on Rice Business Wisdom. Written by Scott Pett.

Pan Fang and Greenstein (2025). “Where the Cloud Rests: The Economic Geography of Data Centers,” forthcoming in Strategy Science.

CenterPoint, NVIDIA and Palantir have formed Chain Reaction. Photo via Getty Images

CenterPoint and partners launch AI initiative to stabilize the power grid

AI infrastructure

Houston-based utility company CenterPoint Energy is one of the founding partners of a new AI infrastructure initiative called Chain Reaction.

Chipmaker NVIDIA and software company Palantir have joined CenterPoint in forming Chain Reaction, which is aimed at speeding up AI buildouts for energy producers and distributors, data centers and infrastructure builders. Among the initiative’s goals are to stabilize and expand the power grid to meet growing demand from data centers, and to design and develop large data centers that can support AI activity.

“The energy infrastructure buildout is the industrial challenge of our generation,” Tristan Gruska, Palantir’s head of energy and infrastructure, says in a news release. “But the software that the sector relies on was not built for this moment. We have spent years quietly deploying systems that keep power plants running and grids reliable. Chain Reaction is the result of building from the ground up for the demands of AI.”

CenterPoint serves about 7 million customers in Texas, Indiana, Minnesota and Ohio. After Hurricane Beryl struck Houston in July 2024, CenterPoint committed to building a resilient power grid for the region and chose Palantir as its “software backbone.”

“Never before have technology and energy been so intertwined in determining the future course of American innovation, commercial growth, and economic security,” Jason Wells, chairman, president and CEO of CenterPoint, added in the release.

In November, the utility company got the go-ahead from the Public Utility Commission of Texas for a $2.9 billion upgrade of its Houston-area power grid. CenterPoint serves 2.9 million customers in a 12-county territory anchored by Houston.

A month earlier, CenterPoint launched a $65 billion, 10-year capital improvement plan to support rising demand for power across all of its service territories.

---

This article originally appeared on our sister site, EnergyCapitalHTX.com.

HPE will supply distributed hybrid multicloud technology to the DIS Agency. Photo courtesy of HPE

Houston-based HPE wins $931M contract to upgrade military data centers

defense data centers

Hewlett Packard Enterprise (HPE), based in Spring, Texas, which provides AI, cloud, and networking products and services, has received a $931 million contract to modernize data centers run by the federal Defense Information Systems Agency.

HPE says it will supply distributed hybrid multicloud technology to the federal agency, which provides combat support for U.S. troops. The project will feature HPE’s Private Cloud Enterprise and GreenLake offerings. It will allow DISA to scale and accelerate communications, improve AI and data analytics, boost IT efficiencies, reduce costs and more, according to a news release from HPE.

The contract comes after the completion of HPE’s test of distributed hybrid multicloud technology at Defense Information Systems Agency (DISA) data centers in Mechanicsburg, Pennsylvania, and Ogden, Utah. This technology is aimed at managing DISA’s IT infrastructure and resources across public and private clouds through one hybrid multicloud platform, according to Data Center Dynamics.

Fidelma Russo, executive vice president and general manager of hybrid cloud at HPE, said in a news release that the project will enable DISA to “deliver innovative, future-ready managed services to the agencies it supports that are operating across the globe.”

The platform being developed for DISA “is designed to mirror the look and feel of a public cloud, replicating many of the key features” offered by cloud computing businesses such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, according to The Register.

In the 1990s, DISA consolidated 194 data centers into 16. According to The Register, these are the U.S. military’s most sensitive data centers.

More recently, in 2024, the Fort Meade, Maryland-based agency laid out a five-year strategy to “simplify the network globally with large-scale adoption of command IT environments,” according to Data Center Dynamics.

Google is investing in Texas. Photo courtesy of Google

Google to invest $40 billion in AI data centers in Texas

Google is investing a huge chunk of money in Texas: According to a release, the company will invest $40 billion in cloud and artificial intelligence (AI) infrastructure, with the development of new data centers in Armstrong and Haskell counties.

The company announced its intentions at a meeting on November 14 attended by federal, state, and local leaders, including Gov. Greg Abbott, who called it "a Texas-sized investment."

Google will open two new data center campuses in Haskell County and a data center campus in Armstrong County.

Additionally, the first building at the company’s Red Oak campus in Ellis County is now operational. Google is continuing to invest in its existing Midlothian campus and Dallas cloud region, which belong to the company’s global network of 42 cloud regions. Those regions deliver the high-performance, low-latency services that businesses and organizations use to build and scale their own AI-powered solutions.

Energy demands

Google is committed to responsibly growing its infrastructure by bringing new energy resources onto the grid, paying for costs associated with its operations, and supporting community energy efficiency initiatives.

One of the new Haskell data centers will be co-located with — or built directly alongside — a new solar and battery energy storage plant, creating the first industrial park to be developed through Google’s partnership with Intersect and TPG Rise Climate announced last year.

Google has contracted to add more than 6,200 megawatts (MW) of net new energy generation and capacity to the Texas electricity grid through power purchase agreements (PPAs) with energy developers such as AES Corporation, Enel North America, Intersect, Clearway, ENGIE, SB Energy, Ørsted, and X-Elio.

Water demands

Google’s three new facilities in Armstrong and Haskell counties will use air-cooling technology, limiting water use to site operations like kitchens. The company is also contributing $2.6 million to help Texas Water Trade create and enhance up to 1,000 acres of wetlands along the Trinity-San Jacinto Estuary. Google is also sponsoring a regenerative agriculture program with Indigo Ag in the Dallas-Fort Worth area and an irrigation efficiency project with N-Drip in the Texas High Plains.

In addition to the data centers, Google is committing $7 million in grants to support AI-related initiatives in healthcare, energy, and education across the state. This includes helping CareMessage enhance rural healthcare access; enabling the University of Texas at Austin and Texas Tech University to address energy challenges that will arise with AI; and expanding AI training for Texas educators and students through support to Houston City College.

---

This article originally appeared on CultureMap.com.

An aerial view of Stargate’s AI data center in Abilene. Photo courtesy OpenAI.

Abbott highlights Texas AI boom, with Houston projects on the horizon

AI investments are booming in Texas, Gov. Greg Abbott says. And Houston is poised to benefit from this surge.

At a recent Texas Economic Development Corp. gathering in the Dallas-Fort Worth area, Abbott said AI projects on the horizon in the Lone Star State would be bigger than the $500 billion multistate Project Stargate, according to the Dallas Business Journal. So far, Stargate includes three AI data centers in Texas.

Stargate, a new partnership among OpenAI, Oracle, SoftBank, and the federal government, is building AI infrastructure around the country. The project’s first data center is in Abilene, and the center’s second phase is underway. Once the second phase is finished, the 875-acre site will host eight buildings totaling about 4 million square feet with a power capacity of 1.2 gigawatts. An additional 600 megawatts of capacity might be added later.

On Sept. 23, Stargate announced the development of another five AI data centers in the U.S., including a new facility in Shackelford County, Texas, near Abilene. That facility is likely a roughly $25 billion, 1.4-gigawatt AI data center that Vantage Data Centers is building on a 1,200-acre site in Shackelford County.

Another will be in Milam County, between Waco and Austin. In conjunction with Stargate, OpenAI plans to occupy the more than $3 billion center, which will be situated on a nearly 600-acre site, the Austin Business Journal reported. OpenAI has teamed up with SoftBank-backed SB Energy Global to build the facility.

Abbott said several unannounced AI projects in Texas — namely, data centers — will be larger than Stargate.

“Bottom line is ... when you look at diversification, the hottest thing going on right now is artificial intelligence,” Abbott said.

The Houston area almost certainly stands to attract some of the projects teased by the governor.

In Houston, Taiwanese tech manufacturer Foxconn already is investing $450 million to make AI servers at the 100-acre Fairbanks Logistics Park, which Foxconn recently purchased for a reported $142 million. The park features four industrial buildings totaling one million square feet. It appears Foxconn will manufacture the servers for Apple and Nvidia, both of which have announced they’ll open server factories in Houston.

The Foxconn, Apple, and Nvidia initiatives are high-profile examples of Houston’s ascent in the AI economy. A report released in July by the Brookings Institution identified Houston as one of the country’s 28 “star” hubs for AI.

The Greater Houston Partnership says the Houston area is undergoing an "AI-driven data revolution."

“As Houston rapidly evolves into a hub for AI, cloud computing, and data infrastructure, the city is experiencing a surge in data center investments driven by its unique position at the intersection of energy, technology, and innovation,” the partnership says.

Houston unicorn closes $421M to fuel first phase of flagship energy project

Heating Up

Houston geothermal unicorn Fervo Energy has closed $421 million in non-recourse debt financing for the first phase of its flagship Cape Station project in Beaver County, Utah.

Fervo believes Cape Station can meet surging power demand from data centers, domestic manufacturing and an energy market seeking clean, reliable power. According to the company, Cape Station will begin delivering its first power to the grid this year and is expected to reach approximately 100 megawatts of operating capacity by early 2027. Fervo added that it plans to scale to 500 megawatts.

The $421 million financing package includes a $309 million construction-to-term loan, a $61 million tax credit bridge loan, and a $51 million letter of credit facility. The facilities will fund the remaining construction costs for the first phase of Cape Station, and will also support the project’s counterparty credit support requirements.

Coordinating lead arrangers include Barclays, BBVA, HSBC, MUFG, RBC and Société Générale, with additional participation from Bank of America, J.P. Morgan and Sumitomo Mitsui Trust Bank, Limited, New York Branch.

“As demand for firm, clean, affordable power accelerates, EGS (Enhanced Geothermal Systems) is set to become a core energy asset class for infrastructure lenders,” Sean Pollock, managing director of project finance at RBC Capital Markets, said in a news release. “Fervo is pioneering this step change with Cape Station, a vital contribution to American energy security that RBC is proud to support.”

The oversubscribed financing marks Cape Station’s shift from early-stage and bridge funding to a long-term, non-recourse capital structure, according to the news release.

“Non-recourse financing has historically been considered out of reach for first-of-a-kind projects,” David Ulrey, CFO of Fervo Energy, said in a news release. “Cape Station disrupts that narrative. With proven oil and gas technology paired with AI-enabled drilling and exploration, robust commercial offtake, operational consistency, and an unrelenting focus on health and safety, we have shown that EGS is a highly bankable asset class.”

Fervo continues to be one of the top-funded startups in the Houston area. The company had raised about $1.5 billion prior to the latest $421 million, including a $462 million Series E closed in December.

According to Axios Pro, Fervo filed in January for an IPO that would value the company between $2 billion and $3 billion.

---

This article first appeared on EnergyCapitalHTX.com.

Houston food giant Sysco to acquire competitor in $29 billion deal

Mergers & Acquisitions

Sysco, the nation's largest food distributor, will acquire supplier Restaurant Depot in a deal worth more than $29 billion.

The acquisition would create a closer link between Sysco and its customers that currently turn to Restaurant Depot for supplies needed quickly, an industry segment known as “cash-and-carry wholesale.”

Sysco, based in Houston, serves more than 700,000 restaurants, hospitals, schools, and hotels, supplying them with everything from butter and eggs to napkins. Those goods are typically ordered ahead of time based on how much traffic restaurants expect to see.

Restaurant Depot offers memberships to mom-and-pop restaurants and other businesses, giving them access to warehouses stocked with supplies for when they run short of what they've purchased from suppliers like Sysco.

It is a fast-growing, high-margin segment, and the deal will likely mean thousands of restaurants rely increasingly on Sysco for day-to-day needs.

Restaurant Depot shareholders will receive $21.6 billion in cash and 91.5 million Sysco shares. Based on Sysco’s closing share price of $81.80 as of March 27, 2026, the deal has an enterprise value of about $29.1 billion.
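
The reported figures are internally consistent, as a quick check on the deal arithmetic shows (this simply adds up the numbers reported above; it is not an independent valuation):

```python
# Deal-value arithmetic for the Sysco / Restaurant Depot acquisition,
# using the figures reported in the announcement.
cash = 21.6e9    # cash component, dollars
shares = 91.5e6  # Sysco shares issued to Restaurant Depot holders
price = 81.80    # Sysco closing price on March 27, 2026

stock_value = shares * price  # roughly $7.5 billion in stock
total = cash + stock_value

# Matches the reported ~$29.1 billion enterprise value.
print(f"${total / 1e9:.1f} billion")
```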

Restaurant Depot was founded in Brooklyn in 1976. The family-run business, then known as Jetro Restaurant Depot, has become the nation's largest cash-and-carry wholesaler.

The boards of both companies have approved the acquisition, but it would still need regulatory approval.

Shares of Sysco Corp. tumbled 13% Monday to $71.26, an initial decline some industry analysts expected given the cost of the deal.

Houston researcher builds radar to make self-driving cars safer

eyes on the road

A Rice University researcher is giving autonomous vehicles an “extra set of eyes.”

Current autonomous vehicles (AVs) can have an incomplete view of their surroundings, and challenges like pedestrian movement, low-light conditions and adverse weather only compound these visibility limitations.

Kun Woo Cho, a postdoctoral researcher in the lab of Rice professor of electrical and computer engineering Ashutosh Sabharwal, has developed EyeDAR to help address such issues and enhance the vehicles’ sensing accuracy. Her research was supported in part by the National Science Foundation.

The EyeDAR is an orange-sized, low-power, millimeter-wave radar that could be placed at streetlights and intersections. Its design was inspired by that of the human eye. Researchers envision that the low-cost sensors could help ensure that AVs always pick up on emergent obstacles, even when the vehicles are not within proper range for their onboard sensors and when visibility is limited.

“Current automotive sensor systems like cameras and lidar struggle with poor visibility such as you would encounter due to rain or fog or in low-lighting conditions,” Cho said in a news release. “Radar, on the other hand, operates reliably in all weather and lighting conditions and can even see through obstacles.”

Signals from a typical radar system scatter when they encounter an obstacle. Some of the signal is reflected back to the source, but most of it is often lost. In the case of AVs, this means that "pedestrians emerging from behind large vehicles, cars creeping forward at intersections or cyclists approaching at odd angles can easily go unnoticed," according to Rice.

EyeDAR, however, works to capture lost radar reflections, determine their direction and report them back to the AV in a sequence of 0s and 1s.

“Like blinking Morse code,” Cho added. “EyeDAR is a talking sensor: it is a first instance of integrating radar sensing and communication functionality in a single design.”

In testing, EyeDAR resolved target directions 200 times faster than conventional radar designs.

While EyeDAR currently targets risks associated with AVs, particularly in high-traffic urban areas, researchers also believe the technology behind it could complement artificial intelligence efforts and be integrated into robots, drones and wearable platforms.

“EyeDAR is an example of what I like to call ‘analog computing,’” Cho added in the release. “Over the past two decades, people have been focusing on the digital and software side of computation, and the analog, hardware side has been lagging behind. I want to explore this overlooked analog design space.”