AI in Space: Start at the Edge, Build for the Mission
Apr 27, 2026
I started my career working on the space shuttle program at IBM and thought my life endeavors would center on space. Instead, my interest turned to compute devices and the technology that can bring computation to the masses. Those interests are now aligning with the realities of AI in space, both for edge computation aboard satellites and spacecraft today and for the massive data centers planned for orbit.
For years, AMD has built for “edge reality” – where power is constrained, connectivity isn’t guaranteed, and success is measured in real-time decisions, not theoretical peak performance. We’ve helped bring AI into PCs, industrial systems and embedded deployments by combining heterogeneous compute (CPUs, GPUs and adaptive compute), along with a strong software foundation. This “edge playbook” centers on a relentless focus on performance-per-watt and mission-critical reliability, allowing our partners to right-size performance for their specific needs.
We see space as the next and most demanding frontier for edge computing. The same fundamentals apply; they’re just amplified: strict power and thermal budgets, intermittent communications, expected long service lives, and a premium on reliability and autonomy. We are taking what we’ve learned enabling AI at the edge and extending it to space workloads with holistic co-design across hardware, software and systems so that on-board intelligence can be deployed efficiently, updated responsibly, and scaled across missions and form factors.
Orbiting data centers are emerging. As they do, AMD’s focus on adaptive, scalable platforms and an open ecosystem will help partners build robust, efficient end-to-end systems.
Space is the Ultimate Edge Environment
The immediate opportunity is on-board intelligence that senses, decides and acts as the mission happens. Space makes edge processing not just beneficial but often necessary: local AI becomes the backbone of operations where every downlink is constrained, every millisecond of latency matters and connectivity can’t be assumed.
Intelligence at the Point of Action
By moving AI from the terrestrial data center to the on-board system, the spacecraft shifts from a passive sensor to an autonomous decision-maker that can act even when the downlink is dark.
Downlink is limited by bandwidth, power and communication windows, so sending everything to a terrestrial data center is inefficient and slow. On-board AI can discard low-value data (like cloudy frames in Earth observation), can surface urgent events (like early wildfire signatures) and can enable resilient autonomy when connectivity is intermittent.
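The triage logic described above can be sketched in a few lines. This is an illustrative example only, not an AMD API: the per-frame `cloud_fraction` and `event_score` fields, and the thresholds, are assumed inputs from hypothetical upstream models.

```python
# Minimal sketch of on-board frame triage. Assumes each captured frame
# carries a pre-computed cloud-fraction estimate and an event score
# (e.g., from a lightweight wildfire-signature classifier). Names and
# thresholds are illustrative assumptions, not mission values.

def triage(frames, cloud_max=0.8, urgent_min=0.9):
    """Split frames into urgent, normal-downlink, and discard buckets."""
    urgent, downlink, discard = [], [], []
    for frame in frames:
        if frame["cloud_fraction"] > cloud_max:
            discard.append(frame)      # low-value: mostly cloud cover
        elif frame["event_score"] >= urgent_min:
            urgent.append(frame)       # surface immediately when a window opens
        else:
            downlink.append(frame)     # queue for normal-priority downlink
    return urgent, downlink, discard

frames = [
    {"id": 1, "cloud_fraction": 0.95, "event_score": 0.10},
    {"id": 2, "cloud_fraction": 0.10, "event_score": 0.97},
    {"id": 3, "cloud_fraction": 0.30, "event_score": 0.20},
]
urgent, downlink, discard = triage(frames)
```

The point of the sketch is the ordering: data is valued and prioritized at the point of capture, so only high-value bytes compete for the constrained downlink.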
Edge processing helps spacecraft and satellites interpret data locally and act on it. Instead of treating the platform as a sensor that simply collects raw data for Earth, on-board AI turns it into a system that prioritizes, compresses and decides at the point of capture, increasingly through agentic AI workflows.
And this AI can be tailored across use cases and workflows, whether for a planetary rover navigating hazards or a spacecraft flagging telemetry anomalies before they cascade into failures.
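Telemetry anomaly flagging of the kind mentioned above can be as simple as a rolling z-score over recent samples. A minimal sketch, with assumed window size and threshold (real missions would use channel-specific models and limits):

```python
# Illustrative on-board anomaly flagger: a new telemetry sample is flagged
# when it deviates from the rolling mean by more than z_threshold standard
# deviations. Window size and threshold are example assumptions.
from collections import deque
import math

class AnomalyFlagger:
    def __init__(self, window=20, z_threshold=4.0):
        self.window = deque(maxlen=window)  # recent samples only
        self.z_threshold = z_threshold

    def update(self, value):
        """Record a sample; return True if it looks anomalous."""
        flagged = False
        if len(self.window) >= 5:  # require some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                flagged = True
        self.window.append(value)
        return flagged

flagger = AnomalyFlagger()
for v in [1.0, 1.1] * 10:      # nominal telemetry, small variation
    flagger.update(v)
spike_flagged = flagger.update(5.0)  # sudden excursion is flagged
```

Catching the excursion locally means the spacecraft can safe the affected subsystem, or prioritize that telemetry for downlink, without waiting for a ground pass.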
The Intrigue of Data Centers in Space
Looking further out, success will be about making orbital compute a reality. Driven by insatiable demand for more AI computing, several efforts aim to deliver mass-scale computation in space, tapping into abundant solar power and using cold deep space as a radiative heat sink.
Large-scale orbital compute will ultimately be limited by power, thermal dissipation, radiation resilience and communications. Many concepts assume sun-synchronous “dawn-dusk” orbits to maximize solar availability and reduce thermal cycling, with low Earth orbit helping limit latency and radiation exposure. One of the most difficult problems to solve is how to reject heat from large-scale compute deployments. Space is a vacuum, so excess heat must be conducted to radiators and radiated away.
The Vacuum Catalyst
In space, there’s no air to carry heat away, so thermal management becomes a first-principles problem. The only way to shed the heat generated by electronics is to conduct it to radiators, which then radiate it into space. This unique constraint transforms performance-per-watt from a metric into a mandate, driving the architectural innovations needed to make massive-scale AI in orbit a reality.
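A rough back-of-envelope shows why this constraint dominates. Radiated power follows the Stefan-Boltzmann law, P = εσAT⁴; the emissivity and radiator temperature below are assumed example values, and environmental heat loads (sunlight, Earth albedo) are ignored:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
#   P = epsilon * sigma * A * T^4   (environmental heat loads ignored)
# Emissivity and radiator temperature are assumed example values.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """Radiator area in m^2 needed to reject power_w watts at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 1 MW of waste heat with a radiator surface held at 300 K:
area = radiator_area(1.0e6, 300.0)  # roughly 2,400 m^2 of radiator
```

The T⁴ dependence is why every watt saved at the chip level matters so much: cutting waste heat shrinks radiator area, and with it launch mass, directly.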
At meaningful scale, that reality drives architectural thinking toward modular, serviceable systems rather than the monolithic “data center in a box.” It will be many elements operating together, each managing its own power generation and thermal dissipation while communicating through high-throughput links.
At large scale, that likely implies:
- Modular deployments that can reach multimegawatt-class capabilities over time.
- High-speed, low-latency interconnect between elements (including optical links at substantially higher data rates and lower energy consumption than what’s commonly deployed today).
- Reliability and replacement models that assume modules may have limited lifetimes and can be de-orbited and replaced, more like fleet operations than traditional one-off spacecraft.
AMD Offers the Building Blocks for What’s Next
AMD adaptive computing has supported space exploration for decades, including image processing and navigation acceleration for NASA’s Mars rovers and the Artemis II mission. (Learn more about AMD’s proven expertise in space in "AMD in Space: Proven Expertise, Products Support Missions".)
AMD’s approach is to make space AI buildable – not as a one-off engineering project, but as a repeatable platform journey. That starts with adaptive, scalable compute building blocks that can be right-sized to the mission: CPUs, GPUs, FPGAs and accelerator options where they make sense, paired with modular design philosophies.
This approach extends our established edge playbook to the stars. By providing the same platform consistency we’ve delivered for terrestrial deployments, we enable a repeatable journey where partners can evolve capabilities over time without re-architecting from scratch.
Just as important is openness. Space missions are assembled from many specialized suppliers, and no single vendor can (or should) dictate the full solution.
Mission Resilience Through Openness
Space missions are complex, multi-vendor ecosystems. Building on the AMD ROCm™ open software stack lets developers tune and validate systems across diverse hardware, reducing proprietary lock-in and fostering a more resilient, collaborative frontier.
AMD is investing in open software and open standards so partners can integrate, tune and validate end-to-end systems with more choice and less friction. On the software side, AMD ROCm™ software is part of the open software stack for AI and HPC, designed to help developers move from kernels to applications on AMD accelerators. On the systems side, AMD is helping drive standards for open security, interconnect and infrastructure to ensure high-performance AI systems can scale without lock-in.
New Frontier: Scaling AI from Earth to Orbit
The most exciting part of this conversation is that AI is expanding where compute can create impact, including environments that are remote, constrained and mission critical. By putting intelligence closer to where data is generated, we reduce latency, save bandwidth, and improve mission outcomes. That’s true in factories, hospitals, and vehicles – and it’s true in space.
At AMD, we’ll keep doing what we do best: Engineer for reality, co-optimize the full system and build technologies that scale efficiently – from Earth to orbit and beyond.