This article is published in Aviation Week & Space Technology.

NASA Tests AI’s Ability To Engineer A Spaceship

[Image: man wearing a virtual reality headset]

AI may enable engineers to cut system development time radically.

Credit: Demaerre/Getty Images

Speaking a spaceship into existence using artificial intelligence is something out of a Marvel movie. But NASA thinks that could happen soon.

A small research group is testing the idea that, with a couple of sentences of text, teams of artificial intelligence (AI) agents can be dispatched to develop a spacecraft in a few hours, or even minutes.

  • Large language models could upend the roles of human aerospace engineers
  • AI agents’ hierarchies would mirror corporate ladders

The goal for NASA’s Text-to-Spaceship project is “Jarvis,” the AI program developed by Tony Stark, the fictional genius behind the Iron Man superhero in the Marvel movie franchise. That AI program was commanded by speaking, typing prompts or hand-manipulating 3D designs. NASA sees a similar program as possible.

“This is pretty well expressed by Iron Man’s Jarvis,” Ryan McClelland, research engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and leader of the Text-to-Spaceship initiative, tells Aviation Week. “You’re working in a mixed reality or virtual reality with your peers and with [AI] agents; you’re building in real time.”

If NASA and its software partners succeed, they could upend the way aircraft and spacecraft are designed and built, unleashing a new form of creative destruction that changes scores of engineering jobs. An open question is: What would those engineers do for work instead?

The road to this moment is paved by the rise of large language models (LLMs), a type of AI that predicts the most likely next word in a sequence based on patterns learned from massive amounts of text data. After OpenAI’s ChatGPT came out shortly after Thanksgiving in 2022, some engineers started to wonder if the text-fluent AI could be infused into the engineering process.

“That was kind of a light-bulb moment for me, because I realized that all of these problems we had—with the tooling, with the amount of the scale of the data that we had to move between different models or just transcribe—could all be solved,” says Jared Fuchs, CEO of Celedon Solutions, a software partner in NASA’s Text-to-Spaceship project.

These engineers observed that despite its math-intensive reputation, much of the profession is driven by text or human speech. Engineers document, review and validate their vehicle designs in text within Microsoft Word, Excel or PDF documents, if not aloud in meetings. Human engineers also sometimes transfer data by hand between various computer programs to advance the engineering process.

As an intern on NASA’s Space Launch System program, Celedon Chief Technology Officer Chris Helmerich noticed that manual processes were consuming a lot of time. “Eighty percent of the time was just reviewing the Excel sheets [and] discussing requirements in docs,” he says.

Text information is the dietary staple of LLMs, of course. With some guiding structure, could LLMs take an engineer’s text prompt and cascade that intent through a series of connected software programs to produce a fully designed vehicle at the other end? The answer appears to be “yes.”

Take, for example, Celedon’s Davinci AI program. It made a computer-aided-design (CAD) drawing of a spacecraft, starting with the prompt: “Please make a set of requirements and parts for an Earth-observation satellite that can detect wildfires in the U.S. from low Earth orbit” (see graphic).

With about a dozen subsequent text prompts from an engineer speaking to the AI program, Davinci created a parts tree, calculated the total mass of the spacecraft, wrote Python code to simulate the satellite’s average power generation over several orbital periods, ran the code and created a summary document describing the mission architecture. The time it took Davinci to do all that—plus build a CAD model—was about 8 min.
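One of the tasks the article describes Davinci automating is writing Python to estimate a satellite’s average power generation over several orbits. A script of that general kind might look like the following illustrative sketch; all parameters (panel power, orbit period, eclipse fraction) are hypothetical and are not Celedon’s actual output:

```python
# Illustrative power-budget sketch: average solar generation over several
# low-Earth orbits, using a crude sunlight/eclipse model. All values are
# hypothetical placeholders, not real spacecraft parameters.

PANEL_POWER_W = 1200.0      # generation in full sunlight (assumed)
ORBIT_PERIOD_S = 92 * 60    # ~92-min. circular LEO orbit
ECLIPSE_FRACTION = 0.37     # assumed fraction of each orbit in Earth's shadow
N_ORBITS = 5
DT_S = 10.0                 # simulation timestep

def in_eclipse(t: float) -> bool:
    """Crude model: the last ECLIPSE_FRACTION of each orbit is shadowed."""
    phase = (t % ORBIT_PERIOD_S) / ORBIT_PERIOD_S
    return phase > (1.0 - ECLIPSE_FRACTION)

def average_power_w() -> float:
    """Integrate generated energy over N_ORBITS and divide by elapsed time."""
    total_energy_j, t = 0.0, 0.0
    while t < N_ORBITS * ORBIT_PERIOD_S:
        if not in_eclipse(t):
            total_energy_j += PANEL_POWER_W * DT_S
        t += DT_S
    return total_energy_j / (N_ORBITS * ORBIT_PERIOD_S)

print(f"Average power over {N_ORBITS} orbits: {average_power_w():.0f} W")
```

Even a toy like this shows why such scripts are fast for an LLM to emit: the structure is boilerplate, and the engineering judgment lives in the parameter values.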

“Davinci is best suited to the early concept stages, where the objective is to define the requirements, the major systems and key interfaces,” Fuchs says. “You can take these processes we’re doing already and make them literally 100 times faster.”

Davinci knows how to do all of this because the LLMs it uses are trained on text scraped from the public internet as well as digitized books, academic papers, code repositories like GitHub and Wikipedia, among other sources.

“We also supplement that model with a lot of custom data on how to do engineering, systems engineering, what parts should look like,” Helmerich says. Companies can add proprietary design, standards and requirements data as well in the form of PDFs, images and databases, he adds.

Davinci sits atop several LLMs, such as OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini. “We use a medley of models, based on their capabilities [and] based on the price points,” Fuchs says. Davinci takes a user prompt and decides what actions should be taken, including possibly executing hundreds of LLM calls to gather or generate data. Certain LLMs have strengths and weaknesses and so are used for different purposes, Helmerich says.

Although using LLMs can be expensive, Celedon contends that cost should be considered in context. “You, sitting, typing out or speaking to Davinci, your labor hours are more valuable than the large language model costs or the server costs,” Helmerich says. The program’s pricing is $20-200 per month, or possibly more for custom licensing.

Davinci is designed to give engineers feedback quickly as an assistant, but Celedon imagines a system that would rely less on humans, Fuchs says.

“We imagine it being more and more autonomous in its capability, where it will come up with a design, then instance itself to review it, critique it and realize, ‘Hey, maybe the transmission is missing, or this attribute doesn’t make sense. Now let’s update that and then send that final result back to the user,’” he says.

Synera is another partner on NASA’s Text-to-Spaceship project working toward that autonomous design vision. The German software developer has built a series of low-code connectors to automate the movement of data across computer-aided engineering tools. For example, it exchanges data between CAD and simulation programs, allowing an engineer to see the effects of design changes quickly. Clients for Synera’s connectors include Airbus, Arianespace and Safran.

“Airbus, they have roughly 150 different software tools in this virtual software development process,” Synera co-CEO Moritz Maier says. “If you want to develop an airplane, you have to somehow connect all of these tools.”

By connecting these workflows, Synera says it has wired a metaphorical circuit for AI to follow, with access points where AI can run various types of software.

It is not just one AI program that Synera envisions running engineering software; the company is working toward a whole team of AI agents.

AI agents are an emerging concept in the software industry. Whereas AI programs like ChatGPT tend to be human-prompted step-by-step, agents can be given a goal and then sent off to achieve their objective autonomously. AI agents observe their environment, make decisions based on their goals, context or rules, and then act.
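That observe-decide-act cycle is the core of the agent pattern. Here is a deliberately tiny toy version, trimming a parts list until a mass budget is met; a real engineering agent would make each of these three steps an LLM or tool call, but the loop shape is the same:

```python
# Toy observe-decide-act loop. The "environment" is just a parts list and
# the "goal" a mass budget; both are invented for illustration.

def run_agent(goal_mass_kg: float, parts: list[float]) -> list[float]:
    """Drop the heaviest part until total mass meets the goal."""
    while True:
        total = sum(parts)            # observe: measure the current state
        if total <= goal_mass_kg:     # decide: has the goal been reached?
            return parts
        parts = sorted(parts)[:-1]    # act: remove the heaviest part

print(run_agent(100.0, [40.0, 35.0, 30.0, 20.0]))
```

The autonomy comes from the loop: no human prompts each step; the agent keeps acting until its goal condition is satisfied.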

When connected to a workflow and software suite, the systems execute tasks with near-human intuition, Maier says, describing how the AI agents become a self-learning organization, setting up simulations, adjusting parameters and optimizing designs.

“An agent itself has memory, and then also the whole organization has a memory,” he says. “Whenever there’s a failure, it remembers the failure. Once started, it only gets better.”

Like a corporate engineering hierarchy, AI agents would work together, with a management agent supervising and various specialized agents on lower levels collaborating, Maier says.

How deep could the AI rabbit hole go? Pretty far. One AI agent manager might direct a team, including an AI CAD engineer-agent that runs another AI generative design program. The combined speed and performance of the AI agents would be hard—if not impossible—for humans to match.
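The manager-and-specialists structure described above can be sketched as a dispatch table: a manager agent walks a plan and hands each step to the specialist that owns that kind of work. Agent names and tasks below are hypothetical:

```python
# Toy manager agent delegating to specialist agents, mirroring the
# corporate-hierarchy structure the article describes. All names invented.

def cad_agent(task: str) -> str:
    """Stand-in for an AI CAD engineer-agent."""
    return f"CAD model for {task}"

def sim_agent(task: str) -> str:
    """Stand-in for an AI simulation agent."""
    return f"simulation results for {task}"

SPECIALISTS = {"design": cad_agent, "analyze": sim_agent}

def manager(plan: list[tuple[str, str]]) -> list[str]:
    """Dispatch each (action, task) step to the matching specialist."""
    return [SPECIALISTS[action](task) for action, task in plan]

print(manager([("design", "bracket"), ("analyze", "bracket")]))
```

In a real system each specialist would itself run tools, as with the CAD agent driving a generative design program, and the manager would replan based on the results that come back up the hierarchy.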

Consider that in 2023, McClelland and a human teammate faced off against AI generative design program Autodesk Fusion 360 in a demonstration to design an aluminum bracket for a NASA high-altitude balloon mission.

The AI program produced 31 part-design iterations in 1 hr., while the two engineers—despite working rapidly—produced four designs over two days. The parts produced in the generative engineering program were stronger and lighter, too, with alien-looking bone structures that evolved via rapid iterations into their final form.

Celedon and Synera plan to release AI-agent versions of their software to the public in the coming months.

McClelland says the NASA Text-to-Spaceship team is working toward using AI software to design a suborbital technology demonstration, tentatively scheduled to fly in August or September.

What this means for aerospace engineers is unclear. In the software industry, AI is driving double-digit increases in coding productivity. However, that boost in productivity has softened demand for hiring new information technology (IT) workers.

An estimated 70,900 jobs were eliminated from the IT job market in 2023-24, according to an analysis of U.S. Bureau of Labor Statistics data by consulting firm Janco Associates. AI also is halting the growth of entry-level positions within IT, the consulting firm says in its January report.

A case in point is the hiring plan of Salesforce, a company that offers cloud-based software for sales representatives, service tickets, marketing divisions and e-commerce.

“We’re not going to hire any new engineers this year,” Salesforce CEO Marc Benioff said on an earnings call in February. “We’re seeing 30% productivity increase on engineering, and we’re going to really continue to ride that up.”

Maier is optimistic about the role of human engineers. “I don’t see a risk that there’s less work at the end, because building the systems and working with them is actually more work,” he says. “Once you start with this, there’s so much work to do building up workflows and creating these agents and stuff.”

Engineers may pivot to using AI to automate and optimize manual engineering processes, McClelland says.

“Everything that is rote gets automated, [and then] engineers work to expand what can be automated but also push on the edges of technology,” he says. “People like pushing on the edges of technology. They like solving hard, multidisciplinary problems.”

In some ways, using AI to automate engineering is the latest division of labor within an advanced economy. “A couple generations ago, [for] most engineers, part of their job was to do math in their head and be great at it,” McClelland says. “Someone still needs to know how to do all the math in the background, but we largely don’t have to do that anymore. . . . The individual engineer can just climb these levels of abstractions.”

The basic job of engineering is not going to change, Fuchs says. “Our fundamental job in engineering, at almost every level, is to look at this bit of code and say, ‘This is good.’ And, at a high level, [ask]: ‘Is this entire mission worth doing?’” he explains. “All we are doing is clarifying that fundamental importance of knowing what you want to do and why you want to do it.”

Garrett Reim

Based in the Seattle area, Garrett covers the space sector and advanced technologies that are shaping the future of aerospace and defense, including space startups, advanced air mobility and artificial intelligence.