Will we ever be able to trust an autonomous aircraft? Mark Skoog, the principal investigator for autonomy at NASA Armstrong Flight Research Center, joins Aviation Week’s Graham Warwick and Guy Norris to discuss an effort to enable autonomous aircraft to be programmed with internal values and rules of behavior to ensure they are safe. Listen in.
Don't miss a single episode. Subscribe to Aviation Week's Check 6 podcast in iTunes, Stitcher, Spotify and Google Play. Please leave us a review.
Here is a rush transcript of the Check 6 podcast for May 6, 2021.
Graham Warwick: Hello and welcome to this week's Check 6 podcast. I am Graham Warwick, Aviation Week's executive editor for technology. And today we are going to ask, “Will we ever be able to trust an autonomous aircraft?” NASA is working to ensure that we can, and joining us to find out how are my colleague Guy Norris, Aviation Week's Western U.S. bureau chief, and our special guest Mark Skoog, the principal investigator for autonomy at NASA Armstrong Flight Research Center in California.
We're hearing a lot about autonomy and artificial intelligence in aviation, particularly in the context of urban air taxis and unmanned cargo aircraft. Autonomy and AI are not the same thing, but they are potentially very complementary. Using machine learning to train algorithms to automate takeoff and landing, autonomously plan optimum flight paths, recognize obstacles and avoid collisions, and identify safe landing sites along a route has tremendous potential to make aviation safer, but there is a problem.
And here I will grossly oversimplify, in part to avoid showing my own lack of real knowledge: the software used in today's avionics, such as autopilots and digital flight controls, is deterministic. That means the same input always produces the same output, and through rigorous analysis and testing we can prove to regulators like the FAA that our system will always be safe. Machine learning algorithms are non-deterministic. The same input doesn't always produce the same output. Because of some change in the environment in or around an aircraft, it might decide to turn left and not right. And because we can't fully understand what goes on inside a machine learning algorithm, no amount of testing can guarantee to the regulator that the system will always behave safely.
So how do we safely unlock all those great capabilities that autonomy and AI promise? How do we enable an autonomous aircraft to not only handle all the functions that a pilot handles today, but also safely make all the decisions a pilot has to make? That's what NASA is working on. I'm now going to hand over to Guy, who'll give us a bit of background on where NASA's work came from, and he'll introduce our special guest, Mark. Guy.
Guy Norris: Thanks Graham. Well, we're looking at this subject today because of a lot of research that goes back more than 20 years, really, to a program that's now called the automatic ground collision avoidance system, or Auto GCAS. And of course, we're very familiar with this. Any of our readers who've been following this for years will know that Aviation Week has been lucky enough, at the invitation of NASA and the Air Force, to follow the development of Auto GCAS and the subsequent Auto ACAS, the airborne collision avoidance system, through the decades. Now, the work we're talking about with Mark today really stems out of all of that early work. These are safety upgrades that were developed principally with Lockheed Martin and the Air Force Research Laboratory for the F-16 and subsequently the F-35, and soon to be other aircraft, too. In service it has already saved 10 aircraft and 11 pilots, for sure. So, that's a remarkable success story.
And the success of that in general prompted NASA to begin looking at the wider potential applications to general aviation, unmanned air systems, and potentially even more applications beyond that. Now, this work really dovetailed into something called a joint capability technology demonstration, and this program was called Resilient Autonomy. That was a joint effort between NASA, the Office of the Secretary of Defense and the FAA. And it's specifically targeting an architecture and a method of potential certification, as you mentioned Graham, for these highly autonomous aircraft. So I know that's compressed 20-plus years into just a minute or two, but Mark, perhaps you could illustrate for us a little bit of where this has come from and really where you're going now, because this is where the rubber meets the road, as it were. And I think flight tests are just around the corner.
Mark Skoog: Yes. Well, thank you. And first off, thanks for having me today. I really appreciate the opportunity to share the work we're doing, and hopefully it will help others achieve results similar to what our goals are. Specifically, the way we're approaching this is through a discipline called run-time assurance. It's a technique that began quite a few years ago, I think primarily within internal flight control systems, and we've expanded that concept. What run-time assurance does, in the portion of it that we're leveraging here, is monitor a boundary, a safety boundary. If you think of GCAS, you're talking about hitting the ground. Where is the point at which safety needs to take priority over executing the mission?
And if you can define that boundary in software and monitor it, you can then allow the mission to take place until safety needs to take priority over the mission. We then take over from the mission execution, whether it's the pilot or an autonomous aircraft, and make sure we avoid penetrating that boundary. As soon as we've avoided the boundary and we're sure we're no longer going to imminently breach it, we give control back to whatever it was that was controlling the aircraft. So if you think about our GCAS, we let the pilot fly and conduct low-level missions, and we make sure it's nuisance-free. If they get too close to the ground and they're about to hit it, we take over briefly, lock the pilot out, perform the avoidance maneuver, and then immediately give control back to the pilot. That concept is, at a top level, the way run-time assurance architectures are developed.
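To make that run-time assurance pattern concrete, here is a minimal sketch in Python. It is not NASA's flight code and all names and numbers are illustrative: a single safety monitor watches a boundary, briefly takes authority to fly a recovery maneuver when a breach is imminent, and hands control back as soon as the threat has passed.

```python
# Minimal run-time assurance (RTA) sketch -- illustrative only, not flight code.
# A single monitor (here, ground collision) watches a safety boundary and
# overrides the mission controller only while a breach is imminent.

from dataclasses import dataclass

@dataclass
class State:
    altitude_agl_ft: float      # height above ground level
    descent_rate_fps: float     # positive = descending

def mission_command(state: State) -> str:
    """Whatever the pilot or mission autonomy wants to do."""
    return "continue-mission"

def recovery_command(state: State) -> str:
    """Pre-planned avoidance maneuver, e.g. a wings-level pull-up."""
    return "pull-up"

def ground_breach_imminent(state: State, reaction_time_s: float = 3.0) -> bool:
    """Crude boundary check: will we reach the ground within the reaction time?"""
    predicted_alt = state.altitude_agl_ft - state.descent_rate_fps * reaction_time_s
    return predicted_alt <= 0.0

def rta_step(state: State) -> str:
    """One pass of the RTA switch: the safety monitor wins only when needed."""
    if ground_breach_imminent(state):
        return recovery_command(state)   # briefly lock out the mission/pilot
    return mission_command(state)        # otherwise stay nuisance-free

# Example: descending fast at low altitude triggers the recovery maneuver.
print(rta_step(State(altitude_agl_ft=150.0, descent_rate_fps=80.0)))   # -> "pull-up"
print(rta_step(State(altitude_agl_ft=5000.0, descent_rate_fps=10.0)))  # -> "continue-mission"
```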
Now, as far as complexity goes, there could be an issue if you say, okay, we'll use that technique and we'll protect against all the possible hazards. If you lump all of that logic into one single boundary-monitoring algorithm, it, too, becomes complex and just as difficult to certify as an artificial intelligence system. So what's unique to our approach is deciding not to combine all of those boundary monitors into one single algorithm. We functionally separate them into individual tasks, not unlike the way a pilot flies.
If you're concerned about the ground, you take a moment to observe your surroundings, take in your situational awareness and say, ah, I'm either okay or I'm not. Then let me take a look at the air traffic around me. You have to do a scan pattern. All pilots are taught this sort of thing, and it's a natural human thing. You do need to look at each task individually and assess it. Taking in the broader context of things is your strategic mission planning, but that's not tenable when you're reacting to a very time-sensitive, safety-critical task.
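One way to picture that functional separation, continuing the sketch above and again purely as an illustration rather than the actual architecture, is to give every hazard its own small monitor with a common interface instead of folding all the checks into one algorithm:

```python
# Illustrative sketch: each hazard gets its own small, independently testable
# monitor with a common interface, instead of one monolithic boundary check.

from abc import ABC, abstractmethod

class SafetyMonitor(ABC):
    @abstractmethod
    def breach_imminent(self, state) -> bool: ...
    @abstractmethod
    def recovery_command(self, state) -> str: ...

class GroundCollisionMonitor(SafetyMonitor):
    def breach_imminent(self, state) -> bool:
        return state["altitude_agl_ft"] < 200.0          # simplistic stand-in check
    def recovery_command(self, state) -> str:
        return "pull-up"

class TrafficMonitor(SafetyMonitor):
    def breach_imminent(self, state) -> bool:
        return state["closest_traffic_nm"] < 0.5          # simplistic stand-in check
    def recovery_command(self, state) -> str:
        return "turn-away-from-traffic"

# Each monitor is scanned in turn, much like a pilot's visual scan pattern.
monitors = [GroundCollisionMonitor(), TrafficMonitor()]
state = {"altitude_agl_ft": 1500.0, "closest_traffic_nm": 0.3}
alerts = [m for m in monitors if m.breach_imminent(state)]
print([type(m).__name__ for m in alerts])  # -> ['TrafficMonitor']
```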
Guy Norris: Now, the good thing about this, or the remarkable thing about this, Mark, as you've mentioned, is the idea of having multiple sensors and being able to put this into an architecture. And I think one of the intriguing aspects is: if you get warnings from these different sensors, which one is the most important? Which warning wins when you're going to hit the ground or you're going to hit another aircraft? Could you describe how you decided that, what's really unique about the way you were able to architect the system, and this idea of what you call almost a moral compass? But anyway, I don't want to steal your thunder. Go ahead.
Mark Skoog: No, no. Not at all. And so, just in basic flying, if you have the potential of multiple people at the controls, a two-seater fighter or a normal aircraft where you have a pilot and co-pilot, there are very well-established rules for who is controlling the aircraft at any given time. You've just got to do it that way. In any organization, you have to have a very well-defined chain of command, if you will. So similarly, we need to embed that within our logic. And like you say, you don't want to have a force fight between these different monitors, one saying, well, we're about to hit this airplane so I'm going to go here, and ground collision avoidance saying, wait a minute, there are rocks over there, I want to go the opposite direction, toward that airplane. So you have to make a decision.
And when humans are faced with that, think of driving down a road and a tumbleweed rolls out in front of you. Or let's say a cat jumps out in front of you, a little less Antelope Valley-specific analogy. You then have to decide, okay, what are my surroundings? Is it safe enough for me to avoid hitting that animal without jeopardizing my own life or others? So you make this decision and act accordingly. And although we may not all have the same morals, and I'm using air quotes on 'morals' there, we all go through that process of decision logic and weighing the various consequences. So what's important to understand is that we're trying to develop a structure that does that.
And so in a run-time assurance, or RTA, network, everything comes down to a switch that decides who has control of the aircraft. In the simple case I mentioned, if you think of GCAS, the pilot controls the aircraft until you're about to reach the boundary, then the monitor says, nope, you should no longer be controlling the aircraft, I'm going to do a recovery maneuver, and it hands control over to the flight controls. The question is who commands that switch. We have multiple monitors feeding into that switch. Because we have a multi-monitor run-time assurance architecture, we have to govern which switch position we go to, that is, which monitor has control of the aircraft. So each monitor has to make an assessment of what the consequences will be if it is not allowed to control the aircraft. In the case of the cat versus a ditch or a cliff you're about to go off, you would apply some weighting factor to running over a cat versus plunging off a cliff, and you would most likely use your judgment and pick the one that has the least consequence.
It's an unfortunate outcome, but that's where we start to get into a different area. To have a deterministic system, as you mentioned earlier, Graham, you need a very specific architecture for how you do it. In this case, you can now bring in the rules of behavior that provide the weighting of consequences. Our architecture does not, and I want to emphasize, does not dictate what that weighting should be. It provides a framework that allows you to plug in the rules of behavior that provide the weighting for making these decisions. Each monitor goes to those rules of behavior to calculate its consequence and then shares that with the moral compass, and the moral compass simply selects the highest consequence. And that's who's going to control the vehicle.
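Here is a minimal sketch of that arbitration, building on the monitor interface above. The consequence values are invented placeholders, not NASA's weightings: each monitor reports the consequence of not being given control, and the "moral compass" switch simply hands the aircraft to the monitor reporting the highest consequence, or to the mission if none object.

```python
# Illustrative "moral compass" arbitration sketch. Each monitor scores the
# consequence of NOT being given control; the switch picks the worst case.
# The weightings below are invented placeholders, not real rules of behavior.

def arbitrate(monitors, state, mission_command):
    """Return the command of the highest-consequence monitor, else the mission's."""
    bids = []
    for m in monitors:
        if m.breach_imminent(state):
            bids.append((m.consequence(state), m.recovery_command(state)))
    if not bids:
        return mission_command          # nothing objects: mission keeps control
    worst_consequence, command = max(bids, key=lambda b: b[0])
    return command

class CliffMonitor:
    def breach_imminent(self, state):  return state["cliff_ahead"]
    def consequence(self, state):      return 100.0   # loss of vehicle or life
    def recovery_command(self, state): return "swerve-away-from-cliff"

class AnimalMonitor:
    def breach_imminent(self, state):  return state["animal_ahead"]
    def consequence(self, state):      return 5.0     # regrettable, but lower
    def recovery_command(self, state): return "brake-for-animal"

state = {"cliff_ahead": True, "animal_ahead": True}
print(arbitrate([CliffMonitor(), AnimalMonitor()], state, "continue"))
# -> "swerve-away-from-cliff": the higher consequence wins the switch.
```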
Guy Norris: And Mark, you mentioned to me when we talked about this, and I should mention, by the way, that this system is called the Expandable Variable Autonomy Architecture, or EVAA, you gave me a really great analogy about how EVAA is trying to teach the rules of behavior to the system as it goes along, a bit like a parent with a child. And you brought this analogy to the ultimate point where you give the child the car keys when it gets to 18 years old or something.
Mark Skoog: Yeah, exactly.
Guy Norris: Would you mind kind of running through that a little bit?
Mark Skoog: As I talk to the public, it's good to give an analogy, and a lot of us have gone through parenting and raising children. When you're a parent and you have a little toddler, if you've got a front yard you may let the child play in the front yard, but there's very likely a street right next to that yard. And so you've got to be very careful and watch, because you don't want that child to run out into the street. You've got to monitor. If you're in a city and crossing a street, what do we do? We take the child's hand in ours and we walk across the street holding that child's hand, constraining its behavior so that it can't run off. Why do we do that? Because the child doesn't know what could hurt it. And so we have to do that.
And that's exactly the way we treat unmanned aircraft today. That hand-holding goes through a command-and-control link between the vehicle and the pilot. The pilot basically has that UAV's controls in his hand and is making sure it doesn't run off into the street, into something bad. But a day comes in every parent's life, ideally, where they toss the car keys to the kid and say, "Be back by 10." Well, what transpired between having to hold the child's hand crossing the street and now letting them drive a one-ton-plus vehicle through the streets with other similar hazards running around? What happened is that we, as parents, ideally taught that child the rules of behavior, the consequences of actions, what can hurt them and how they can hurt others in using that vehicle, and through their actions they have proved to us that they can observe those rules on their own. And therefore we have the trust to hand them the keys to the family car and let them go out for the evening.
So what we've done in the EVAA architecture is create the ability to embed rules of behavior as defined by a regulator, or whoever has authority over where the vehicle is going to operate. And then through our flight tests and our simulation tests, we send the vehicle through many hazardous situations so that it can show us that it will observe those rules of behavior. With sufficient data, whenever we get to that point, we should have a body of evidence we could bring forward and say, we can trust this as much as a human to observe these rules of behavior. And then the question is whether that is sufficient, whether society is willing to accept that vehicle going out. There's going to be a burden of proof on anybody bringing forward a vehicle that they really want to fly out there in the air, close to the public or anywhere else, to show that it can be trusted to do that. And that will come with data, just like the data we typically gather during flight tests.
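As a rough illustration of what "plugging in rules of behavior" could look like in such a framework (a hypothetical table, not an actual EVAA or FAA format), the weightings that drive each monitor's consequence calculation would live in a rule set supplied by whoever has authority over the operation, separate from the arbitration code itself:

```python
# Hypothetical rule-of-behavior table -- not an actual EVAA or regulator format.
# The framework supplies the arbitration machinery; the authority supplies the
# weightings that each monitor uses to compute its consequence.

RULES_OF_BEHAVIOR = {
    "ground_collision":    {"consequence_weight": 100.0},
    "midair_collision":    {"consequence_weight": 100.0},
    "restricted_airspace": {"consequence_weight": 40.0},
    "noise_over_school":   {"consequence_weight": 10.0},
}

def consequence(hazard: str, severity: float) -> float:
    """Scale a monitor's raw severity estimate by the authority-defined weight."""
    return RULES_OF_BEHAVIOR[hazard]["consequence_weight"] * severity

# The same architecture, with a different table, yields different behavior --
# which is the point: the framework itself does not dictate the weightings.
print(consequence("restricted_airspace", severity=0.8))  # -> 32.0
```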
Graham Warwick: So I would say that the day you throw the car keys at the kid is when they finally wear you down with begging. But I've got to say, I'm going to jump in here: one of the things that's fascinating for me when I read Guy's story about EVAA and your description of the system is the flexibility it gives you to set the rules. These are not just a standard set of rules that you might give a pilot; these are situational rules. You make the point that you can have one set of rules when this vehicle is flying out in the desert. Say it's somewhere out in the wildlands of Oklahoma doing its mission, delivering something or other, and it flies into the airspace over Tulsa; it brings in the rules that the local authority has set for privacy and noise and all these things.
The flexibility of the system is extraordinary to me because, I think, it's the key to trusting autonomy. In fact, you make a really good point in the story: in your own flight testing, part of the constituency was worried about disturbance to some wild animals, and you said, we should care about that, we should build that into the vehicle's rules of behavior. And to me, that is exactly the type of approach we need to build trust, because then people will see that these vehicles are obeying a set of rules they can understand. Somebody on the ground can see that when this vehicle is flying over doing its deliveries, it's behaving in a way that minimizes the disturbance, it's staying away from the schools and all that sort of stuff. And I think that's really the key, ultimately, to gaining acceptance of autonomy and autonomous vehicles.
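To illustrate the situational aspect Graham describes, here is a hypothetical sketch in which the active rule set from the previous example is swapped as the vehicle crosses into airspace with its own local requirements. The geofence and the weight values are invented for illustration.

```python
# Hypothetical sketch of situational rules: the active rule-of-behavior table
# changes with location. The geofence and the values below are invented.

RURAL_RULES = {"noise":   {"consequence_weight": 1.0},
               "privacy": {"consequence_weight": 1.0}}

TULSA_RULES = {"noise":   {"consequence_weight": 20.0},   # local noise rules
               "privacy": {"consequence_weight": 15.0}}   # local privacy rules

def active_rules(lat: float, lon: float) -> dict:
    """Pick the rule set for the airspace we are in (toy bounding box for Tulsa)."""
    over_tulsa = 35.9 < lat < 36.3 and -96.2 < lon < -95.7
    return TULSA_RULES if over_tulsa else RURAL_RULES

print(active_rules(36.15, -95.99)["noise"]["consequence_weight"])  # -> 20.0 over Tulsa
print(active_rules(35.00, -97.50)["noise"]["consequence_weight"])  # -> 1.0 elsewhere
```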
Mark Skoog: If I could carry that thread a little bit further. The specifics of what we're trying to embed within our architecture, EVAA, to enable that, to make it usable within a certification system and to give a developer the ability to implement this kind of thing, raise the obvious question: as you bring in each new rule, isn't this going to become a complex system, and how do the rules conflict? Well, fortunately our group, and our work going back to Auto GCAS, stems from a core group that was involved with bringing digital computing into aviation back in the very early '80s on the very first high-performance digital aircraft. And so we've seen the struggles with software V&V and architectures on effectively complex systems as we developed them. So what we chose to do, on the NASA side a bit more, was to really emphasize and focus on being careful about how we developed the software architecture, so that it would remain understandable, and folks would be able to know that this is a deterministic outcome, given these situations.
So we've created a software structure and architecture that strongly emphasizes minimal software code and common structures for how you develop algorithms. In fact, our latest version, which we've been running just in the last month, uses a common collision avoidance architecture. They were going to call it the generic collision avoidance system, but I said, "Well, that acronym's already taken." So we haven't got a name for it yet, stay tuned. We use the very same code to avoid aircraft as we do terrain, as we do obstacles, as we do airspace, as we do weather. We can bring weather in; we just don't yet.
In fact, just yesterday I was on a call where somebody was presenting some information on weather, and they were very eager to see whether there was a way we could team. And I said to myself, gosh, what you've got there we could probably plug in. We don't have an aircraft flying this right now, but we could plug it into the sim and probably have it running and checked out in less than four weeks. We don't have weather avoidance in our system now, but just knowing what you've got, if you share your interface with us, we could probably have it up and going in four weeks, I believe.
And we've seen this time and time again. That's a very bold statement in the software world, but in truth, we've had folks come in and ask, "Well, what about this? And what about this? Can you add that? Can you add this?" And truly, we've found that with our architecture it literally takes only a few lines of code to bring in whole new capabilities. We can easily layer on more and more systems. So the architecture, the framework behind all of this we're talking about, is very flexible. Now, this is a challenging thing to have accepted in the aviation world, because our industry, unfortunately, is extremely stovepiped in its development, especially in software. It's like, well, who wants to take ownership of somebody else's code? Good grief. That's just crazy. You can't build a business case on that. So we're really trying to break that barrier. And we do have a number of folks that are boldly considering that. I'm actually surprised that we have some relatively major players considering it and evaluating it for that purpose right now.
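Here is a sketch of what that kind of extensibility might look like. It is illustrative only, with invented class and function names rather than the actual architecture Mark describes: if every hazard class presents itself through the same generic avoidance interface, adding something like weather avoidance is mostly a matter of writing one small adapter that answers "where are the regions to stay out of?"

```python
# Illustrative sketch of a generic collision-avoidance plug-in. All names are
# invented. Each hazard source answers the same question, so the common
# avoidance logic never changes when a new hazard class is added.

class HazardSource:
    def keep_out_regions(self, state):
        """Return a list of ((lat, lon), radius_nm) regions to stay out of."""
        raise NotImplementedError

class TerrainSource(HazardSource):
    def keep_out_regions(self, state):
        return [((36.0, -118.3), 2.0)]          # stand-in terrain cell

class TrafficSource(HazardSource):
    def keep_out_regions(self, state):
        return [((36.1, -118.2), 0.5)]          # stand-in intruder bubble

# Adding a new hazard class -- say, convective weather -- is just another adapter:
class WeatherSource(HazardSource):
    def keep_out_regions(self, state):
        return [((36.2, -118.1), 5.0)]          # stand-in storm cell

def all_keep_out_regions(sources, state):
    """Common avoidance code consumes every source the same way."""
    return [r for s in sources for r in s.keep_out_regions(state)]

sources = [TerrainSource(), TrafficSource(), WeatherSource()]
print(len(all_keep_out_regions(sources, state={})))   # -> 3 regions to avoid
```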
Graham Warwick: Okay. Well, we have to wrap up here. Guy, any very, very brief closing thoughts before I wrap up?
Guy Norris: No, just that I'm looking forward to seeing some of the flight testing, which I believe is going to be maybe later this year, so that'll be good to see.
Graham Warwick: Okay. We'll bring our audience back up to date later in the year and we'll get Mark back to talk about how it all works. So I am just scared. That's it, that's a wrap for this week's Check 6, which is now available for download on iTunes, Stitcher, Google Play, and Spotify, whatever they are. A special thanks to our producer in Washington, DC, Donna Thomas. And thank you for your time, and join us again next week for another Check 6.