Spacecraft Digital Twins?
Digital twin technology has been revolutionizing many industries and processes around the world over the last few years. I first learned of the concept only a couple of years ago. Wondering about its applicability to the space enterprise, I quickly realized that, while the space application poses some unique challenges for any future implementation of the technology, digital twins can bring tremendous value and completely transform the entire industry.
But first, what is a digital twin? A digital twin is a faithful digital representation of a physical object or system. By “faithful digital representation” I do not just mean a mathematical model and a simulation environment that are detached from the current state of the actual physical instance of the object. A digital twin replicates the physical object or system using near-real-time streaming sensor data. In today’s Cloud Era, a digital twin typically lives and is analyzed in the cloud, where it is combined with other data related to the object or system, such as data about its environment. The twin is then made available to people in a variety of roles, so they can remotely understand its status, its history and its needs, and interact with it to perform any number of functions.

One of the key functions of digital twins is the prediction and detection of anomalous system behavior: an artificial intelligence or machine learning algorithm compares the streaming data against design performance expectations, model simulations, and, very importantly, the aggregated behaviors of other actual instances of the object or system, as captured by their own digital twins, in order to establish what normal operation looks like.
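To make that anomaly-detection function concrete, here is a minimal, purely illustrative sketch in Python. The sensor values, the tolerances and the idea of a single scalar measurement are all simplifying assumptions on my part; a real twin would fuse a physics-based simulation, a much richer telemetry stream and fleet-wide statistics.

```python
import statistics

def is_anomalous(streamed_value, simulated_value, fleet_values,
                 model_tolerance=5.0, n_sigmas=3.0):
    """Flag a telemetry sample that disagrees with both the simulation
    model and the behavior of the rest of the fleet.

    streamed_value  -- latest sensor reading from this spacecraft
    simulated_value -- value predicted by the twin's simulation model
    fleet_values    -- the same measurement reported by other twins
    """
    # Deviation from the design/simulation expectation
    model_residual = abs(streamed_value - simulated_value)

    # Deviation from how other instances are actually behaving
    fleet_mean = statistics.mean(fleet_values)
    fleet_std = statistics.stdev(fleet_values)
    fleet_residual = abs(streamed_value - fleet_mean)

    # Only flag the sample if it disagrees with both references
    return model_residual > model_tolerance and fleet_residual > n_sigmas * fleet_std

# Hypothetical usage: a bus temperature reading in degrees Celsius
print(is_anomalous(41.7, 22.0, [21.8, 22.4, 23.1, 22.0, 21.5]))  # True
```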
Applications of digital twins range from manufacturing, to Airbus’ use of the technology in the design and development phases of its aircraft, to General Electric’s real-time jet engine early degradation detection, to the construction, maintenance and analysis of smart buildings, to Thornton Tomasetti’s use of the technology for the monitoring of medical devices and patients in healthcare. Digital twins are now also coming to a United States Department of Defense agency near you! The Navy’s Office of Naval Research (ONR) is investing nearly $10M in the technology to enhance the “resiliency, efficiency, adaptability, and autonomy of naval power systems and platforms” and to “accelerate transition of health monitoring and predictive maintenance technology for costly electrical components aboard Navy ships”. The Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, Dr. Will Roper, has been advocating for the use of digital twin technology across the US Air Force for the design of jets (Boeing’s T-7 Red Hawk Trainer aircraft is already leveraging digital twin technology), missiles and satellites.
Note, especially for satellites, that Dr. Roper specifically used the term design and not operation. While digital twins are readily applicable to satellite design and manufacturing processes pre-launch (manufacturing is a natural home for the technology), their use in satellite operations, in the way GE uses the technology for jet engines, will face challenges that are unique to the operational space environment.
(I note here that the broader technology which connects data flows and produces a holistic view of an asset's data across its entire lifecycle is known as a digital thread. Digital twins are an integral part of the digital thread, but the latter encompasses a broader set of digital concepts and technologies beyond digital twins. I will return to the concept of digital threads in a future post. Both digital threads and digital twins are integral to the Model-Based Enterprise and Model-Based Systems Engineering.)
The unique challenges facing digital twin technology for an operational space system are not difficult to guess. In the pre-Cloud Era, one would have started with the latency problem. But with technologies like Amazon’s AWS and Microsoft’s Azure Space, having to wait for a ground contact to transmit data may become unnecessary, and the analytics themselves, digital twins included, may be moved closer to the operational edge. There will always be a latency challenge, but it will not be as bad as having to wait for the satellite’s next pass over a ground station.

Two bigger challenges, though, loom, especially for today’s small satellite revolution. First, there is bandwidth. For a twin to be a faithful replica of an operational spacecraft, we would need to transmit far more data about the satellite’s state than is typically contained in telemetry today. Second, and of even more relevance to small satellites, to reach the data volume needed to realize a faithful digital twin, we would need to add many more sensors, of many more types, onboard the spacecraft. More sensors, however, mean more weight. And, in space, weight matters, especially for small satellites.
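To get a feel for the bandwidth question, here is a purely illustrative back-of-the-envelope calculation in Python. Every number in it (sensor count, sample rate, quantization, pass duration) is an assumption I made up for the sake of argument, not a figure from any real mission.

```python
# All figures below are illustrative assumptions, not real mission numbers.
N_SENSORS = 500          # hypothetical sensor count for a "faithful" twin
SAMPLE_RATE_HZ = 10      # samples per sensor per second
BITS_PER_SAMPLE = 16     # raw quantization, ignoring compression and packet overhead

data_rate_bps = N_SENSORS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
daily_volume_gb = data_rate_bps * 86_400 / 8 / 1e9

print(f"Continuous twin telemetry rate: {data_rate_bps / 1e3:.0f} kbit/s")
print(f"Data volume per day: {daily_volume_gb:.2f} GB")

# Store-and-forward case: a single 10-minute ground station pass per day
# has to carry the entire day's worth of twin data.
pass_seconds = 10 * 60
required_downlink_bps = data_rate_bps * 86_400 / pass_seconds
print(f"Downlink rate needed during one daily pass: {required_downlink_bps / 1e6:.1f} Mbit/s")
```

Even with these modest made-up numbers, the single-pass scenario demands a downlink roughly two orders of magnitude faster than the continuous streaming rate, which is exactly why space-based relay and edge analytics change the calculus.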
Faced with these challenges, it is tempting to just dismiss the concept of digital twins for operational space systems. But, as the terrestrial and aerial examples above show, digital twin technology can bring tremendous value to any enterprise. With their predictive analytics capabilities, for example, digital twins can prolong the operational life of a spacecraft, or aid in preventing, or, if prevention is impossible, in preparing for and mitigating the impact of, an impending catastrophic event that could result in the spacecraft’s total loss. Would the value gained from employing digital twin technology justify the added weight and cost? This question opens up a rich field for research and development, one that will also have to touch on latency, bandwidth, and which sensors, and how many, we would need to add to the current onboard suite in order to generate enough data for a digital twin to deliver on its promises.
Exploring that trade space seems to me to be unavoidable, since the use of digital twins for operational space systems is already, in some form, underway. In a March 2020 article, for example, Air Force Magazine mentions that the US Air Force’s Space and Missile Systems Center (SMC) has long been pervasively employing the Model Based Systems Engineering framework on which digital twin technology is typically based. In fact, as mentioned in the article, Booz Allen Hamilton created a digital replica of an on-orbit Lockheed Martin Block IIR GPS satellite to detect system cybersecurity vulnerabilities.
One can even argue, as Prof. Moriba Jah did recently, that any one of the myriad commercial, government and academic Space Situational Awareness (SSA) and Space Traffic Management (STM) services, such as the University of Texas’ ASTRIAGraph, is a nascent de facto digital twin of the space object population. For SSA/STM, however, the replica is limited by the relative lack of diversity in sensor types, geometric perspective and distance from the “objects”. The SSA/STM application, on the other hand, benefits from being heavily dominated by the physics. It is therefore both simulation- and data-heavy, but with low data diversity. Typical terrestrial or aerial digital twin applications, by contrast, are usually data-heavy and have high data diversity, and these two properties reduce the need for a heavy reliance on simulations.
So digital twins are not really all that new. We are simply embarking on what I think will be a revolutionary transformation of how we design, manufacture, analyze and operate space systems across the entire enterprise. And that is a very exciting prospect!
Agile Systems Engineering
Hailing from an organization that endorsed and practiced the Agile methodology not just in software development but in several other aspects of the business, I have come to treat agile as my proverbial hammer for every nail that comes my way.
One particular nail that has been on my mind for a while is systems engineering, with its now standardized and formalized processes and methods. I’m not a systems engineer by training; much of what I know about the practice I have learned by reverse osmosis, having had the opportunity to work alongside some of the most experienced systems engineers, both in small-business settings working on smaller-scale projects and in large-business settings working on large-scale projects.
Recently, however, because of a customer’s need to modernize their processes, I have had the opportunity to explore and flesh out the relatively nascent paradigm of agile systems engineering. Two documents in particular stood out:
As the INCOSE briefing points out, agility as a framework predates Agile development for software. Agile is a general paradigm that can touch many other application areas, including but not limited to systems engineering; hence, we now have Agile Systems Engineering. The framework is the same, but the outcomes differ: while Agile software development has implementation as its main outcome, Agile Systems Engineering has specifications as its main outcome. With that view, all established best practices for implementing Agile in software can apply directly to systems engineering.
Another key, and very exciting, aspect of the Agile Systems Engineering paradigm is the reliance on model-based systems engineering (MBSE), which shifts the focus away from document-based specifications and information exchange, to the specification and exchange of domain models. The model-based aspect then ties in agile software development with systems engineering in an overall Agile mindset.
These developments over the last several years are very welcome evolutions of the long-standing, and rather rigid, approaches to systems engineering.
The Fragility of Resiliency
Resilient systems, natural or engineered, do not usually evoke fragility in either the layperson’s or, typically, the practitioner’s mind. Yet while some natural systems are resilient and not fragile, all engineered systems I am aware of are resilient yet fragile.
How so? The answer lies in that odd term that appears in the title of this blog: antifragility. Explaining this term, coined by Nassim Nicholas Taleb in his 2012 book Antifragile, is the reason I wanted this article to be one of the first I post on this blog. I will get back to antifragility in a moment, but first let’s review what resiliency and, more basically, robustness are.
Robustness is the ability of a system to function in the presence of internal and external variations. A robust system does not, however, learn about or from these variations. It is indifferent to them, and it does not adapt internally to them either. Robust systems, despite their name, are brittle: there is always a limit to how much variation they can handle. With enough determination, one can always create conditions beyond the robust system’s design envelope that will result in system failure. Drop that high-quality Danuta rocks glass from a high enough altitude, onto a hard enough surface, enough times, and it will eventually break. Each time it survives an impact, the rocks glass neither learns from the impact nor adapts to better withstand the next one.
A resilient system, on the other hand, both learns from and adapts to internal and external changes and variations. Adaptive NeuroControl is a good example of a resilient engineering capability. Taleb’s book has many examples of resilient biological, social and political systems; the immune system is an excellent example of a resilient natural system. Going back to our rocks glass, it would be resilient, not just robust, if with every impact it learned something about its abilities and limitations and adapted accordingly to improve its chances of surviving the next impact.
Yet there will always be a limit. A resilient system that is not learning and adapting quickly enough will eventually break. However fast our rocks glass learns and adapts, if it is not doing so rapidly enough, it is bound to meet its unfortunate fate. In this sense, then, even resilient systems carry an inherent amount of fragility.
Then we have antifragility. Antifragility is not, however, about the speed of learning and adaptation. An antifragile system, simply, is one that thrives on stressors. An antifragile rocks glass would thrive on, and actively seek out, stressors. A good socio-biological example is the athlete who constantly seeks a new, untried challenge that helps reveal her weaknesses so she can overcome them. She is learning, adapting and seeking new stressors in order to become more resilient. For engineered systems, this almost necessarily implies autonomy. Learning and adaptation are both needed in any autonomous system, but these two alone do not make an autonomous system antifragile. For that, an autonomous system also needs to seek out the inputs, variations and stressors that reveal its inherent deficiencies. In some sense, nothing here is new: it comes down to how we design our autonomous system’s objective functions. For an autonomous system to be antifragile, the driving force behind its decisions needs to include seeking conditions under which it learns about its own weaknesses and its ignorance of its environment, and then learning from and adapting to what it discovers. As Taleb puts it, an antifragile package would not have a “Fragile: Handle With Care” label on its box. The label would say, instead, “Antifragile: Please, Mishandle”.
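As a purely conceptual sketch of what such an objective function might look like (my own toy construction, not anything from Taleb or the autonomy literature), here is a small Python example in which a system scores candidate stress conditions by blending past performance with a bonus for the conditions it knows least about, so that it keeps seeking out its own weak spots:

```python
import statistics

# Toy example: past performance observations per stress condition.
# Sparse or scattered data means the system is still ignorant of that condition.
history = {
    "nominal":      [0.95, 0.94, 0.96],
    "thermal_soak": [0.80, 0.85],
    "radiation":    [0.60],          # barely explored: high self-ignorance
}

def antifragile_score(observations, w_seek=0.5):
    """Blend expected performance with a weakness-seeking bonus."""
    mean_perf = statistics.mean(observations)
    # Crude stand-in for self-ignorance: the spread of what has been seen,
    # inflated when only a few samples of that condition exist.
    spread = statistics.pstdev(observations) if len(observations) > 1 else 1.0
    ignorance = spread + 1.0 / len(observations)
    return mean_perf + w_seek * ignorance

# The system deliberately picks the stressor it understands least.
next_stressor = max(history, key=lambda c: antifragile_score(history[c]))
print("Next condition to seek out:", next_stressor)   # -> radiation
```

The weakness-seeking term plays the same role as the athlete’s appetite for untried challenges: without it, the objective would simply keep exploiting the conditions the system already handles well.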
We can thus break down robust, resilient and antifragile systems according to the diagram shown below, where the two main defining parameters are the ability to adapt and the kind of reaction to stressors.
I will close with two remarks. The first relates to the question of where space systems sit, today, on this diagram. In my opinion, they are solidly in the top right corner of the “Robust” block. With programs like DARPA’s Blackjack, and its Pit Boss autonomy element, they are heading towards the resilient. We are still far from the antifragile, but we are heading in that direction.
My second remark is that antifragility has its limits. Like resilient systems, antifragile systems can break if they are not learning and adapting rapidly enough, or if, like the over-zealous athlete, they seek stressors too aggressively, beyond what they can handle.
In that sense, even antifragile systems can be fragile.
[Figure: Robustness, Resiliency and Antifragility as a function of adaptation and response to stressors.]
Welcome to The Autonomyst
Thank you for visiting!
I created this website as a place to exchange ideas, in an informal setting, on everything related to four properties of space systems and the space ecosystem as a whole: resiliency, agility, the novel notion of antifragility, and the degree of a system’s autonomy, and on how these concepts interplay to influence the sustainability of the space ecosystem.
Here, I use resiliency in the broadest sense possible: the ability of a system to sustain and recover from failure. Agility relates to time: the ability of a system to change quickly (enough).
Another motivating concept central to this project is Nassim Nicholas Taleb’s notion of antifragility. An antifragile system is one that benefits from and thrives on stressors, shocks, volatility, attacks and failures. A resilient system, as a future post will discuss, is not necessarily antifragile: it can heal itself in the face of adversity, but it does not thrive on it. For a resilient system to actually benefit from adversity, that is, to become antifragile, autonomy, enabled by artificial intelligence, is almost always needed.
These are the kinds of ideas that will be the subject of this page. We will cover how technology, innovative concepts (e.g., fractionation and disaggregation), economics, politics and government, and the market process and its mechanisms, among other factors, influence space system resiliency and, ultimately, sustainability.
I very much look forward to our conversations!
Islam Hussein, February 3, 2020