Existential threats and control theory

On a recent episode of the “80,000 Hours Podcast”, Rob Wiblin interviewed Owen Cotton-Barratt about his statistical analysis of existential threats such as pandemics, nuclear war and super-human AI. It is a theme I find exciting, and their discussion was thought-provoking.

A central thing I learned from this podcast is that a single threat is highly unlikely to cause the extinction of all humans. For example, it would surely be worth trying to avoid a pandemic with a death rate of 20 %, but on its own it would come nowhere near wiping out humanity, since 80 % would survive. Most catastrophic threats are like this. Most existential threats are therefore combinations or chains of events. For example, a large natural disaster, such as a nuclear winter caused by an asteroid impact, combined with a pandemic would be much more likely to cause extinction. To survive such a combination, it is sufficient that we win at least one of these struggles. It is therefore cost-efficient to focus on actions that cut such chains of events. For instance, finding ways of producing food even during a nuclear winter would protect us from a large proportion of existential threats, so we should concentrate efforts on such food production.
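To make the chain-of-events logic concrete, here is a toy calculation with entirely made-up numbers (none of these are real risk estimates): extinction requires every link of the chain to hold, so the probabilities multiply, and cutting any single link cuts the whole chain.

```python
# Toy model: extinction requires *every* link in a chain of events to occur.
# All probabilities below are invented for illustration, not estimates.

p_asteroid_winter = 0.01  # hypothetical chance of a nuclear-winter-scale impact
p_food_collapse = 0.20    # hypothetical chance food production fails, given the winter
p_pandemic = 0.30         # hypothetical chance a severe pandemic follows

# The chain completes only if all links hold, so the probabilities multiply:
p_extinction = p_asteroid_winter * p_food_collapse * p_pandemic
print(f"baseline chain risk:    {p_extinction:.6f}")       # 0.000600

# Halving any single link (e.g. resilient food production) halves the whole
# chain, which is why targeting one link is cost-efficient:
p_extinction_cut = p_asteroid_winter * (p_food_collapse / 2) * p_pandemic
print(f"after halving one link: {p_extinction_cut:.6f}")   # 0.000300
```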

There were, however, a couple of points in the discussion where I would have things to add. The first is minor: Owen argued that in such chains of events, it is often useful to focus efforts on components which already have a low likelihood (say, a 20 % chance of running out of food). His argument was that such components can often easily be halved (to 10 %), so that the overall risk is also halved. Conversely, an almost-certain event (say, a 99.9 % chance of a pandemic spreading) is much harder to halve; typically we could only shave a few digits off the original percentage (say, to 99.7 %). To be convinced by this argument, I would need to see some data about threats and their solutions, because I can easily think of counter-examples. For example, introducing frequent hand-washing, lock-downs, mandatory face-masks and so on can demonstrably and easily reduce the spread of a pandemic by more than half.
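Owen's argument can be put in numbers; again, all figures here are illustrative, not estimates from the podcast. Suppose extinction requires both a food shortage and an uncontrolled pandemic:

```python
# Illustrative numbers for the "halve the unlikely link" argument.
p_food = 0.20       # unlikely link: food shortage
p_pandemic = 0.999  # almost-certain link: pandemic spreads
baseline = p_food * p_pandemic            # 0.1998

halved_food = (p_food / 2) * p_pandemic   # halve the unlikely link
trimmed_pandemic = p_food * 0.997         # trim the almost-certain link

print(f"baseline:             {baseline:.4f}")
print(f"halve the 20 % link:  {halved_food:.4f} "
      f"({1 - halved_food / baseline:.0%} reduction)")
print(f"trim 99.9 % → 99.7 %: {trimmed_pandemic:.4f} "
      f"({1 - trimmed_pandemic / baseline:.1%} reduction)")
```

Halving the unlikely link halves the overall risk, while trimming the almost-certain link barely moves it; my counter-examples above amount to claiming that the almost-certain link can sometimes be halved too.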

My main addition to their arguments, and my reason for writing this blog post, concerns their discussion of the likelihood of an extinction-level event in the next 100 years. Their discussion focused on linear relationships of causality: say we have this new thing X now, it has these effects Y, and what effects do those have on existential threats? With my background in control theory, I believe that feedback loops play a significant role here, amplifying the likelihood of threats to the point that linear causalities likely become negligible in comparison to non-linear effects.

To explain, consider the following example: a vehicle, a carriage of some sort, which can be pushed forward to transport goods. As technology and understanding develop, we can introduce mechanisms for steering the carriage. To better see where we are going, we furthermore need a place on top with a better view. As technology improves further, we can harness horses to pull the carriage and later fit an engine to drive it. To avoid driving too fast and too dangerously, we need a speedometer, and other instruments to avoid running out of fuel or destroying the engine. That is, with every new control mechanism, we also need better awareness and new observations to avoid destruction. With every improvement, we also find more spectacular ways of destruction.

Together, the mechanisms for control and observation make up a feedback loop: we see how our speed changes and moderate it accordingly. With every new mechanism, the system becomes better at achieving its original task, transporting goods, but it also becomes more complex and more difficult to operate. The more complicated system requires more complicated control mechanisms to avoid catastrophic destruction. Equivalently, in terms of control theory, feedback loops can make the system unstable, causing rapid unscheduled disassembly. More knowledge causes more pain.
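As a minimal sketch of this instability, consider a toy cruise-control loop (my illustration, not from the podcast): we observe the speed, compare it to a target, and correct proportionally with some gain. A moderate gain settles on the target; an aggressive gain overcorrects on every cycle and the loop diverges, which is the control-theoretic sense of "unstable":

```python
# Toy proportional feedback loop: observe, compare to target, correct.

def settle(gain, steps=30, target=100.0, speed=0.0):
    """Run the feedback loop for a fixed number of steps; return final speed."""
    for _ in range(steps):
        error = target - speed   # observation: how far off are we?
        speed += gain * error    # control: proportional correction
    return speed

print(settle(gain=0.5))  # moderate gain: converges close to the target
print(settle(gain=2.5))  # aggressive gain: overshoots further every cycle
```

The error shrinks by a factor of |1 − gain| on every step, so this loop is stable exactly when that factor is below one: the same feedback that achieves the task can, pushed a little further, tear the system apart.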

Moreover, I see such feedback loops as the defining element of intellect and awareness. The better we are aware of things, and the better we operate with that knowledge (see the feedback?), the more intelligent we are. The better a society is aware of (existential) threats against itself, and the better it operates with that information, the more intelligent the society is. Our ability to prevent existential threats is thus a measure of how intelligent humanity-as-a-whole is.

This makes it difficult to predict the likelihood of our demise. Our awareness of threats (better communication, better research), our exposure to threats (pandemics, better weapons), and our ability to counter such threats (better research, better technologies) have all greatly increased. In itself, this does not mean that we are better or worse off. The question is whether our society can manage the information and tools in a stable way. Society and its internal feedback loops are unquestionably already so complex that they form a chaotic system. Whether such a system is stable or unstable is a question of chaos theory, and most likely inherently near-impossible to answer with 100 % certainty. Even predicting the likelihood of a given outcome will be as difficult as accurately predicting the weather 100 years ahead. We can only try to prevent the worst outcomes. Weather forecasting is here more than just a metaphor: weather forecasters have long predicted the behaviour of chaotic weather systems from noisy and sparse data with high accuracy. My educated guess is that studies of existential threats would therefore benefit from studying the methods of weather forecasters.
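The sensitivity that makes such long-range predictions hard can be illustrated with the logistic map, a standard toy chaotic system (my example, not one from the podcast): perturb the starting point by one part in a million and the two trajectories soon disagree completely, even though the rule itself is simple and fully deterministic.

```python
# Logistic map at r = 4, the chaotic regime: a tiny difference in initial
# conditions grows roughly exponentially until the trajectories are
# effectively unrelated.

def step(x, r=4.0):
    """One iteration of the logistic map."""
    return r * x * (1.0 - x)

x, y = 0.200000, 0.200001  # near-identical starting conditions
for k in range(1, 61):
    x, y = step(x), step(y)
    if k % 10 == 0:
        print(f"step {k}: difference {abs(x - y):.6f}")
```

This is the same reason weather models are rerun every few hours from fresh observations rather than trusted far into the future.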

--

Tom Bäckström

An excited researcher of life and everything. Associate Professor in Speech and Language Technology at Aalto University, Finland.