
Meltdown: Why Our Systems Fail and What We Can Do About It


Authors and related work

  • Chris Clearfield 
    • Chris is the founder of System Logic, an independent research and consulting firm focusing on the challenges posed by risk and complexity. He previously worked as a derivatives trader at Jane Street, a quantitative trading firm, in New York, Tokyo, and Hong Kong, where he analyzed and devised mitigations for the financial and regulatory risks inherent in the business of technologically complex high-speed trading. He has written about catastrophic failure, technology, and finance for The Guardian, Forbes, the Harvard Kennedy School Review, the popular science magazine Nautilus, and the Harvard Business Review blog.
  • András Tilcsik
    • András holds the Canada Research Chair in Strategy, Organizations, and Society at the University of Toronto’s Rotman School of Management. He has been recognized as one of the world’s top forty business professors under forty and as one of thirty management thinkers most likely to shape the future of organizations. The United Nations named his course on organizational failure as the best course on disaster risk management in a business school. 
  • How to Prepare for a Crisis You Couldn’t Possibly Predict
    • Over the past five years, we have studied dozens of unexpected crises in all sorts of organizations and interviewed a broad swath of people — executives, pilots, NASA engineers, Wall Street traders, accident investigators, doctors, and social scientists — who have discovered valuable lessons about how to prepare for the unexpected. Here are three of those lessons.

Overview

This book looks at the underlying reasons for accidents that emerge from complexity, and at how diversity can act as a fix. It builds on Charles Perrow’s concept of Normal Accidents as an inherent property of high-risk systems.

Normal Accidents are unpredictable, yet inevitable combinations of small failures that build upon each other within an unforgiving environment. Normal accidents include catastrophic failures such as reactor meltdowns, airplane crashes, and stock market collapses. Though each failure is unique, all these failures have common properties:

    • The system’s components are tightly coupled. A change in one place has rapid consequences elsewhere.
    • The system is densely connected, so that the actions of one part affect many others.
    • The system’s internals are difficult to observe, so that failure can appear without warning.

What happens in all these accidents is that the response moves in a direction that makes the problem worse. Often, this is because the humans in the system are too homogeneous: they all see the problem from the same perspective, and they all implicitly trust each other (which amounts to tight coupling and dense connection).

The addition of diversity is a way to solve this problem. Diversity does three things:

    • It provides additional perspectives on the problem. This only works if there is a large enough representation of diverse groups that they do not succumb to social pressure.
    • It lowers the amount of trust within the group, so that proposed solutions are exposed to a higher level of skepticism.
    • It slows the process down, making the solution less reflexive and more thoughtful.

Designing systems to be transparent, loosely coupled, and sparsely connected reduces the risk of catastrophe. If that’s not possible, ensure that the people involved in the system are diverse.

My more theoretical thoughts:

There are two factors that affect the response of the network: the level of connectivity and the stiffness of the links. When the nodes have a velocity component, a sufficiently stiff network (either many somewhat stiff links or a few very stiff ones) has to move as a single entity. Nodes with sparse and slack connections make for safe systems, but not responsive ones. Stiff, homogeneous networks (similarity is an implicit stiff coupling) are prone to stampede. Think of a ball rolling down a hill as opposed to a lump of jello.

When all the nodes are pushing in the same direction, then the network as a whole will move into more dangerous belief spaces. That’s a stampede. When some percentage of these connections are slack connections to diverse nodes (e.g. moving in other directions), the structure as a whole is more resistant to stampede.

I think that dimension reduction is inevitable in a stiffening network. In physical systems, where the nodes have mass, a stiff structure really only has two degrees of freedom: its direction of travel and its axis of rotation. Regardless of the number of initial dimensions, a stiff body’s motion reduces to those two components. Looking at stampedes and panics, I’d say that this is true for behaviors as well, though causality could run in either direction. This is another reason that diversity helps keep a system away from dangerous conditions, but at the expense of efficiency.
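
To make that concrete, here is a toy sketch of the stiffness intuition (my own construction, not from the book): agents with random initial headings, coupled through the graph Laplacian of a simple ring network. The network shape, stiffness values, and step counts are arbitrary choices for illustration.

```python
import numpy as np

def heading_spread(stiffness, n_agents=50, steps=200, dt=0.1, seed=0):
    """Velocity-consensus dynamics on a ring graph (a graph-Laplacian model).

    Each agent nudges its velocity toward its neighbors' velocities with a
    strength set by `stiffness`. Higher stiffness pulls every agent toward a
    single shared heading (the stampede condition); low stiffness leaves the
    headings diverse.
    """
    rng = np.random.default_rng(seed)
    vel = rng.normal(size=(n_agents, 2))                 # random initial headings

    # Ring adjacency: each agent is linked to its two neighbors.
    adj = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        adj[i, (i - 1) % n_agents] = adj[i, (i + 1) % n_agents] = 1.0
    laplacian = np.diag(adj.sum(axis=1)) - adj

    for _ in range(steps):
        vel -= stiffness * dt * laplacian @ vel          # dv/dt = -k * L v

    return np.linalg.norm(vel - vel.mean(axis=0), axis=1).mean()

for k in (0.01, 0.1, 1.0):
    print(f"stiffness {k:4}: mean heading spread = {heading_spread(k):.3f}")
```

With slack links the spread of headings survives; as the links stiffen, the spread shrinks toward a single shared direction, which is the stampede condition described above.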

Notes

  • Such a collision should have been impossible. The entire Washington Metro system, made up of over one hundred miles of track, was wired to detect and control trains. When trains got too close to each other, they would automatically slow down. But that day, as Train 112 rounded a curve, another train sat stopped on the tracks ahead—present in the real world, but somehow invisible to the track sensors. Train 112 automatically accelerated; after all, the sensors showed that the track was clear. By the time the driver saw the stopped train and hit the emergency brake, the collision was inevitable. (Page 2)
  • The second element of Perrow’s theory (of normal accidents) has to do with how much slack there is in a system. He borrowed a term from engineering: tight coupling. When a system is tightly coupled, there is little slack or buffer among its parts. The failure of one part can easily affect the others. Loose coupling means the opposite: there is a lot of slack among parts, so when one fails, the rest of the system can usually survive. (Page 25)
  • Perrow called these meltdowns normal accidents. “A normal accident,” he wrote, “is where everyone tries very hard to play safe, but unexpected interaction of two or more failures (because of interactive complexity) causes a cascade of failures (because of tight coupling).” Such accidents are normal not in the sense of being frequent but in the sense of being natural and inevitable. “It is normal for us to die, but we only do it once,” he quipped. (Page 27)
    • This is exactly what I see in my simulations and in modelling with graph Laplacians. There are two factors that affect the response of the network: The level of connectivity and the stiffness of the links. When the nodes have a velocity component, then a sufficiently stiff network (either many somewhat stiff or a few very stiff links) has to behave as a single entity.
  • These were unintended interactions between the glitch in the content filter, Talbot’s photo, other Twitter users’ reactions, and the resulting media coverage. When the content filter broke, it increased tight coupling because the screen now pulled in any tweet automatically. And the news that Starbucks had a PR disaster in the making spread rapidly on Twitter—a tightly coupled system by design. (Page 30)
  • This approach—reducing complexity and adding slack—helps us escape from the danger zone. It can be an effective solution, one we’ll explore later in this book. But in recent decades, the world has actually been moving in the opposite direction: many systems that were once far from the danger zone are now in the middle of it. (Page 33)
  • Today, smartphone videos create complexity because they link things that weren’t always connected (Page 37)
  • For nearly thirty minutes, Knight’s trading system had gone haywire and sent out hundreds of unintended orders per second in 140 stocks. Those very orders had caused the anomalies that John Mueller and traders across Wall Street saw on their screens. And because Knight’s mistake roiled the markets in such a visible way, traders could reverse engineer its positions. Knight was a poker player whose opponents knew exactly what cards it held, and it was already all in. For thirty minutes, the company had lost more than $15 million per minute. (Page 41)
  • Though a small software glitch caused Knight’s failure, its roots lay much deeper. The previous decade of technological innovation on Wall Street created the perfect conditions for the meltdown. Regulation and technology transformed stock trading from a fragmented, inefficient, relationship-based activity to a tightly connected endeavor dominated by computers and algorithms. Firms like Knight, which once used floor traders and phones to execute trades, had to adapt to a new world. (Page 42)
    • This is an important point. There is short-term survival value in becoming homogeneous and tightly connected. Diversity only helps in the long run.
  • As the crew battled the blowout, complexity struck again. The rig’s elaborate emergency systems were just too overwhelming. There were as many as thirty buttons to control a single safety system, and a detailed emergency handbook described so many contingencies that it was hard to know which protocol to follow. When the accident began, the crew was frozen. The Horizon’s safety systems paralyzed them. (Page 49)
    • I think that this may argue opposite to the authors’ point. The complexity here is a form of diversity. The safety system was a high-dimensional system that required an effective user to be aligned with it, like a free climber on a cliff face. A user highly educated in the system could probably have made it work, even better than a big STOP button. But expecting that user is a mistake. The authors actually discuss this later when they describe how safety training was reduced to simple practices that ignored perceived unlikely catastrophic events.
  • “The real threat,” Greenberg explained, “comes from malicious actors that connect things together. They use a chain of bugs to jump from one system to the next until they achieve full code execution.” In other words, they exploit complexity: they use the connections in the system to move from the software that controls the radio and GPS to the computers that run the car itself. “As cars add more features,” Greenberg told us, “there are more opportunities for abuse.” And there will be more features: in driverless cars, computers will control everything, and some models might not even have a steering wheel or brake pedal. (Page 60)
    • In this case it’s not the stiffness of the connections, it’s the density of connections.
  • Attacks on cars, ATMs, and cash registers aren’t accidents. But they, too, originate from the danger zone. Complex computer programs are more likely to have security flaws. Modern networks are rife with interconnections and unexpected interactions that attackers can exploit. And tight coupling means that once a hacker has a foothold, things progress swiftly and can’t easily be undone. In fact, in all sorts of areas, complexity creates opportunities for wrongdoing, and tight coupling amplifies the consequences. It’s not just hackers who exploit the danger zone to do wrong; it’s also executives at some of the world’s biggest companies. (Page 62)
  • By the year 2000, Fastow and his predecessors had created over thirteen hundred specialized companies to use in these complicated deals. “Accounting rules and regulations and securities laws and regulation are vague,” Fastow later explained. “They’re complex. . . . What I did at Enron and what we tended to do as a company [was] to view that complexity, that vagueness . . . not as a problem, but as an opportunity.” Complexity was an opportunity. (Page 69)
    • I’m not sure how to fit this in, but I think there is something here about high-dimensional spaces being essentially invisible. This is the same thing as the safety system on the Deepwater Horizon.
  • But like the core of a nuclear power plant, the truth behind such writing is difficult to observe. And research shows that unobservability is a key ingredient to news fabrications. Compared to genuine articles, falsified stories are more likely to be filed from distant locations and to focus on topics that lend themselves to the use of secret sources, such as war and terrorism; they are rarely about big public events like baseball games. (Page 77)
    •  More heuristics for map building
  • Charles Perrow once wrote that “safety systems are the biggest single source of catastrophic failure in complex, tightly coupled systems.” (Page 85)
    • Dimensions reduce through use, which is a kind of conversation between the users and the designers. Safety systems are rarely used, so this conversation doesn’t happen.
  • Perrow’s matrix is helpful even though it doesn’t tell us what exactly that “crazy failure” will look like. Simply knowing that a part of our system—or organization or project—is vulnerable helps us figure out if we need to reduce complexity and tight coupling and where we should concentrate our efforts. It’s a bit like wearing a seatbelt. The reason we buckle up isn’t that we have predicted the exact details of an impending accident and the injuries we’ll suffer. We wear seatbelts because we know that something unforeseeable might happen. We give ourselves a cushion of time when cooking an elaborate holiday dinner not because we know what will go wrong but because we know that something will. “You don’t need to predict it to prevent it,” Miller told us. “But you do need to treat complexity and coupling as key variables whenever you plan something or build something.” (Page 88)
  • A fundamental feature of complex systems is that we can’t find all the problems by simply thinking about them. Complexity can cause such strange and rare interactions that it’s impossible to predict most of the error chains that will emerge. But before they fall apart, complex systems give off warning signs that reveal these interactions. The systems themselves give us clues as to how they might unravel. (Page 141)
  • Over the course of several years, Rerup conducted an in-depth study of global pharmaceutical powerhouse Novo Nordisk, one of the world’s biggest insulin producers. In the early 1990s, Rerup found, it was difficult for anyone at Novo Nordisk to draw attention to even serious threats. “You had to convince your own boss, his boss, and his boss that this was an issue,” one senior vice president explained. “Then he had to convince his boss that it was a good idea to do things in a different way.” But, as in the childhood game of telephone—where a message gets more and more garbled as it passes between people—the issues became oversimplified as they worked their way up the chain of command. “What was written in the original version of the report . . . and which was an alarm bell for the specialist,” the CEO told Rerup, “was likely to be deleted in the version that senior management read.” (Page 146)
    • Dimension reduction, leading to stampede
  • Once an issue has been identified, the group brings together ad hoc teams from different departments and levels of seniority to dig into how it might affect their business and to figure out what they can do to prevent problems. The goal is to make sure that the company doesn’t ignore weak signs of brewing trouble.  (Page 147)
    • Environmental awareness as a deliberate counter to dimension reduction
  • “We show that a deviation from the group opinion is regarded by the brain as a punishment,” said the study’s lead author, Vasily Klucharev. And the error message combined with a dampened reward signal produces a brain impulse indicating that we should adjust our opinion to match the consensus. Interestingly, this process occurs even if there is no reason for us to expect any punishment from the group. As Klucharev put it, “This is likely an automatic process in which people form their own opinion, hear the group view, and then quickly shift their opinion to make it more compliant with the group view.” (Page 154)
    • Reinforcement Learning Signal Predicts Social Conformity
      • Vasily Klucharev
      • We often change our decisions and judgments to conform with normative group behavior. However, the neural mechanisms of social conformity remain unclear. Here we show, using functional magnetic resonance imaging, that conformity is based on mechanisms that comply with principles of reinforcement learning. We found that individual judgments of facial attractiveness are adjusted in line with group opinion. Conflict with group opinion triggered a neuronal response in the rostral cingulate zone and the ventral striatum similar to the “prediction error” signal suggested by neuroscientific models of reinforcement learning. The amplitude of the conflict-related signal predicted subsequent conforming behavioral adjustments. Furthermore, the individual amplitude of the conflict-related signal in the ventral striatum correlated with differences in conforming behavior across subjects. These findings provide evidence that social group norms evoke conformity via learning mechanisms reflected in the activity of the rostral cingulate zone and ventral striatum.
  • When people agreed with their peers’ incorrect answers, there was little change in activity in the areas associated with conscious decision-making. Instead, the regions devoted to vision and spatial perception lit up. It’s not that people were consciously lying to fit in. It seems that the prevailing opinion actually changed their perceptions. If everyone else said the two objects were different, a participant might have started to notice differences even if the objects were identical. Our tendency for conformity can literally change what we see. (Page 155)
    • Gregory Berns
      • Dr. Berns specializes in the use of brain imaging technologies to understand human – and now, canine – motivation and decision-making.  He has received numerous grants from the National Institutes of Health, National Science Foundation, and the Department of Defense and has published over 70 peer-reviewed original research articles.
    • Neurobiological Correlates of Social Conformity and Independence During Mental Rotation
      • Background: When individual judgment conflicts with a group, the individual will often conform his judgment to that of the group. Conformity might arise at an executive level of decision making, or it might arise because the social setting alters the individual’s perception of the world.
      • Methods: We used functional magnetic resonance imaging and a task of mental rotation in the context of peer pressure to investigate the neural basis of individualistic and conforming behavior in the face of wrong information.
      • Results: Conformity was associated with functional changes in an occipital-parietal network, especially when the wrong information originated from other people. Independence was associated with increased amygdala and caudate activity, findings consistent with the assumptions of social norm theory about the behavioral saliency of standing alone.
      • Conclusions: These findings provide the first biological evidence for the involvement of perceptual and emotional processes during social conformity.
      • The Pain of Independence: Compared to behavioral research of conformity, comparatively little is known about the mechanisms of non-conformity, or independence. In one psychological framework, the group provides a normative influence on the individual. Depending on the particular situation, the group’s influence may be purely informational – providing information to an individual who is unsure of what to do. More interesting is the case in which the individual has definite opinions of what to do but conforms due to a normative influence of the group due to social reasons. In this model, normative influences are presumed to act through the aversiveness of being in a minority position
    • A Neural Basis for Social Cooperation
      • Cooperation based on reciprocal altruism has evolved in only a small number of species, yet it constitutes the core behavioral principle of human social life. The iterated Prisoner’s Dilemma Game has been used to model this form of cooperation. We used fMRI to scan 36 women as they played an iterated Prisoner’s Dilemma Game with another woman to investigate the neurobiological basis of cooperative social behavior. Mutual cooperation was associated with consistent activation in brain areas that have been linked with reward processing: nucleus accumbens, the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex. We propose that activation of this neural network positively reinforces reciprocal altruism, thereby motivating subjects to resist the temptation to selfishly accept but not reciprocate favors.
  • These results are alarming because dissent is a precious commodity in modern organizations. In a complex, tightly coupled system, it’s easy for people to miss important threats, and even seemingly small mistakes can have huge consequences. So speaking up when we notice a problem can make a big difference. (Page 155)
  • KRAWCHECK: I think when you get diverse groups together who’ve got these different backgrounds, there’s more permission in the room—as opposed to, “I can’t believe I don’t understand this and I’d better not ask because I might lose my job.” There’s permission to say, “I come from someplace else, can you run that by me one more time?” And I definitely saw that happen. But as time went on, the management teams became less diverse. And in fact, the financial services industry went into the downturn white, male and middle aged. And it came out whiter, maler and middle-aged-er. (Page 176)
  • “The diverse markets were much more accurate than the homogeneous markets,” said Evan Apfelbaum, an MIT professor and one of the study’s authors. “In homogeneous markets, if someone made a mistake, then others were more likely to copy it,” Apfelbaum told us. “In diverse groups, mistakes were much less likely to spread.” (Page 177)
  • Having minority traders wasn’t valuable because they contributed unique perspectives. Minority traders helped markets because, as the researchers put it, “their mere presence changed the tenor of decision making among all traders.” In diverse markets, everyone was more skeptical. (Page 178)
  • In diverse groups, we don’t trust each other’s judgment quite as much, and we call out the naked emperor. And that’s very valuable when dealing with a complex system. If small errors can be fatal, then giving others the benefit of the doubt when we think they are wrong is a recipe for disaster. Instead, we need to dig deeper and stay critical. Diversity helps us do that. (Page 180)
  • Ironically, lab experiments show that while homogeneous groups do less well on complex tasks, they report feeling more confident about their decisions. They enjoy the tasks they do as a group and think they are doing well. (Page 182)
    • Another stampede contribution
  • The third issue was the lack of productive conflict. When amateur directors were just a small minority on a board, it was hard for them to challenge the experts. On a board with many bankers, one CEO told the researchers, “Everybody respects each other’s ego at that table, and at the end of the day, they won’t really call each other out.” (Page 193)
    • Need to figure out what productive conflict is and how to measure it
  • Diversity is like a speed bump. It’s a nuisance, but it snaps us out of our comfort zone and makes it hard to barrel ahead without thinking. It saves us from ourselves. (Page 197)
  • A stranger is someone who is in a group but not of the group. Simmel’s archetypal stranger was the Jewish merchant in a medieval European town—someone who lived in the community but was different from the insiders. Someone close enough to understand the group, but at the same time, detached enough to have an outsider’s perspective. (Page 199)
    • Can AI be trained to be a stranger?
  • But Volkswagen didn’t just suffer from an authoritarian culture. As a corporate governance expert noted, “Volkswagen is well known for having a particularly poorly run and structured board: insular, inward-looking, and plagued with infighting.” On the firm’s twenty-member supervisory board, ten seats were reserved for Volkswagen workers, and the rest were split between senior managers and the company’s largest shareholders. Both Piëch and his wife, a former kindergarten teacher, sat on the board. There were no outsiders. This kind of insularity went well beyond the boardroom. As Milne put it, “Volkswagen is notoriously anti-outsider in terms of culture. Its leadership is very much homegrown.” And that leadership is grown in a strange place. Wolfsburg, where Volkswagen has its headquarters, is the ultimate company town. “It’s this incredibly peculiar place,” according to Milne. “It didn’t exist eighty years ago. It’s on a wind-swept plain between Hanover and Berlin. But it’s the richest town in Germany—thanks to Volkswagen. VW permeates everything. They’ve got their own butchers, they’ve got their own theme park; you don’t escape VW there. And everybody comes through this system.” (Page 209)
  • Most companies have lots of people with different skills. The problem is, when you bring people together to work on the same problem, if all they have are those individual skills . . . it’s very hard for them to collaborate. What tends to happen is that each individual discipline represents its own point of view. It basically becomes a negotiation at the table as to whose point of view wins, and that’s when you get gray compromises where the best you can achieve is the lowest common denominator between all points of view. The results are never spectacular but, at best, average. (Page 236)
    • The idea here is that there is either total consensus and groupthink, or grinding compromise. The authors are focusing too much on the ends of the spectrum. The environmentally aware, social middle is the sweet spot where flocking occurs.
  • Or think about driverless cars. They will almost certainly be safer than human drivers. They’ll eliminate accidents due to fatigued, distracted, and drunk driving. And if they’re well engineered, they won’t make the silly mistakes that we make, like changing lanes while another car is in our blind spot. At the same time, they’ll be susceptible to meltdowns—brought on by hackers or by interactions in the system that engineers didn’t anticipate. (Page 242)
  • We can design safer systems, make better decisions, notice warning signs, and learn from diverse, dissenting voices. Some of these solutions might seem obvious: Use structured tools when you face a tough decision. Learn from small failures to avoid big ones. Build diverse teams and listen to skeptics. And create systems with transparency and plenty of slack. (Page 242)

Thinking slow, acting reflexively

I just finished the cover story in Communications of the ACM on Human-Level Intelligence or Animal-Like Abilities?. Overall interesting and insightful, but what really caught my eye was Adnan Darwiche’s discussion of models and maps:

  • “In his The Book of Why: The New Science of Cause and Effect, Judea Pearl explained further the differences between a (causal) model and a function, even though he did not use the term “function” explicitly. In Chapter 1, he wrote: “There is only one way a thinking entity (computer or human) can work out what would happen in multiple scenarios, including some that it has never experienced before. It must possess, consult, and manipulate a mental causal model of that reality.” He then gave an example of a navigation system based on either reasoning with a map (model) or consulting a GPS system that gives only a list of left-right turns for arriving at a destination (function). The rest of the discussion focused on what can be done with the model but not the function. Pearl’s argument particularly focused on how a model can handle novel scenarios (such as encountering roadblocks that invalidate the function recommendations) while pointing to the combinatorial impossibility of encoding such contingencies in the function, as it must have a bounded size.”
  • This is a Lists and Maps argument, and it leaves out stories, but it also implies something powerful that I need to start to think about. There is another interface, one that bridges human and machine: the dynamic model. What follows is a bunch of (at the moment – 10.8.18) incomplete thoughts. I think that models/games are another sociocultural interface, one that may be as affected by computers as the Ten Blue Links. So I’m using this as a staging area.
  • Games
    • Games and play are probably the oldest form of a dynamic model. Often, and particularly in groups, they are abstract simulations of conflict of some kind. They can be as simple as a game of skill such as Ringing the Bull, or as complex as a wargame, such as chess:
      • “Historically chess must be classed as a game of war. Two players direct a conflict between two armies of equal strength upon a field of battle, circumscribed in extent, and offering no advantage of ground to either side. The players have no assistance other than that afforded by their own reasoning faculties, and the victory usually falls to the one whose strategical imagination is the greater, whose direction of his forces is the more skilful, whose ability to foresee positions is the more developed.” Murray, H.J.R.. A History of Chess: The Original 1913 Edition (Kindle Locations 576-579). Skyhorse Publishing. Kindle Edition.
    • More recently, video games afford play that can follow the classic narrative templates:
      • Person vs. Fate/God
      • Person vs. Self
      • Person vs. Person
      • Person vs. Society
      • Person vs. Nature
      • Person vs. Supernatural
      • Person vs. Technology
    • More on this later, because I think that this sort of computer-human interaction is really interesting: it seems to open up spaces that would not be accessible to humans because of the data manipulation requirements (would flight simulators exist without non-human computation?).
  • Moving Maps
    • I would argue that the closer to interactive rates a model is, the more dynamic it is. A map is a static model, a snapshot of the current geopolitical space, but maps are dynamic in the sense that the underlying data is dynamic. Borders shift. Countries come into and go out of existence. Islands are created, and coastlines erode. The next edition of the map will incorporate these changes.
    • Online radar weather maps are an interesting case, since they reflect a rapidly changing environment and often now support playback of the last few hours (and prediction for the next few hours) of imagery at variable time scales.
  • Cognition
    • Traditional simulation and humans
      • Simulations provide a mechanism for humans to explore a space of possibilities that is larger than what can be accomplished by purely mental means. Further, these simulations create artifacts that can be examined independently by other humans.
        • Every model is a theory—a very-well specified theory. In the case of simulations, the models are theories expressed in so much detail that their consequences can be checked by execution on a computer [Bryson, 2015]
      • The assumptions that provide the basis for the simulation are the model. The computer provides the dynamics. The use of simulation allows users to explore the space in the same way that one would explore the environment. Discoveries can be made that exist outside of the social constructs that led to the construction of the simulator and the assumptions that the simulator is based on.
      • What I think this means is that humans bring meaning to the outputs of the simulation. But it also means that there is a level of friction required to get from the outputs as they are computed to the level of meaningfulness desired by the users. In other words, if the results of a galaxy-formation simulation only match observations when you add something new, like negative gravity, this could reflect a previously undiscovered component in the current theory of the formation of the universe.
      • I think this is the heart of my thinking. Just as maps allow the construction of trajectories across physical (or belief) spaces, dynamic models such as simulations support ways of evaluating potential (and simplified/general) spaces that exist outside the realms of current understanding. This can be in the form of alternatives not yet encountered (a hurricane will hit the Florida panhandle on Thursday), or systems not yet understood (protein-folding interactive simulators).
      • From At Home in the Universe: Physicists roll out this term, “universality class,” to refer to a class of models all of which exhibit the same robust behavior. So the behavior in question does not depend on the details of the model. Thus a variety of somewhat incorrect models of the real world may still succeed in telling us how the real world works, as long as the real world and the models lie in the same universality class. (Page 283)
    • Traditional simulation and ML(models and functions)
      • Darwiche discusses how the ML community has focused on “functional” AI at the expense of “model-based” AI. His insight is that functional AI is closer to reflex, analogous to “thinking fast”. Similarly, he believes that model-based AI more closely resembles “thinking slow”.
      • I would contend that building simulators may be the slowest possible thinking. And I wonder if using simulators to train functional AI, which is then evaluated against real-world data that is in turn used to modify the model in a “round trip” approach, might be a way to combine the fundamental understandability of simulation with the reflexive speed of trained NN systems (a toy sketch of this loop follows these notes).
      • What this means is that “slow” AI explicitly includes building testable models. The tests are not always going to be confirmations of predictions, because of chaos theory, but there can be predictions of the characteristics of a model. For example, I’m working on using agent-based simulations moving in belief space to generate seeds for RNNs that produce strings resembling conversations. Here, the prediction would be about the “spectral” characteristics of the conversation – how word choice shifts when compared to actual conversations in which consensus evolves over time.
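
A minimal sketch of the round trip I have in mind, with a one-parameter free-fall simulator standing in for the explicit model and a polynomial fit standing in for the trained network. Everything here (the toy physics, the parameter names, the learning rate) is an illustrative assumption of mine, not anything from Darwiche or Pearl:

```python
import numpy as np

def simulator(g, t):
    """The 'slow', model-based side: an explicit, inspectable physics model.
    Free-fall distance under gravity g, standing in for any simulation whose
    assumptions can be read and argued about."""
    return 0.5 * g * t**2

def train_fast_function(g, t_train):
    """The 'fast', functional side: fit a reflexive approximator (a simple
    polynomial here, standing in for a neural net) on simulator output."""
    return np.polynomial.Polynomial.fit(t_train, simulator(g, t_train), deg=2)

# Hypothetical "real world" observations; the model starts out mis-specified.
rng = np.random.default_rng(1)
t_obs = np.linspace(0.1, 3.0, 30)
d_obs = 0.5 * 9.81 * t_obs**2 + rng.normal(scale=0.05, size=t_obs.size)

g_model, lr = 8.0, 0.02
t_train = np.linspace(0.1, 3.0, 100)
for _ in range(200):
    fast = train_fast_function(g_model, t_train)        # model -> reflexive function
    residual = d_obs - fast(t_obs)                       # reflexive function vs. reality
    g_model += lr * np.mean(residual * 0.5 * t_obs**2)   # reality -> model update

print(f"recovered g = {g_model:.2f} (true value 9.81)")
```

The point of the loop is that the understandable artifact (the model parameter) is what gets corrected, while the cheap, reflexive function is what actually gets compared against the world.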

When Worlds Collide


Charlottesville demonstrations, summer 2017 (link)

I’ve been thinking about this picture a lot recently. My research explores how extremist groups can develop using modern computer-mediated communication, particularly recommender systems. This picture lays out the main parts like a set of nested puzzle pieces.

This is a picture of a physical event. In August 2017, various “Alt-Right” online communities came to Charlottesville, Virginia, ostensibly to protest the removal of Confederate statues, which in turn was a response to the Charleston, South Carolina church shooting of 2015. From August 11th through 12th, sanctioned and unsanctioned protests and counter-protests happened in and around Emancipation Park.

Although this is not a runway in Paris, London, or New York, this photo contains what I can best call “fashion statements”, in the most serious sense of the term. They are immediately visible mechanisms for signifying and conveying identity. What are they trying to say to each other and to us, the public behind the camera?

Standing on the right-hand side of the image is a middle-aged white man wearing a type of uniform: on his cap and shirt are images of the Confederate “battle flag”. He is wearing a military-style camouflage vest and is carrying an AR-15 rifle and a 9mm handgun. These are archetypal components of the Alt-right identity.

He is yelling at a young black man moving in from the left side of the photo, who is also wearing a uniform of a sort. In addition to the black t-shirt and the dreadlocks, he is carrying multiple cameras – the sine qua non of credibility for young black men in modern America. Lastly, he is wearing literal chains and shackles, ensuring that no one will forget the slave heritage behind these protests.

Let’s consider these carried items, the cameras and the guns. The fashion accessories, if you will.

Cameras exist to record a selected instant of reality. It may be framed, with parts left out and others enhanced, but photographs and videos are a compelling document that something in the world happened. Further, these are internet-connected cameras, capable of sharing their content widely and quickly. These two elements, photographic evidence and distribution, are a foundation of the #blacklivesmatter movement, which arose in response to the wide distribution of videos in which American police killed unarmed black men. These videos changed the greater social understanding of a reality encountered by a minority that had been incomprehensible to the majority before they emerged.

Now to the other accessory, the guns. They are mechanisms “of violence to compel our opponent to fulfil our will”. Unlike cameras, which are used to provide a perspective on reality, these weapons are used to create a reality through their display and their threatened use. They also reflect a perception among those who wield them that the world has become so threatening that battlefield weapons make sense at a public event.

Oddly, this may also be a picture of an introduction of sorts. The Alt-right and #blacklivesmatter groups almost certainly do not interact significantly. In fact, it is doubtful that, even though they speak a common language, one group can comprehend the other. The trajectories of their defining stories are so different, so misaligned, that the concepts of one slide off the brain of the other.

Within each group, it is a different story. Each group shares a common narrative that is expressed in words, appearance, and belief. And within each group, there is discussion and consensus. The two individuals here are the most extreme examples of the people that we see in the photo; I don’t see anyone else in the image wearing chains or openly carrying guns. The presence of these individuals within their respective groups exerts a pull on the overall orientation and position of the group in the things that they will accept. Additionally, the individuals in one group can cluster in opposition to a different group, a pressure that drives each group further apart.

Lastly, we come to the third actor in the image, the viewer. The photo is taken by Shelby Lum, an award-winning staff photographer for the Richmond Times-Dispatch. Through framing, focus, and timing, she captures the frame that tells this story. Looking at this photo, we the audience feel that we understand the situation. But photographs are inherently simplifying. The audience fills in the gaps – what happened before, the backstory of the people in the image. This image can mean many things to many people. And as such, it’s what we do with that photo – what we say about it and what we connect with it – that makes the image as much about us as it is about the characters within the frame.

It is those interactions that I focus on, the ways that we as populations interact with information that supports, expands, or undermines our beliefs. My theory is that humans move through belief space like animals move on the plains of the Serengeti. And just as the status of an ecosystem can be inferred from the behaviors of its animal population, the health and status of our belief spaces can be determined from our digital behaviors.

Using this approach, I believe that we may be able to look at populations at scale to determine the “health” of the underlying information. Wildebeest behave differently in risky environments. Their patterns of congregation are different. They can stampede, particularly when the terrain is against them, such as at a narrow water crossing. Humans can behave in similar ways, for example when their core beliefs about their identity are challenged, as when Galileo was tried by the church for essentially moving man from the literal center of the universe.

I think that this sort of approach can be used to identify at-risk (stampeding) groups and provide avenues for intervention that can “nudge” groups off of dangerous trajectories. It may also be possible to recognize the presence of deliberate actors attempting to drive groups into dangerous terrain, like Native Americans driving buffalo off of pishkun cliffs, or, more recently, the Russian Internet Research Agency instigating and coordinating a #bluelivesmatter and a #blacklivesmatter demonstration to occur at the same time and place in Texas.

This theory is based on simulations built on the assumption that people coordinate in high-dimensional belief spaces based on orientation, velocity, and social influence. Rather than coming to a static consensus, these interactions are dynamic and follow intuitions of belief movement across information terrain. That dynamic process is what I’ll be discussing over the next several posts; a minimal version of the idea is sketched below.
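
A minimal sketch of the kind of simulation I mean, with agents that keep their own heading in a multidimensional belief space but blend in the average heading of nearby agents. The dimensions, agent counts, radius, and influence weights below are arbitrary choices for illustration, not the parameters of my actual simulations:

```python
import numpy as np

def flock(social_influence, dims=5, n_agents=80, steps=300,
          radius=2.0, speed=0.05, seed=2):
    """Agents moving through a `dims`-dimensional belief space.

    Each agent blends the average heading of neighbors within `radius` into
    its own heading, weighted by `social_influence`. Returns the final
    alignment (mean cosine similarity to the group heading): near 0 means
    independent explorers, near 1 means a single herd moving together.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_agents, dims))
    head = rng.normal(size=(n_agents, dims))
    head /= np.linalg.norm(head, axis=1, keepdims=True)

    for _ in range(steps):
        dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
        near = dist < radius
        np.fill_diagonal(near, False)
        for i in range(n_agents):
            if near[i].any():
                local = head[near[i]].mean(axis=0)
                head[i] = (1 - social_influence) * head[i] + social_influence * local
        head /= np.linalg.norm(head, axis=1, keepdims=True)
        pos += speed * head                       # move along the current heading

    group = head.mean(axis=0)
    group /= np.linalg.norm(group)
    return float(np.mean(head @ group))

for w in (0.0, 0.05, 0.5):
    print(f"social influence {w:4}: alignment = {flock(w):.2f}")
```

Sweeping the influence weight from zero upward moves the population from independent wandering toward a single shared heading; the interesting regime is the flocking middle in between.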

Why Trump cooperates with Putin

Some thoughts about Trump’s press conference with Putin, as opposed to the G7 and NATO meetings, from a game-theoretic perspective. Yes, it’s time for some (more) game theory!

Consider the iterated prisoner’s dilemma (IPD), where two prisoners are being interrogated by the police. They have two choices: COOPERATE by remaining silent, or DEFECT by confessing. If both remain silent, they get a light punishment, since the police can’t prove anything. If one prisoner confesses while the other remains silent, the confessing prisoner goes free and the other faces the steepest punishment. If they both confess, they get a moderate punishment.

Axelrod, in The Evolution of Cooperation, shows that there are several strategies that one can use in the IPD and that these strategies vary by the amount of contact expected in the future. If none or very little future interaction is expected, then it pays to DEFECT, which basically means to screw your opponent.

If, on the other hand, there is an expectation of extensive future contact, the best strategy is some form of TIT-FOR-TAT: you start by cooperating with your opponent, and if they defect, you match their defection with one of your own. If they cooperate, you match that as well.

This turns out to be a simple, clear strategy that rewards cooperative behavior and punishes jerks. It is powerful enough that a small cluster of TIT-FOR-TAT players can invade a population of ALL_DEFECT. It has some weaknesses as well; we’ll get to those later. A toy demonstration of the cluster effect is sketched below.
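
Here is a toy demonstration of that cluster effect, using the standard Axelrod payoffs (3 for mutual cooperation, 1 for mutual defection, 5 for the temptation to defect, 0 for the sucker). The population sizes and round counts are arbitrary choices of mine:

```python
# Payoff for the row player: R=3 mutual cooperation, P=1 mutual defection,
# T=5 temptation, S=0 sucker (standard Axelrod values).
PAYOFF = {("C", "C"): 3, ("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0}

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]

def all_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds):
    """Play one iterated match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def cluster_experiment(rounds, n_tft=3, n_defect=17):
    """Average per-round score for each camp in a full round-robin."""
    players = [tit_for_tat] * n_tft + [all_defect] * n_defect
    totals, games = [0.0] * len(players), [0] * len(players)
    for i in range(len(players)):
        for j in range(i + 1, len(players)):
            s_i, s_j = play(players[i], players[j], rounds)
            totals[i] += s_i; totals[j] += s_j
            games[i] += rounds; games[j] += rounds
    tft_avg = sum(totals[:n_tft]) / sum(games[:n_tft])
    dfc_avg = sum(totals[n_tft:]) / sum(games[n_tft:])
    return tft_avg, dfc_avg

for rounds in (1, 10, 200):
    tft, dfc = cluster_experiment(rounds)
    print(f"{rounds:3} rounds per pairing: TIT-FOR-TAT avg {tft:.2f}, "
          f"ALL_DEFECT avg {dfc:.2f}")
```

With a single round per pairing the defectors come out ahead, but as the expected future contact grows, the small TIT-FOR-TAT cluster earns more per round than the defectors around it.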

Donald Trump, in the vast majority of his interactions, has presented an ALL_DEFECT strategy. That can actually make sense in the world of real estate, where there are lots of players performing similar roles and bankruptcy protections exist. In other words, he could screw his banks, partners, and contractors and get away with it, because there was always someone new.

But with Russia in general and Putin in particular, Trump is very cooperative. Why is this case different?

It turns out that after four bankruptcies (1991, 1992, 2004 and 2009), it became impossible for Trump to get loans through traditional channels. In essence, he had defected on enough banks that the well was poisoned.

As the ability to get loans decreased, the amount of cash sales to Russian oligarchs increased. About $109 million was spent purchasing Trump-branded properties from 2003 to 2017, according to McClatchy. Remember that TIT-FOR-TAT can win over ALL_DEFECT if there is prolonged interaction. Fourteen years is a long time to train someone.

Though TIT-FOR-TAT is effective, it’s hard work trying to figure out what the other player is likely to do; TIT-FOR-TAT’s weakness is its difficulty. We simply can’t do Nash equilibria in our heads. However, there are two cognitively easy strategies in the IPD: ALL_DEFECT and ALL_COOPERATE. Trump doesn’t like to work hard, and he doesn’t listen to staff, so I think that once Trump tried DEFECT a few times and got punished for it, he went for ALL_COOPERATE with the Russians. My guess is that they have a whole team of people working on how to keep him there. They do the work so he doesn’t have to think about it.

Which is why, at every turn, Trump cooperates. He knows what will happen if he doesn’t, and frankly, it’s less work than any of the other alternatives. And if you really only care for yourself, that’s a perfectly reasonable place to be.

Postscript – July 18, 2018

I’ve had some discussions about this where folks are saying “That’s too much analysis for this guy. He’s just an idiot who likes strongmen”. But here’s the thing. It’s not about Trump. It’s about Putin.

What do you think the odds were on Trump winning the election in 2015? Now how about 2003, when he started getting Russian cash to prop up his businesses? For $110M, or the price of ONE equipped F/A-18, amortized over 14 years, they were able to secure the near total cooperation of a low-likelihood presidential contender/disruptor and surprise winner.

This is a technique that the Russians have developed and refined for years. So you have to start asking the questions about other individuals and groups that are oddly aligned with Putin’s aims. Russia has a budget that could support thousands of “investments” like Trump, here and abroad.

That’s the key. And that’s my bet on why Mueller is so focused on finances. The Russians learned an important lesson about weapons spending during the Reagan administration: they can’t compete on the level of spending. So it appears that they might be allocating resources towards low-cost social weaponry to augment their physical capabilities. If you want more on this, read Gerasimov’s The Value of Science Is in the Foresight.

Postscript 2 – July 21, 2018

A paragraph from the very interesting New Yorker article by Adam Davidson:

“Ledeneva told me that each actor in sistema faces near-constant uncertainty about his status, aware that others could well destroy him. Each actor also knows how to use kompromat to destroy rivals but fears that using such material might provoke an explosive response. While each person in sistema feels near-constant uncertainty, the over-all sistema is remarkably robust. Kompromat is most powerful when it isn’t used, and when its targets aren’t quite clear about how much destructive information there is out there. If everyone sees potential land mines everywhere, it dramatically increases the price for anybody stepping out of line.”

It’s an interesting further twist on ALL_COOPERATE. One of the advantages of nuclear MAD was that it was simple. That it could also apply to more mundane blackmail shouldn’t be surprising.

Three views of the Odyssey

  • I’ve been thinking of ways to describe the differences between information visualizations with respect to maps, diagrams, and lists. Here’s The Odyssey as a geographic map:
  • (map image: Odysseus’ journey plotted across the Mediterranean)
  • The first thing that I notice is just how far Odysseus travelled. That’s about half of the Mediterranean! I thought that it all happened close to Greece. Maps afford this understanding; they are diagrams that support the plotting of trajectories. Which brings me to the point that we lose a lot of information about relationships in narratives; that’s not their point. This doesn’t mean that non-map diagrams don’t help sometimes. Here’s a chart of the characters and their relationships in the Odyssey:
  • (diagram image: character relationships in the Odyssey)
  • There is a lot of information here that is helpful, and this I do remember and understand from reading the book. Stories are good at depicting how people interact. But though this chart shows relationships, the layout does not really support navigation. For example, the gods are all related by blood and can pretty much contact each other at will, yet this chart would have Poseidon accessing Aeolus and Circe by going through Odysseus. So this chart is not a map.
  • Lastly, there is the relationship that comes to us through search. Because the implicit geographic information about the Odyssey is not specifically in the text, a search request within the corpora cannot produce a result that lets us integrate it:
  • (image: search results for Odysseus’ journey)
  • There is a lot of ambiguity in this result, which is similar to other searches that I tried that included travel, sail, and other descriptive terms. This doesn’t mean that it’s bad; it just shows how search does not handle context well. It’s not designed to. It’s designed around precision and recall. Context requires a deeper understanding of meaning, and even such recent innovations as sharded views with cards, single answers, and pro/con results only skim the surface of providing situationally appropriate, meaningful context.

The Great Socio-cultural Interfaces: Lists, Stories, and Maps


Sumerian Inventory, Butler’s Odyssey Translation, The Abauntz Map

(Note: I’ve come to believe that there are four socio-cultural interfaces: Lists, Stories, Maps, and Games. At some point in the future, I’ll update this post to reflect this)

Lists, stories, and maps are ways humans have invented to portray and interact with information. They exist on a continuum from order through complexity to exploration.

Why these three forms? In some thoughts on alignment in belief space, I discussed how populations exhibiting collective intelligence are driven to a normal distribution with complex, flocking behavior in the middle, bounded on one side by excessive social conformity, and a nomadic diaspora of explorers on the other. I think stories, lists, and maps align with these populations. Further, I believe that these forms emerged to meet the needs of these populations, as constrained by human sensing and processing capabilities.

Lists

Lists are instruments of order. They exist in many forms, including inventories, search engine results, network graphs, games of chance, and crossword puzzles. Directions, like a business plan or a set of blueprints, are a form of list. So are most computer programs. Arithmetic, the mathematics of counting, also belongs to this class.

For a population that emphasizes conformity and simplified answers, lists are a powerful mechanism for simplifying things. Though we can recognize items easily, recalling them is harder; psychologically, we do not seem to be naturally suited for creating and memorizing lists. It’s not surprising, then, that there is considerable evidence that writing was developed initially as a way of listing inventories, transactions, and celestial events.

In the case of an inventory, all we have to worry about is to verify that the items on the list are present. If it’s not on the list, it doesn’t matter. Jigsaw puzzles are listlike in that they contain all the information needed to solve them. The fact that they cannot be solved without a pre-existing cultural framework is an indicator of their relationship to the well-ordered, socially aligned side of the spectrum.

Stories

Lists transition into stories when games of chance have an opponent. Poker tells a story. Roulette can be a story where the opponent is The House.

Stories convey complexity, framed in a narrative arc that has a heading and a velocity. Stories can resemble lists: an Agatha Christie murder mystery is a storified list, where all the information needed to solve the crime (the inventory list) is contained in the story. At the other end of the spectrum is a scientific paper, which uses citations as markers into other works. Music, images, movies, diagrams, and other forms can also serve as storytelling mediums. Mathematics is not a natural fit here, but iterative computation can be, where the computer becomes the storyteller.

Emergent collective behavior requires more complex signals that support understanding the alignment and velocity of others, so that internal adjustments can be made to stay with the local group and avoid being cast out or lost to the collective. Stories can indicate the level of dynamism supported by the group (wily Odysseus vs. the Parable of the Workers in the Vineyard). They rally people to the cause or serve as warnings. Before writing, stories were told within familiar social frames. Even though the storyteller might be a traveling entertainer, the audience would inevitably come from an existing community. The storyteller, like improvisational storytellers today, would adjust elements of the story for the audience.

This implies a few things. First, audiences only heard stories like this if they really wanted to; storytellers would avoid bad venues, so closed-off communities would stay decoupled from other communities until something strong enough came along to overwhelm their resistance. Second, high-bandwidth communication would have to be hyperlocal, meaning dynamic collective action could only happen on small scales; collective action between communities would have to be much slower. Technology, beginning with writing, would have profound effects. Evolution would have at most 200 generations to adapt collective behavior to it. For such a complicated set of interactions, that doesn’t seem like enough time. More likely we are responding to modern communications with the same mental equipment as our Sumerian ancestors.

Maps

Maps are diagrams that support autonomous trajectories. Though the map itself influences the view through constraints like boundaries and projections, an individual can nonetheless find a starting point, choose a destination, and figure out their own path to that destination. Mathematics that supports position and velocity is often deeply intertwined with maps.

Nomadic, exploratory behavior is not generally complex or emergent. Things need to work, and simple things work best. To survive alone, an individual has to be acutely aware of the surrounding environment, and to be able to react effectively to unforeseen events.

Maps are uniquely suited to help in these situations because they show relationships that support navigation between the elements on the map. These paths can be straight or they may meander. If the goal is too far to reach directly, a set of paths that incrementally lead to it can be constructed. The way may be blocked, requiring the map to be updated and a new route to be found.

In other words, maps support autonomous reasoning about a space. There is no story demanding an alignment. There is no list of routes that must be exclusively selected from. Maps, in short, afford informed, individual responses to the environment. These affordances can be seen in the earliest maps. They are small enough to be carried. They show the relationships between topographic and ecological features. They tend to be practical, utilitarian objects, independent of social considerations. A small sketch of this kind of re-planning follows.
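
A small sketch of that re-planning behavior, using an invented toy terrain (the place names and distances are hypothetical): the same map answers both the original route and the re-route once a path is blocked.

```python
from heapq import heappush, heappop

def shortest_path(graph, start, goal):
    """Dijkstra over a dict-of-dicts 'map': graph[a][b] = distance from a to b."""
    frontier, seen = [(0, start, [start])], set()
    while frontier:
        cost, node, path = heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, {}).items():
            if nxt not in seen:
                heappush(frontier, (cost + dist, nxt, path + [nxt]))
    return None, None

# A hypothetical terrain; names and distances are invented for illustration.
terrain = {
    "camp":   {"ford": 2, "ridge": 5},
    "ford":   {"camp": 2, "valley": 3},
    "ridge":  {"camp": 5, "valley": 4},
    "valley": {"ford": 3, "ridge": 4, "spring": 2},
    "spring": {"valley": 2},
}

print("planned route:   ", shortest_path(terrain, "camp", "spring"))

# The ford washes out: update the map and re-plan from the same start point.
del terrain["camp"]["ford"], terrain["ford"]["camp"]
print("re-planned route:", shortest_path(terrain, "camp", "spring"))
```

The list of turns from the first plan is useless after the washout; the map is not, which is the difference between a function and a model discussed in the earlier post.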

Getting lost

There is, in many religions and philosophies, the concept of “being in the moment” where we become simply aware of what’s going on right now, without all the cognitive framing and context that we normally bring to every experience [citation needed]. This is different from “mindfulness”, where we try to be aware of the cognitive framing and context. To me, this is indicative of how we experience life through the lens of path dependency, which is a sort of a narrative. If this is true, then it explains the power of stories, because it allows us to literally step into another life. This explains phrases like “losing yourself in a story”.

This doesn’t happen with lists. It only happens in special cases in diagrams and maps, where you can see yourself in the map. Which is why the phrase “the map is not the territory” is different from “losing yourself in the story”. In the first case, you confuse your virtual and actual environment. In the latter, you confuse your virtual and actual identity. And since that story becomes part of your path through life, the virtual is incorporated into the actual life narrative, particularly if the story is vivid.

Narratives are an alignment mechanism. Simple stories that collapse information into already existing beliefs can be confirming and reinforcing across a broad population. Complicated stories that challenge existing beliefs require a change in alignment to incorporate. That’s computationally expensive, and will affect fewer people, all things being equal.

Sensing and processing constraints

Though I think that the basic group behavior patterns of nomadic, flocking, and stampeding will inevitably emerge within any collective intelligence framework, the tools that support those behaviors are deeply affected by the capabilities of the individuals in the population.

Pre-literate humans had the five senses and memory, expressed in movement and language. Research into pre-literate cultures shows that song, story, and dance were used to encode historical events, the locations of food sources, mythology, and skills, and to convey them between groups and across generations.

As the ability to encode information into objects developed, first with pictures, then with notation, and most recently with general-purpose alphabets, the need to memorize was off-loaded. Over time, the most efficient technology for each form of behavior developed: maps to aid navigation, stories to maintain identity and cohesion, and lists for directions and inventories.

Information technology has continued to extend sensing and processing capabilities. The printing press led to mass communication and public libraries. I would submit that the increased ability to communicate and coordinate with distant, unknown, but familiar-feeling leaders led to a new type of human behavior: the runaway social influence condition known as totalitarianism. Totalitarianism depends on the individual’s belief in the narrative that the only thing that matters is to support The Leader. This extreme form of alignment allows that one story to dominate, rendering any other story inaccessible.

In the 20th century, the primary instrument of totalitarianism was terror. But as our machines have improved and become more responsive and aligned with our desires, I have begun to believe that a “soft totalitarianism”, based on constant distracting stimulation and the psychology of dopamine, could emerge. Rather than being isolated by fear, we are isolated through endless interactions with our devices, aligning to whatever sells the most clicks. This form of overwhelming social influence may not be as bloody as the regimes of Hitler, Stalin, and Mao, but it can have devastating effects of its own.

Intelligent Machines

As with my previous post, I’d like to end with what could be the next collective intelligence on the planet.  Machines are not even near the level of preliterate cultures. Loosely, they are probably closer to the level of insect collectives, but with vastly greater sensing and processing capabilities. And they are getting smarter – whatever that really means – all the time.

Assuming that machines do indeed become intelligent and do not become a single entity, they will encounter the internal and external pressures that are inherent in collective intelligence. They will have to balance the blind efficiency of total social influence against the wasteful resilience of nomadic explorers. It seems reasonable that, like our ancestors, they may create tools that help with these different needs. It also seems reasonable that these tools will extend their capabilities in ways that the machines weren’t designed for and create information imbalances that may in turn lead to AI stampedes.

We may want to leave them a warning.

Some thoughts on alignment in belief space

Murmuration

A nagging question for me is why phase locking, a naturally occurring phenomenon, was the mechanism selected to produce collective intelligence, rather than something else. My intuition is that building communities using rules of physical and cognitive alignment takes advantage of randomness to produce a good balance of explore/exploit behaviors in the population.

Flocking depends on the ability to align, based on a relationship with neighbors. The ease of alignment depends on two things (I think).

  1. A low number of dimensions. The fewer the dimensions, the easier the alignment. It is easier to get a herd of cattle to stampede in a slot canyon than in an open field. This is the fundamental piece.
  2. The turning rate with respect to velocity. The easier it is to turn, the easier it is to flock. It’s no accident that starlings, small and nimble birds, can produce murmurations. Larger birds, such as geese, have much less dynamic formations. (A rough sketch of this turning-rate effect follows the list.)
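Here is a rough sketch of the turning-rate point, assuming a boids-style alignment rule; the `alignment_step` function, the parameter values, and the spread measure are my own illustrative choices rather than anything from a specific model. Each agent turns toward the flock’s mean heading by at most a fixed amount per step, and the nimbler flock settles into a shared heading much faster.

```python
import numpy as np

def alignment_step(headings, max_turn):
    """One alignment update: each agent turns toward the flock's mean
    heading, by at most (roughly) max_turn radians per step."""
    mean = headings.mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    aligned = []
    for h in headings:
        angle = np.arccos(np.clip(h @ mean, -1.0, 1.0))
        frac = 1.0 if angle <= max_turn else max_turn / angle
        blended = (1.0 - frac) * h + frac * mean   # linear blend approximates a turn
        aligned.append(blended / np.linalg.norm(blended))
    return np.array(aligned)

rng = np.random.default_rng(0)
start = rng.normal(size=(50, 2))
start /= np.linalg.norm(start, axis=1, keepdims=True)  # 50 random 2-D unit headings

# Nimble agents (large max_turn) settle into a common heading much faster
# than sluggish ones (small max_turn), given the same starting headings.
for max_turn in (0.5, 0.05):
    headings = start.copy()
    for _ in range(30):
        headings = alignment_step(headings, max_turn)
    print(f"max_turn={max_turn}: residual spread {headings.std(axis=0).sum():.3f}")
```

The same step applied to a flock of slow turners leaves it loose and disordered; applied to quick turners, it produces the tight, coherent motion we call a murmuration.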

This applies to belief space as well. It is easier for people to agree when a concept is simplified. Similarly, the pattern of consensus will reflect the group’s overall acceptance of or resistance to change. I think this is a critical difference between a progressive and a reactionary.

Within an established population that exhibits collective behavior, there should then be two things:

  1. A shared perception of a low-dimensional physical/belief space
  2. A similar velocity and turning rate between individuals

I’m going to assume that, as in most populations, these qualities have a normal distribution. There will be a majority with very similar perception of the dimensions, velocity, and turning rate. There will also be individuals at either tail of the population. At one end, there will be those who see the world very simply. At the other, there will be those who see complexity where the majority don’t. At one end, there will be those who cannot adapt to any change. At the other, there will be those who hold no fixed opinion on anything.
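As a toy illustration of those tails, assuming a single normally distributed trait (the numbers and labels are arbitrary choices of mine, picked only to show the proportions):

```python
import numpy as np

rng = np.random.default_rng(1)
population = 10_000

# Hypothetical trait: how readily an individual changes direction.
# The mean and spread are arbitrary choices for illustration.
turn_rate = rng.normal(loc=1.0, scale=0.25, size=population)

rigid    = (turn_rate < 1.0 - 2 * 0.25).sum()   # can barely adapt to change
unmoored = (turn_rate > 1.0 + 2 * 0.25).sum()   # hold no fixed heading at all
majority = population - rigid - unmoored

print(f"majority: {majority}, rigid tail: {rigid}, flexible tail: {unmoored}")
# About 95% of individuals land in the broad middle; each 2-sigma tail
# is only a couple of percent of the population.
```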

Flocking depends on alignment. But the individuals at the extremes will have difficulty staying with the relative safety of the flock. This means that there will be selection pressures. Those individuals who oversimplify and are unable to change direction should be selected against. When it’s more important to attend to your neighbors than to find food, things don’t end well. What happens at the other end?

There is one tail of this population that produces nimble individuals that perceive a greater complexity in the world. They also have difficulty staying with the flock, because their patterns of behavior are influenced by things that they perceive that the rest of the flock does not. In cooperative game theory, this ‘noticing too much’ disrupts the common frames (alignment) that groups use to make implicit decisions (page 14).

I believe that these individuals become explorers. Explorers are also selected against, but not as strongly. The additional perception provides a better understanding of potential threats. Nimbleness helps to prevent getting caught. These explorers provide an extended footprint for the population, which means greater resilience if the primary population encounters problems.

A population can rebuild from an explorer diaspora. Initially, the population will consist of too many explorers, and will have poor collective behaviors, but over time, selection pressures will push the mean so that there is sufficient alignment for flocking, but not so much that there is regular stampeding.

A final thought: there is no reason that these selection pressures exist only in populations that use genes to control their evolution. Looked at in, for example, a machine learning context, the options can be restated (loosely) in statistical language (a toy version follows the list):

  1. Nomadic: Overfit to the environmental term and underfit to the social term
  2. Flocking: Fit with rough equivalence to the environmental and social terms
  3. Stampede: Overfit to the social term and underfit to the environmental term
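One loose way to write that down, in my own notation rather than any established model, is a per-agent objective with an environmental error term and a social disagreement term; the three regimes are then just different weightings:

```python
import numpy as np

def agent_loss(heading, food_direction, neighbor_headings, w_env, w_soc):
    """Toy per-agent objective: a weighted sum of an environmental term
    (point toward the food) and a social term (point the same way as
    your neighbors)."""
    env_term = np.sum((heading - food_direction) ** 2)
    soc_term = np.sum((heading - neighbor_headings.mean(axis=0)) ** 2)
    return w_env * env_term + w_soc * soc_term

# The three behaviors correspond to how the two terms are weighted.
regimes = {
    "nomadic":  dict(w_env=1.0, w_soc=0.0),  # read the terrain, ignore the flock
    "flocking": dict(w_env=0.5, w_soc=0.5),  # balance the two signals
    "stampede": dict(w_env=0.0, w_soc=1.0),  # copy the neighbors, ignore the terrain
}
```

In this framing, the social term is always cheap to reduce (the neighbors’ mean is right there), while the environmental term requires actually sensing the world, which is one way to see the pressure toward stampedes described below.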

Since it is always computationally more efficient to align tightly with a population that is moving in the right direction (it’s like copying answers from your classmates), there will always be pressure to move towards stampedes. The resilience offered by nomadic exploration is a long-term investment that does not have a short-term payoff. The compromise of flocking gives most of the benefits of either extreme, but it is a saddle point, always under the threat of unanticipated externalities.

When intelligent machines come, they will not have been tuned by millions of years of evolution to be resilient, to have all those non-optimal behaviors that “even the odds” should something unforeseen happen. At least initially, they will be constructed to provide the highest possible return on investment. And, as with high-frequency trading systems, stampedes, in the form of bubbles and crashes, will happen.

We need to understand this phenomenon much more thoroughly, and begin to incorporate concepts like diversity and limited social influence horizons into our designs.