
A simple example of ensemble training

I’ve been using multilayer Perceptrons (MLPs) for some quickly trainable sequence-to-sequence time series predictions. The goal is to take sensor data from one day and use that as training data to predict the next day’s patterns. The application is extremely consistent, but the hardware slowly degrades. By retraining, the error detection system is able to “drift” with the system as various parts wear at different rates. And there are a lot of sensors – several thousand per system, so rapid training is a nice feature.

The problem that I was running into had to do with hyperparameter tuning. I would make a change or two, and then re-run the system on my well-characterized simulated data, and the accuracy of the result would change in odd ways. It was very frustrating.

As a way to work through more options in an automated way, I built an optimizer class using evolutionary algorithms (adjusting variables, rather than evolutionary programming, which evolves code). I could then fire up the evolver and try hundreds or thousands of models as the system worked to find the best fitness (in this case highest accuracy).

But there was a big problem, which I kind of knew about. The random initialization of weights makes a HUGE difference in the performance of the model. I discovered this while looking at the results of the evolver, which keeps the best of each generation and writes them out to a spreadsheet:

If you look at row 8, you see a lovely fitness of 0.9, or 90%, which was the best value from the evolver runs. However, after sorting on the parameters so that they were grouped, it became obvious that there is a HUGE variance in the results. The lowest fitness is 30%, and the average fitness for those values is actually 60%. I tried running the parameters on multiple trained models and got similar results. These values are all over the place. Not good.

To address this, I need to be able to run a population and get the distribution stats (mean, 5% and 95% confidence bounds, and the min and max outliers). I can then sort on the mean, but also have insight into the variance. A good mean with wide variance may be worse than a slightly worse mean with tight variance.

So I added statistical tests to the evolver, based on this post, starting with scikit-learn’s resample(). Here are the important bits:

from sklearn.utils import resample
import pandas as pd
import scipy.stats as st

def calc_fitness_stats(self, resample_size:int = 100):
    # bootstrap-resample the fitness values collected from the ensemble runs
    boot = resample(self.population, replace=True, n_samples=resample_size, random_state=1)
    s = pd.Series(boot)
    # 95% confidence interval around the bootstrap mean
    conf = st.t.interval(0.95, len(boot)-1, loc=s.mean(), scale=st.sem(boot))
    self.meta_info = {'mean':s.mean(), '5_conf':conf[0], '95_conf':conf[1], 'max':s.max(), 'min':s.min()}
    self.fitness = s.mean()

To evaluate, I used my test landscape, a 3D surface based on the equation z = cos(x) + sin(y) + (x + y)/10, over the range (-5, 5). I also added some randomness to the x and y values to noise up the results so the statistics would show something. This worked well on my landscape, as you can see below, so I integrated it into my hyperparameter tuner.
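Roughly, the evaluation looked like this. Here’s a minimal sketch (not the actual tuner code; the sample point and the noise level are made up for illustration):

import numpy as np

def landscape(x, y):
    # the test fitness surface: z = cos(x) + sin(y) + (x + y)/10
    return np.cos(x) + np.sin(y) + (x + y) / 10.0

rng = np.random.default_rng(1)
x, y = 2.0, -1.5  # one candidate position in the (-5, 5) range
# evaluate the same point several times with jittered coordinates, standing in
# for the run-to-run variance of retraining a model with the same hyperparameters
samples = [landscape(x + rng.normal(0, 0.2), y + rng.normal(0, 0.2)) for _ in range(10)]
print(np.mean(samples), np.std(samples))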

Before I go into the results, let me describe the whole data set – what it looks like in total, the parts that we are trying to recognize, and the ground truth that we are training against:

Full Data Set: The data is a set of mathematical functions. In this case, it’s a simple set of ten sin(x) waves of varying frequency. They all start at the same value and evolve from there. The shortest wavelength is cyan, the longest is dark blue in the figure below. It’s a reasonable proxy for ten sensors that change over the course of a day, some quickly, some slowly:

Full_data

Training Set: I take the above dataset, which has 200 elements, and split it in two. This creates a training set, or input vector, of 100 elements and an output “ground truth” vector that the system will be trained to recognize. So ten shapes will be trained to map to ten other shapes in one MLP network:

Clean_input

Ground Truth: This is the set of 100-sample vectors that we will be training the network to produce.
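A minimal sketch of how such a dataset could be generated and split (my reconstruction, not the original code; the frequency range is an assumption):

import numpy as np

n_samples = 200
steps = np.arange(n_samples)
frequencies = np.linspace(0.02, 0.2, num=10)  # assumed range, ten different frequencies
# ten sin waves of varying frequency, all starting at the same value
full_data = np.array([np.sin(2 * np.pi * f * steps) for f in frequencies])  # shape (10, 200)

# split each curve in half: the first 100 samples are the input vector,
# the last 100 samples are the "ground truth" the network is trained to produce
train_mat = full_data[:, :100]
truth_mat = full_data[:, 100:]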

All Predictions: If you take the first random result of the evolver, you will get ten models that are identical except for the initial weights. In this case, the hyperparameters are number of layers, neurons per layer, batch size and epochs. The evolver initially comes up with a population of ten random genomes (in specified ranges, like 10 – 1000 neurons, with a step of 10). It then keeps the five best “genomes” and breeds and mutates 5 more. New genomes are in turn run 10 times to produce the statistics. The models associated with the best values are saved.
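Here’s a rough sketch of that generation loop (a paraphrase of the process described above, not the actual evolver class; ensemble_fitness() is a stubbed, hypothetical helper standing in for “train this genome 10 times and return the bootstrap mean fitness,” and only mutation is shown, not breeding):

import random

def random_genome():
    # one hyperparameter set, drawn from ranges like those described above
    return {'layers': random.randint(1, 5),
            'neurons': random.randrange(10, 1001, 10),
            'batch_size': random.choice([16, 32, 64, 128]),
            'epochs': random.randint(10, 100)}

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    child[key] = random_genome()[key]  # re-roll one hyperparameter
    return child

def ensemble_fitness(genome, runs=10):
    # placeholder: train `runs` models with these hyperparameters and return the
    # bootstrap mean fitness from calc_fitness_stats(); stubbed with a random value here
    return random.random()

population = [random_genome() for _ in range(10)]
for generation in range(50):
    scored = sorted(((ensemble_fitness(g), g) for g in population),
                    key=lambda pair: pair[0], reverse=True)
    best = [g for _, g in scored[:5]]                                    # keep the five best genomes
    population = best + [mutate(random.choice(best)) for _ in range(5)]  # refill the population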

If we look at one of the initial models, before any evolution optimization, you can see why this approach is needed. Remember, this variation is based solely on the different random initializations of the weights between layers. What you are looking at is the input vector being run through the ten models that are used to calculate the statistical values of the ensemble. You can see that most values are pretty good, some are a bit off, and some are pretty bonkers.
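For reference, here’s roughly how one such ensemble member might be built with Keras (a sketch only; the activation, loss, and layer sizes are placeholders, since the real values come from the genome):

import tensorflow as tf

def build_member(layers, neurons):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(100,)))               # 100-sample input vector
    for _ in range(layers):
        model.add(tf.keras.layers.Dense(neurons, activation='relu'))
    model.add(tf.keras.layers.Dense(100))                 # 100-sample output vector
    model.compile(optimizer='adam', loss='mse')
    return model

# ten members share the same hyperparameters but get different random initial weights
ensemble = [build_member(layers=2, neurons=200) for _ in range(10)]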

Ensemble Average: On the whole though, if you take the average of the whole ensemble, you get a pretty nice result. And, unlike the single-shot method of training, the likelihood that another ensemble produced with the same architecture will behave the same way is much higher.

Here’s the code to take the average:

        # assumes: import os, numpy as np, tensorflow as tf
        # each saved ensemble member lives in its own model directory under the cwd
        avg_mat = np.zeros(self.test_mat.shape)
        with os.scandir() as entries:
            count = 0
            for entry in entries:
                if entry.is_file() or entry.is_symlink():
                    os.remove(entry.path)  # clear out stray non-model files
                elif entry.is_dir():
                    count += 1
                    print("loading: {}".format(entry.name))
                    new_model = tf.keras.models.load_model(entry.name)
                    self.predict_mat = new_model.predict(self.train_mat)
                    avg_mat = np.add(self.predict_mat, avg_mat)
        avg_mat = avg_mat / count  # count now equals the number of loaded models


This is not to say that the model is perfect. The orange curve at the top of the last chart is too low. This model had a mean accuracy of 67%. But this is roughly equivalent to my initial hyperparameter guesses. Let’s see what happens after 50 generations.

Five hours and 5,000 evaluations later, I have the full run of 50 generations. Things did get better. We end with a higher mean, but we also have a variance that does not steadily improve. This means that it’s possible that the architecture around generation 23 might actually be better:

Because all the values are saved in the spreadsheet, I can try those hyperparameters, but the system as I’ve written it only saves the “best” set of parameters. Let’s see what that best ensemble looks like when compared to the early run:

That is a lot better. All the related predictions are much closer to each other, and appear to be clustered around the right places. I am genuinely surprised how tidy the clustering is, based on the previous “All Predictions” plot towards the top of this post. On to the ensemble average:

That is extremely close to the “Ground Truth” chart. The orange line is in the right place, for example. The only error that I can see with a cursory visual inspection is that the height of the olive line is a little lower than it should be.

Now, I am concerned that there may be two peaks in this fitness landscape that we’re trying to climb. The one that we are looking for is a generalized model that can fit approximate curves. The other case is that the network has simply memorized the curves and will blow up when it sees something different. Let’s test that.

First, let’s revisit the training set. This model was trained with extremely clean data. The input is a sin function with varying frequencies, and the evaluation data is the same sin function, picking up where we cut off the training data. Here’s the clean data that was used to train the model:

Now let’s try noising that up, so that the model has to figure out what to do with data it has never seen before:
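Something along these lines, assuming train_mat is the clean 10 x 100 input matrix from the earlier sketch (the noise level here is a guess, not the value actually used):

import numpy as np

rng = np.random.default_rng(0)
# add Gaussian noise to the clean input curves so the ensemble has to generalize
noisy_train_mat = train_mat + rng.normal(0.0, 0.1, size=train_mat.shape)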

Let’s see what happened! First, let’s look at all the predictions from the ensemble:

The first thing that I notice is that it didn’t blow up. Although the paths from each model are somewhat different, each one got all the paths approximately right, and there is no wild deviation. The worst behavior (as usual?) is the orange band, and possibly the green band. But this looks like it should average well. Let’s take a look:

That seems pretty good. And the orange / green lines are in the right place. It’s the blue, olive, and grey lines that are a little low. Still, pretty happy with this.

So, ensembles seem to work very well, and make for resilient, predictable behavior in NN architectures. The cost is that there is much more time required to run many, many models through the system to determine which ensemble is right.

But if you want reproducible results, it’s a good way to go.

Characterizing Online Public Discussions through Patterns of Participant Interactions.


Authors

Overview

An important paper that lays out mechanisms for organizing conversations into navigable spaces. To me, this seems like a first step in being able to map human interaction along the dimensions the humans emphasize. In this case, the dimensions have to do with relatively coarse behavior trajectories: Will a participant block another? Will this be a long threaded discussion among a few people, or a set of short links all referring to an initial post?

Rooted in the design affordances of Facebook, the data that are readily available influence the overall design of the methods used. For example, a significant amount of the work is focused on temporal network analytics. I think that these methods are quite generalizable to sites like Twitter and Reddit. The fact that the researchers worked at Facebook and had easy access to the data is a critical part of this study’s success. For me the implications aren’t that surprising (I found myself saying “Yes! Yes!” several times while reading this), but it is wonderful to see them presented in such a clear, defensible way.

My more theoretical thoughts

Though this study is focused more on building representations of behaviors, I think that the methods used here (particularly as expanded on in the Future Work section) should be extensible to mapping beliefs.

The extensive discussion about how the design affordances of Facebook create the form of the discussion is also quite validating. Although they don’t mention it, Moscovici lays this concept out in Conflict and Consensus, where he describes how even items such as table shape can change a conversation so that the probability of compromise over consensus is increased.

Lastly, I’m really looking forward to checking out the Cornell Conversational Analysis Toolkit, developed for(?) this study.

Notes

  • This paper introduces a computational framework to characterize public discussions, relying on a representation that captures a broad set of social patterns which emerge from the interactions between interlocutors, comments and audience reactions. (Page 198:1)
  • we use it to predict the eventual trajectory of individual discussions, anticipating future antisocial actions (such as participants blocking each other) and forecasting a discussion’s growth (Page 198:1)
  • platform maintainers may wish to identify salient properties of a discussion that signal particular outcomes such as sustained participation [9] or future antisocial actions [16], or that reflect particular dynamics such as controversy [24] or deliberation [29]. (Page 198:1)
  • Systems supporting online public discussions have affordances that distinguish them from other forms of online communication. Anybody can start a new discussion in response to a piece of content, or join an existing discussion at any time and at any depth. Beyond textual replies, interactions can also occur via reactions such as likes or votes, engaging a much broader audience beyond the interlocutors actively writing comments. (Page 198:2)
    • This is why JuryRoom would be distinctly different. It’s unique affordances should create unique, hopefully clearer results.
  • This multivalent action space gives rise to salient patterns of interactional structure: they reflect important social attributes of a discussion, and define axes along which discussions vary in interpretable and consequential ways. (Page 198:2)
  • Our approach is to construct a representation of discussion structure that explicitly captures the connections fostered among interlocutors, their comments and their reactions in a public discussion setting. We devise a computational method to extract a diverse range of salient interactional patterns from this representation—including but not limited to the ones explored in previous work—without the need to predefine them. We use this general framework to structure the variation of public discussions, and to address two consequential tasks predicting a discussion’s future trajectory: (a) a new task aiming to determine if a discussion will be followed by antisocial events, such as the participants blocking each other, and (b) an existing task aiming to forecast the growth of a discussion [9]. (Page 198:2)
  • We find that the features our framework derives are more informative in forecasting future events in a discussion than those based on the discussion’s volume, on its reply structure and on the text of its comments (Page 198:2)
  • we find that mainstream print media (e.g., The New York Times, The Guardian, Le Monde, La Repubblica) is separable from cable news channels (e.g., CNN, Fox News) and overtly partisan outlets (e.g., Breitbart, Sean Hannity, Robert Reich) on the sole basis of the structure of the discussions they trigger (Figure 4). (Page 198:2)
  • figure-4
  • These studies collectively suggest that across the broader online landscape, discussions take on multiple types and occupy a space parameterized by a diversity of axes—an intuition reinforced by the wide range of ways in which people engage with social media platforms such as Facebook [25]. With this in mind, our work considers the complementary objective of exploring and understanding the different types of discussions that arise in an online public space, without predefining the axes of variation. (Page 198:3)
  • Many previous studies have sought to predict a discussion’s eventual volume of comments with features derived from their content and structure, as well as exogenous information [893069, inter alia]. (Page 198:3)
  • Many such studies operate on the reply-tree structure induced by how successive comments reply to earlier ones in a discussion rooted in some initial content. Starting from the reply-tree view, these studies seek to identify and analyze salient features that parameterize discussions on platforms like Reddit and Twitter, including comment popularity [72], temporal novelty [39], root-bias [28], reply-depth [41, 50] and reciprocity [6]. Other work has taken a linear view of discussions as chronologically ordered comment sequences, examining properties such as the arrival sequence of successive commenters [9] or the extent to which commenters quote previous contributions [58]. The representation we introduce extends the reply-tree view of comment-to-comment. (Page 198:3)
  • Our present approach focuses on representing a discussion on the basis of its structural rather than linguistic attributes; as such, we offer a coarser view of the actions taken by discussion participants that more broadly captures the nature of their contributions across contexts which potentially exhibit large linguistic variation.(Page 198:4)
  • This representation extends previous computational approaches that model the relationships between individual comments, and more thoroughly accounts for aspects of the interaction that arise from the specific affordances offered in public discussion venues, such as the ability to react to content without commenting. Next, we develop a method to systematically derive features from this representation, hence producing an encoding of the discussion that reflects the interaction patterns encapsulated within the representation, and that can be used in further analyses.(Page 198:4)
  • In this way, discussions are modelled as collections of comments that are connected by the replies occurring amongst them. Interpretable properties of the discussion can then be systematically derived by quantifying structural properties of the underlying graph: for instance, the indegree of a node signifies the propensity of a comment to draw replies. (Page 198:5)
    • Quick responses that reflect a high degree of correlation would be tight. A long-delayed “like” could be slack?
  • For instance, different interlocutors may exhibit varying levels of engagement or reciprocity. Activity could be skewed towards one particularly talkative participant or balanced across several equally-prolific contributors, as can the volume of responses each participant receives across the many comments they may author.(Page 198: 5)
  • We model this actor-focused view of discussions with a graph-based representation that augments the reply-tree model with an additional superstructure. To aid our following explanation, we depict the representation of an example discussion thread in Figure 1 (Page 198: 6)
  • fig1table1
  • Relationships between actors are modeled as the collection of individual responses they exchange. Our representation reflects this by organizing edges into hyperedges: a hyperedge between a hypernode C and a node c ‘ contains all responses an actor directed at a specific comment, while a hyperedge between two hypernodes C and C’ contains the responses that actor C directed at any comment made by C’ over the entire discussion. (Page 198: 6)
    • I think that this  can be represented as a tensor (hyperdimensional or flattened) with each node having a value if there is an intersection. There may be an overall scalar that allows each type of interaction to be adjusted as a whole
  • The mixture of roles within one discussion varies across different discussions in intuitively meaningful ways. For instance, some discussions are skewed by one particularly active participant, while others may be balanced between two similarly-active participants who are perhaps equally invested in the discussion. We quantify these dynamics by taking several summary statistics of each in/outdegree distribution in the hypergraph representation, such as their maximum, mean and entropy, producing aggregate characterizations of these properties over an entire discussion. We list all statistics computed in the appendices (Table 4). (Page 198: 6, 7)
  • table4
  • To interpret the structure our model offers and address potentially correlated or spurious features, we can perform dimensionality reduction on the feature set our framework yields. In particular, let X be a N×k matrix whose N rows each correspond to a thread represented by k features. We perform a singular value decomposition on X to obtain a d-dimensional representation X ≈ X̂ = USVᵀ, where rows of U are embeddings of threads in the induced latent space and rows of V represent the hypergraph-derived features. (Page 198: 9)
    • This lets us find the hyperplane of the map we want to build (a minimal numpy sketch of this embedding step follows these notes)
  • Community-level embeddings. We can naturally extend our method to characterize online discussion communities—interchangeably, discussion venues—such as Facebook Pages. To this end, we aggregate representations of the collection of discussions taking place in a community, hence providing a representation of communities in terms of the discussions they foster. This higher level of aggregation lends further interpretability to the hypergraph features we derive. In particular, we define the embedding ŪC of a community C containing threads {t1, t2, ..., tn} as the average of the corresponding thread embeddings Ut1, Ut2, ..., Utn, scaled to unit l2 norm. Two communities C1 and C2 that foster structurally similar discussions then have embeddings ŪC1 and ŪC2 that are close in the latent space. (Page 198: 9)
    • And this may let us place small maps in a larger map. Not sure if the dimensions will line up though
  • The set of threads to a post may be algorithmically re-ordered based on factors like quality [13]. However, subsequent replies within a thread are always listed chronologically.We address elements of such algorithmic ranking effects in our prediction tasks (§5). (Page 198: 10)
  • Taken together, these filtering criteria yield a dataset of 929,041 discussion threads.(Page 198: 10)
  • We now apply our framework to forecast a discussion’s trajectory—can interactional patterns signal future thread growth or predict future antisocial actions? We address this question by using the features our method extracts from the 10-comment prefix to predict two sets of outcomes that occur temporally after this prefix. (Pg 198:10)
    • These are behavioral trajectories, though not belief trajectories. Maps of these behaviors could probably be built, too.
  • For instance, news articles on controversial issues may be especially susceptible to contentious discussions, but this should not translate to barring discussions about controversial topics outright. Additionally, in large-scale social media settings such as Facebook, the content spurring discussions can vary substantially across different sub-communities, motivating the need to seek adaptable indicators that do not hinge on content specific to a particular context. (Page 198: 11)
  • Classification protocol. For each task, we train logistic regression classifiers that use our full set of hypergraph-derived features, grid-searching over hyperparameters with 5-fold cross-validation and enforcing that no Page spans multiple folds.13 We evaluate our models on a (completely fresh) heldout set of thread pairs drawn from the subsequent week of data (Nov. 8-14, 2017), addressing a model’s potential dependence on various evolving interface features that may have been deployed by Facebook during the time spanned by the training data. (Page 198: 11)
    • We use logistic regression classifiers from scikit-learn with l2 loss, standardizing features and grid-searching over C = {0.001, 0.01, 1}. In the bag-of-words models, we tf-idf transform features, set a vocabulary size of 5,000 words and additionally grid-search over the maximum document frequency in {0.25, 0.5, 1}. (Page 198: 11, footnote 13)
  • We test a model using the temporal rate of commenting, which was shown to be a much stronger signal of thread growth than the structural properties considered in prior work [9] (Page 198: 12)
  • Table 3 shows Page-macroaveraged heldout accuracies for our prediction tasks. The feature set we extract from our hypergraph significantly outperforms all of the baselines in each task. This shows that interactional patterns occurring within a thread’s early activity can signal later events, and that our framework can extract socially and structurally-meaningful patterns that are informative beyond coarse counts of activity volume, the reply-tree alone and the order in which commenters contribute, along with a shallow representation of the linguistic content discussed. (Page 198: 12)
    • So triangulation from a variety of data sources produces more accurate results in this context, and probably others. Not a surprising finding, but important to show
  • table3
  • We find that in almost all cases, our full model significantly outperforms each subcomponent considered, suggesting that different parts of the hypergraph framework add complementary information across these tasks. (Page 198: 13)
  • Having shown that our approach can extract interaction patterns of practical importance from individual threads, we now apply our framework to explore the space of public discussions occurring on Facebook. In particular, we identify salient axes along which discussions vary by qualitatively examining the latent space induced from the embedding procedure described in §3, with d = 7 dimensions. Using our methodology, we recover intuitive types of discussions, which additionally reflect our priors about the venues which foster them. This analysis provides one possible view of the rich landscape of public discussions and shows that our thread representation can structure this diverse space of discussions in meaningful ways. This procedure could serve as a starting point for developing taxonomies of discussions that address the wealth of structural interaction patterns they contain, and could enrich characterizations of communities to systematically account for the types of discussions they foster. (Page 198: 14) 
    • ^^^Show this to Wayne!^^^
  • The emergence of these groupings is especially striking since our framework considers just discussion structure without explicitly encoding for linguistic, topical or demographic data. In fact, the groupings produced often span multiple languages—the cluster of mainstream news sites at the top includes French (Le Monde), Italian (La Repubblica) and German (SPIEGEL ONLINE) outlets; the “sports” region includes French (L’EQUIPE) as well as English outlets. This suggests that different types of content and different discussion venues exhibit distinctive interactional signatures, beyond lexical traits. Indeed, an interesting avenue of future work could further study the relation between these factors and the structural patterns addressed in our approach, or augment our thread representation with additional contextual information. (Page 198: 15)
  • Taken together, we can use the features, threads and Pages which are relatively salient in a dimension to characterize a type of discussion. (Page 198: 15)
  • To underline this finer granularity, for each examined dimension we refer to example discussion threads drawn from a single Page, The New York Times(https://www.facebook.com/nytimes), which are listed in the footnotes. (Page 198: 15)
    • Common starting point. Do they find consensus, or how the dimensions reduce?
  • Focused threads tend to contain a small number of active participants replying to a large proportion of preceding comments; expansionary threads are characterized by many less-active participants concentrating their responses on a single comment, likely the initial one. We see that (somewhat counterintuitively) meme-sharing discussion venues tend to have relatively focused discussions. (Page 198: 15)
    • These are two sides of the same dimension-reduction coin. A focused thread should be using the dimension-reduction tool of open discussion that requires the participants to agree on what they are discussing. As such it refines ideas and would produce more meme-compatible content. Expansive threads are dimension reducing to the initial post. The subsequent responses go in too many directions to become a discussion.
  • Threads at one end (blue) have highly reciprocal dyadic relationships in which both reactions and replies are exchanged. Since reactions on Facebook are largely positive, this suggests an actively supportive dynamic between actors sharing a viewpoint, and tend to occur in lifestyle-themed content aggregation sub-communities as well as in highly partisan sites which may embody a cohesive ideology. In threads at the other end (red), later commenters tend to receive more reactions than the initiator and also contribute more responses. Inspecting representative threads suggests this bottom-heavy structure may signal a correctional dynamic where late arrivals who refute an unpopular initiator are comparatively well-received. (Page 198: 17)
  • This contrast reflects an intuitive dichotomy of one- versus multi-sided discussions; interestingly, the imbalanced one-sided discussions tend to occur in relatively partisan venues, while multi-sided discussions often occur in sports sites (perhaps reflecting the diversity of teams endorsed in these sub-communities). (Page 198: 17)
    • This means that we can identify one-sided behavior and use that to look at the underlying information. No need to look in diverse areas, they are taking care of themselves. This is ecosystem management 101, where things like algae blooms and invasive species need to be recognized and then managed
  • We now seek to contrast the relative salience of these factors after controlling for community: given a particular discussion venue, is the content or the commenter more responsible for the nature of the ensuing discussions? (Page 198: 17)
  • This suggests that, perhaps somewhat surprisingly, the commenter is a stronger driver of discussion type. (Page 198: 18)
    • I can see that. The initial commenter is kind of a gate-keeper to the discussion. A low-dimension, incendiary comment that is already aligned with one group (“lock her up”), will create one kind of discussion, while a high-dimensional, nuanced post will create another.
  • We provide a preliminary example of how signals derived from discussion structure could be applied to forecast blocking actions, which are potential symptoms of low-quality interactions (Page 198: 18)
  • The nature of the discussion may also be shaped by the structure of the underlying social network, such that interactions between friends proceed in contrasting ways from interactions between complete strangers.  (Page 198: 19)
    • Yep, design matters. Diversity injection matters.
  • For instance, as with the bulk of other computational studies, our work relies heavily on indicators of interactional dynamics which are easily extracted from the data, such as replies or blocks. Such readily available indicators can at best only approximate the rich space of participant experiences, and serve as very coarse proxies for interactional processes such as breakdown or repair [27, 62]. As such, our implicit preference for computational expedience limits the granularity and nuance of our analyses. (Page 198: 20)
    • Another argument for funding a platform that is designed to provide these nuances
  • One possible means of enriching our model to address this limitation could be to treat nodes as high-dimensional vectors, such that subsequent responses only act on a subset of these dimensions. (Page 198: 21)
    • Agreed. A set of matrices that represent an aspect of each node should have a rich set of capabilities
  • Accounting for linguistic features of the replies within a discussion necessitates vastly enriching the response types presently considered, perhaps through a model that represents the corresponding edges as higher-dimensional vectors rather than as discrete types. Additionally, linguistic features might identify replies that address multiple preceding comments or a small subset of ideas within the target(s) of the reply, offering another route to move beyond the atomicity of comments assumed by our present framework. (Page 198: 21)
    • Exactly right. High dimensional representations that can then be analyzed to uncover the implicit dimensions of interaction is the way to go, I think.
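Since the embedding step comes up repeatedly in these notes, here is a minimal numpy sketch of the SVD and community-averaging procedure quoted above (the matrix sizes and the community grouping are illustrative, not from the paper’s code):

import numpy as np

N, k, d = 1000, 140, 7      # threads, hypergraph-derived features, latent dimensions (d = 7 as in the paper)
X = np.random.rand(N, k)    # stand-in for the thread-by-feature matrix

# SVD: rows of U embed the threads in the induced latent space
U, S, Vt = np.linalg.svd(X, full_matrices=False)
thread_embeddings = U[:, :d]

# community embedding: the average of its threads' embeddings, scaled to unit l2 norm
community = thread_embeddings[:50]          # the threads belonging to one Page (illustrative)
u_bar = community.mean(axis=0)
u_bar = u_bar / np.linalg.norm(u_bar)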

Meltdown: Why Our Systems Fail and What We Can Do About It


Authors and related work

  • Chris Clearfield 
    • Chris is the founder of System Logic, an independent research and consulting firm focusing on the challenges posed by risk and complexity. He previously worked as a derivatives trader at Jane Street, a quantitative trading firm, in New York, Tokyo, and Hong Kong, where he analyzed and devised mitigations for the financial and regulatory risks inherent in the business of technologically complex high-speed trading. He has written about catastrophic failure, technology, and finance for The Guardian, Forbes, the Harvard Kennedy School Review, the popular science magazine Nautilus, and the Harvard Business Review blog.
  • András Tilcsik
    • András holds the Canada Research Chair in Strategy, Organizations, and Society at the University of Toronto’s Rotman School of Management. He has been recognized as one of the world’s top forty business professors under forty and as one of thirty management thinkers most likely to shape the future of organizations. The United Nations named his course on organizational failure as the best course on disaster risk management in a business school. 
  • How to Prepare for a Crisis You Couldn’t Possibly Predict
    • Over the past five years, we have studied dozens of unexpected crises in all sorts of organizations and interviewed a broad swath of people — executives, pilots, NASA engineers, Wall Street traders, accident investigators, doctors, and social scientists — who have discovered valuable lessons about how to prepare for the unexpected. Here are three of those lessons.

Overview

This book looks at the underlying reasons for accidents that emerge from complexity and how diversity is a fix. It’s based on Charles Perrow’s concept of Normal Accidents being a property of high-risk systems.

Normal Accidents are unpredictable, yet inevitable combinations of small failures that build upon each other within an unforgiving environment. Normal accidents include catastrophic failures such as reactor meltdowns, airplane crashes, and stock market collapses. Though each failure is unique, all these failures have common properties:

    • The system’s components are tightly coupled. A change in one place has rapid consequences elsewhere
    • The system is densely connected, so that the actions of one part affect many others
    • The system’s internals are difficult to observe, so that failure can appear without warning

What happens in all these accidents is that there is misdirected progress in a direction that makes the problem worse. Often, this is because the humans in the system are too homogeneous. They all see the problem from the same perspective, and they all implicitly trust each other (tight coupling and dense connection).

The addition of diversity is a way to solve this problem. Diversity does three things:

    • It provides additional perspectives into the problem. This only works if there is large enough representation of diverse groups so that they do not succumb to social pressure.
    • It lowers the amount of trust within the group, so that proposed solutions are exposed to a higher level of skepticism.
    • It slows the process down, making the solution less reflexive and more thoughtful.

By designing systems to be transparent, loosely coupled and sparsely connected, the risk of catastrophe is reduced. If that’s not possible, ensure that the people involved in the system are diverse.

My more theoretical thoughts:

There are two factors that affect the response of the network: The level of connectivity and the stiffness of the links. When the nodes have a velocity component, then a sufficiently stiff network (either many somewhat stiff or a few very stiff links) has to move as a single entity. Nodes with sparse and slack connections are safe systems, but not responsive. Stiff, homogeneous (similarity is implicit stiff coupling) networks are prone to stampede. Think of a ball rolling down a hill as opposed to a lump of jello.

When all the nodes are pushing in the same direction, then the network as a whole will move into more dangerous belief spaces. That’s a stampede. When some percentage of these connections are slack connections to diverse nodes (e.g. moving in other directions), the structure as a whole is more resistant to stampede.

I think that dimension reduction is inevitable in a stiffening network. In physical systems, where the nodes have mass, a stiff structure really only has two degrees of freedom, its direction of travel and its axis of rotation. Which means that regardless of the number of initial dimensions, a stiff body’s motion reduces to two components. Looking at stampedes and panics, I’d say that this is true for behaviors as well, though causality could run in either direction. This is another reason that diversity helps keep away from dangerous conditions, but at the expense of efficiency.
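To make the stiffness/connectivity intuition concrete, here is a small numpy sketch (my own illustration, not from the book). In consensus-style dynamics dx/dt = -k*L*x on a graph with Laplacian L, disagreement between nodes decays at a rate set by the coupling stiffness k times the graph’s algebraic connectivity (the second-smallest Laplacian eigenvalue), so a dense, stiff network snaps into moving as a single body while a sparse, slack one keeps its internal diversity longer:

import numpy as np

def laplacian(adj):
    # graph Laplacian L = D - A
    return np.diag(adj.sum(axis=1)) - adj

n = 10
ring = np.zeros((n, n))                     # sparse: each node linked only to its two neighbors
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1
complete = np.ones((n, n)) - np.eye(n)      # dense: every node linked to every other node

for name, adj, k in [("sparse/slack", ring, 0.1), ("dense/stiff", complete, 1.0)]:
    lam2 = np.sort(np.linalg.eigvalsh(laplacian(adj)))[1]   # algebraic connectivity
    print("{}: disagreement decay rate ~ {:.2f}".format(name, k * lam2))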

Notes

  • Such a collision should have been impossible. The entire Washington Metro system, made up of over one hundred miles of track, was wired to detect and control trains. When trains got too close to each other, they would automatically slow down. But that day, as Train 112 rounded a curve, another train sat stopped on the tracks ahead—present in the real world, but somehow invisible to the track sensors. Train 112 automatically accelerated; after all, the sensors showed that the track was clear. By the time the driver saw the stopped train and hit the emergency brake, the collision was inevitable. (Page 2)
  • The second element of Perrow’s theory (of normal accidents) has to do with how much slack there is in a system. He borrowed a term from engineering: tight coupling. When a system is tightly coupled, there is little slack or buffer among its parts. The failure of one part can easily affect the others. Loose coupling means the opposite: there is a lot of slack among parts, so when one fails, the rest of the system can usually survive. (Page 25)
  • Perrow called these meltdowns normal accidents. “A normal accident,” he wrote, “is where everyone tries very hard to play safe, but unexpected interaction of two or more failures (because of interactive complexity) causes a cascade of failures (because of tight coupling).” Such accidents are normal not in the sense of being frequent but in the sense of being natural and inevitable. “It is normal for us to die, but we only do it once,” he quipped. (Page 27)
    • This is exactly what I see in my simulations and in modelling with graph Laplacians. There are two factors that affect the response of the network: The level of connectivity and the stiffness of the links. When the nodes have a velocity component, then a sufficiently stiff network (either many somewhat stiff or a few very stiff links) has to behave as a single entity.
  • These were unintended interactions between the glitch in the content filter, Talbot’s photo, other Twitter users’ reactions, and the resulting media coverage. When the content filter broke, it increased tight coupling because the screen now pulled in any tweet automatically. And the news that Starbucks had a PR disaster in the making spread rapidly on Twitter—a tightly coupled system by design. (Page 30)
  • This approach—reducing complexity and adding slack—helps us escape from the danger zone. It can be an effective solution, one we’ll explore later in this book. But in recent decades, the world has actually been moving in the opposite direction: many systems that were once far from the danger zone are now in the middle of it. (Page 33)
  • Today, smartphone videos create complexity because they link things that weren’t always connected (Page 37)
  • For nearly thirty minutes, Knight’s trading system had gone haywire and sent out hundreds of unintended orders per second in 140 stocks. Those very orders had caused the anomalies that John Mueller and traders across Wall Street saw on their screens. And because Knight’s mistake roiled the markets in such a visible way, traders could reverse engineer its positions. Knight was a poker player whose opponents knew exactly what cards it held, and it was already all in. For thirty minutes, the company had lost more than $15 million per minute. (Page 41)
  • Though a small software glitch caused Knight’s failure, its roots lay much deeper. The previous decade of technological innovation on Wall Street created the perfect conditions for the meltdown. Regulation and technology transformed stock trading from a fragmented, inefficient, relationship-based activity to a tightly connected endeavor dominated by computers and algorithms. Firms like Knight, which once used floor traders and phones to execute trades, had to adapt to a new world. (Page 42)
    • This is an important point. There is short-term survival value in becoming homogeneous and tightly connected. Diversity only helps in the long run.
  • As the crew battled the blowout, complexity struck again. The rig’s elaborate emergency systems were just too overwhelming. There were as many as thirty buttons to control a single safety system, and a detailed emergency handbook described so many contingencies that it was hard to know which protocol to follow. When the accident began, the crew was frozen. The Horizon’s safety systems paralyzed them. (Page 49)
    • I think that this may argue opposite to the authors’ point. The complexity here is a form of diversity. The safety system was a high-dimensional system that required an effective user to be aligned with it, like a free climber on a cliff face. A user highly educated in the system could probably have made it work, even better than a big STOP button. But expecting that user is a mistake. The authors actually discuss this later when they describe how safety training was reduced to simple practices that ignored perceived unlikely catastrophic events.
  • “The real threat,” Greenberg explained, “comes from malicious actors that connect things together. They use a chain of bugs to jump from one system to the next until they achieve full code execution.” In other words, they exploit complexity: they use the connections in the system to move from the software that controls the radio and GPS to the computers that run the car itself. “As cars add more features,” Greenberg told us, “there are more opportunities for abuse.” And there will be more features: in driverless cars, computers will control everything, and some models might not even have a steering wheel or brake pedal. (Page 60)
    • In this case it’s not the stiffness of the connections, its the density of connections
  • Attacks on cars, ATMs, and cash registers aren’t accidents. But they, too, originate from the danger zone. Complex computer programs are more likely to have security flaws. Modern networks are rife with interconnections and unexpected interactions that attackers can exploit. And tight coupling means that once a hacker has a foothold, things progress swiftly and can’t easily be undone. In fact, in all sorts of areas, complexity creates opportunities for wrongdoing, and tight coupling amplifies the consequences. It’s not just hackers who exploit the danger zone to do wrong; it’s also executives at some of the world’s biggest companies. (Page 62)
  • By the year 2000, Fastow and his predecessors had created over thirteen hundred specialized companies to use in these complicated deals. “Accounting rules and regulations and securities laws and regulation are vague,” Fastow later explained. “They’re complex. . . . What I did at Enron and what we tended to do as a company [was] to view that complexity, that vagueness . . . not as a problem, but as an opportunity.” Complexity was an opportunity. (Page 69)
    • I’m not sure how to fit this in, but I think there is something here about high-dimensional spaces being essentially invisible. This is the same thing as the safety system on the Deepwater Horizon.
  • But like the core of a nuclear power plant, the truth behind such writing is difficult to observe. And research shows that unobservability is a key ingredient to news fabrications. Compared to genuine articles, falsified stories are more likely to be filed from distant locations and to focus on topics that lend themselves to the use of secret sources, such as war and terrorism; they are rarely about big public events like baseball games. (Page 77)
    •  More heuristics for map building
  • Charles Perrow once wrote that “safety systems are the biggest single source of catastrophic failure in complex, tightly coupled systems.” (Page 85)
    • Dimensions reduce through use, which is a kind of conversation between the users and the designers. Safety systems are rarely used, so this conversation doesn’t happen.
  • Perrow’s matrix is helpful even though it doesn’t tell us what exactly that “crazy failure” will look like. Simply knowing that a part of our system—or organization or project—is vulnerable helps us figure out if we need to reduce complexity and tight coupling and where we should concentrate our efforts. It’s a bit like wearing a seatbelt. The reason we buckle up isn’t that we have predicted the exact details of an impending accident and the injuries we’ll suffer. We wear seatbelts because we know that something unforeseeable might happen. We give ourselves a cushion of time when cooking an elaborate holiday dinner not because we know what will go wrong but because we know that something will. “You don’t need to predict it to prevent it,” Miller told us. “But you do need to treat complexity and coupling as key variables whenever you plan something or build something.” (Page 88)
  • A fundamental feature of complex systems is that we can’t find all the problems by simply thinking about them. Complexity can cause such strange and rare interactions that it’s impossible to predict most of the error chains that will emerge. But before they fall apart, complex systems give off warning signs that reveal these interactions. The systems themselves give us clues as to how they might unravel. (Page 141)
  • Over the course of several years, Rerup conducted an in-depth study of global pharmaceutical powerhouse Novo Nordisk, one of the world’s biggest insulin producers. In the early 1990s, Rerup found, it was difficult for anyone at Novo Nordisk to draw attention to even serious threats. “You had to convince your own boss, his boss, and his boss that this was an issue,” one senior vice president explained. “Then he had to convince his boss that it was a good idea to do things in a different way.” But, as in the childhood game of telephone—where a message gets more and more garbled as it passes between people—the issues became oversimplified as they worked their way up the chain of command. “What was written in the original version of the report . . . and which was an alarm bell for the specialist,” the CEO told Rerup, “was likely to be deleted in the version that senior management read.” (Page 146)
    • Dimension reduction, leading to stampede
  • Once an issue has been identified, the group brings together ad hoc teams from different departments and levels of seniority to dig into how it might affect their business and to figure out what they can do to prevent problems. The goal is to make sure that the company doesn’t ignore weak signs of brewing trouble.  (Page 147)
    • Environmental awareness as a deliberate counter to dimension reduction
  • We show that a deviation from the group opinion is regarded by the brain as a punishment,” said the study’s lead author, Vasily Klucharev. And the error message combined with a dampened reward signal produces a brain impulse indicating that we should adjust our opinion to match the consensus. Interestingly, this process occurs even if there is no reason for us to expect any punishment from the group. As Klucharev put it, “This is likely an automatic process in which people form their own opinion, hear the group view, and then quickly shift their opinion to make it more compliant with the group view.” (Page 154)
    • Reinforcement Learning Signal Predicts Social Conformity
      • Vasily Klucharev
      • We often change our decisions and judgments to conform with normative group behavior. However, the neural mechanisms of social conformity remain unclear. Here we show, using functional magnetic resonance imaging, that conformity is based on mechanisms that comply with principles of reinforcement learning. We found that individual judgments of facial attractiveness are adjusted in line with group opinion. Conflict with group opinion triggered a neuronal response in the rostral cingulate zone and the ventral striatum similar to the “prediction error” signal suggested by neuroscientific models of reinforcement learning. The amplitude of the conflict-related signal predicted subsequent conforming behavioral adjustments. Furthermore, the individual amplitude of the conflict-related signal in the ventral striatum correlated with differences in conforming behavior across subjects. These findings provide evidence that social group norms evoke conformity via learning mechanisms reflected in the activity of the rostral cingulate zone and ventral striatum.
  • When people agreed with their peers’ incorrect answers, there was little change in activity in the areas associated with conscious decision-making. Instead, the regions devoted to vision and spatial perception lit up. It’s not that people were consciously lying to fit in. It seems that the prevailing opinion actually changed their perceptions. If everyone else said the two objects were different, a participant might have started to notice differences even if the objects were identical. Our tendency for conformity can literally change what we see. (Page 155)
    • Gregory Berns
      • Dr. Berns specializes in the use of brain imaging technologies to understand human – and now, canine – motivation and decision-making.  He has received numerous grants from the National Institutes of Health, National Science Foundation, and the Department of Defense and has published over 70 peer-reviewed original research articles.
    • Neurobiological Correlates of Social Conformity and Independence During Mental Rotation
      • Background: When individual judgment conflicts with a group, the individual will often conform his judgment to that of the group. Conformity might arise at an executive level of decision making, or it might arise because the social setting alters the individual’s perception of the world.
      • Methods: We used functional magnetic resonance imaging and a task of mental rotation in the context of peer pressure to investigate the neural basis of individualistic and conforming behavior in the face of wrong information.
      • Results: Conformity was associated with functional changes in an occipital-parietal network, especially when the wrong information originated from other people. Independence was associated with increased amygdala and caudate activity, findings consistent with the assumptions of social norm theory about the behavioral saliency of standing alone.
      • Conclusions: These findings provide the first biological evidence for the involvement of perceptual and emotional processes during social conformity.
      • The Pain of Independence: Compared to behavioral research of conformity, comparatively little is known about the mechanisms of non-conformity, or independence. In one psychological framework, the group provides a normative influence on the individual. Depending on the particular situation, the group’s influence may be purely informational – providing information to an individual who is unsure of what to do. More interesting is the case in which the individual has definite opinions of what to do but conforms due to a normative influence of the group due to social reasons. In this model, normative influences are presumed to act through the aversiveness of being in a minority position
    • A Neural Basis for Social Cooperation
      • Cooperation based on reciprocal altruism has evolved in only a small number of species, yet it constitutes the core behavioral principle of human social life. The iterated Prisoner’s Dilemma Game has been used to model this form of cooperation. We used fMRI to scan 36 women as they played an iterated Prisoner’s Dilemma Game with another woman to investigate the neurobiological basis of cooperative social behavior. Mutual cooperation was associated with consistent activation in brain areas that have been linked with reward processing: nucleus accumbens, the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex. We propose that activation of this neural network positively reinforces reciprocal altruism, thereby motivating subjects to resist the temptation to selfishly accept but not reciprocate favors.
  • These results are alarming because dissent is a precious commodity in modern organizations. In a complex, tightly coupled system, it’s easy for people to miss important threats, and even seemingly small mistakes can have huge consequences. So speaking up when we notice a problem can make a big difference. (Page 155)
  • KRAWCHECK: I think when you get diverse groups together who’ve got these different backgrounds, there’s more permission in the room—as opposed to, “I can’t believe I don’t understand this and I’d better not ask because I might lose my job.” There’s permission to say, “I come from someplace else, can you run that by me one more time?” And I definitely saw that happen. But as time went on, the management teams became less diverse. And in fact, the financial services industry went into the downturn white, male and middle aged. And it came out whiter, maler and middle-aged-er. (Page 176)
  • “The diverse markets were much more accurate than the homogeneous markets,” said Evan Apfelbaum, an MIT professor and one of the study’s authors. “In homogeneous markets, if someone made a mistake, then others were more likely to copy it,” Apfelbaum told us. “In diverse groups, mistakes were much less likely to spread.” (Page 177)
  • Having minority traders wasn’t valuable because they contributed unique perspectives. Minority traders helped markets because, as the researchers put it, “their mere presence changed the tenor of decision making among all traders.” In diverse markets, everyone was more skeptical. (Page 178)
  • In diverse groups, we don’t trust each other’s judgment quite as much, and we call out the naked emperor. And that’s very valuable when dealing with a complex system. If small errors can be fatal, then giving others the benefit of the doubt when we think they are wrong is a recipe for disaster. Instead, we need to dig deeper and stay critical. Diversity helps us do that. (Page 180)
  • Ironically, lab experiments show that while homogeneous groups do less well on complex tasks, they report feeling more confident about their decisions. They enjoy the tasks they do as a group and think they are doing well. (Page 182)
    • Another stampede contribution
  • The third issue was the lack of productive conflict. When amateur directors were just a small minority on a board, it was hard for them to challenge the experts. On a board with many bankers, one CEO told the researchers, “Everybody respects each other’s ego at that table, and at the end of the day, they won’t really call each other out.” (Page 193)
    • Need to figure out what productive conflict is and how to measure it
  • Diversity is like a speed bump. It’s a nuisance, but it snaps us out of our comfort zone and makes it hard to barrel ahead without thinking. It saves us from ourselves. (Page 197)
  • A stranger is someone who is in a group but not of the group. Simmel’s archetypal stranger was the Jewish merchant in a medieval European town—someone who lived in the community but was different from the insiders. Someone close enough to understand the group, but at the same time, detached enough to have an outsider’s perspective. (Page 199)
    • Can AI be trained to be a stranger?
  • But Volkswagen didn’t just suffer from an authoritarian culture. As a corporate governance expert noted, “Volkswagen is well known for having a particularly poorly run and structured board: insular, inward-looking, and plagued with infighting.” On the firm’s twenty-member supervisory board, ten seats were reserved for Volkswagen workers, and the rest were split between senior managers and the company’s largest shareholders. Both Piëch and his wife, a former kindergarten teacher, sat on the board. There were no outsiders. This kind of insularity went well beyond the boardroom. As Milne put it, “Volkswagen is notoriously anti-outsider in terms of culture. Its leadership is very much homegrown.” And that leadership is grown in a strange place. Wolfsburg, where Volkswagen has its headquarters, is the ultimate company town. “It’s this incredibly peculiar place,” according to Milne. “It didn’t exist eighty years ago. It’s on a wind-swept plain between Hanover and Berlin. But it’s the richest town in Germany—thanks to Volkswagen. VW permeates everything. They’ve got their own butchers, they’ve got their own theme park; you don’t escape VW there. And everybody comes through this system.” (Page 209)
  • Most companies have lots of people with different skills. The problem is, when you bring people together to work on the same problem, if all they have are those individual skills . . . it’s very hard for them to collaborate. What tends to happen is that each individual discipline represents its own point of view. It basically becomes a negotiation at the table as to whose point of view wins, and that’s when you get gray compromises where the best you can achieve is the lowest common denominator between all points of view. The results are never spectacular but, at best, average. (Page 236)
    • The idea here is that there is either total consensus and groupthink, or grinding compromise. The authors are focusing too much on the ends of the spectrum. The environmentally aware, social middle is the sweet spot where flocking occurs.
  • Or think about driverless cars. They will almost certainly be safer than human drivers. They’ll eliminate accidents due to fatigued, distracted, and drunk driving. And if they’re well engineered, they won’t make the silly mistakes that we make, like changing lanes while another car is in our blind spot. At the same time, they’ll be susceptible to meltdowns—brought on by hackers or by interactions in the system that engineers didn’t anticipate. (Page 242)
  • We can design safer systems, make better decisions, notice warning signs, and learn from diverse, dissenting voices. Some of these solutions might seem obvious: Use structured tools when you face a tough decision. Learn from small failures to avoid big ones. Build diverse teams and listen to skeptics. And create systems with transparency and plenty of slack. (Page 242)

Thinking slow, acting reflexively

I just finished the cover story in Communications of the ACM, "Human-Level Intelligence or Animal-Like Abilities?". Overall interesting and insightful, but what really caught my eye was Adnan Darwiche's discussion of models and maps:

  • “In his The Book of Why: The New Science of Cause and Effect, Judea Pearl explained further the differences between a (causal) model and a function, even though he did not use the term “function” explicitly. In Chapter 1, he wrote: “There is only one way a thinking entity (computer or human) can work out what would happen in multiple scenarios, including some that it has never experienced before. It must possess, consult, and manipulate a mental causal model of that reality.” He then gave an example of a navigation system based on either reasoning with a map (model) or consulting a GPS system that gives only a list of left-right turns for arriving at a destination (function). The rest of the discussion focused on what can be done with the model but not the function. Pearl’s argument particularly focused on how a model can handle novel scenarios (such as encountering roadblocks that invalidate the function recommendations) while pointing to the combinatorial impossibility of encoding such contingencies in the function, as it must have a bounded size.”
  • This is a Lists and Maps argument, and it leaves out stories, but it also implies something powerful that I need to start thinking about. There is another interface, and it's one that bridges human and machine: the dynamic model. What follows is a bunch of (at the moment – 10.8.18) incomplete thoughts. I think that models/games are another sociocultural interface, one that may be as affected by computers as the Ten Blue Links. So I'm using this as a staging area.
  • Games
    • Games and play are probably the oldest form of a dynamic model. Often, and particularly in groups, they are abstract simulations of conflict of some kind. It can be a simple game of skill such as Ringing the Bull, or a complex wargame, such as chess:
      • “Historically chess must be classed as a game of war. Two players direct a conflict between two armies of equal strength upon a field of battle, circumscribed in extent, and offering no advantage of ground to either side. The players have no assistance other than that afforded by their own reasoning faculties, and the victory usually falls to the one whose strategical imagination is the greater, whose direction of his forces is the more skilful, whose ability to foresee positions is the more developed.” Murray, H.J.R.. A History of Chess: The Original 1913 Edition (Kindle Locations 576-579). Skyhorse Publishing. Kindle Edition.
    • More recently, video games have made it possible for games to follow narrative templates:
      • Person vs. Fate/God
      • Person vs. Self
      • Person vs. Person
      • Person vs. Society
      • Person vs. Nature
      • Person vs. Supernatural
      • Person vs. Technology
    • More on this later. I think this sort of computer-human interaction is really interesting because it seems to open up spaces that would not otherwise be accessible to humans, given the data manipulation requirements (would flight simulators exist without non-human computation?).
  • Moving Maps
    • I would argue that the closer to interactive rates a model runs, the more dynamic it is. A map is a mostly static model, a snapshot of the current geopolitical space. But maps are still dynamic, because the underlying data is dynamic. Borders shift. Countries come into and go out of existence. Islands are created, and coastlines erode. And the next edition of the map will incorporate these changes.
    • Online radar weather maps are an interesting case, since they reflect a rapidly changing environment and often now support playback of the last few hours (and prediction for the next few hours) of imagery at variable time scales.
  • Cognition
    • Traditional simulation and humans
      • Simulations provide a mechanism for humans to explore a space of possibilities that is larger than what can be accomplished by purely mental means. Further, these simulations create artifacts that can be examined independently by other humans.
        • Every model is a theory—a very-well specified theory. In the case of simulations, the models are theories expressed in so much detail that their consequences can be checked by execution on a computer [Bryson, 2015]
      • The assumptions that provide the basis for the simulation are the model. The computer provides the dynamics. The use of simulation allows users to explore the space in the same way that one would explore the environment. Discoveries can be made that exist outside of the social constructs that led to the construction of the simulator and the assumptions that the simulator is based on.
      • What I think this means is that humans bring meaning to the outputs of the simulation. But it also means that there is a level of friction required to get from the outputs as they are computed to a desired level of meaningfulness for the users. In other words, if you have a theory of galaxy formation, and the simulation only matches observations when you add something new, like negative gravity, that addition could reflect a previously undiscovered component in the current theory of the formation of the universe.
      • I think this is the heart of my thinking. Just as maps allow the construction of trajectories across physical (or belief) spaces, dynamic models such as simulations support ways of evaluating potential (and simplified/general) spaces that exist outside the realms of current understanding. This can be in the form of alternatives not yet encountered (a hurricane will hit the Florida panhandle on Thursday) or systems not yet understood (interactive protein-folding simulators).
      • From At Home in the Universe: Physicists roll out this term, “universality class,” to refer to a class of models all of which exhibit the same robust behavior. So the behavior in question does not depend on the details of the model. Thus a variety of somewhat incorrect models of the real world may still succeed in telling us how the real world works, as long as the real world and the models lie in the same universality class. (Page 283)
    • Traditional simulation and ML(models and functions)
      • Darwiche discusses how the ML community has focused on "functional" AI at the expense of "model-based" AI. His insight is that functional AI is closer to reflex, with an analogical similarity to "thinking fast". Similarly, he believes that model-based AI more closely resembles "thinking slow".
      • I would contend that building simulators may be the slowest possible thinking. And I wonder whether using simulators to train functional AI, evaluating that AI against real-world data, and then using the differences to modify the underlying model in a "round trip" approach might be a way to combine the fundamental understandability of simulation with the reflexive speed of trained NN systems (a rough sketch of that loop is at the end of this post).
      • What this means is that "slow" AI explicitly includes building testable models. The tests are not always going to be confirmations of predictions, because of chaos theory. But there can be predictions about the characteristics of a model. For example, I'm working on using agent-based simulations moving in belief space to generate seeds for RNNs that produce strings resembling conversations. Here, the prediction would be about the "spectral" characteristics of the conversation – how words change over time when compared to actual conversations where consensus evolves over time.
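
Here's a minimal sketch of what I mean by that round trip, using toy stand-ins rather than any real pipeline: simulate() plays the role of the causal model, fit_function() is the fast "functional" predictor trained on simulated data, and the loop nudges the simulator parameter toward whichever candidate transfers best to the (stand-in) real-world data. All of the names and the toy sine-wave data are mine, not Darwiche's or Pearl's:

import numpy as np

def simulate(theta, n=200):
    # Stand-in causal model: a noisy sine wave whose frequency is theta
    t = np.linspace(0, 6 * np.pi, n)
    return np.sin(theta * t) + np.random.normal(0, 0.02, n)

def fit_function(series):
    # Stand-in "functional" AI: a one-step linear predictor y[t+1] ~ a*y[t] + b
    a, b = np.polyfit(series[:-1], series[1:], 1)
    return lambda v: a * v + b

def one_step_error(predict, series):
    # Mean squared one-step prediction error on a held-out series
    return np.mean((predict(series[:-1]) - series[1:]) ** 2)

theta = 1.0            # initial guess at the causal parameter
real = simulate(1.3)   # stand-in for real-world sensor data

for step in range(10):
    # Round trip: simulate with candidate parameters, train the fast predictor
    # on each simulation, and keep the candidate that transfers best to real data
    candidates = [theta - 0.05, theta, theta + 0.05]
    errors = [one_step_error(fit_function(simulate(c)), real) for c in candidates]
    theta = candidates[int(np.argmin(errors))]
    print(f"step {step}: theta = {theta:.2f}, real-data error = {min(errors):.4f}")

The particular update rule doesn't matter here – the point is that the slow, inspectable model and the fast, reflexive function improve together, with real-world error as the arbiter.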

At Home in the Universe: The Search for the Laws of Self-Organization and Complexity

[Figure: Kauffman's NK model – fitness landscapes vs. distance for large, medium, and small K]

At Home in the Universe: The Search for the Laws of Self-Organization and Complexity (Kindle Edition)

Stuart Kauffman (Wikipedia)

Quick takeaway:

  • The book's central thesis is that complexity in general (and life in particular) is an inevitable consequence of self-organizing principles that come into play in non-equilibrium systems. He explores the underlying principles in a variety of ways, including binary networks, autocatalytic sets, NK models, and fitness landscapes, both static and co-evolving.
  • When I was reading this 20-year-old book, I had the impression that his work, particularly on how fitness landscapes are explored, has direct relevance to the construction of complex systems today. In particular, I was struck by how applicable his work with fitness landscapes and NK models would be to evaluating the hyperparameter space associated with building neural networks (a quick sketch of that idea follows this list).
  • Another point that I found particularly compelling is his description of the incalculable size of the high-dimension spaces of combinatorial possibility. The number of potential combinations in even a smallish binary network would take more time to enumerate than the universe has existed. As such, there need to be mechanisms that allow for a faster, "good enough" evaluation of the space. That's why we have historical narratives: they describe a path through this space that has worked. As an example, compare tic-tac-toe to chess. In the former, every possibility in the game space can be known. Chess has too many possibilities, so instead there are openings, gambits, and endgames, discovered by chess masters, that come to us as stories.
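
To make the NK idea concrete for myself, here's a small sketch of a random NK landscape with a greedy one-bit adaptive walk on it. This is my own toy version – the neighbor choices, lookup tables, and greedy walk are arbitrary simplifications, not Kauffman's code – but it shows the thing I care about: how the spread of local-peak fitness changes as K is turned up.

import numpy as np

rng = np.random.default_rng(0)

def make_nk(n, k):
    # Each site i depends on itself plus K randomly chosen other sites, with a
    # random fitness contribution for each of the 2^(K+1) local bit patterns
    neighbors = [rng.choice([j for j in range(n) if j != i], size=k, replace=False)
                 for i in range(n)]
    tables = [rng.random(2 ** (k + 1)) for _ in range(n)]
    def fitness(genome):
        total = 0.0
        for i in range(n):
            bits = np.concatenate(([genome[i]], genome[neighbors[i]]))
            idx = int("".join(map(str, bits)), 2)   # local bit pattern -> table index
            total += tables[i][idx]
        return total / n
    return fitness

def adaptive_walk(fitness, n, steps=1000):
    # Greedy hill climb: propose single-bit flips, keep them if fitness improves
    genome = rng.integers(0, 2, n)
    best = fitness(genome)
    for _ in range(steps):
        i = rng.integers(n)
        trial = genome.copy()
        trial[i] ^= 1
        f = fitness(trial)
        if f > best:
            genome, best = trial, f
    return best

n = 16
for k in (0, 2, 8, 15):   # turning the K knob tunes the ruggedness
    peaks = [adaptive_walk(make_nk(n, k), n) for _ in range(5)]
    print(f"K={k:2d}  local-peak fitness: mean={np.mean(peaks):.3f}, spread={np.ptp(peaks):.3f}")

Treating each hyperparameter as a "gene" and its interactions with the others as the K couplings is the analogy I want to explore with the hyperparameter evolver.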

Notes:

  • Chapter 1: At home in the Universe
    • In all these cases, the order that emerges depends on robust and typical properties of the systems, not on the details of structure and function. Under a vast range of different conditions, the order can barely help but express itself. (page 19)
    • Nonequilibrium ordered systems like the Great Red Spot are sustained by the persistent dissipation of matter and energy, and so were named dissipative structures by the Nobel laureate Ilya Prigogine some decades ago. These systems have received enormous attention. In part, the interest lies in their contrast to equilibrium thermodynamic systems, where equilibrium is associated with collapse to the most probable, least ordered states. In dissipative systems, the flux of matter and energy through the system is a driving force generating order. In part, the interest lies in the awareness that free-living systems are dissipative structures, complex metabolic whirlpools. (page 21).
    • The theory of computation is replete with deep theorems. Among the most beautiful are those showing that, in most cases by far, there exists no shorter means to predict what an algorithm will do than to simply execute it, observing the succession of actions and states as they unfold. The algorithm itself is its own shortest description. It is, in the jargon of the field, incompressible. (page 22)
    • And yet, even if it is true that evolution is such an incompressible process, it does not follow that we may not find deep and beautiful laws governing that unpredictable flow. For we are not precluded from the possibility that many features of organisms and their evolution are profoundly robust and insensitive to details. (page 23)
    • Strikingly, such coevolving systems also behave in an ordered regime, a chaotic regime, and a transition regime. (page 27)
      • Note that this reflects our Nomadic (chaotic), Flocking (transition) and Stampeding (ordered) states
    • This seemingly haphazard process also shows an ordered regime where poor compromises are found quickly, a chaotic regime where no compromise is ever settled on, and a phase transition where compromises are achieved, but not quickly. The best compromises appear to occur at the phase transition between order and chaos. (page 28)
  • Chapter 4: Order for Free
    • But evolution requires more than simply the ability to change, to undergo heritable variation. To engage in the Darwinian saga, a living system must first be able to strike an internal compromise between malleability and stability. To survive in a variable environment, it must be stable, to be sure, but not so stable that it remains forever static. Nor can it be so unstable that the slightest internal chemical fluctuation causes the whole teetering structure to collapse. (page 73)
    • It is now well known that in most cells, such molecular feedback can give rise to complex chemical oscillations in time and space. (page 74)
      • Olfati-Saber and graph laplacians!
    • The point in using idealizations in science is that they help capture the main issues. Later one must show that the issues so captured are not altered by removing the idealizations. (page 75)
      • Start with observation, build initial simulation and then measure the difference and modify
    • If started in one state, over time the system will flow through some sequence of states. This sequence is called a trajectory (page 77)
      • I wonder if this can be portrayed as a map? You have to go through one state to get to the next. In autocatalytic systems there may be multiple systems that may be similar and yet have branch points (plant cells, animal cells, bacteria)
    • To answer these questions we need to understand the concept of an attractor. More than one trajectory can flow into the same state cycle. Start a network with any of these different initial patterns and, after churning through a sequence of states, it will settle into the same state cycle, the same pattern of blinking. In the language of dynamical systems, the state cycle is an attractor and the collection of trajectories that flow into it is called the basin of attraction. We can roughly think of an attractor as a lake, and the basin of attraction as the water drainage flowing into that lake. (page 78)
      • Also applicable to social and socio-technical systems. The technology changes the connectivity which could change the shape of the landscape
    • One feature is simply how many “inputs” control any lightbulb. If each bulb is controlled by only one or two other lightbulbs, if the network is “sparsely connected,” then the system exhibits stunning order. If each bulb is controlled by many other light-bulbs, then the network is chaotic. So “tuning” the connectivity of a network tunes whether one finds order or chaos. The second feature that controls the emergence of order or chaos is simple biases in the control rules themselves. Some control rules, the AND and OR Boolean functions we talked about, tend to create orderly dynamics. Other control rules create chaos. (page 80)
      • In our more velocity-oriented system, this is the Social Influence Horizon (SIH), and it is dynamic over time
    • Consider networks in which each lightbulb receives input from only one other. In these K = 1 networks, nothing very interesting happens. They quickly fall into very short state cycles, so short that they often consist of but a single state, a single pattern of illumination. Launch such a K = 1 network and it freezes up, saying the same thing over and over for all time. (page 81)
    • At the other end of the scale, consider networks in which K = N, meaning that each lightbulb receives an input from all lightbulbs, including itself. One quickly discovers that the length of the networks’ state cycles is the square root of the number of states. Consider the implications. For a network with only 200 binary variables—bulbs that can be on or off—there are 2^200 or 10^60 possible states. (page 81)
    • Such K = N networks do show signs of order, however. The number of attractors in a network, the number of lakes, is only N/e, where e is the basis of the natural logarithms, 2.71828. So a K = N network with 100,000 binary variables would harbor about 37,000 of these attractors. Of course, 37,000 is a big number, but very very much smaller than 2^100,000, the size of its state space. (page 82)
      • Need to look into whether there is some kind of equivalent in the SIH settings (a toy version of these K-controlled state cycles is sketched at the end of these notes)
    • The order arises, sudden and stunning, in K = 2 networks. For these well-behaved networks, the length of state cycles is not the square root of the number of states, but, roughly, the square root of the number of binary variables. Let’s pause to translate this as clearly as we can. Think of a randomly constructed Boolean network with N = 100,000 lightbulbs, each receiving K = 2 inputs. The “wiring diagram” would look like a madhatterly scrambled jumble, an impenetrable jungle. Each lightbulb has also been assigned at random a Boolean function. The logic is, therefore, a similar mad scramble, haphazardly assembled, mere junk. The system has 2^100,000 or 10^30,000 states—megaparsecs of possibilities—and what happens? The massive network quickly and meekly settles down and cycles among the square root of 100,000 states, a mere 317. (page 83)
    • The reason complex systems exist on, or in the ordered regime near, the edge of chaos is because evolution takes them there. (page 89)
  • Chapter 5: The Mystery of Ontology
    • Another way to ensure orderly behavior is to construct networks using what are called canalyzing Boolean functions. These Boolean rules have the easy property that at least one of the molecular inputs has one value, which might be 1 or 0, which by itself can completely determine the response of the regulated gene. The OR function is an example of a canalyzing function (Figure 5.3a). An element regulated by this function is active at the next moment if its first, or its second, or both inputs are active at the current moment. Thus if the first input is active, then the regulated element is guaranteed to be active at the next moment, regardless of the activity of the second input. (page 103)
      • This is max pooling
    • For most perturbations, a genomic system on any attractor will exhibit homeostatic return to the same attractor. The cell types are fundamentally stable. But for some perturbations, the system flows to a different attractor. So differentiation occurs naturally. And the further critical property is this: from any one attractor, it is possible to undergo transitions to only a few neighboring attractors, and from them other perturbations drive the system to still other attractors. Each lake, as it were, is close to only a few other lakes. (page 110)
  • Chapter 6: Noah’s Vessel
    • That we eat our meals rather than fusing with them marks, I believe, a profound fact. The biosphere itself is supracritical. Our cells are just subcritical. Were we to fuse with the salad, the molecular diversity this fusion would engender within our cells would unleash a cataclysmic supracritical explosion. The explosion of molecular novelty would soon be lethal to the unhappy cells harboring the explosion. The fact that we eat is not an accident, one of many conceivable methods evolution might have alighted on to get new molecules into our metabolic webs. Eating and digestion, I suspect, reflect our need to protect ourselves from the supracritical molecular diversity of the biosphere. (page 122)
    • We may be discovering a universal in biology, a new law: if our cells are subcritical, then, presumably, so too are all cells—bacteria, bracken, fern, bird, man. Throughout the supracritical explosion of the biosphere, cells since the Paleozoic, cells since the start, cells since 3.45 billion years ago must have remained subcritical. If so, then this subcritical–supracritical boundary must have always set an upper limit on the molecular diversity that can be housed within one cell. A limit exists, then, on the molecular complexity of the cell. (page 126)
    • What if local ecosystems are metabolically poised at the subcritical–supracritical boundary, while the biosphere as a whole is supracritical? Then what a new tale we tell, of life cooperating to beget ever new kinds of molecules, and a biosphere where local ecosystems are poised at the boundary, but have collectively crept slowly upward in total diversity by the supracritical character of the whole planet. The whole biosphere is broadly collectively autocatalytic, catalyzing its own maintenance and ongoing molecular exploration. (page 130)
  • Chapter 8: High-Country Adventures
    • what would happen if, in addition to attempting to evolve such a computer program, we were more ambitious and attempted to evolve the shortest possible program that will carry out the task? Such a “shortest program” is one that is maximally compressed; that is, all redundancies have been squeezed out of it. Evolving a serial computer program is either very hard or essentially impossible because it is incredibly fragile. Serial computer programs contain instructions such as “compare two numbers and do such and such depending on which is larger” or “repeat the following action 1,000 times.” The computation performed is extremely sensitive to the order in which actions are carried out, the precise details of the logic, numbers of iterations, and so forth. The result is that almost any random change in a computer program produces “garbage.” Familiar computer programs are precisely the kind of complex systems that do not have the property that small changes in structure yield small changes in behavior. Almost all small changes in structure lead to catastrophic changes in behavior. (page 152)
      • This is the inherent problem we are grappling with in our “barely controlled systems”. All the elements involved are brittle and un-evolvable
    • It seems likely that there is no way to evolve a maximally compressed program in less time than it would take to exhaustively generate all possible programs, testing each to see if it carries out the desired task. When all redundancy has been squeezed from a program, virtually any change in any symbol would be expected to cause catastrophic variation in the behavior of the algorithm. Thus nearby variants in the program compute very different algorithms. (page 154)
    • because the program is maximally compressed, any change will cause catastrophic alterations in the computation performed. The fitness landscape is entirely random. The next fact is this: the landscape has only a few peaks that actually perform the desired algorithm. In fact, it has recently been shown by the mathematician Gregory Chaitin that for most problems there is only one or, at most, a few such minimal programs. It is intuitively clear that if the landscape is random, providing no clues about good directions to search, then at best the search must be a random or systematic search of all the 10^300 possible programs to find the needle in the haystack, the possibly unique minimal program. This is just like finding Mont Blanc by searching every square meter of the Alps; the search time is, at best, proportional to the size of the program space. (page 155)
      • I’ve been thinking about hyperparameter tuning in the wrong way. There need(?) to be two approaches – one that works in evolvable spaces where there can be gradualism, and another that has to work in discontinuous regions, such as which activation function to use.
    • The question of what kinds of complex systems can be assembled by an evolutionary search process not only is important for understanding biology, but may be of practical importance in understanding technological and cultural evolution as well. The sensitivity of our most complex artifacts to catastrophic failure from tiny causes—for example, the Challenger disaster, the failed Mars Observer mission, and power-grid failures affecting large regions—suggests that we are now butting our heads against a problem that life has nuzzled for enormously longer periods: how to produce complex systems that do not teeter on the brink of collapse. Perhaps general principles governing search in vast spaces of possibilities cover all these diverse evolutionary processes, and will help us design—or even evolve—more robust systems. (page 157)
    • Once we understand the nature of these random landscapes and evolution on them, we will better appreciate what it is about organisms that is different, how their landscapes are nonrandom, and how that nonrandomness is critical to the evolutionary assembly of complex organisms. We will find reasons to believe that it is not natural selection alone that shapes the biosphere. Evolution requires landscapes that are not random. The deepest source of such landscapes may be the kind of principles of self-organization that we seek. (page 165)
    • On random landscapes, finding the global peak by searching uphill is totally useless; we have to search the entire space of possibilities. But even for modestly complex genotypes, or programs, that would take longer than the history of the universe. (page 167)
    • Things capable of evolving—metabolic webs of molecules, single cells, multicellular organisms, ecosystems, economic systems, people—all live and evolve on landscapes that themselves have a special property: they allow evolution to “work.” These real fitness landscapes, the types that underlie Darwin’s gradualism, are “correlated.” Nearby points tend to have similar heights. The high points are easier to find, for the terrain offers clues about the best directions in which to proceed. (page 169)
    • In short, the contribution to overall fitness of the organism of one state of one trait may depend in very complex ways on the states of many other traits. Similar issues arise if we think of a haploid genotype with N genes, each having two alleles. The fitness contribution of one allele of one gene to the whole organism may depend in complex ways on the alleles of other genes. Geneticists call this coupling between genes epistasis or epistatic coupling, meaning that genes at other places on the chromosomes affect the fitness contribution of a gene at a given place. (page 170)
    • The NK model captures such networks of epistatic couplings and models the complexity of the coupling effects. It models epistasis itself by assigning to each trait, or gene, epistatic “inputs” from K other traits or genes. Thus the fitness contribution of each gene depends on the gene’s own allele state, plus the allele states of the K other genes that affect that gene. (page 171)
    • I find the NK model fascinating because of this essential point: altering the number of epistatic inputs per gene, K, alters the ruggedness and number of peaks on the landscape. Altering K is like twisting a control knob. (page 172)
      •  This is really important and should also work with graph laplacians. In other words, not only can we model the connectivity, we can model the stiffness
    • our model organism, with its network of epistatic interactions among its genes, is caught in a web of conflicting constraints. The higher K is—the more interconnected the genes are—the more conflicting constraints exist, so the landscape becomes ever more rugged with ever more local peaks. (page 173)
      • This sounds oddly like how word2vec is calculated, which implies that all connected neural networks are correlated and epistatic.
    • It is these conflicting constraints that make the landscape rugged and multipeaked. Because so many constraints are in conflict, there is a large number of rather modest compromise solutions rather than an obvious superb solution. (page 173)
      • Dimension reduction and polarization are a social solution to this problem
    • landscapes with moderate degrees of ruggedness share a striking feature: it is the highest peaks that can be scaled from the greatest number of initial positions! This is very encouraging, for it may help explain why evolutionary search does so well on this kind of landscape. On a rugged (but not random) landscape, an adaptive walk is more likely to climb to a high peak than a low one. If an adapting population were to “jump” randomly into such a landscape many times and climb uphill each time to a peak, we would find that there is a relationship between how high the peak is and how often the population climbed to it. If we turned our landscapes upside down and sought instead the lowest valleys, we would find that the deepest valleys drain the widest basins. (page 177)
    • The property that the highest peaks are the ones to which the largest fraction of genotypes can climb is not inevitable. The highest peaks could be very narrow but very high pinnacles on a low-lying landscape with modest broad hilltops. If an adapting population were released at a random spot and walked uphill, it would then find itself trapped on the top of a mere local hilltop. The exciting fact we have just discovered is that for an enormous family of rugged landscapes, the NK family, the highest peaks “drain” the largest basins. This may well be a very general property of most rugged landscapes reflecting complex webs of conflicting constraints. (page 177)
      •  I think this may be a function of how the landscapes are made. The K in NK somewhat dictates the amount of correlation
    • Recall another striking feature of random landscapes: with every step one takes uphill, the number of directions leading higher is cut by a constant fraction, one-half, so it becomes ever harder to keep improving. As it turns out, the same property shows up on almost any modestly rugged or very rugged landscape. Figure 8.9 shows the dwindling fraction of fitter neighbors along adaptive walks for different K values (Figure 8.9a) and the increased waiting times to find fitter variants for different K values (Figure 8.9b). Once K is modestly large, about K = 8 or greater, at each step uphill the number of directions uphill falls by a constant fraction, and the waiting time or number of tries to find that way uphill increases by a constant fraction. This means that as one climbs higher and higher, it becomes not just harder, but exponentially harder to find further directions uphill. So if one can make one try per unit time, the rate of improving slows exponentially. (page 178)
      • This is very important in understanding how hyperparameter space needs to be explored
    • Optimal solutions to one part of the overall design problem conflict with optimal solutions to other parts of the overall design. Then we must find compromise solutions to the joint problem that meet the conflicting constraints of the different subproblems. (page 179)
    • Selection, in crafting the kinds of organisms that exist, may also help craft the kinds of landscapes over which they evolve, picking landscapes that are most capable of supporting evolution—not only by mutation alone, but by recombination as well. Evolvability itself is a triumph. To benefit from mutation, recombination, and natural selection, a population must evolve on rugged but “well-correlated” landscapes. In the framework of NK landscapes, the “K knob” must be well tuned. (page 182)
      • This is going to be the trick for machine learning
    • even if the population is released on a local peak, it may not stay there! Simply put, the rate of mutation is so high that it causes the population to “diffuse” away from the peak faster than the selective differences between less fit and more fit mutants can return the population to the peak. An error catastrophe, first discovered by Nobel laureate Manfred Eigen and theoretical chemist Peter Schuster, has occurred, for the useful genetic information built up in the population is lost as the population diffuses away from the peak. (page 184)
    • Eigen and Schuster were the first to emphasize the importance of this error catastrophe, for it implies a limit to the power of natural selection. At a high enough mutation rate, an adapting population cannot assemble useful genetic variants into a working whole; instead, the mutation-induced “diffusion” over the space overcomes selection, pulling the population toward adaptive peaks. (page 184)
    • This limitation is even more marked when seen from another vantage point. Eigen and Schuster also emphasized that for a constant mutation rate per gene, the error catastrophe will arise when the number of genes in the genotype increases beyond a critical number. Thus there appears to be a limit on the complexity of a genome that can be assembled by mutation and selection! (page 184)
    • We are seeking a new conceptual framework that does not yet exist. Nowhere in science have we an adequate way to state and study the interleaving of self-organization, selection, chance, and design. We have no adequate framework for the place of law in a historical science and the place of history in a lawful science. (page 185)
      • This is the research part of the discussion in the iConference paper. Use the themes in the following paragraphs (self-organization, selection, etc.) to build up the areas that need to be discussed and researched.
    • The inevitability of historical accident is the third theme. We can have a rational morphology of crystals, because the number of space groups that atoms in a crystal can occupy is rather limited. We can have a periodic table of the elements because the number of stable arrangements of the subatomic constituents is relatively limited. But once at the level of chemistry, the space of possible molecules is vaster than the number of atoms in the universe. Once this is true, it is evident that the actual molecules in the biosphere are a tiny fraction of the space of the possible. Almost certainly, then, the molecules we see are to some extent the results of historical accidents in this history of life. History arises when the space of possibilities is too large by far for the actual to exhaust the possible. (page 186)
    • Here is a firm foothold: an evolutionary process, to be successful, requires that the landscapes it searches are more or less correlated. (page 186)
      • This is a meta design constraint that needs to be discussed (iConference? Antonio’s workshop?)
    • Nonequilibrium systems can be robust as well. A whirlpool dissipative system is robust in the sense that a wide variety of shapes of the container, flow rates, kinds of fluids, and initial conditions of the fluids lead to vortices that may persist for long periods. So small changes in the construction parameters of the system, and initial conditions, lead to small changes in behavior. (page 187)
    • Whirlpools are attractors in a dynamical system. Attractors, however, can be both stable and unstable. Instability arises in two senses. First, small changes in the construction of the system may dramatically alter the behavior of the system. Such systems are called structurally unstable. In addition, small changes in initial conditions, the butterfly effect, can sharply change subsequent behavior. Conversely, stable dynamical systems can be stable in both senses. Small changes in construction may typically lead to small changes in behavior. The system is structurally stable. And small changes in initial conditions can lead to small changes in behavior. (page 187)
    • We know that there is a clear link between the stability of the dynamical system and the ruggedness of the landscape over which it adapts. Chaotic Boolean networks, and many other classes of chaotic dynamical systems, are structurally unstable. Small changes wreak havoc on their behavior. Such systems adapt on very rugged landscapes. In contrast, Boolean networks in the ordered regime are only slightly modified by mutations to their structure. These networks adapt on relatively smooth fitness landscapes. (page 187)
    • We know from the NK landscape models discussed in this chapter that there is a relationship between the richness of conflicting constraints in a system and the ruggedness of the landscape over which it must evolve. We plausibly believe that selection can alter organisms and their components so as to modify the structure of the fitness landscapes over which those organisms evolve. By taking genomic networks from the chaotic to the ordered regime, selection tunes network behavior to be sure. By tuning epistatic coupling of genes, selection also tunes landscape structure from rugged to smooth. Changing the level of conflicting constraints in the construction of an organism from low to high tunes how rugged a landscape such organisms explore. (page 188)
    • And so we return to a tantalizing possibility: that self-organization is a prerequisite for evolvability, that it generates the kinds of structures that can benefit from natural selection. It generates structures that can evolve gradually, that are robust, for there is an inevitable relationship among spontaneous order, robustness, redundancy, gradualism, and correlated landscapes. Systems with redundancy have the property that many mutations cause no or only slight modifications in behavior. Redundancy yields gradualism. But another name for redundancy is robustness. Robust properties are ones that are insensitive to many detailed alterations. The robustness of the lipid vesicle, or of the cell type attractors in genomic networks in the ordered regime, is just another version of redundancy. Robustness is precisely what allows such systems to be molded by gradual accumulation of variations. Thus another name for redundancy is structural stability—a folded protein, an assembled virus, a Boolean network in the ordered regime. The stable structures and behaviors are ones that can be molded. (page 188)
      • This is why evolution may be the best approach for machine learning hyperparameter tuning
    • If this view is roughly correct, then precisely that which is self-organized and robust is what we are likely to see preeminently utilized by selection. (page 188)
    • The more rare and improbable the forms that selection seeks, the less typical and robust they are and the stronger will be the pressure of mutations to revert to what is typical and robust. (page 189)
  • Chapter 9: Organisms and Artifacts
    •  Might the same general laws govern major aspects of biological and technological evolution? Both organisms and artifacts confront conflicting design constraints. As shown, it is those constraints that create rugged fitness landscapes. Evolution explores its landscapes without the benefit of intention. We explore the landscapes of technological opportunity with intention, under the selective pressure of market forces. But if the underlying design problems result in similar rugged landscapes of conflicting constraints, it would not be astonishing if the same laws governed both biological and technological evolution. (page 192)
    • I begin by describing a simple, idealized kind of adaptive walk—long-jump adaptation—on a correlated but rugged landscape. We have already looked at adaptive walks that proceed by generating and selecting single mutations that lead to fitter variants. Here, an adaptive walk proceeds step-by-step in the space of possibilities, marching steadfastly uphill to a local peak. Suppose instead that we consider simultaneously making a large number of mutations that alter many features at once, so that the organism takes a “long jump” across its fitness landscape. Suppose we are in the Alps and take a single normal step. Typically, the altitude where we land is closely correlated with the altitude from which we started. There are, of course, catastrophic exceptions; cliffs do occur here and there. But suppose we jump 50 kilometers away. The altitude at which we land is essentially uncorrelated with the altitude from which we began, because we have jumped beyond what is called the correlation length of the landscape. (page 192)
    • A very simple law governs such long-jump adaptation. The result, exactly mimicking adaptive walks via fitter single-mutant variants on random landscapes is this: every time one finds a fitter long-jump variant, the expected number of tries to find a still better long-jump variant doubles! (page 193)
      • Intelligence is computation, and expensive
    • As the number of genes increases, long-jump adaptations becomes less and less fruitful; the more complex an organism, the more difficult it is to make and accumulate useful drastic changes through natural selection. (Page 194)
    • The germane issue is this: the “universal law” governing long-jump adaptation suggests that adaptation on a correlated landscape should show three time scales—an observation that may bear on the Cambrian explosion. Suppose that we are adapting on a correlated, but rugged NK landscape, and begin evolving at an average fitness value. Since the initial position is of average fitness, half of all nearby variants will be better. But because of the correlation structure or shape of the landscape, those nearby variants are only slightly better. In contrast, consider distant variants. Because the initial point is of average fitness, again half the distant variants are fitter. But because the distant variants are far beyond the correlation length of the landscape, some of them can be very much fitter than the initial point. (By the same token, some distant variants can be very much worse.) Now consider an adaptive process in which some mutant variants change only a few genes, and hence search the nearby vicinity, while other variants mutate many genes, and hence search far away. Suppose that the fittest of the variants will tend to sweep through the population the fastest. Thus early in such an adaptive process, we might expect the distant variants, which are very much fitter than the nearby variants, to dominate the process. If the adapting population can branch in more than one direction, this should give rise to a branching process in which distant variants of the initial genotype, differing in many ways from one another as well, emerge rapidly. Thus early on, dramatically variant forms should arise from the initial stem. Just as in the Cambrian explosion, the species exhibiting the different major body plans, or phyla, are the first to appear. (Page 195)
    • Because the fraction of fitter nearby variants dwindles very much more slowly than in the long-jump case. In short, in the mid term of the process, the adaptive branching populations should begin to climb local hills. (Page 195)
    • The implication is this: when fitness is average, the fittest variants will be found far away. As fitness improves, the fittest variants will be found closer and closer to the current position. (Page 196)
      • So with hyperparameter tuning, change many variables initially, and reduce the number of changes as the fitness results level out and the search proceeds up the local hill (see the annealed-mutation sketch at the end of these notes)
    • Uniting these two features of rugged but correlated landscapes, we should find radiation that initially both is bushy and occurs among dramatically different variants, and then quiets to scant branching among similar variants later on as fitness increases. (page 198)
    • Despite the fact that human crafting of artifacts is guided by intent and intelligence, both processes often confront problems of conflicting constraints. (Page 202)
      • Dimension reduction is a way of reducing those constraints, but the cost is ignoring the environment. Ideologies must be simple to allow for dense connection without conflict
    • As better designs are found, it becomes progressively harder to find further improvements, so variations become progressively more modest. Insofar as this is true, it is obviously reminiscent of the claims for the Cambrian explosion, where the higher taxa filled in from the top down. (Page 202)
      • This is a design trap. Since designing for more constraints limits hill climbing, designing for individuals and cultures could make everything grind to a halt. Designing for cultures needs to have a light footprint
    • There is something very familiar about this in the context of technological trajectories and learning effects: the rate of finding fitter variants (that is, making better products or producing them more cheaply) slows exponentially, and then ceases when a local optimum is found. This is already almost a restatement of two of the well-known aspects of learning effects. First, the total number of “tries” between finding fitter variants increases exponentially; thus we expect that increasingly long periods will pass with no improvements at all, and then rapid improvements as a fitter variant is suddenly found. Second, adaptive walks that are restricted to search the local neighborhood ultimately terminate on local optima. Further improvement ceases. (Page 204)
    • it seems worthwhile to consider seriously the possibility that the patterns of branching radiation in biological and technological evolution are governed by similar general laws. Not so surprising, this, for all these forms of adaptive evolution are exploring vast spaces of possibilities on more or less rugged “fitness” or “cost” landscapes. If the structures of such landscapes are broadly similar, the branching adaptive processes on them should also be similar. (Page 205)
  • Chapter 10: An Hour upon the Stage
    • The vast puzzle is that the emergent order in communities—in community assembly itself, in coevolution, and in the evolution of coevolution—almost certainly reflects selection acting at the level of the individual organism. (Page 208)
    • Models like those of Lotka and Volterra have provided ecologists with simple “laws” that may govern predator-prey relationships. Similar models study the population changes, or population dynamics, when species are linked into more complex communities with tens, hundreds, or thousands of species. Some of these links are “food webs,” which show which species eat which species. But communities are more complex than food webs, for two species may be mutualists, may be competitors, may be host and parasite, or may be coupled by a variety of other linkages. In general, the diverse populations in such model communities might exhibit simple steady-state patterns of behavior, complex oscillations, or chaotic behavior. (Page 211)
      • Building an ecology for intelligent machines means doing this. I guess we’ll find out what it’s like to build the garden of eden
    • Pimm and his colleagues have struggled to understand these phenomena and have arrived at ideas deeply similar to the models of fitness landscapes we discussed in Chapter 8 and 9. Different communities are imagined as points on a community landscape. Change the initial set of species, and the community will climb to a different peak, a different stable community. (Page 212)
    • In these models, Pimm and friends toss randomly chosen species into a “plot” and watch the population trajectories. If any species goes to zero population, hence extinct, it is “removed” from the plot. The results are both fascinating and still poorly understood. What one finds is that, at first, it is easy to add new species, but as more species are introduced, it becomes harder and harder. That is, more randomly chosen species must be tossed into the plot to find one that can survive with the rest of the assembling community. Eventually, the model community is saturated and stable; no further species can be added. (Page 212)
    • The community-assembly simulation studies are fascinating for a number of reasons beyond the distribution of extinction events. In particular, it is not obvious why model communities should “saturate,” so that it becomes increasingly difficult and finally impossible to add new species. If one constructs a “community landscape,” in which each point of the terrain represents a different combination of species, then the peaks will represent points of high fitness—combinations that are stable. While a species navigates a fitness landscape by mutating genes, a community navigates a community landscape by adding or deleting a species. Pimm argues that as communities climb higher and higher toward some fitness peak, the ascension becomes harder and harder. As the climb proceeds, there are fewer directions uphill, and hence it is harder to add new species. At a peak, no new species can be added. Saturation is attained. And from one initial point the community can climb to different local peaks, each representing a different stable community. (Page 214)
      • In belief spaces, this could help to explain the concept of velocity. It is a mechanism for stumbling into new parts of the fitness landscape. And there is something about how ideas go stale.
    • In a coevolutionary arms race, when the Red Queen dominates, all species keep changing and changing their genotypes indefinitely in a never-ending race merely to sustain their fitness level. (Page 216)
      • This should also apply to belief spaces
    • Two main behaviors are imagined. The first image is of Red Queen behavior, where all organisms keep changing their genotypes in a persistent “arms race,” and hence the coevolving population never settles down to an unchanging mixture of genotypes. The second main image is of coevolving populations within or between species that reach a stable ratio of genotypes, an evolutionary stable strategy, and then stop altering genotypes. Red Queen behavior is, as we will soon see, a kind of chaotic behavior. ESS behavior, when all species stop changing, is a kind of ordered regime. (Page 221)
    • Just as we can use the NK model to show how genes interact with genes or how traits interact with traits within one organism, we can also use it to show how traits interact with traits between organisms in an ecosystem. (Page 225)
    • The ecosystem tends to settle into the ordered, evolutionary stable strategies regime if either epistatic connections, K, within each species are high, so that there are lots of peaks to become trapped on, or if couplings between species, C, is low, so landscapes do not deform much at the adaptive moves of the partners. Or an ESS regime might result when a third parameter, S, the number of species each species interacts with, is low, so that moves by one do not deform the landscapes of many others. (Page 226)
    • There is also a chaotic Red Queen regime where the species virtually never stop coevolving (Figure 10.4c). This Red Queen regime tends to occur when landscapes have few peaks to get trapped on, thus when K is low; when each landscape is deformed a great deal by adaptive moves of other species, thus when C is high; or when S is high so that each species is directly affected by very many other species. Basically, in this case, each species is chasing peaks that move away faster than the species can chase them. (Page 228)
    • At first, it might seem surprising that low K leads to chaotic ecosystems; in the NK Boolean networks, high K led to chaos. The more inter-couplings, the more likely a small change was to propagate throughout and cause the Boolean system to veer off into butterfly behavior. But with coupled landscapes it is the interconnectedness between the species that counts. When intercoupling, C, is high, moves by one species strongly deform the fitness landscapes of its partners. If any trait in the frog is affected by many traits in the fly, and vice versa, then a small change in traits of one species alters the landscape of the other a lot. The system will tend to be chaotic. Conversely, the ecosystem will tend to be in the ordered regime when the couplings between species, C, is sufficiently low. For much the same reason, if we were to keep K and C the same, but change the number of species S any one species directly interacts with, we would find that if the number is low the system will tend to be ordered, while if the number is high the ecosystem will tend to be chaotic. (Page 228)
      • There is something about Tajfel’s opposition identity that might lead to Red Queen scenarios. This would also help to explain the differences between left- and right-wing behaviours. The right wing is driven by “liberal tears” more than the opposition.
    • In fact, the results of our simulations suggest that the very highest fitness occurs precisely between ordered and chaotic behavior! (Page 228)
    • [Figure: EpistasisTuning – caption below]
    • Tuning an ecosystem. As the richness of epistatic connections between species, K, is increased, tuning the ecosystem from the chaotic to the orderly regime, average fitness at first increases and then decreases. It reaches the highest value midway between the extremes. The experiment is based on a 5 × 5 square lattice ecosystem, in which each of 25 species interacts with at most four other species. (Species on the corners of the lattice interact with two neighbors [CON = 2]; species on the edges interact with three neighbors [CON = 3]; and interior species interact with four neighbors [CON = 4]. N = 24, C = 1, S =25.) (page 229)
    • One might start the system with all species having very high K values, coevolving on very rugged landscapes, or all might have very low K values, coevolving on smooth landscapes. If K were not allowed to change, then deep within the high-K ordered regime, species would settle to ESS rapidly; that is, species would climb to poor local peaks and cling to them. In the second, low K, Red Queen chaotic regime, species would never attain fitness peaks. The story no longer stops there, however, for the species can now evolve the ruggedness of their landscapes, and the persistent attempts by species to invade new niches, when successful, will insert a new species into an old niche and may disrupt any ESS attained. (Page 232)
    • [Figure: CoevolvingLandscapes]
    • Figures 10.7 and 10.8 show these results. Each species has N = 44 traits; hence epistatic coupling can be as high as 43, creating random landscapes, or as low as 0, creating Fujiyama landscapes. As generations pass, the average value of K in the coevolving system converges onto an intermediate value of K, 15 to 25, and stays largely within this narrow range of intermediate landscape ruggedness (figure above). Here fitness is high, and the species do reach ESS equilibria where all genotypes stop changing for considerable periods of time, before some invader or invaders disrupt the balance by driving one or more of the coadapted species extinct. (Page 232)
    • When K is held high or low, deep in the ordered regime or deep in the chaotic regime, huge extinction avalanches rumble through the model ecosystems. The vast sizes of these events reflect the fact that fitness is low deep in the ordered regime because of the high-K conflicting constraints, and fitness is low deep in the chaotic regime because of the chaotic rising and plunging fitness values. In either case, low fitness of a species makes it very vulnerable to invasion and extinction. The very interesting result is that when the coevolving system can adjust its K range, it self-tunes to values where average fitness is as high as possible; therefore, the species are least vulnerable to invasion and extinction, so extinction avalanches appear to be as rare as possible. This shows up in Figure 10.8, which compares the size distribution and total number of extinction events deep in the ordered regime and after the system has self-tuned to optimize landscape ruggedness, K, and fitness. After the ecosystem self-tunes, the avalanches of extinction events remain a power law—the slope is about the same as when deep in the ordered regime. But over the same total number of generations, far fewer extinction events of each size occur. The self-tuned ecosystem also has far fewer extinction events than does an ecosystem deep in the chaotic regime. In short, the ecosystem self-tunes to minimize the rate of extinction! As if by an invisible hand, all the coevolving species appear to alter the rugged structures of the landscapes over which they evolve such that, on average, all have the highest fitness and survive as long as possible. (Page 234)
  • Chapter 11: In Search of Excellence
    • Organisms, artifacts, and organizations all evolve and coevolve on rugged, deforming, fitness landscapes. Organisms, artifacts, and organizations, when complex, all face conflicting constraints. So it can be no surprise if attempts to evolve toward good compromise solutions and designs must seek peaks on rugged landscapes. Nor, since the space of possibilities is typically vast, can it be a surprise that even human agents must search more or less blindly. Chess, after all, is a finite game, yet no grand master can sit at the board after two moves and concede defeat because the ultimate checkmate by the opponent 130 moves later can now be seen as inevitable. And chess is simple compared with most of real life. We may have our intentions, but we remain blind watchmakers. We are all, cells and CEOs, rather blindly climbing deforming fitness landscapes. If so, then the problems confronted by an organization—cellular, organismic, business, governmental, or otherwise—living in niches created by other organizations, is preeminently how to evolve on its deforming landscape, to track the moving peaks. (Page 247)
    • Evolution is a search procedure on rugged fixed or deforming landscapes. No search procedure can guarantee locating the global peak in an NP-hard problem in less time than that required to search the entire space of possibilities. And that, as we have repeatedly seen, can be hyperastronomical. Real cells, organisms, ecosystems, and, I suspect, real complex artifacts and real organizations never find the global optima of their fixed or deforming landscapes. The real task is to search out the excellent peaks and track them as the landscape deforms. Our “patches” logic appears to be one way complex systems and organizations can accomplish this. (Page 248)
    • The basic idea of the patch procedure is simple: take a hard, conflict-laden task in which many parts interact, and divide it into a quilt of nonoverlapping patches. Try to optimize within each patch. As this occurs, the couplings between parts in two patches across patch boundaries will mean that finding a “good” solution in one patch will change the problem to be solved by the parts in the adjacent patches. Since changes in each patch will alter the problems confronted by the neighboring patches, and the adaptive moves by those patches in turn will alter the problem faced by yet other patches, the system is just like our model coevolving ecosystems. Each patch is the analogue of what we called a species in Chapter 10. Each patch climbs toward fitness peaks on its own landscape, but in doing so deforms the fitness landscapes of its partners. As we saw, this process may spin out of control in Red Queen chaotic behavior and never converge on any good overall solution. Here, in this chaotic regime, our system is a crazy quilt of ceaseless changes. Alternatively, in the analogue of the evolutionary stable strategy (ESS) ordered regime, our system might freeze up, getting stuck on poor local peaks. Ecosystems, we saw, attained the highest average fitness if poised between Red Queen chaos and ESS order. We are about to see that if the entire conflict-laden task is broken into the properly chosen patches, the coevolving system lies at a phase transition between order and chaos and rapidly finds very good solutions. Patches, in short, may be a fundamental process we have evolved in our social systems, and perhaps elsewhere, to solve very hard problems. (Page 253)
    • It is the very fact that patches coevolve with one another that begins to hint at powerful advantages of patches compared with the Stalinist limit of a single large patch. What if, in the Stalinist limit, the entire lattice settles into a “bad” local minimum, one with high energy rather than an excellent low-energy minimum? The single-patch Stalinist system is stuck forever in the bad minimum. Now let’s think a bit. If we break the lattice up into four 5 × 5 patches just after the Stalinist system hits this bad minimum, what is the chance that this bad minimum is not only a local minimum for the lattice as a whole, but also a local minimum for each of the four 5 × 5 patches individually? You see, in order for the system broken into four patches to “stay” at the same bad minimum, it would have to be the case that the same minimum of the entire lattice happens also to be a minimum for all four of the 5 × 5 patches individually. If not, one or more of the patches will be able to flip a part, and hence begin to move. Once one patch begins to move, the entire lattice is no longer frozen in the bad local minimum. (Page 256)
    • Breaking large systems into patches allows the patches literally to coevolve with one another. Each climbs toward its fitness peaks, or energy minima, but its moves deform the fitness landscape or energy landscape of neighboring patches. (Page 257)
    • In the chaotic Leftist Italian limit, the average energy achieved by the lattice is only a slight bit less, about 0.47. In short, if the patches are too numerous and too small, the total system is in a disordered, chaotic regime. Parts keep flipping between their states, and the average energy of the lattice is high. (Page 258)
    • The answer depends on how rugged the landscape is. Our results suggest that if K is low so the landscape is highly correlated and quite smooth, the best results are found in the Stalinist limit. For simple problems with few conflicting constraints, there are few local minima in which to get trapped. But as the landscape becomes more rugged, reflecting the fact that the underlying number of conflicting constraints is becoming more severe, it appears best to break the total system into a number of patches such that the system is near the phase transition between order and chaos. (Page 258)
    • Here, then, is the first main and new result. It is by no means obvious that the lowest total energy of the lattice will be achieved if the lattice is broken into quilt patches, each of which tries to lower its own energy regardless of the effects on surrounding patches. Yet this is true. It can be a very good idea, if a problem is complex and full of conflicting constraints, to break it into patches, and let each patch try to optimize, such that all patches coevolve with one another. (Page 262)
    • But what, if anything, characterizes the optimum patch-size distribution? The edge of chaos. Small patches lead to chaos; large patches freeze into poor compromises. When an intermediate optimum patch size exists, it is typically very close to a transition between the ordered and the chaotic regime. (Page 262)
      • I’m pretty sure that this can be determined iteratively and to within a desired epsilon. It should resemble the way a neural net converges on an accuracy. (A rough toy version of the patch model appears after these notes.)
    • I find it fascinating that hard problems with many linked variables and loads of conflicting constraints can be well solved by breaking the entire problem into nonoverlapping domains. Further, it is fascinating that as the conflicting constraints become worse, patches become ever more helpful. (Page 264)
    • I suspect that analogues of patches, systems having various kinds of local autonomy, may be a fundamental mechanism underlying adaptive evolution in ecosystems, economic systems, and cultural systems. (Page 254)
    • We are constructing global communication networks, and whipping off into space in fancy tin cans powered by Newton’s third law. The Challenger disaster, brownouts, the Hubble trouble, the hazards of failure in vast linked computer networks—our design marvels press against complexity boundaries we do not understand. (Page 265)
    • Patching systems so that they are poised on the edge of chaos may be extremely useful for two quite different reasons: not only do such systems rapidly attain good compromise solutions, but even more essentially, such poised systems should track the moving peaks on a changing landscape very well. The poised, edge-of-chaos systems are “nearly melted.” Suppose that the total landscape changes because external conditions alter. Then the detailed locations of local peaks will shift. A rigid system deep in the ordered regime will tend to cling stubbornly to its peaks. Poised systems should track shifting peaks more fluidly. (Page 266)
    • Misspecification arises all the time. Physicists and biologists, trying to figure out how complex biopolymers such as proteins fold their linear sequence of amino acids into compact three-dimensional structures, build models of the landscape guiding such folding and solve for the deep energy minima. Having done so, the scientists find that the real protein does not look like the predicted one. The physicists and biologists have “guessed” the wrong potential surface; they have guessed the wrong landscape and hence have solved the wrong hard problem. They are not fools, for we do not know the right problem. (Page 266)
      • Same for hyperparameter tuning.
    • We must learn how to learn in the face of persistent misspecification. Suppose we model the production facility, and learn from that model that a particular way to break it into patches is optimal, allowing the system to converge on a suggested solution. If we have misspecified the problem, the detailed solution is probably of little value. But it may often be the case that the optimal way to break the problem into patches is itself very insensitive to misspecifications of the problem. In the NK lattice and patch model we have studied, a slight change in the NK landscape energies will shift the locations of the minima substantially, but may not alter the fact that the lattice should still be broken into 6 × 6 patches. Therefore, rather than taking the suggested solution to the misspecified problem and imposing it on the real facility, it might be far smarter to take the suggested optimal patching of the misspecified problem, impose that on the real production facility, and then try to optimize performance within each of the now well-defined patches. In short, learning how to optimize the misspecified problem may not give us the solution to the real problem, but may teach us how to learn about the real problem, how to break it into quilt patches that coevolve to find excellent solutions. (Page 267)
      • This is really worth looking at, because it can apply to round-tripping between simulation and real-world systems as well. A fitness test could be the time to divergence.
    • receiver-based communication is roughly this: all the agents in a system that is trying to coordinate behavior let other agents know what is happening to them. The receivers of this information use it to decide what they are going to do. The receivers base their decisions on some overall specification of “team” goal. (Page 268)
    • [Figure: receiver attention (Figure 11.8)]
    • This observation suggests that it might be useful if, in our receiver-based communication system, we allowed sites to ignore some of their customers. Let’s say that each site pays attention to itself and a fraction, P, of its customers, and ignores 1 – P of them. What happens if we “tune” P? What happens is shown in Figure 11.8. The lowest energy for the entire lattice occurs when a small fraction of customers is ignored! As Figure 11.8 shows, if each site tries to help itself and all its customers, the system does less well than if each site pays attention, on average, to about 95 percent of its customers. In the actual numerical simulation, we do this by having each site consider each of its customers and pay attention to that customer with a 95 percent probability. In the limit where each site pays attention to no customers, of course, energy of the entire lattice is very high, and hence bad. (Page 268)
  • Chapter 12: An Emerging Global Civilization
    • Catalytic closure is not mysterious. But it is not a property of any single molecule; it is a property of a system of molecules. It is an emergent property. (Page 275)
    • But Fontana found a second type of reproduction. If he “disallowed” general copiers, so they did not arise and take over the soup, he found that he evolved precisely what I might have hoped for: collectively autocatalytic sets of Lisp expressions. That is, he found that his soup evolved to contain a “core metabolism” of Lisp expressions, each of which was formed as the product of the actions of one or more other Lisp expressions in the soup. (Page 278)
    • Fontana called copiers “level-0 organizations” and autocatalytic sets “level-1 organizations (Page 279)
    • The ever-transforming economy begins to sound like the ever-transforming biosphere, with trilobites dominating for a long, long run on Main Street Earth, replaced by other arthropods, then others again. If the patterns of the Cambrian explosion, filling in the higher taxa from the top down, bespeak the same patterns in early stages of a technological trajectory when many strong variants of an innovation are tried until a few dominant designs are chosen and the others go extinct, might it also be the case that the panorama of species evolution and coevolution, ever transforming, has its mirror in technological coevolution as well? Maybe principles deeper than DNA and gearboxes underlie biological and technological coevolution, principles about the kinds of complex things that can be assembled by a search process, and principles about the autocatalytic creation of niches that invite the innovations, which in turn create yet further niches. It would not be exceedingly strange were such general principles to exist. Organismic evolution and coevolution and technological evolution and coevolution are rather similar processes of niche creation and combinatorial optimization. While the nuts-and-bolts mechanisms underlying biological and technological evolution are obviously different, the tasks and resultant macroscopic features may be deeply similar. (Page 281)
    • The difficulty derives from the fact that economists have no obvious way to build a theory that incorporates what they call complementarities. The automobile and gasoline are consumption complementarities. You need both the car and the gas to go anywhere. (Page 282)
    • The use, I claim, is that we can discover the kinds of things that we would expect in the real world if our “as if” mock-up of the true laws lies in the same universality class. Physicists roll out this term, “universality class,” to refer to a class of models all of which exhibit the same robust behavior. So the behavior in question does not depend on the details of the model. Thus a variety of somewhat incorrect models of the real world may still succeed in telling us how the real world works, as long as the real world and the models lie in the same universality class. (Page 283)
    • An “enzyme” might be a symbol string in the same pot with a “template matching” (000) site somewhere in it. Here the “enzyme match rule” is that a 0 on the enzyme matches a 1 on the substrate, rather like nucleotide base-pairing. Then given such a rule for “enzymatic sites,” we can allow the symbol strings in the pot to act on one another. One way is to imagine two randomly chosen symbol strings colliding. If either string has an “enzymatic site” that matches a “substrate site” on the other, then the enzymatic site “acts on” the substrate site and carries out the substitution mandated in the corresponding row (Page 285)
    • Before we turn to economic models, let us consider some of the kinds of things that can happen in our pot of symbol strings as they act on one another, according to the laws of substitution we might happen to choose. A new world of possibilities lights up and may afford us clues about technological and other forms of evolution. Bear in mind that we can consider our strings as models of molecules, models of goods and services in an economy, perhaps even models of cultural memes such as fashions, roles, and ideas. Bear in mind that grammar models give us, for the first time, kinds of general “mathematical” or formal theories in which to study what sorts of patterns emerge when “entities” can be both the “object” acted on and transformed and the entities that do the acting, creating niches for one another in their unfolding. Grammar models, therefore, help make evident patterns we know about intuitively but cannot talk about very precisely. (Page 287)
    • These grammar models also suggest a possible new factor in economic takeoff: diversity probably begets diversity; hence diversity may help beget growth. (Page 292)
      • Diversity begets growth opportunities. Pure growth is fastest in a monoculture of simple items with short maturity cycles.
    • [Figure: diversity] The number of renewable goods with which an economy is endowed is plotted against the number of pairs of symbol strings in the grammar, which captures the hypothetical “laws of substitutability and complementarity.” A curve separates a subcritical regime below the curve and a supracritical regime above the curve. As the diversity of renewable resources or the complexity of the grammar rules increases, the system explodes with a diversity of products. (Page 193)
    • Friend, you cannot even predict the motions of three coupled pendula. You have hardly a prayer with three mutually gravitating objects. We let loose pesticides on our crops; the insects become ill and are eaten by birds that sicken and die, allowing the insects to proliferate in increased abundance. The crops are destroyed. So much for control. Bacon, you were brilliant, but the world is more complex than your philosophy. (Page 302)
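
Since the patch procedure is basically an algorithm, here is a minimal toy version to make it concrete. This is my own sketch, not Kauffman's actual NK implementation: a small lattice of binary parts whose energies come from random neighbor-coupled lookup tables, relaxed under different patch sizes, where each patch accepts a flip only if its own energy does not get worse. The lattice size, coupling scheme, and step count are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)

SIZE = 10                                # 10 x 10 lattice of binary "parts"
state = rng.integers(0, 2, (SIZE, SIZE))
tables = rng.random((SIZE, SIZE, 32))    # random energy for each of the 2^5
                                         # (self + 4 neighbors) configurations

def part_energy(s, i, j):
    # crude NK-style coupling: a part's energy depends on itself and its
    # four toroidal neighbors via a random lookup table
    idx = (int(s[i, j]) << 4
           | int(s[(i - 1) % SIZE, j]) << 3
           | int(s[(i + 1) % SIZE, j]) << 2
           | int(s[i, (j - 1) % SIZE]) << 1
           | int(s[i, (j + 1) % SIZE]))
    return tables[i, j, idx]

def patch_energy(s, rows, cols):
    return sum(part_energy(s, i, j) for i in rows for j in cols)

def relax(s, patch, steps=10000):
    s = s.copy()
    for _ in range(steps):
        i, j = rng.integers(0, SIZE, 2)
        # the patch that owns (i, j) evaluates only its own energy,
        # ignoring what the flip does to neighboring patches
        r0, c0 = (i // patch) * patch, (j // patch) * patch
        rows, cols = range(r0, r0 + patch), range(c0, c0 + patch)
        before = patch_energy(s, rows, cols)
        s[i, j] ^= 1
        if patch_energy(s, rows, cols) > before:   # patch got worse: undo the flip
            s[i, j] ^= 1
    return s

# patch = 10 is the single "Stalinist" patch; patch = 1 is the "Leftist Italian" limit
for patch in (10, 5, 2, 1):
    final = relax(state, patch)
    mean_e = patch_energy(final, range(SIZE), range(SIZE)) / SIZE**2
    print(f"{patch:2d} x {patch:<2d} patches: mean energy {mean_e:.3f}")

The interesting comparison is between the single 10 x 10 patch, the 1 x 1 limit, and the intermediate sizes; the claim in the notes above is that intermediate patching should find the lowest average energy on rugged landscapes.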

When Worlds Collide


Charlottesville demonstrations, summer 2017 (link)

I’ve been thinking about this picture a lot recently. My research explores how extremist groups can develop using modern computer-mediated communication, particularly recommender systems. This picture lays out the main parts like a set of nested puzzle pieces.

This is a picture of a physical event. In August 2017, various “Alt-Right” online communities came to Charlottesville, Virginia, ostensibly to protest the removal of Confederate statues, which in turn was a response to the Charleston, South Carolina church shooting of 2015. From August 11th through 12th, sanctioned and unsanctioned protests and counter-protests happened in and around Emancipation Park.

Although this is not a runway in Paris, London, or New York, this photo contains what I can best call “fashion statements,” in the most serious sense of the term. They are mechanisms for signifying and conveying identity, immediately visible. What are they trying to say to each other and to us, the public behind the camera?

Standing on the right-hand side of the image is a middle-aged white man, wearing a type of uniform: on his cap and shirt are images of the Confederate “battle flag”. He is wearing a military-style camouflage vest and is carrying an AR-15 rifle and a 9mm handgun. These are archetypal components of the Alt-right identity.

He is yelling at a young black man moving in from the left side of the photo, who is also wearing a uniform of a sort. In addition to the black t-shirt and the dreadlocks, he is carrying multiple cameras – the sine qua non of credibility for young black men in modern America. Lastly, he is wearing literal chains and shackles, ensuring that no one will forget the slave heritage behind these protests.

Let’s consider these carried items, the cameras and the guns. The fashion accessories, if you will.

Cameras exist to record a selected instant of reality. It may be framed, with parts left out and others enhanced, but photographs and videos are a compelling document that something in the world happened. Further, these are internet-connected cameras, capable of sharing their content widely and quickly. These two elements, photographic evidence and distribution, are a foundation of the #blacklivesmatter movement, which is a response to the wide distribution of videos where American police killed unarmed black men. These videos changed the greater social understanding of a reality encountered by a minority that was incomprehensible to the majority before these videos emerged.

Now to the other accessory, the guns. They are mechanisms “of violence to compel our opponent to fulfil our will”. Unlike cameras, which are used to provide a perspective on reality, these weapons are used to create a reality through their display and their threatened use. They also reflect a perception among those who wield them that the world has become so threatening that battlefield weapons make sense at a public event.

Oddly, this may also be a picture of an introduction of sorts. Alt-right and #blacklivesmatter groups almost certainly do not interact significantly. In fact, it is doubtful that, even though they speak a common language, one group can comprehend the other. The trajectories of their defining stories are so different, so misaligned, that the concepts of one slide off the brain of the other.

Within each group, it is a different story. Each group shares a common narrative that is expressed in words, appearance, and belief. And within each group, there is discussion and consensus. The two people we see in the photo are the most extreme examples of their groups: I don’t see anyone else in the image wearing chains or openly carrying guns. The presence of these individuals within their respective groups exerts a pull on the group’s overall orientation and position, on the things it will accept. Additionally, the individuals in one group can cluster in opposition to a different group, a pressure that drives the groups further apart.

Lastly, we come to the third actor in the image, the viewer. The photo was taken by Shelby Lum, an award-winning staff photographer for the Richmond Times-Dispatch. Through framing, focus, and timing, she captures a frame that tells a story. Looking at this photo, we the audience feel that we understand the situation. But photographs are inherently simplifying. The audience fills in the gaps – what happened before, the backstory of the people in the image. This image can mean many things to many people. And as such, it’s what we do with that photo – what we say about it and what we connect with it – that makes the image as much about us as it is about the characters within the frame.

It is those interactions that I focus on, the ways that we as populations interact with information that supports, expands, or undermines our beliefs. My theory is that humans move through belief space like animals move on the plains of the Serengeti. And just as the status of the ecosystem can be inferred through the behaviors of its animal population, the health and status of our belief spaces can be determined by our digital behaviors.

Using this approach, I believe that we may be able to look at populations at scale to determine the “health” of the underlying information. Wildebeest behave differently in risky environments. Their patterns of congregation are different. They can stampede, particularly when the terrain is against them, such as at a narrow water crossing. Humans can behave in similar ways, for example when their core beliefs about their identity are challenged, such as when Galileo was tried by the church for essentially moving man from the literal center of the universe.

I think that this sort of approach can be used to identify at-risk (stampeding) groups and provide avenues for intervention that can “nudge” groups off of dangerous trajectories. It may also be possible to recognize the presence of deliberate actors attempting to drive groups into dangerous terrain, like Native Americans driving buffalo off of pishkun cliffs, or more recently the Russian Internet Research Agency instigating and coordinating a #bluelivesmatter and a #blacklivesmatter demonstration at the same time and place in Texas.

This theory rests on simulations built around the assumption that people coordinate in high-dimensional belief spaces through orientation, velocity, and social influence. Rather than coming to a static consensus, these interactions are dynamic and follow intuitions of belief movement across information terrain. That dynamic process is what I’ll be discussing over the next several posts.
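
As a teaser for those posts, here is a minimal, hypothetical sketch of the kind of dynamic I mean – not the actual simulation code, just agents with a position and velocity in a small “belief space” that align with and drift toward their neighbors. The dimensions, counts, and coefficients below are made up for illustration, and the clustering measure at the end is only a crude stand-in for the kind of population-level “health” signal described above.

import numpy as np

rng = np.random.default_rng(0)
DIMS, AGENTS, STEPS = 5, 50, 200            # hypothetical sizes

pos = rng.normal(0, 1, (AGENTS, DIMS))      # each agent's location in belief space
vel = rng.normal(0, 0.1, (AGENTS, DIMS))    # current direction/rate of belief change

def step(pos, vel, radius=1.5, align=0.05, cohere=0.01, speed_limit=0.2):
    new_vel = vel.copy()
    for i in range(AGENTS):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)       # social influence horizon
        if nbrs.any():
            # align heading with neighbors and drift toward their center
            new_vel[i] += align * (vel[nbrs].mean(axis=0) - vel[i])
            new_vel[i] += cohere * (pos[nbrs].mean(axis=0) - pos[i])
        speed = np.linalg.norm(new_vel[i])
        if speed > speed_limit:             # cap how fast beliefs can change
            new_vel[i] *= speed_limit / speed
    return pos + new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)

# crude stand-in for a population "health" signal: how tightly beliefs have clustered
dists = [np.linalg.norm(pos[i] - pos[j])
         for i in range(AGENTS) for j in range(i + 1, AGENTS)]
print("mean pairwise distance in belief space:", round(float(np.mean(dists)), 3))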

The Radio in Fascist Italy

The Radio in Fascist Italy

  • Philip Cannistraro
  • Journal of European Studies
  • scholars have generally agreed that the control of the mass media by the state is a fundamental prerequisite for the establishment and maintenance of totalitarian dictatorships (pg 127)
  • It is not so widely acknowledged, however, that contemporary totalitarian governments have been largely responsible for the initial growth of the mass media-particularly films and the radio-in their respective countries. (pg 127)
  • In their efforts to expose entire populations to official propaganda, totalitarian regimes encouraged and sponsored the development of the mass media and made them available to every citizen on a large-scale basis. (pg 127)
  • Marconi shrewdly reminded Mussolini that it would be politically wise to place control of the radio in the hands of the state, pointing out the radio’s great potential for propaganda purposes (pg 128)
  • “How many hearts recently beat with emotion when hearing the very voice of the Duce! All this means but one thing: the radio must be extended and extended rapidly. It will contribute much to the general culture of the people” (pg 129)
  • … to insure that EIAR’s programmes conformed to the requirements of the regime’s cultural and political policies. The High Commission included government representatives from each major area of culture: literature, journalism, the fine arts, music, poetry, theatre, and films. The Commission screened the transcripts and plans of all programmes and censored the content of all broadcasts. (pg 130)
  • His broadcast, ‘The Bombardment of Adrianople’, was awaited by the public with great interest and was heralded by critics as the most significant cultural event of the Italian radio. Marinetti’s colourful language and emotion-packed presentation blasted unexpected life into the Italian radio. His flamboyant style introduced the concept of the ‘radio personality’ in Fascist Italy, and the success of his talk encouraged those who, like Marinetti himself, hoped to make the radio a new art form. Broadcasts by Marinetti, most of which were lectures on Futurism, continued to be heard on Italian radio each month for more than a decade. (pg 131)
  • The regime quickly recognized the effectiveness of this technique in arousing listener interest, and it was an easy matter to transfer microphones to mass rallies from where the enthusiastic cheers of the spectators could be heard by radio audiences. (pg 132)
  • The popular announcer Cesare Ferri created the characters ‘Nonno Radio’ (Grandfather Radio) and ‘Zia Radio’ (Aunt Radio), speaking to Italian youth with unprecedented familiarity in terms they easily understood. (pg 132)
  • In order to arouse popular interest in its programmes, EIAR sought to stimulate indirect audience participation through public contests for short stories, poems, songs, and children’s fairy tales. In addition, surveys were conducted among listeners to discover trends in popular taste. (pg 133)
  • The radio had an important task to fulfil in the totalitarian state, that of binding the Italians together into one nation through common ideals and a common cultural experience inspired by Fascism. (pg 134)
  • Mussolini proclaimed Radio Rurale a great achievement of the Fascist revolution, for contemporary observers saw it as a new instrument with which to integrate rural existence into the mainstream of national life. (pg 135)
  • The measures taken by the regime to overcome cultural and political provincialism by creating a mass radio audience in the countryside met with qualified success. (pg 137)
  • Regarded by many as an important step towards the creation of a truly popular culture, Radio Balilla’s purpose was to give the working classes of the city and the countryside the means of acquiring a radio at a modest cost. ‘Through the radio, art, instruction, music, poetry – all the cultural masterworks – cease to be the privilege and unjust monopoly of a few elitist groups’. (pg 139)
  • ‘The ministry, in carrying out its delicate functions of vigilance over radio broadcasting, must guide itself by criteria that are essentially of a political and cultural nature.’ (pg 140)
  • Once the radio had been integrated into the structure of the Ministry of Popular Culture, the Fascists began to develop more effective ways of using broadcasting as a cultural medium. While the number and variety of programmes had begun to increase by the beginning of the decade, it was only after 1934 that they became politically sophisticated. (pg 141)
  • Fascist racial doctrines became a major theme of radio propaganda during World War II. An Italo-German accord signed in 1940 to co-ordinate radio propaganda between the two countries included measures to ‘intensify anti-Jewish propaganda’ on the Italian radio as well as in foreign broadcasts. The Inspectorate for Radio Broadcasting organized an important series of anti-Semitic programmes that centred around the ‘Protocols of Zion’, and talks such as ‘Judaism versus Western Culture’, the ‘Jewish International’, and ‘Judaism Wanted this War’, were broadcast from 1941 to 1943. (pg 143)
  • information received from the Vatican radio during World War II was generally regarded as more accurate than the obvious propaganda broadcasts of the Allies (pg 147)
  • On the radio he astutely employed direct, forceful language, shouting short and vivid sentences to create a sense of drama and arouse emotional reactions. This ‘maniera forte’ that characterized Appelius’ radio talks had a great appeal for many Italians, especially for the ‘little man’ who wanted to be talked to on his own level in terms he could readily understand. In his broadcasts Appelius screamed insults and ranted and raved at the foul enemies of Fascism with a powerful barrage of verbal abuse, inciting his audiences to unmitigated hatred and scorn against the evil ‘anglo-sassoni’ and their allies. (pg 150)
  • In the broad context of Fascist cultural aspirations, all the media aimed at similar goals: the diffusion of standard images and themes that reflected the ideological values of Fascism; the creation of a mass culture that conformed to the needs of the Fascist state in its capacity as a totalitarian government. (pg 154)

Karl Marx and the Tradition of Western Political Thought – The Broken Thread of Tradition

Hannah Arendt – Thinking Without a Banister

These two connected statements had already been torn asunder by a tradition that translated the one by declaring that man is a social being, a banality for which one would not have needed Aristotle, and the other by defining man as the animal rationale, the reasoning animal. (pg 23)

What Aristotle had seen as one and the same human quality, to live together with others in the modus of speaking, now became two distinct characteristics, to have reason and to be social. And these two characteristics, almost from the beginning, were not thought merely to be distinct, but antagonistic to each other: the conflict between man’s rationality and his sociability can be seen throughout our tradition of political thought (pg 23)

The law was now no longer the boundary (which the citizens ought to defend like the walls of the city, because it had the same function for the citizens’ political life as the city’s wall had for their physical existence and distinctness, as Heraclitus had said), but became a yardstick by which rule could be measured. Rule now either conformed to or overruled the law, and in the latter case the rule was called tyrannical – usually, although not necessarily, exerted by one man – and therefore a kind of perverted monarchy. From then on, law and power became the two conceptual pillars of all definitions of government, and these definitions hardly changed during the more than two thousand years that separate Aristotle from Montesquieu. (pg 28)

But bureaucracy should not be mistaken for totalitarian domination. If the October Revolution had been permitted to follow the lines prescribed by Marx and Lenin, which was not the case, it would probably have resulted in bureaucratic rule. The rule of nobody, not anarchy, or disappearance of rule, or oppression, is the ever present danger of any society based on universal equality. (pg 33)

In Marx’s own opinion, what made socialism scientific and distinguished it from that of his predecessors, the “utopian socialists,” was not an economic theory with its scientific insights as well as its errors, but the discovery of a law of movement that ruled matter and, at the same time, showed itself in the reasoning capacity of man as “consciousness,” either of the self or of a class. (pg 35)

The logic of dialectical movement enables Marx to combine nature with history, or matter with man; man becomes the author of a meaningful, comprehensible history because his metabolism with nature, unlike an animal’s, is not merely consumptive but requires an activity, namely, labor. For Marx labor is the uniting link between matter and man, between nature and history. He is a “materialist” insofar as the specifically human form of consuming matter is to him the beginning of everything (pg 35)

Politics, in other words, is derivative in a twofold sense: it has its origin in the pre-political data of biological life, and it has its end in the post-political, highest possibility of human destiny (pg 40)

the fact that the multitude, whom the Greeks called hoi polloi, threatens the existence of every single person, runs like a red thread throughout the centuries that separate Plato from the modern age. In this context it is irrelevant whether this attitude expresses itself in secular terms, as in Plato and Aristotle, or if it does so in the terms of Christianity. (pg 40)

true Christians wohnen fern voneinander, that is, dwell far from each other and are as forlorn among the multitude as were the ancient philosophers. (pg 41)

Each new birth endangers the continuity of the polis because with each new birth a new world potentially comes into being. The laws hedge in these new beginnings and guarantee the preexistence of a common world, the permanence of a continuity that transcends the individual life span of each generation, and in which each single man in his mortality can hope to leave a trace of permanence behind him. (pg 46)

introduced the terms nomo and physei, by law or by nature. Thus, the order of the universe, the kosmos of natural things, was differentiated from the world of human affairs, whose order is laid down by men since it is an order of things made and done by men. This distinction, too, survives in the beginning of our tradition, where Aristotle expressly states that political science deals with things that are nomo and not physei. (pg 47)


Why Trump cooperates with Putin

Some thoughts about Trump’s press conference with Putin, as opposed to the G7 and NATO meetings, from a game-theoretic perspective. Yes, it’s time for some (more) game theory!

Consider the iterated prisoner’s dilemma (IPD), where two prisoners are being interrogated by the police. They have two choices: COOPERATE by remaining silent, or DEFECT by confessing. If both remain silent, they get a light punishment, since the police can’t prove anything. If one prisoner confesses while the other remains silent, the confessing prisoner goes free and the other faces the steepest punishment. If they both confess, they get a moderate punishment.

Axelrod, in The Evolution of Cooperation, shows that there are several strategies that one can use in the IPD and that these strategies vary by the amount of contact expected in the future. If none or very little future interaction is expected, then it pays to DEFECT, which basically means to screw your opponent.

If, on the other hand, there is an expectation of extensive future contact, the best strategy is some form of TIT-FOR-TAT, which means that you start by cooperating with your opponent, but if they defect, then you match their defection with one of your own. If they cooperate, then you match that as well.

This turns out to be a simple, clear strategy that rewards cooperative behavior and punishes jerks. It is powerful enough that a small cluster of TIT-FOR-TAT can invade a population of ALL_DEFECT. It has some weaknesses as well. We’ll get to that later.
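
To make the comparison concrete, here is a minimal iterated prisoner’s dilemma sketch. The payoff numbers (T=5, R=3, P=1, S=0) are the standard values from Axelrod’s tournaments, not anything specific to this post:

# standard Axelrod payoffs (assumed; the post gives no numbers)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def all_defect(my_hist, their_hist):
    return 'D'

def all_cooperate(my_hist, their_hist):
    return 'C'

def tit_for_tat(my_hist, their_hist):
    # cooperate first, then copy the opponent's last move
    return 'C' if not their_hist else their_hist[-1]

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

for name, strat in [('ALL_DEFECT', all_defect), ('ALL_COOPERATE', all_cooperate)]:
    print(name, 'vs TIT_FOR_TAT ->', play(strat, tit_for_tat))

Against ALL_DEFECT, TIT-FOR-TAT gives up only the first round and then matches every defection; against ALL_COOPERATE it cooperates throughout.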

Donald Trump, in the vast majority of his interactions, has presented an ALL_DEFECT strategy. That actually can make sense in the world of real estate, where there are lots of players that perform similar roles and bankruptcy protections exist. In other words, he could screw his banks, partners, and contractors and get away with it, because there was always someone new.

But with Russia in general and Putin in particular, Trump is very cooperative. Why is this case different?

It turns out that after four bankruptcies (1991, 1992, 2004, and 2009), it became impossible for Trump to get loans through traditional channels. In essence, he had defected on enough banks that the well was poisoned.

As the ability to get loans decreased, the amount of cash sales to Russian oligarchs increased. About $109 million was spent purchasing Trump-branded properties from 2003 to 2017, according to McClatchy. Remember that TIT-FOR-TAT can win over ALL_DEFECT if there is prolonged interaction. Fourteen years is a long time to train someone.

Though TIT-FOR-TAT is effective, it’s hard work trying to figure out what the other player is likely to do. TIT-FOR-TAT’s weakness is its difficulty. We simply can’t do Nash equilibria in our heads. However, there are two cognitively easy strategies in the IPD: ALL_DEFECT and ALL_COOPERATE. Trump doesn’t like to work hard, and he doesn’t listen to staff, so I think that once Trump tried DEFECT a few times and got punished for it, he went for ALL_COOPERATE with the Russians. My guess is that they have a whole team of people working on how to keep him there. They do the work so he doesn’t have to think about it.

Which is why, at every turn, Trump cooperates. He knows what will happen if he doesn’t, and frankly, it’s less work than any of the other alternatives. And if you really only care for yourself, that’s a perfectly reasonable place to be.

Postscript – July 18, 2018

I’ve had some discussions about this where folks are saying “That’s too much analysis for this guy. He’s just an idiot who likes strongmen”. But here’s the thing. It’s not about Trump. It’s about Putin.

What do you think the odds were on Trump winning the election in 2015? Now how about 2003, when he started getting Russian cash to prop up his businesses? For $110M, or the price of ONE equipped F/A-18, amortized over 14 years, they were able to secure the near total cooperation of a low-likelihood presidential contender/disruptor and surprise winner.

This is a technique that the Russians have developed and refined for years. So you have to start asking the questions about other individuals and groups that are oddly aligned with Putin’s aims. Russia has a budget that could support thousands of “investments” like Trump, here and abroad.

That’s the key. And that’s my bet on why Mueller is so focused on finances. The Russians learned an important lesson about weapons spending during the Reagan administration: they can’t compete on the level of spending. So it appears that they might be allocating resources towards low-cost social weaponry to augment their physical capabilities. If you want more on this, read Gerasimov’s “The Value of Science Is in the Foresight.”

Postscript 2 – July 21, 2018

A paragraph from the very interesting New Yorker article by Adam Davidson:

“Ledeneva told me that each actor in sistema faces near-constant uncertainty about his status, aware that others could well destroy him. Each actor also knows how to use kompromat to destroy rivals but fears that using such material might provoke an explosive response. While each person in sistema feels near-constant uncertainty, the over-all sistema is remarkably robust. Kompromat is most powerful when it isn’t used, and when its targets aren’t quite clear about how much destructive information there is out there. If everyone sees potential land mines everywhere, it dramatically increases the price for anybody stepping out of line.”

It’s an interesting further twist on ALL_COOPERATE. One of the advantages of nuclear MAD was that it was simple. That it could also apply to more mundane blackmail shouldn’t be surprising.

From I to We: Group Formation and Linguistic Adaption in an Online Xenophobic Forum

From I to We: Group Formation and Linguistic Adaption in an Online Xenophobic Forum

Authors

Venue: Journal of Social and Political Psychology

Quick takeaway:

  • Linguistic study of a xenophobic online forum using Pennebaker’s LIWC text analysis system. Users who stay in the group change from individual to group pronouns and align linguistically. Cognitive complexity also appears to decrease as users align with the group.

Abstract:

  • Much of identity formation processes nowadays takes place online, indicating that intergroup differentiation may be found in online communities. This paper focuses on identity formation processes in an open online xenophobic, anti-immigrant, discussion forum. Open discussion forums provide an excellent opportunity to investigate open interactions that may reveal how identity is formed and how individual users are influenced by other users. Using computational text analysis and Linguistic Inquiry Word Count (LIWC), our results show that new users change from an individual identification to a group identification over time as indicated by a decrease in the use of “I” and increase in the use of “we”. The analyses also show increased use of “they” indicating intergroup differentiation. Moreover, the linguistic style of new users became more similar to that of the overall forum over time. Further, the emotional content decreased over time. The results indicate that new users on a forum create a collective identity with the other users and adapt to them linguistically.

Notes:

  • Social influence is broadly defined as any change – emotional, behavioral, or attitudinal – that has its roots in others’ real or imagined presence (Allport, 1954). (pg 77)
  • Regardless of why an individual displays an observable behavioral change that is in line with group norms, social identification with a group is the basis for the change. (pg 77)
  • In social psychological terms, a group is defined as more than two people that share certain goals (Cartwright & Zander, 1968). (pg 77)
  • Processes of social identification, intergroup differentiation and social influence have to date not been studied in online forums. The aim of the present research is to fill this gap and provide information on how such processes can be studied through language used on the forum. (pg 78)
  • The popularity of social networking sites has increased immensely during the last decade. At the same time, offline socializing has shown a decline (Duggan & Smith, 2013). Now, much of the socializing actually takes place online (Ganda, 2014). In order to be part of an online community, the individual must socialize with other users. Through such socializing, individuals create self-representations (Enli & Thumim, 2012). Hence, the processes of identity formation, may to a large extent take place on the Internet in various online forums. (pg 78)
  • For instance, linguistic analyses of American Nazis have shown that use of third person plural pronouns (they, them, their) is the single best predictor of extreme attitudes (Pennebaker & Chung, 2008). (pg 79)
  • Because language can be seen as behavior (Fiedler, 2008), it may be possible to study processes of social influence through linguistic analysis. Thus, our second hypothesis is that the linguistic style of new users will become increasingly similar to the linguistic style of the overall forum over time (H2). (pg 79)
  • This indicates that the content of the posts in an online forum may also change over time as arguments become more fine-tuned and input from both supporting and contradicting members are integrated into an individual’s own beliefs. This is likely to result (linguistically) in an increase in indicators of cognitive complexity. Hence, we hypothesize that the content of the posts will change over time, such that indicators of complex thinking will increase (H3a). (pg 80)
    • I’m not sure what to think about this. From a dimension-reduction perspective, I expect that as the group becomes more aligned, overall complex thinking will decrease and the outliers will leave, at least in the extreme of a stampede condition.
  • This result indicates that after having expressed negativity in the forum, the need for such expressions should decrease. Hence, we expect that the content of the posts will change such that indicators of negative emotions will decrease, over time (H3b). (pg 80)
  • the forum is presented as a “very liberal forum”, where people are able to express their opinions, whatever they may be. This “extreme liberal” idea implies that there is very little censorship, which has resulted in the forum being highly xenophobic. Nonetheless, due to its liberal self-presentation, the xenophobic discussions are not unchallenged. For example, anti-racist people also join this forum in order to challenge individuals with xenophobic attitudes. This means that the forum is not likely to function as a pure echo chamber, because contradicting arguments must be met with one’s own arguments. Hence, individuals will learn from more experienced users how to counter contradicting arguments in a convincing way. Hence, they are likely to incorporate new knowledge, embrace input and contribute to evolving ideas and arguments. (pg 81)
    • Open debate can lead to the highest level of polarization (M&D)
    • There isn’t diverse opinion. The conversation is polarized, with opponents pushing towards the opposite pole. The question I’d like to see answered is: has extremism increased in the forum?
  • Natural language analyses of anonymous social media forums also circumvent social desirability biases that may be present in traditional self-rating research, which is a particularly important concern in relation to issues related to outgroups (Maass, Salvi, Arcuri, & Semin, 1989; von Hippel, Sekaquaptewa, & Vargas, 1997, 2008). The to-be analyzed media uses “aliases”, yielding anonymity of the users and at the same time allowing us to track individuals over time and analyze changes in communication patterns. (pg 81)
    • After seeing “Ready Player One”, I also wonder if the aliases themselves could be looked at using an embedding space built from the terms used by the users. Then you get distance measurements, t-SNE projections, etc.
  • Linguistic Inquiry Word Count (LIWC; Pennebaker et al., 2007; Chung & Pennebaker, 2007; Pennebaker, 2011b; Pennebaker, Francis, & Booth, 2001) is a computerized text analysis program that computes a LIWC score, i.e., the percentage of various language categories relative to the number of total words (see also www.liwc.net). (pg 81)
    • LIWC2015 ($90) is the gold standard in computerized text analysis. Learn how the words we use in everyday language reveal our thoughts, feelings, personality, and motivations. Based on years of scientific research, LIWC2015 is more accurate, easier to use, and provides a broader range of social and psychological insights compared to earlier LIWC versions
  • Figure 1c shows words overrepresented in later posts, i.e. words where the usage of the words correlates positively with how long the user has been active on the forum. The words here typically lack emotional content and are indicators of higher complexity in language. Again, this analysis provides preliminary support for the idea that time on the forum is related to more complex thinking, and less emotionality.
    • [Figure: word cloud of words overrepresented in later posts]
  • The second hypothesis was that the linguistic style of new users would become increasingly similar to other users on the forum over time. This hypothesis is evaluated by first z-transforming each LIWC score, so that each has a mean value of zero and a standard deviation of one. Then we measure how each post differs from the standardized values by summing the absolute z-values over all 62 LIWC categories from 2007. Thus, low values on these deviation scores indicate that posts are more prototypical, or highly similar, to what other users write. These deviation scores are analyzed in the same way as for Hypothesis 1 (i.e., by correlating each user score with the number of days on the forum, and then t-testing whether the correlations are significantly different from zero). In support of the hypothesis, the results show an increase in similarity, as indicated by decreasing deviation scores (Figure 2). The mean correlation coefficient between this measure and time on the forum was -.0086, which is significant, t(11749) = -3.77, p < 0.001. (pg 85)
    • [Figure: deviation scores over time on the forum] I think it is reasonable to consider this a measure of alignment. (A rough sketch of this deviation-score computation appears after these notes.)
  • Because individuals form identities online and because we see this in the use of pronouns, we also expected to see tendencies of social influence and adaption. This effect was also found, such that individuals’ linguistic style became increasingly similar to other users’ linguistic style over time. Past research has shown that accommodation of communication style occurs automatically when people connect to people or groups they like (Giles & Ogay 2007; Ireland et al., 2011), but also that similarity in communicative style functions as cohesive glue within a group (Reid, Giles, & Harwood, 2005). (pg 86)
  • Still, the results could not confirm an increase in cognitive complexity. It is difficult to determine why this was not observed even though a general trend to conform to the linguistic style on the forum was observed. (pg 87)
    • This is what I would expect. As alignment increases, complexity, as expressed by higher-dimensional thinking, should decrease.
  • This idea would also be in line with previous research that has shown that expressing oneself decreases arousal (Garcia et al., 2016). Moreover, because the forum is not explicitly racist, individuals may have simply adapted to the social norms on the forum prescribing less negative emotional displays. Finally, a possible explanation for the decrease in negative emotional words might be that users who are very angry leave the forum, because of its non-racist focus, and end up in more hostile forums. An interesting finding that was not part of the hypotheses in the present research is that the third person plural category correlated positively with all four negative emotions categories, suggesting that people using for example ‘they’ express more negative emotions (pg 87)
  • In line with social identity theory (Tajfel & Turner, 1986), we also observe linguistic adaption to the group. Hence, our results indicate that processes of identity formation may take place online. (pg 87)
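
The deviation-score procedure quoted above (z-transform each LIWC category, sum the absolute z-values per post, correlate each user’s scores with days on the forum, then t-test the per-user correlations against zero) is straightforward to reproduce. Here’s a rough sketch, assuming a hypothetical DataFrame named posts with a 'user' column, a 'days_on_forum' column, and one column per LIWC category – the column names are mine, not the paper’s:

import pandas as pd
from scipy import stats

def deviation_scores(posts: pd.DataFrame, liwc_cols: list) -> pd.Series:
    # z-transform each LIWC category across all posts, then sum absolute z-values;
    # low scores mean the post is close to the forum's "prototypical" style
    z = (posts[liwc_cols] - posts[liwc_cols].mean()) / posts[liwc_cols].std()
    return z.abs().sum(axis=1)

def alignment_trend(posts: pd.DataFrame, liwc_cols: list):
    posts = posts.assign(deviation=deviation_scores(posts, liwc_cols))
    # per-user correlation between deviation and time on the forum
    per_user = posts.groupby('user').apply(
        lambda g: g['deviation'].corr(g['days_on_forum']))
    per_user = per_user.dropna()            # users with too few posts give NaN
    # one-sample t-test: is the mean correlation different from zero?
    return per_user.mean(), stats.ttest_1samp(per_user, 0.0)

A negative mean correlation, as in the paper, would mean that posts drift toward the forum’s prototypical style the longer a user has been active.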