Category Archives: Article

Sick echo chambers

Over the past year, I’ve been building a model that lets me look at how opinions evolve in belief space, much in the manner that flocks, herds and schools emerge in the wild.

CI_GP_Poster4a

Recently, I was listening to BBC Business Daily on Facebook vs Democracy:

  • Presenter Ed Butler hears a range of voices raising concern about the existential threat that social media could pose to democracy, including Ukrainian government official Dmytro Shymkiv, journalist Berit Anderson, tech investor Roger McNamee and internet pioneer Larry Smarr.

Roger McNamee and Larry Smarr in particular note how social media can be used to increase polarization based on emergent poles. In other words, “normal” opposing views can be amplified by attentive bad actors [page 24] with an eye towards causing generalized societal disruption.

My model explores emergent group interactions, and I wondered how this adversarial herding in information space might work in my model.

These are the rough rules I started with:

  • Herders can teleport, since they are not emotionally invested in their belief space position and orientation
  • Herders appear like multiple individuals that may seem close and trustworthy, but they are actually a distant monolithic entity that is aware of a much larger belief space.
  • Herders amplify arbitrary pre-existing positions. The insight is that they are not herding in a direction, but to increase polarization.
  • To add this to the model, I needed to do the following:
    • Make the size of the agent a function of the weight so we can see what’s going on
    • When in ‘herding mode’ the overall heading of the population is calculated, and the agent that is closest to that heading is selected to be amplified by our trolls/bot army.
    • The weight is increased to X, and the radius is increased to Y.
      • X represents amplification by trolls, bots, etc.
      • A large Y means that the bots can swamp other, normally closer signals. This models the effect of a monolithic entity controlling thousands of bots across the belief space
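The herding step above can be sketched in a few lines. This is a minimal, illustrative sketch rather than the model's actual code: the function name and the plain Euclidean distance are my assumptions.

```python
import math

def herding_step(headings, base_weight, herd_weight, herd_radius):
    """One 'herding mode' update: find the agent whose heading is
    closest to the population's average heading, then amplify it by
    boosting its weight (troll/bot pile-on) and its radius (reach)."""
    n = len(headings)
    dims = len(headings[0])
    # overall heading of the population
    mean = [sum(h[d] for h in headings) / n for d in range(dims)]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # the agent nearest the mean heading becomes the amplified one
    target = min(range(n), key=lambda i: dist(headings[i], mean))
    return target, base_weight * herd_weight, herd_radius

# three agents in a 2D belief space; the middle one sits nearest the mean
headings = [(1.0, 0.0), (0.8, 0.2), (-0.5, 0.9)]
idx, weight, radius = herding_step(headings, base_weight=1.0,
                                   herd_weight=10.0, herd_radius=5.0)
# idx == 1: agent 1 gets weight 10.0 and reach 5.0 this tick
```

Because the target is recomputed every tick, the amplification flits from agent to agent as the average heading drifts.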

Here’s a screenshot of the running simulation. There is an additional set of controls at the upper left that allow herding to be enabled and the weight of the influence to be set. In this case, the herding weight is 10. Though the screenshot shows one large agent shape, the amplified shape flits from agent to agent, always staying closest to the average heading.

2017-10-28

The results are kind of scary. If I set the weight of the herder to 15, I can change the default flocking behavior into an echo chamber.

  • Normal: No Herding
  • Herding weight set to 15, other options the same: HerdingWeight15

I did some additional tweaking to see if having highly-weighted herders ignore each other (they would be coordinated through C&C) would have any effect. It doesn’t. There is enough interaction through the regular populations to keep the alignment space reduced.

It looks like there is a ‘sick echo chamber’ pattern. If the borders are reflective, and the herding weight + influence radius is great enough, then a wall-hugging pattern will emerge.

The influence weight is sort of a credibility score. An agent that has a lot of followers, or says a lot of the things that I agree with, has a lot of influence weight. The range weight is reach.

Since a troll farm or botnet can be regarded as a single organization, interacting with any one of its agents is really interacting with the root entity. So a herding agent has high influence and high reach. The high reach explains the border-hugging behavior.

It’s like there’s someone at the back of the stampede yelling YOU’RE GOING THE RIGHT WAY! KEEP AT IT! And the stampede never goes off the cliff, because it manifests as a swarm.

A loud, distributed voice pointing in a bad direction means wall hugging. Note that there is some kind of floating point error that lets wall huggers creep off the edge: Edgecrawling

With a respawn border, we get the situation where the overall heading of the flock doesn’t change even as it gets destroyed as it goes over the border. Again, since the herding algorithm is looking at the overall population, it never crosses the border but influences all the respawned agents to head towards the same edge: DirectionPreserving

Who’d have thought that there could be something worse than runaway polarization?

Some thoughts about awareness and trust

I had some more thoughts about how behavior patterns emerge from the interplay between trust and awareness. I think the following may be true:

  1. Awareness refers to how complete the knowledge of an information domain is. Completely aware indicates complete information. Unaware indicates not only absent information but no knowledge of the domain at all.
  2. Trust is a social construct to deal with incomplete information. It’s a shortcut that essentially states “based on some set of past experiences, I will assume that this (now trusted) entity will behave in a predictable, reliable, and beneficial way for me”
  3. Healthy behaviors emerge when trust and awareness are equivalent.
  4. Low trust and low awareness is reasonable. It’s like walking through a dark, unknown space. You go slow, bump into things, and adjust.
  5. Low trust and high awareness is paralytic.
  6. High trust and low awareness is reckless. Runaway conditions like echo chambers. The quandary here is that high trust is efficient. Consider the prisoner’s dilemma:
      1. dilemma
      2. In the normal case, the two criminals have to evaluate the best action based on all the actions the other individual could choose, ideally resulting in a Nash Equilibrium. For two players (p) with two choices (c) each, there are four combinations. However, if each player believes that the other player will make the same choice, then only the two diagonal choices remain. For two players, this reduces the complexity by half. But for multiple dissimilar players, the options go up by c^p, so that if this were The Usual Suspects, there would be 2^5 = 32 possibilities to be worked out by each player. But for 5 identical prisoners, the number of choices remains 2, which is basically “what should we all do?”. The more we believe that the others in our social group see the world the same way, the less work we all have to do.
  7. Diversity is a mechanism for extending awareness, but it depends on trusting those who are different. That may be the essence of the explore/exploit dilemma.
  8. Attention is a form of focused awareness, and can reduce general awareness. This is one of the reasons that Tufekci’s thoughts on the attention economy matter so much. As technology increases attention on proportionally more “marketable” items, the population’s social awareness is distorted.
  9. In a healthy group context, trust falls off as a function of awareness. That’s why we get flocking. That is the pattern that emerges when you trust more those who are close, while they in turn do the same, building a web of interaction. It’s kind of like interacting ripples?
  10. This may work for any collection of entities that have varied states that undergo change in some predictable way. If they were completely random, then awareness of the state is impossible, and trust should be zero.
    1. Human agent trust chains might proceed from self to family to friends to community, etc.
    2. Machine agent trust chains might proceed from self to direct connections (thumb drives, etc) to LAN/WAN to WAN
    3. Genetic agent trust chain is short – self to species. Contact is only for reproduction. Interaction would reflect the very long sampling times.
    4. Note that (1) is evolved and is based on incremental and repeated interactions, while (2) is designed and is based on arbitrary rules that can change rapidly. Genetics are maybe dealing with different incentives? The only issue is persisting and spreading (which helps in the persisting)
  11. Computer-mediated-communication disturbs this process (as does probably every form of mass communication) because the trust in the system is applied to the trust of the content. This can work in both ways. For example, lowering trust in the press allows for claims of Fake News. Raising the trust of social networks that channel anonymous online sources allows for conspiracy thinking.
  12. An emerging risk is how this affects artificial intelligence, given that high trust in the algorithms and training sets is currently assumed by the builders.
    1. Low numbers of training sets mean low diversity/awareness,
    2. Low numbers of algorithms (DNNs) also mean low diversity/awareness
    3. Since training/learning is spread by update, the installed base is essentially multiple instances of the same individual. So no diversity and very high trust. That’s a recipe for a stampede of 10,000 self driving cars.
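The prisoner’s-dilemma combinatorics in point 6 can be checked directly. A tiny illustrative snippet (the function name is mine):

```python
def joint_options(choices, players, identical=False):
    """Number of joint outcomes a player must reason over: dissimilar
    players face choices**players combinations, while players who
    assume everyone chooses alike face only the 'diagonal' outcomes."""
    return choices if identical else choices ** players

# two prisoners, two choices each: four combinations...
assert joint_options(2, 2) == 4
# ...but only the two diagonal outcomes if they think alike
assert joint_options(2, 2, identical=True) == 2
# five dissimilar players (The Usual Suspects): 2**5 = 32
assert joint_options(2, 5) == 32
```

The exponential growth in `choices ** players` is exactly why assumed similarity is such an efficient social shortcut.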

Since I wrote this, I’ve had some additional thoughts. I think that our understanding of Awareness and Trust is getting confused with Faith and Doubt. Much of what we believe to be true is no longer based on direct evidence, or even an understandable chain of reasoning. Particularly as more and more of our understanding comes from statistical analysis of large sets of fuzzy data, the line between Awareness and Faith becomes blurred, I think.

Doubt is an important part of faith, and it has to do with the mind coming up against the unknowable. The question does God exist? contains the basics of the tension between faith and doubt. Proving the existence of God can even be thought of as distraction from the attempt to come to terms with the mysteries of life. Within every one of us is the ability to reject all prior religious thought and start our own journey that aligns with our personal understandings.

Conversely, it is impossible to increase awareness without trusting the prior work. Isaac Newton had to trust, in large part, the shoulders of the giants he stood on, even as he was refining notions of what gravity was. So too with Albert Einstein, Rosalind Franklin and others in their fields. The scientific method is a framework for building a large, broad-based, interlocking tapestry of awareness.

When science is approached from a perspective of Faith and Doubt, communities like the Flat Earth Society emerge. It’s based on the faith that since the world appears flat here, it must be flat everywhere, and doubt of a history of esoteric measurements and math that disprove this personally reasonable assumption. From this perspective, the Flat Earthers are a protestant movement, much like the community that emerged around Martin Luther when he rejected the organized, carefully constructed orthodoxy of the Catholic Church based on his personally reasonable interpretation of scripture.

Confusing Awareness and Trust with Faith and Doubt is toxic to both. Ongoing, systemic doubt in trustworthy information will destroy progress, ultimately unraveling the tapestry of awareness. Trust that mysteries can be proven is toxic in its own way, since it gives rise to confusion between reality and fantasy like we see in doomsday cults.

My sense is that as our ability to manipulate and present information is handed over to machines, we will need to educate them in these differences, and make sure that they do not become as confused as we are. Because we are rapidly heading for a time where these machines will be so complex and capable that our trust in them will be based on faith.

Modeling The Law of Group Polarization

INTRODUCTION

The detection of echo chambers and information bubbles is becoming increasingly relevant in this new era of personalized information and ‘fake news’. However, the behavior of groups of individuals has been researched since Le Bon’s 1896 book ‘The Crowd’. Of crowds, he states that ‘one of their general characteristics was an excessive suggestibility, and we have shown to what an extent suggestions are contagious in every human agglomeration; a fact which explains the rapid turning of the sentiments of a crowd in a definite direction’ (Le Bon, 2009, p. 28). The existence of this phenomenon was demonstrated in studies by Moscovici and Doise, who showed that the consensus reached will be most extreme with less cohesive, homogeneous groups [Moscovici, Doise, & Halls, 1994].

Cass Sunstein described these tendencies as The Law of Group Polarization, which states that members of a deliberating group predictably move toward a more extreme point in the direction indicated by the members’ predeliberation tendencies (Sunstein, 2002, p. 176). Sunstein further states that:

  1. A deliberating group, asked to make a group decision, will shift toward a more extreme point in the direction indicated by the median predeliberation judgment.
  2. The tendency of individuals who compose a deliberating group, if polled anonymously after discussion, will be to shift toward a more extreme point in the direction indicated by the median predeliberation judgment.
  3. The effect of deliberation is both to decrease variance among group members, as individual differences diminish, and also to produce convergence on a relatively more extreme point among predeliberation judgments.
  4. People are less likely to shift if the direction advocated is being pushed by unfriendly group members; the likelihood of a shift, and its likely size, are increased when people perceive fellow members as friendly, likeable, and similar to them.
  5. There will be depolarization if and when new persuasive arguments are offered that are opposite to the direction initially favored by group members. There is evidence for this phenomenon.
  6. Excluded by choice or coercion from discussion with others, such groups may become polarized in quite extreme directions, often in part because of group polarization.

Similar social interactions have been modeled in the agent-based simulation community using opinion dynamics, voter and flocking models.  In this paper, I attempt to model Sunstein’s statements using agents navigating within a multidimensional information space where the amount of social influence is controlled. The results of these experiments are a set of identifiable behaviors that range from random walks to tight clusters that resemble the polarized groups described by Sunstein and others.

APPROACH

The intuition behind this research is that group polarization appears to reproduce certain aspects of flocking behavior, but in information space, where individuals can hold overlapping opinions across a large number of dimensions. In other words, individuals within a certain ‘Social Horizon’ (SH) of each other should be capable of influencing each other’s orientation and speed in that space. The closer the heading and speed, the easier it is to align completely with a nearby neighbor. If the speed, and in particular the orientation, are not closely aligned, there will not be as much of an opportunity to ‘join the flock’. These three factors – proximity, speed and heading – appear sufficient to address Sunstein’s statements from the introduction.

Animal flocking has been shown to represent a form of group cognition [Deneubourg & Goss 1989] [Petit et al. 2010]. We chose the Reynolds boids flocking model [Reynolds 1987] as the basis for our model, which was developed to work in any number of dimensions greater than one. We further modified the boids algorithm so that each agent calculates its next position with respect only to the other visible agents’ headings and speeds, without a collision avoidance term.
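An alignment-only boids step of this kind, stripped of the cohesion and separation terms, might look like the following in any number of dimensions. This is a sketch, not the platform’s code; the `rate` parameter standing in for the slew limit is an assumption.

```python
import math

def align_step(positions, velocities, i, horizon, rate=0.5):
    """Alignment-only boids update for agent i: blend its velocity
    toward the average velocity of neighbours inside the social
    horizon. No cohesion or separation (collision) terms are used."""
    dims = len(positions[0])
    neighbours = [j for j in range(len(positions))
                  if j != i and math.dist(positions[i], positions[j]) <= horizon]
    if not neighbours:
        return list(velocities[i])  # nobody visible: keep current heading
    avg = [sum(velocities[j][d] for j in neighbours) / len(neighbours)
           for d in range(dims)]
    # 'rate' is a stand-in for the slew limit on how fast headings turn
    return [v + rate * (a - v) for v, a in zip(velocities[i], avg)]

pos = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
vel = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
# only agent 1 is within the horizon of agent 0
new_v = align_step(pos, vel, 0, horizon=1.0)  # -> [0.5, 0.5]
```

Because only headings and speeds are blended, two agents at the same position but with opposed orientations can still fail to flock, which is the slew-rate effect discussed in the notes below.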

N-dimensional position was handled as a set of named variables that could vary continuously on an arbitrary interval, similarly to the Opinion Dynamics models of Krause [Hegselmann & Krause, 2002], but extended to multiple dimensions. For this initial work, each ‘social dimension’ was considered equivalent. This allowed the straightforward implementation of distance-based cluster detection using DBSCAN [Ester, et al 1996]. Social distance interactions across dissimilar spaces have been discussed by Bogunia [2004] and Schwammle [2007] and show that this approach can be extended to more sophisticated environments. Since agents in this simulation also have an orientation, n-dimensional heading was handled in a similar way. We developed a platform for interactively exploring the simulation space or performing repeatable experiments in batch mode.
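Because every social dimension is treated as equivalent, plain Euclidean distance suffices for cluster detection. A minimal pure-Python DBSCAN shows the idea (a sketch for illustration; production work would use a library implementation such as scikit-learn’s):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point, with -1
    for noise ('unaffiliated' agents). Being density-based, it does
    not need the number of flocks in advance."""
    labels = [None] * len(points)
    cluster = -1
    for p in range(len(points)):
        if labels[p] is not None:
            continue
        nbrs = [q for q in range(len(points))
                if math.dist(points[p], points[q]) <= eps]
        if len(nbrs) < min_pts:
            labels[p] = -1  # noise for now; a cluster may claim it later
            continue
        cluster += 1
        labels[p] = cluster
        seeds = list(nbrs)
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster  # border point: joined, not expanded
            if labels[q] is not None:
                continue
            labels[q] = cluster
            q_nbrs = [r for r in range(len(points))
                      if math.dist(points[q], points[r]) <= eps]
            if len(q_nbrs) >= min_pts:  # core point: expand the cluster
                seeds.extend(q_nbrs)
    return labels

# two tight 'flocks' and one lone wanderer
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 10)]
labels = dbscan(pts, eps=0.5, min_pts=2)  # -> [0, 0, 0, 1, 1, 1, -1]
```

The same distance function works unchanged for any number of dimensions, which is why equivalent dimensions make the clustering straightforward.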

INITIAL RESULTS

Initial experiments were done in 2 dimensions for ease of visualization and understanding. Very rapidly, we were able to see that agent behavior manifested in three phases by varying only the parameter that controlled the ‘social horizon radius’, which is the distance that one agent can ‘see’ another agent. The influence of neighboring agents falls off linearly as a function of distance until the horizon radius is reached. This follows Sunstein’s statement that ‘the likelihood of a shift, and its likely size, are increased when people perceive fellow members as friendly, likeable, and similar to them‘ [pp 181].
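The linear falloff can be written directly: a neighbour’s influence is 1 at zero distance and 0 at the social horizon radius and beyond (function name is illustrative):

```python
def influence(distance, horizon):
    """Linear falloff: full weight at zero distance, zero weight at
    the social horizon radius and beyond."""
    if distance >= horizon:
        return 0.0
    return 1.0 - distance / horizon

assert influence(0.0, 0.2) == 1.0   # right on top of you: full weight
assert influence(0.1, 0.2) == 0.5   # halfway to the horizon
assert influence(0.3, 0.2) == 0.0   # beyond the horizon: invisible
```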

For the simulation runs, agents were initialized in the range (-1.0, 1.0) in each dimension. A reflective barrier was placed at (-2.0, 2.0). This reflects the intuition that many concepts have inherent limits. For example, in fashion, a skirt can only be so low or so high [Curran 1999]. The three phases can be seen in figures 1 – 3 below. In each figure, a screenshot of the mature state is shown on the left. On the right are traces of the distance of each agent from the center of the n-dimensional space. These particular simulations took place in 2D for easier visualization in the screenshots.
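The reflective barrier can be handled per dimension by bouncing any overshoot back inside the interval and flipping that velocity component. A sketch under the (-2.0, 2.0) limits described above:

```python
def reflect(x, v, lo=-2.0, hi=2.0):
    """Reflective barrier for one dimension: bounce any overshoot back
    inside the (lo, hi) interval and flip the velocity component."""
    if x < lo:
        return lo + (lo - x), -v
    if x > hi:
        return hi - (x - hi), -v
    return x, v

# an agent at 1.9 moving +0.3 per step bounces off the wall at 2.0
x, v = reflect(1.9 + 0.3, 0.3)   # x ~= 1.8, v == -0.3
```

Applying this to each dimension independently gives the containing box that produces the wall-hugging behavior discussed earlier.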

all-exploit-si-radius-0

Figure 1: Zero SH – No social interaction and no emergent behaviors

all-exploit-si-ratio-0

Figure 2: Limited SH Radius (0.2) with emergent flocks and rich interaction.

all-exploit-si-radius-10

Figure 3: ‘Infinite’ SH (10.0) with strong group polarization

The first phase is determined entirely by the random generation of the agents. They continue along their paths until they encounter the containing barrier and are reflected back in. The resulting chart shows this random behavior and no emergent pattern. The second phase is the richest, characterised by the emergence of ‘flocks’ that can be discriminated using DBSCAN (each color is a cluster, while white is unaffiliated). Interestingly, the flocks tend to orbit near the center of the space. This makes sense, as any agent offering attraction is on average spending most of its time nearer the center of the stage than the edges. The third phase represents a good example of Sunstein’s definitions. All agents become aligned, and each agent as well as the average belief becomes more extreme over time. The only thing that interferes with the polarized group heading off into infinity is the reflective boundary.

To verify that these patterns emerge in higher dimensions, simulation runs were performed for up to 10 dimensions. The only adjustment that needed to be incorporated is that the social horizon distance is influenced by the number of dimensions. Since distance is the square root of the sum of the squared differences in each dimension, we found that the ‘social radius’ had to be multiplied by the square root of the number of dimensions used to produce the same effect. Once appropriately adjusted, the same three phases emerged.
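The sqrt(d) adjustment can be checked empirically: the average distance between randomly placed agents grows roughly with the square root of the number of dimensions, so a horizon tuned in 2D must be scaled up to ‘see’ the same neighbourhood in higher dimensions. A quick Monte Carlo sketch (sample sizes and seed are arbitrary):

```python
import math
import random

def mean_pair_distance(dims, n=200, seed=1):
    """Average distance between n random points in a (-1, 1)^dims box."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(n)]
    total = count = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += math.dist(pts[i], pts[j])
            count += 1
    return total / count

# going from 2 to 8 dimensions, typical distances grow by roughly
# sqrt(8/2) = 2, so the social radius must be scaled accordingly
ratio = mean_pair_distance(8) / mean_pair_distance(2)
```

The agreement is approximate rather than exact, since the mean distance only approaches the sqrt(d) trend as the dimension count grows, but it is close enough to explain why the unadjusted radius fails in higher dimensions.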

We also examined the effects of having populations with different social horizons. Multiple studies across different disciplines ranging from neurology [Cohen et al. 2007] to computer-human interaction [Munson & Resnick 2010] have shown that populations often have explorer and exploiter subgroups. In game theory, this is known as the multi-armed bandit problem, which explores how to make decisions using incomplete information [Burnetas & Katehakis 1997]. Does the gambler stay with a particular machine (exploit) or go find a different one (explore)? The most effective strategies revolve around a majority-exploit/minority-explore pattern. In the case of the simulation, 10% of the population were given zero SH, which let them explore the environment unhindered, while the other 90% were given the highest SH, which in prior runs had resulted in the group polarization of figure 3. These values reflect the numbers found in the above studies as well as the percentage of diverse news consumers found by Flaxman, Goel and Rao in their study of weblogs [Flaxman et al. 2016].
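Setting up the mixed population is straightforward; this sketch uses the 10%/90% split and the zero/‘infinite’ SH values described above (the dictionary layout is illustrative, not the platform’s data structure):

```python
import random

def make_population(n, explore_frac=0.1, max_horizon=10.0, seed=0):
    """Mixed population: a minority of explorers with zero social
    horizon and a majority of exploiters with an effectively infinite
    one (the 10%/90% split and SH values from the text)."""
    rng = random.Random(seed)
    agents = []
    for i in range(n):
        explorer = i < round(n * explore_frac)
        agents.append({
            "horizon": 0.0 if explorer else max_horizon,
            "pos": [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)],
        })
    return agents

pop = make_population(100)
explorers = sum(a["horizon"] == 0.0 for a in pop)  # -> 10
```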

The results of mixing these populations were startling. Although still tightly clustered, the ‘exploit’ group would rarely interact with the simulation boundary and would instead be pulled back towards the center by the presence of the ‘explorers’.

90-exploitr10-10-explorer0

Figure 4: Two populations interacting (10% Zero SH and 90% Infinite SH)

DISCUSSION [designing interfaces for populations]

This study shows that it is possible to implement many of the claims of Cass Sunstein’s Law of Group Polarization using a simple flocking agent-based model. By manipulating only the ‘social horizon radius’, behaviors ranging from random, to flocking, to polarized groups were produced. Surprisingly, the introduction of even a small number of ‘explorers’ with diverse positions in the information space was capable of sufficiently influencing the behavior of the polarized ‘exploiters’ that they would bend back towards the central areas of the information space.

This work also refines the idea of Group Polarization in that polarization need not be linear – it can curve and meander under the influence of other individuals. Indeed, one need only look at the recent switch in regard to Vladimir Putin by American right-wing politics to see that this can manifest in reality as well. If influence from diverse sources can change extremely polarized behavior and keep it more ‘centered’, then perhaps the design of our search interfaces should reflect the ability of some users to explore, and then in turn use that exploration as a means of influencing more polarized groups. Currently, most work in information retrieval, from Search to Social Networks, aims to provide the most relevant information to the user. This research implies that it may be even more important to provide diverse information.

REFERENCES

Deneubourg, Jean-Louis, and Simon Goss. “Collective patterns and decision-making.” Ethology Ecology & Evolution 1.4 (1989): 295-311.

Petit, Odile, and Richard Bon. “Decision-making processes: the case of collective movements.” Behavioural Processes 84.3 (2010): 635-647.

Reynolds, Craig W. “Flocks, herds and schools: A distributed behavioral model.” ACM SIGGRAPH computer graphics 21.4 (1987): 25-34.

Hegselmann, Rainer, and Ulrich Krause. “Opinion dynamics and bounded confidence models, analysis, and simulation.” Journal of Artificial Societies and Social Simulation 5.3 (2002).

Ester, Martin, et al. “A density-based algorithm for discovering clusters in large spatial databases with noise.” Kdd. Vol. 96. No. 34. 1996.

Curran, Louise. “An analysis of cycles in skirt lengths and widths in the UK and Germany, 1954-1990.” Clothing and Textiles Research Journal 17.2 (1999): 65-72.

Cohen, Jonathan D., Samuel M. McClure, and Angela J. Yu. “Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration.” Philosophical Transactions of the Royal Society of London B: Biological Sciences 362.1481 (2007): 933-942.

Munson, Sean A., and Paul Resnick. “Presenting diverse political opinions: how and how much.” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2010.

Burnetas, Apostolos N., and Michael N. Katehakis. “Optimal adaptive policies for Markov decision processes.” Mathematics of Operations Research 22.1 (1997): 222-255.

Flaxman, Seth, Sharad Goel, and Justin Rao. “Filter bubbles, echo chambers, and online news consumption.” Public Opinion Quarterly (2016): nfw006.

Notes


The Law of Group Polarization

Cass R. Sunstein is currently the Robert Walmsley University Professor at Harvard. From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School.

Relevant flocking and collective decision making papers:

Relevant Sociophysics papers:

Machine learning to classify agents:

The following are what I consider to be the most pertinent statements in the paper, and a discussion of modelling, measurements and potential implications.

In brief, group polarization means that members of a deliberating group predictably move toward a more extreme point in the direction indicated by the members’ predeliberation tendencies. [pp 176]

Note that this statement has two different implications. First, a deliberating group, asked to make a group decision, will shift toward a more extreme point in the direction indicated by the median predeliberation judgment. Second, the tendency of individuals who compose a deliberating group, if polled anonymously after discussion, will be to shift toward a more extreme point in the direction indicated by the median predeliberation judgment. [pp 176]

Notably, groups consisting of individuals with extremist tendencies are more likely to shift, and likely to shift more (a point that bears on the wellsprings of violence and terrorism); the same is true for groups with some kind of salient shared identity (like Republicans, Democrats, and lawyers, but unlike jurors and experimental subjects). When like-minded people are participating in “iterated polarization games” – when they meet regularly, without sustained exposure to competing views – extreme movements are all the more likely. [pp 176]

One of my largest purposes is to cast light on enclave deliberation, a process that I understand to involve deliberation among like-minded people who talk or even live, much of the time, in isolated enclaves. I will urge that enclave deliberation is, simultaneously, a potential danger to social stability, a source of social fragmentation or even violence, and a safeguard against social injustice and unreasonableness. [pp 177]

Without a place for enclave deliberation, citizens in the broader public sphere may move in certain directions, even extreme directions, precisely because opposing voices are not heard at all. [pp 177]

Though standard, the term “group polarization” is somewhat misleading. It is not meant to suggest that group members will shift to the poles, nor does it refer to an increase in variance among groups, though this may be the ultimate result. Instead the term refers to a predictable shift within a group discussing a case or problem. As the shift occurs, groups, and group members, move and coalesce, not toward the middle of antecedent dispositions, but toward a more extreme position in the direction indicated by those dispositions. The effect of deliberation is both to decrease variance among group members, as individual differences diminish, and also to produce convergence on a relatively more extreme point among predeliberation judgments. [pp 178]

It is possible that when people are making judgments individually, they err on the side of caution, expressing a view in the direction that they really hold, but stating that view cautiously, for fear of seeming extreme. Once other people express supportive views, the relevant inhibition disappears, and people feel free to say what, in a sense, they really believe. There appears to be no direct test of this hypothesis, but it is reasonable to believe that the phenomenon plays a role in group polarization and choice shifts. [pp 180]

First, it matters a great deal whether people consider themselves part of the same social group as the other members; a sense of shared identity will heighten the shift, and a belief that identity is not shared will reduce and possibly eliminate it. Second, deliberating groups will tend to depolarize if they consist of equally opposed subgroups and if members have a degree of flexibility in their positions. [pp 180]

Hence people are less likely to shift if the direction advocated is being pushed by unfriendly group members; the likelihood of a shift, and its likely size, are increased when people perceive fellow members as friendly, likeable, and similar to them. [pp 181]

  • This is handled in the model by having a position and heading in the n-dimensional belief space. Two agents may occupy the same space, but unless they are travelling in the same direction or the social influence horizon is very large, there will not be sufficient time to overcome the orientation of the agents (slew rate)

…it has been found to matter whether people think of themselves, antecedently or otherwise, as part of a group having a degree of solidarity. If they think of themselves in this way, group polarization is all the more likely, and it is likely too to be more extreme. Thus when the context emphasizes each person’s membership in the social group engaging in deliberation, polarization increases. [pp 181]

  • The model shows this as the ‘tightness’ of the group, which can also be described as the variance of distance or angle measures.

Depolarization and deliberation without shifts. … In fact the persuasive arguments theory implies that there will be depolarization if and when new persuasive arguments are offered that are opposite to the direction initially favored by group members. There is evidence for this phenomenon. [pp 181]

  • The model shows something slightly different. As long as there is a sufficient diversity of visible opinion, the polarized flock is influenced back towards the center of the (bounded) belief space

“familiar and long debated issues do not depolarize easily.” With respect to such issues, people are simply less likely to shift at all. And when one or more people in a group know the right answer to a factual question, the group is likely to shift in the direction of accuracy. [pp 182]

  • For future work. Agents that have associated over a period of time can be more attracted to each other, creating greater inertia and mimicking this effect.
  • From Presenting Diverse Political Opinions: How and How Much: In interviews with users of several online political spaces, Stromer-Galley found that those participants sought out diverse opinions and enjoyed the range of opinions they encountered online [20]. A study by the Pew Internet and American Life Project during the 2004 election season found that, overall, Americans were not using the Internet to access only supporting materials [8]. Instead, Internet users were more aware than non-Internet users of a range of political arguments, including those that challenged their own positions and preferences.
    • The model divides groups into explorers (diversity seekers) and exploiters (Confirmers and Avoiders). These behave differently with respect to how much they pay attention to their social influence horizons.

Group polarization has particular implications for insulated “outgroups” and (in the extreme case) for the treatment of conspiracies. Recall that polarization increases when group members identify themselves along some salient dimension, and especially when the group is able to define itself by contrast to another group. Outgroups are in this position – of self-contrast to others – by definition. Excluded by choice or coercion from discussion with others, such groups may become polarized in quite extreme directions, often in part because of group polarization. It is for this reason that outgroup members can sometimes be led, or lead themselves, to violent acts. [pp 184]

  • Note the “salient dimension”
  • Anti-belief is designed in, but disabled at this point. Future work
  • Exclusion from other groups can be modelled by disabling the “allow interaction” check between groups, so that agents communicate only within their own group
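A minimal sketch of such an interaction gate, assuming a hypothetical `visible_neighbors` filter (the model's actual check may differ):

```python
# Hypothetical sketch of the "allow interaction" gate: when cross-group
# interaction is disabled, an agent's visible neighbors are restricted to
# its own group, isolating each group's argument pool.
def visible_neighbors(agent, population, allow_intergroup):
    return [a for a in population
            if a is not agent
            and (allow_intergroup or a["group"] == agent["group"])]
```

Toggling the flag off reproduces the "excluded by choice or coercion" condition: each group deliberates only with itself.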

The central problem is that widespread error and social fragmentation are likely to result when like-minded people, insulated from others, move in extreme directions simply because of limited argument pools and parochial influences. As an extreme example, consider a system of one-party domination, which stifles dissent in part because it refuses to establish space for the emergence of divergent positions; in this way, it intensifies polarization within the party while also disabling external criticism [pp 186]

  • Domination is modeled by increasing the radius of social interaction until every agent is visible to every other agent. This does result in maximal polarization.
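The domination condition can be stated as a small geometric check: once the social radius exceeds the population's diameter, all local argument pools collapse into one global pool. A sketch (not the model's code):

```python
import math

# Sketch of the "one-party domination" condition: pushing the social radius
# past the largest pairwise distance makes every agent visible to every
# other agent, so there is only one argument pool.
def all_mutually_visible(positions, radius):
    """True when the radius covers the largest pairwise distance."""
    return all(math.dist(p, q) <= radius
               for i, p in enumerate(positions)
               for q in positions[i + 1:])
```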

A certain measure of isolation will, in some cases, be crucial to the development of ideas and approaches that would not otherwise emerge and that deserve a social hearing. [pp 186]

  • Limiting the radius of social interaction provides this capability in the model. Low, non-zero values create the conditions for individual flocks to emerge, identifiable with DBSCAN clustering, which finds clusters using density measures rather than an a priori choice of the number of clusters.
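For illustration, here is a minimal pure-Python DBSCAN over agent positions. This is the standard algorithm, not the model's own code; in practice one would likely use scikit-learn's implementation:

```python
import math

# Minimal DBSCAN: recovers flocks from agent positions by density rather
# than by fixing the number of clusters in advance (as k-means would).
def dbscan(points, eps, min_pts):
    """Return one cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        # Indices within eps of point i (includes i itself).
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # provisionally noise
            continue
        cluster += 1                     # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:  # j is also core: expand from it
                queue.extend(j_seeds)
    return labels
```

Two tight knots of agents come back as clusters 0 and 1 without anyone specifying "two"; a lone agent is labeled noise.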

Answering Sunstein’s Questions

If people are shifting their position in order to maintain their reputation and self-conception, before groups that may or may not be representative of the public as a whole, is there any reason to think that deliberation is making things better rather than worse? [pp 187]

  • The model implies that visibility between deliberating groups may provide a “restoring force” that brings all groups to a more moderate position between the destructive/reflective boundaries (it is not clear what would happen with “sticky” boundaries). As an aside, movement of the lethal boundaries should produce a corresponding movement of the population’s average center.
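A toy version of that restoring force, assuming a hypothetical one-dimensional belief axis and a visibility parameter in [0, 1] (names and the update rule are illustrative, not the model's):

```python
# Hypothetical sketch of the "restoring force" reading: when groups are
# mutually visible, each group's mean belief is pulled a fraction of the
# way toward the population-wide mean, moderating the extremes.
def restore_step(group_means, visibility):
    """One update step; visibility in [0, 1] scales the pull to center."""
    center = sum(group_means) / len(group_means)
    return [m + visibility * (center - m) for m in group_means]
```

With visibility 0 the groups stay polarized; any positive visibility moves both extremes toward the shared center.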

Implications for Design

By contrast, those who believe that “destabilization” is an intrinsic good, or that the status quo contains sufficient injustice that it is worthwhile to incur the risks of encouraging polarization on the part of diverse groups, will, or should, be drawn to a system that enthusiastically promotes insular deliberation within enclaves [pp 191]

  • The internet seems in many ways to have evolved into a system that encourages destabilization (disruption) and the creation of many isolated groups. This now seems dangerous to the cohesion of society as a whole, where “alternative facts” have become an accepted political reality. Changing that design so that a wider range of points of view is visible could bring back moderation.

The constraints of time and attention call for limits to heterogeneity; and, a separate point, for good deliberation to take place, some views are properly placed off the table, simply because time is limited and they are so invidious, implausible, or both. This point might seem to create a final conundrum: To know what points of view should be represented in any group deliberation, it is important to have a good sense of the substantive issues involved, indeed a sufficiently good sense as to generate judgments about what points of view must be included and excluded. But if we already know that, why should we not proceed directly to the merits? If we already know that, before deliberation occurs, does deliberation have any point at all? [pp 193]

  • It’s not that heterogeneity needs to be limited per se. Rather, there needs to be a mechanism that provides sufficient visibility across individuals and groups so that society as a whole stays reasonably centered. The model shows that flocking can occur across arbitrarily high dimensions, but that information distance increases as a function of the number of dimensions. Computer-mediated communication might address this by projecting high-dimensional sets of concepts into spaces (e.g. self-organizing maps) that individuals and groups of human users can navigate. The goal is to recognize and encourage particular types of flocking behavior while providing enough visibility of credible counter-information that this interaction of flocks of flocks stays within bounds that support a healthy society.
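The dimensionality claim is easy to sanity-check: the mean Euclidean distance between two uniform random points in the unit d-cube grows roughly like sqrt(d/6) (each coordinate contributes an expected squared difference of 1/6), so distances in belief space do stretch as dimensions are added. A quick Monte Carlo sketch:

```python
import math
import random

# Back-of-envelope check: mean distance between two uniform random points
# in the unit d-cube grows roughly like sqrt(d / 6) as d increases.
def mean_pairwise_distance(dims, samples=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        p = [rng.random() for _ in range(dims)]
        q = [rng.random() for _ in range(dims)]
        total += math.dist(p, q)
    return total / samples
```

Running this for d = 2, 10, 100 shows the mean distance climbing steadily, which is the "information distance increases with dimensions" effect the model exhibits.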