👓 Talking to Boys the Way We Talk to Girls | New York Times

Talking to Boys the Way We Talk to Girls by Andrew Reiner (New York Times)

Stereotypically macho messages limit children’s understanding of what it means to be a father, a man and a boy, as well.

👓 Talking to Boys the Way We Talk to Girls | New York Times was originally published on Chris Aldrich


👓 Why Facts Don’t Change Our Minds | The New Yorker

Why Facts Don’t Change Our Minds by Elizabeth Kolbert (The New Yorker)

New discoveries about the human mind show the limitations of reason.

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. (Illustration by Gérard DuBois)

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.


This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. 

Source: Why Facts Don’t Change Our Minds | The New Yorker

👓 Why Facts Don’t Change Our Minds | The New Yorker was originally published on Chris Aldrich | Boffo Socko

Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016


A Problem with Transcripts

In the past few weeks, I’ve seen dozens of news outlets publish multi-paragraph excerpts of speeches from Donald Trump and have been appalled that I was unable to read them in any coherent way. I could not honestly follow or discern any coherent thought or argument in the majority of them. I was a bit shocked because, in listening to him, he often sounds like he has some kind of point, though he seems to be spouting variations on one of ten one-liners he’s been using for over a year now. There’s apparently a flaw in our primal reptilian brains that tricks us into thinking there’s some sort of substance in his speech when there honestly is none. I’m going to have to spend some time reading more on linguistics and cognitive neuroscience. Maybe Steven Pinker knows of an answer?

The situation got worse this week as I turned to news sources for fact-checking of the recent presidential debate. While it’s nice to have web-based annotation tools like Genius[1] and Hypothes.is[2] to mark up these debates, it becomes another thing altogether to understand the meaning of what’s being said in order to actually attempt to annotate it. I’ve included some links so that readers can attempt the exercise for themselves.

Recent transcripts (some with highlights/annotations):

Doubletalk and Doublespeak

It’s been a while since Americans were broadly exposed to actual doubletalk. For the most part our national experience with it has been a passing curiosity highlighted by comedians.

a deliberately unintelligible form of speech in which inappropriate, invented or nonsense syllables are combined with actual words. This type of speech is commonly used to give the appearance of knowledge and thereby confuse, amuse, or entertain the speaker’s audience.
another term for doublespeak
see also n. doubletalk [3]

Since the days of vaudeville (and likely before), comedians have used doubletalk to great effect on stage, in film, and on television. Some comedians who have historically used the technique as part of their acts include Al Kelly, Cliff Nazarro, Danny Kaye, Gary Owens, Irwin Corey, Jackie Gleason, Sid Caesar, Stanley Unwin, and Reggie Watts. I’m including some short video clips below as examples.

A well-known, but foreshortened, form of it was used by Dana Carvey in his Saturday Night Live performances caricaturing George H.W. Bush with a few standard catch phrases padded with pablum in between: “Not gonna do it…”, “Wouldn’t be prudent at this juncture”, and “Thousand Points of Light…”. These snippets, in combination with some creative hand gestures (pointing, lacing fingers together) and a voice melding Mr. Rogers and John Wayne, were the simple constructs that largely transformed a diminutive comedian convincingly into a president.

Doubletalk also has a more “educated” sibling known as technobabble. Engineers are sure to recall a famous (and still very humorous) example of both doubletalk and technobabble in the famed description of the Turboencabulator.[4] (See also, the short videos below.)

Doubletalk comedy examples

Al Kelly on Ernie Kovacs

Sid Caesar

Technobabble examples


Rockwell Turbo Encabulator Version 2


And of course doubletalk and technobabble have closely related cousins named doublespeak and politicobabble. These are far more dangerous than the others because they cross the line from comedy into seriousness and are used by people who make decisions affecting hundreds of thousands to millions, if not billions, of people on the planet. I’m sure an archeo-linguist might be able to discern where exactly politicobabble emerged and managed to evolve into a non-comedic form of speech which people take far more seriously than its close ancestors. One surely suspects some heavy influence from George Orwell’s corpus of work:

The term “doublespeak” probably has its roots in George Orwell’s book Nineteen Eighty-Four.[5] Although the term is not used in the book, it is a close relative of one of the book’s central concepts, “doublethink”. Another variant, “doubletalk”, also referring to deliberately ambiguous speech, did exist at the time Orwell wrote his book, but the usage of “doublespeak” as well as of “doubletalk” in the sense emphasizing ambiguity clearly postdates the publication of Nineteen Eighty-Four. Parallels have also been drawn between doublespeak and Orwell’s classic essay Politics and the English Language [6] , which discusses the distortion of language for political purposes.

in Wikipedia [7]


While politicobabble is nothing new, I did find a very elucidating passage from the 1992 U.S. Presidential Election cycle which seems to be a major part of the Trump campaign playbook:

Repetition of a meaningless mantra is supposed to empty the mind, clearing the way for meditation on more profound matters. This campaign has achieved the first part. I’m not sure about the second.

Candidates are now told to pick a theme, and keep repeating it-until polls show it’s not working, at which point the theme vanishes and another takes its place.

The mantra-style repetition of the theme of the week, however, leaves the impression that Teen Talk Barbie has acquired some life-size Campaign Talk Ken dolls. Pull the string and you get: ‘Congress is tough,’ ‘worst economic performance since the Depression,’ or ‘a giant sucking sound south of the border.’

A number of words and phrases, once used to express meaningful concepts, are becoming as useful as ‘ommm’ in the political discourse. Still, these words and phrases have meanings, just not the ones the dictionary originally intended.

Joanne Jacobs
in A Handy Guide To Politico-babble in the Chicago Tribune on October 31, 1992 [8]


In the continuation of the article, Jacobs goes on to give a variety of examples of the term as well as a “translation” guide for some of the common politicobabble words from that particular election. I’ll leave it to the capable hands of others (perhaps in the comments, below?) to come up with the translation guide for our current political climate.

The interesting evolutionary change I’ll note for the current election cycle is that Trump hasn’t delved deeply enough into any of his themes to offend anyone significantly. This has allowed him to stay with the dozen or so themes he started out using, and he therefore hasn’t needed to change them as in campaigns of old.

Filling in the Blanks

These forms of pseudo-speech are all meant to fool us into thinking that something of substance is being discussed and that a conversation is happening, when in fact nothing is really being communicated at all. Most of the intended meaning of, and reaction to, such speech seems to stem from the demeanor of the speaker as well as, in some part, from the reactions of the surrounding interlocutors and audience. In reading Donald Trump transcripts, an entirely different meaning (or lack thereof) is more quickly realized because the surrounding elements which prop up the narrative have been completely stripped away. In the transcript version, gone is the hypnotizing element of the crowd which is vehemently sure that the emperor is truly wearing clothes.

In many of these transcripts, in fact, I find so little is being said that the listener is actually being forced to piece together the larger story in their head. Being forced to fill in the blanks in this way leaves too much of the communication up to the listener who isn’t necessarily engaged at a high level. Without more detail or context to understand what is being communicated, the listener is far more likely to fill in the blanks to fit a story that doesn’t create any cognitive dissonance for themselves — in part because Trump is usually smiling and welcoming towards his adoring audiences.

One will surely recall that Trump even wanted Secretary Clinton to be happy during the debate when he said, “Now, in all fairness to Secretary Clinton — yes, is that OK? Good. I want you to be very happy. It’s very important to me.” (This question also doubles as an example of a standard psychological sales tactic of attempting to get the purchaser to start by saying ‘yes’ as a means to keep them saying yes while moving them towards making a purchase.)

His method of communicating by leaving large holes in his meaning reminds me of the way our brain smooths out information as indicated in this old internet meme [9]:

I cdn’uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Scuh a cdonition is arpppoiatrely cllaed typoglycemia.
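For fun, the scrambling trick the meme describes — keep each word’s first and last letters in place and shuffle the interior — is easy to sketch in a few lines of Python. This is my own toy, not anything from the (apocryphal) Cambridge research the meme cites:

```python
import random

def typoglycemia(text, seed=42):
    """Shuffle the interior letters of each word, keeping the first
    and last letters fixed, as the meme describes."""
    rng = random.Random(seed)
    scrambled = []
    for word in text.split():
        # Only scramble plain words long enough to have an interior.
        if len(word) > 3 and word.isalpha():
            middle = list(word[1:-1])
            rng.shuffle(middle)
            word = word[0] + "".join(middle) + word[-1]
        scrambled.append(word)
    return " ".join(scrambled)

print(typoglycemia("reading scrambled words is surprisingly easy"))
```

Run it on any sentence and you can test the claim on yourself: the output stays legible precisely because each word’s outer letters, and the overall shape, survive the shuffle.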


I’m also reminded of the biases and heuristics research carried out in part (and the remainder cited) by Daniel Kahneman in his book Thinking, Fast and Slow [10], in which he discusses the mechanics of how system 1 and system 2 work in our brains. Is Trump taking advantage of deficits in our brains’ language processing, akin to system 1 biases, to win large blocks of votes? Is he creating a virtual real-time Choose-Your-Own-Adventure to subvert the laziness of the electorate? Kahneman would suggest that the combination of what Trump does say and what he doesn’t leaves it up to every individual listener to create their own story. Their system 1 is going to default to the easiest and most palatable one available to them: a happy story that fits their own worldview and is likely to encourage them to support Trump.

Ten Word Answers

As an information theorist, I know all too well that there must be a ‘linguistic Shannon limit’ to the amount of semantic meaning one can compress into a single word. [11] One is ultimately forced to attempt to form sentences to convey more meaning. But usually the less politicians say, the less trouble they can get into — a lesson hard won through generations of political fighting.
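For the curious, the non-semantic side of this is at least computable: the Shannon entropy of a stream of words measures how many bits per word its frequencies require. This little Python toy of mine illustrates the idea — a varied vocabulary carries more bits per word than a repeated mantra — though the semantic “limit” I’m gesturing at above is far harder to pin down and is not what this formula captures:

```python
from collections import Counter
from math import log2

def bits_per_word(text):
    """Shannon entropy, in bits per word, of the text's empirical
    word-frequency distribution: H = -sum(p * log2(p))."""
    words = text.lower().split()
    total = len(words)
    return -sum((c / total) * log2(c / total)
                for c in Counter(words).values())

print(bits_per_word("yes yes no no"))      # two equiprobable words: 1 bit each
print(bits_per_word("great great great"))  # a one-word mantra carries 0 bits
```

By this crude measure, a stump speech built from ten recycled one-liners sits near the bottom of the scale.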

I’m reminded of a scene from The West Wing television series. Season 4, episode 6, “Game On,” which aired on October 30, 2002 on NBC, has a poignant moment (video clip below) which is germane to our subject: [12]

Moderator: Governor Ritchie, many economists have stated that the tax cut, which is the centrepiece of your economic agenda, could actually harm the economy. Is now really the time to cut taxes?
Governor Ritchie, R-FL: You bet it is. We need to cut taxes for one reason – the American people know how to spend their money better than the federal government does.
Moderator: Mr. President, your rebuttal.
President Bartlet: There it is…
That’s the 10 word answer my staff’s been looking for for 2 weeks. There it is.
10 word answers can kill you in political campaigns — they’re the tip of the sword.
Here’s my question: What are the next 10 words of your answer?
“Your taxes are too high?” So are mine…
Give me the next 10 words: How are we going to do it?
Give me 10 after that — I’ll drop out of the race right now.
Every once in a while — every once in a while, there’s a day with an absolute right and an absolute wrong, but those days almost always include body counts. Other than that there aren’t very many un-nuanced moments in leading a country that’s way too big for 10 words.
I’m the President of the United States, not the president of the people who agree with me. And by the way, if the left has a problem with that, they should vote for somebody else.

As someone who studies information theory and complexity theory, and who even delves into sub-topics like complexity economics, I can agree wholeheartedly with the sentiment. Though again, here I can also see the massive gaps between system 1 and system 2 that force us to want to simplify things down to such a base level that we don’t have to do the work to puzzle them out.

(And yes, that is Jennifer Aniston’s father playing the moderator.)

One can’t help but wonder why Mr. Trump doesn’t seem to have ever gone past the first ten words. Is it because he isn’t capable? Isn’t interested? Or does he instinctively know better? It would seem that he’s been doing business by using the uncertainty inherent in his speech for decades, always operating on what he meant (or thought he meant) rather than on what the other party heard and thought they understood. If it ain’t broke, don’t fix it.

Idiocracy or Something Worse?

In our increasingly specialized world, people eventually have to give in and quit doing some tasks that everyone used to do for themselves. Yesterday I saw a life-worn woman in her 70s pushing a wheeled wire basket with a five-gallon container of water from the store to her home. As she shuffled along, I contemplated Thracians of the fourth century BCE doing the same thing, except that they likely carried amphorae, possibly with a yoke, and without the benefit of a $10 manufactured shopping cart. Twenty thousand years before that, people were still carrying their own water, possibly without even the benefit of earthenware containers. Things in human history have changed very slowly for the most part, but as we continually sub-specialize further and further, we need to remember that we can’t give up one of the primary functions that makes us human: the ability to think deeply and analytically for ourselves.

I suspect that far too many people are too wrapped up in their own lives and problems to listen to more than the ten word answers our politicians are advertising to us. We need to remember to ask for the next ten words and the ten after that.

Otherwise there are two extreme possible outcomes:

We’re either at the beginning of what Mike Judge would term Idiocracy,[13]

Or we’re headed toward what Michiko Kakutani is “subtweeting” about in her recent review “In ‘Hitler’ an Ascent from ‘Dunderhead’ to Demagogue”[14] of Volker Ullrich’s new book Hitler: Ascent 1889–1939.[15]


Here, one is tempted to quote George Santayana’s famous line (from The Life of Reason, 1905), “Those who cannot remember the past are condemned to repeat it.” However, I far prefer the following as more apropos to our present national situation:

“Want of foresight, unwillingness to act when action would be simple and effective, lack of clear thinking, confusion of counsel until the emergency comes, until self-preservation strikes its jarring gong: these are the features which constitute the endless repetition of history.”

Sir Winston Leonard Spencer-Churchill (November 30, 1874 – January 24, 1965), a British statesman, historian, writer and artist,
in the House of Commons, 2 May 1935, after the Stresa Conference, in which Britain, France and Italy agreed, futilely, to maintain the independence of Austria.

If Cliff Navarro comes back to run for president, I hope no one falls for his joke just because he wasn’t laughing as he acted it out. If his instructions for fixing the wagon (America) are any indication, the voters who are listening and making the repairs will be in severe pain.

“Genius | Song Lyrics & Knowledge,” Genius, 2016. [Online]. Available: http://genius.com. [Accessed: 29-Sep-2016]
“Hypothesis | The Internet, peer reviewed. | Hypothesis,” hypothes.is, 2016. [Online]. Available: https://hypothes.is/. [Accessed: 29-Sep-2016]
“Double-talk – Wikipedia, the free encyclopedia,” en.wikipedia.org, 2016. [Online]. Available: https://en.wikipedia.org/wiki/Double-talk. [Accessed: 29-Sep-2016]
“Turboencabulator – Wikipedia, the free encyclopedia,” en.wikipedia.org, 2016. [Online]. Available: https://en.wikipedia.org/wiki/Turboencabulator. [Accessed: 29-Sep-2016]
G. Orwell, Nineteen Eighty-four, 1st ed. London: Harvill Secker & Warburg, 1949.
G. Orwell, “Politics and the English Language,” Horizon, vol. 13, no. 76, pp. 252–265, Apr. 1946 [Online]. Available: http://www.orwell.ru/library/essays/politics/english/e_polit/
“Doublespeak – Wikipedia, the free encyclopedia,” en.wikipedia.org, 29-Sep-2016. [Online]. Available: https://en.wikipedia.org/wiki/Doublespeak. [Accessed: 29-Sep-2016]
J. Jacobs, “A Handy Guide To Politico-babble,” Chicago Tribune, 31-Oct-1992. [Online]. Available: http://articles.chicagotribune.com/1992-10-31/news/9204080638_1_family-values-trickle-bill-clinton. [Accessed: 29-Sep-2016]
M. Davis, “cmabridge | Cognition and Brain Sciences Unit,” mrc-cbu.cam.ac.uk, 2012. [Online]. Available: https://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/. [Accessed: 29-Sep-2016]
D. Kahneman, Thinking, Fast and Slow. Macmillan, 2011.
C. E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, Jul. 1948. [Source]
A. Sorkin, J. Wells, and T. Schlamme, “Game On,” The West Wing, NBC, 30-Oct-2002.
M. Judge, Idiocracy. Twentieth Century Fox, 2006.
M. Kakutani, “In ‘Hitler’ an Ascent from ‘Dunderhead’ to Demagogue,” New York Times, p. 1, 27-Sep-2016 [Online]. Available: http://www.nytimes.com/2016/09/28/books/hitler-ascent-volker-ullrich.html?_r=0. [Accessed: 28-Sep-2016] [Source]
V. Ullrich, Adolf Hitler: Ascent 1889-1939, 1st ed. Knopf Publishing Group, 2016.

Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016 was originally published on Chris Aldrich | Boffo Socko

🔖 Human Evolution: Our Brains and Behavior by Robin Dunbar (Oxford University Press) marked as want to read.
Official release date: November 1, 2016
09/14/16: downloaded a review copy via NetGalley


The story of human evolution has fascinated us like no other: we seem to have an insatiable curiosity about who we are and where we have come from. Yet studying the “stones and bones” skirts around what is perhaps the realest, and most relatable, story of human evolution – the social and cognitive changes that gave rise to modern humans.

In Human Evolution: Our Brains and Behavior, Robin Dunbar appeals to the human aspects of every reader, as subjects of mating, friendship, and community are discussed from an evolutionary psychology perspective. With a table of contents ranging from prehistoric times to modern days, Human Evolution focuses on an aspect of evolution that has typically been overshadowed by the archaeological record: the biological, neurological, and genetic changes that occurred with each “transition” in the evolutionary narrative. Dunbar’s interdisciplinary approach – inspired by his background as both an anthropologist and accomplished psychologist – brings the reader into all aspects of the evolutionary process, which he describes as the “jigsaw puzzle” of evolution that he and the reader will help solve. In doing so, the book carefully maps out each stage of the evolutionary process, from anatomical changes such as bipedalism and increase in brain size, to cognitive and behavioral changes, such as the ability to cook, laugh, and use language to form communities through religion and story-telling. Most importantly and interestingly, Dunbar hypothesizes the order in which these evolutionary changes occurred, conclusions that are reached with the “time budget model” theory that Dunbar himself coined. As definitive as the “stones and bones” are for the hard dates of archaeological evidence, this book explores far more complex psychological questions that require a degree of intellectual speculation: What does it really mean to be human (as opposed to being an ape), and how did we come to be that way?

    Syndicated to:

🔖 Human Evolution: Our Brains and Behavior by Robin Dunbar (Oxford University Press) was originally published on Chris Aldrich | Boffo Socko