I wonder what a statistical analysis would do to improve people’s lives if registrars attempted to put the mass of classes in the middle of the day. Would educational outcomes improve along with people’s psyches? Many schedulers are trying to optimize around the scarcity of classroom resources. What if they optimized for mental health and classroom performance instead? Is classroom scheduling potentially a valuable public health tool?
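To make the musing concrete, here’s a toy sketch. Everything in it (the function name, the midday anchor, the normalization constant) is my own hypothetical illustration, not anything from the book or any real registrar’s system. A scheduler that valued well-being might score candidate schedules by how tightly their class times cluster around midday:

```python
# Toy sketch (all names and numbers hypothetical): score a schedule by
# how closely its class times cluster around midday, as a stand-in for
# a "well-being" objective a registrar could weigh against room scarcity.

def midday_score(class_hours, midday=13.0):
    """Return a score in [0, 1]; higher means classes cluster near 1 p.m."""
    if not class_hours:
        return 0.0
    # Mean absolute distance (in hours) from the midday anchor.
    avg_dist = sum(abs(h - midday) for h in class_hours) / len(class_hours)
    # Normalize: schedules averaging 6+ hours from midday score 0.
    return max(0.0, 1.0 - avg_dist / 6.0)

# Compare two hypothetical schedules carrying the same course load.
early_bird = [8, 9, 10, 17]      # 8–10 a.m. classes plus a 5 p.m. class
midday_heavy = [11, 12, 13, 14]  # clustered around lunch

assert midday_score(midday_heavy) > midday_score(early_bird)
```

A real scheduler would combine a term like this with hard constraints on rooms and instructors; the point is only that the objective function encodes what the institution chooses to value.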
📖 Read chapter one of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
I don’t think she’s used these specific words in the book yet, but O’Neil is fundamentally writing about social justice and transparency. To a great extent, both governments and increasingly large corporations are using these Weapons of Math Destruction inappropriately. Often the algorithms are so opaque as to be incomprehensible even to their creators and users, but, as I suspect in many cases, they’re also being used to actively create social injustice by benefiting some classes while decimating others. The evolving case of Facebook’s potential involvement in shifting the outcome of the 2016 Presidential election, especially via “dark posts,” is an interesting case in point.
In some sense these algorithms are like viruses running rampant in a large population without antibiotics available to tamp down or modify their effects. Without feedback mechanisms and the ability to see what is going on as it happens, the scale issue she touches on can quickly cause even greater harm over short periods of time.
I like that one of the first examples she uses for modeling is preparing food for a family. It’s simple, accessible, and generic enough that the majority of people can relate to it directly, and it has lots of transparency (even more than her sabermetrics example from baseball). Sadly, however, there is a large swath of the American population that is poor, uneducated, and living in horrific food deserts, and they may not grasp the subtleties of even this simple model. As I was reading, it occurred to me that there is a political football that gets pushed around from time to time in many countries relating to food and food subsidies. In the United States it’s known as the Supplemental Nutrition Assistance Program (aka SNAP), and it’s regularly changing, though fortunately for many it has some nutritionists who help provide a feedback mechanism for it. I suspect it would make a great example of the type of Weapon of Math Destruction she’s discussing in this book. Those interested in a quick overview and some of the consequences can find a short audio introduction via the Eat This Podcast episodes “How much does a nutritious diet cost? Depends what you mean by ‘nutritious’” and “Crime and nourishment: Some costs and consequences of the Supplemental Nutrition Assistance Program,” the latter of which discusses an interesting crime-related sub-consequence of something as simple as when SNAP benefits are distributed.
I suspect that O’Neil won’t go as far as to bring religion into her thesis, so I’ll do it for her, though from the more general moral-philosophical standpoint that underpins much of the Judeo-Christian heritage so prevalent in our society. One of my pet peeves with moralizing (often Republican) conservatives, who frequently wear their religion on their sleeves as well as beat others with it (here’s a good recent case in point), is that they never seem to follow the Golden Rule, which is stated in multiple ways in the Bible, including:
He will reply, ‘Truly I tell you, whatever you did not do for one of the least of these, you did not do for me.’
In a country that (says it) values meritocracy, much of the establishment doesn’t seem to put as much value, if any, into these basic principles as it would like to indicate it does.
I’ve previously highlighted the application of mathematical game theory to the Golden Rule, but from a meritocracy perspective, why can’t it operate at all levels? By this I’ll make tangential reference to Cesar Hidalgo’s thesis in his book Why Information Grows, in which he looks not just at individuals (personbytes) but at larger structures like firms (firmbytes), governments, and even nations. Why can’t these larger structures have their own meritocracy? When America “competes” against other countries, why shouldn’t it be doing so in a meritocracy of nations? Doing this requires that we as individuals (as well as corporations and city, state, and even national governments) help each other do what we can’t do alone. One often hears the aphorism that “a chain is only as strong as its weakest link”; why then would we actively go out of our way to create weak links within our own society, particularly while many in government decry the cultures and actions of other nations they view as trying to defeat us? To me, the statistical mechanics of the situation require that we help each other advance the status quo of humanity. Evolution and the Red Queen hypothesis suggest that humanity won’t simply regress back to the mean; without that mutual help it may instead regress toward extinction.
Highlights, Quotes, & Marginalia
You can often see troubles when grandparents visit a grandchild they haven’t seen for a while.
Highlight (yellow) page 22 | Location 409-410
Added on Thursday, October 12, 2017 11:19:23 PM
Upon meeting her a year later, they can suffer a few awkward hours because their models are out of date.
Highlight (yellow) page 22 | Location 411-412
Added on Thursday, October 12, 2017 11:19:41 PM
Racism, at the individual level, can be seen as a predictive model whirring away in billions of human minds around the world. It is built from faulty, incomplete, or generalized data. Whether it comes from experience or hearsay, the data indicates that certain types of people have behaved badly. That generates a binary prediction that all people of that race will behave that same way.
Highlight (yellow) page 22 | Location 416-420
Added on Thursday, October 12, 2017 11:20:34 PM
Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models.
Highlight (yellow) page 23 | Location 420-421
Added on Thursday, October 12, 2017 11:20:52 PM
the workings of a recidivism model are tucked away in algorithms, intelligible only to a tiny elite.
Highlight (yellow) page 25 | Location 454-455
Added on Thursday, October 12, 2017 11:24:46 PM
A 2013 study by the New York Civil Liberties Union found that while black and Latino males between the ages of fourteen and twenty-four made up only 4.7 percent of the city’s population, they accounted for 40.6 percent of the stop-and-frisk checks by police.
Highlight (yellow) page 25 | Location 462-463
Added on Thursday, October 12, 2017 11:25:50 PM
So if early “involvement” with the police signals recidivism, poor people and racial minorities look far riskier.
Highlight (yellow) page 26 | Location 465-466
Added on Thursday, October 12, 2017 11:26:15 PM
The questionnaire does avoid asking about race, which is illegal. But with the wealth of detail each prisoner provides, that single illegal question is almost superfluous.
Highlight (yellow) page 26 | Location 468-469
Added on Friday, October 13, 2017 6:01:28 PM
judge would sustain it. This is the basis of our legal system. We are judged by what we do, not by who we are.
Highlight (yellow) page 26 | Location 478-478
Added on Friday, October 13, 2017 6:02:53 PM
(And they’ll be free to create them when they start buying their own food.) I should add that my model is highly unlikely to scale. I don’t see Walmart or the US Agriculture Department or any other titan embracing my app and imposing it on hundreds of millions of people, like some of the WMDs we’ll be discussing.
You have to love the obligatory parental aphorism about making your own rules when you have your own house.
Yet the US SNAP program does just this. It could be an interesting example of this type of WMD.
Highlight (yellow) page 28 | Location 497-499
Added on Friday, October 13, 2017 6:06:04 PM
three kinds of models.
namely: baseball, food, recidivism
Highlight (yellow) page 27 | Location 489-489
Added on Friday, October 13, 2017 6:08:26 PM
The first question: Even if the participant is aware of being modeled, or what the model is used for, is the model opaque, or even invisible?
Highlight (yellow) page 28 | Location 502-503
Added on Friday, October 13, 2017 6:08:59 PM
many companies go out of their way to hide the results of their models or even their existence. One common justification is that the algorithm constitutes a “secret sauce” crucial to their business. It’s intellectual property, and it must be defended,
Highlight (yellow) page 29 | Location 513-514
Added on Friday, October 13, 2017 6:11:03 PM
the second question: Does the model work against the subject’s interest? In short, is it unfair? Does it damage or destroy lives?
Highlight (yellow) page 29 | Location 516-518
Added on Friday, October 13, 2017 6:11:22 PM
While many may benefit from it, it leads to suffering for others.
Highlight (yellow) page 29 | Location 521-522
Added on Friday, October 13, 2017 6:12:19 PM
The third question is whether a model has the capacity to grow exponentially. As a statistician would put it, can it scale?
Highlight (yellow) page 29 | Location 524-525
Added on Friday, October 13, 2017 6:13:00 PM
scale is what turns WMDs from local nuisances into tsunami forces, ones that define and delimit our lives.
Highlight (yellow) page 30 | Location 526-527
Added on Friday, October 13, 2017 6:13:20 PM
So to sum up, these are the three elements of a WMD: Opacity, Scale, and Damage. All of them will be present, to one degree or another, in the examples we’ll be covering
Think about this for a bit. Are there other potential characteristics?
Highlight (yellow) page 31 | Location 540-542
Added on Friday, October 13, 2017 6:18:52 PM
You could argue, for example, that the recidivism scores are not totally opaque, since they spit out scores that prisoners, in some cases, can see. Yet they’re brimming with mystery, since the prisoners cannot see how their answers produce their score. The scoring algorithm is hidden.
This is similar to anti-class-action laws and arbitration clauses that prevent groups from realizing they’re being discriminated against in the workplace or within healthcare. On behalf of insurance companies primarily, many lawmakers work to cap awards from litigation as well as to prevent class action suits, which would reveal much larger inequities that corporations would prefer to keep quiet. Some recent incidents, like the cases of Ellen Pao, Susan J. Fowler, or even Harvey Weinstein, are helping to remedy these types of things despite individuals being pressured to stay quiet, pressure that keeps others from coming forward and showing a broader pattern of bad actions on the part of companies or individuals. (This topic could be an extended article or even a book of its own.)
Highlight (yellow) page 31 | Location 542-544
Added on Friday, October 13, 2017 6:20:59 PM
the point is not whether some people benefit. It’s that so many suffer.
Highlight (yellow) page 31 | Location 547-547
Added on Friday, October 13, 2017 6:23:35 PM
And here’s one more thing about algorithms: they can leap from one field to the next, and they often do. Research in epidemiology can hold insights for box office predictions; spam filters are being retooled to identify the AIDS virus. This is true of WMDs as well. So if mathematical models in prisons appear to succeed at their job—which really boils down to efficient management of people—they could spread into the rest of the economy along with the other WMDs, leaving us as collateral damage.
Highlight (yellow) page 31 | Location 549-552
Added on Friday, October 13, 2017 6:24:09 PM
Guide to highlight colors
Yellow–general highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Green–Reference to read
Red–Example to work through
I’m reading this as part of Bryan Alexander’s online book club.