Tuesday, November 29, 2016

The Big Questions We'd Better Figure Out, Part 1: How Do We Humanize The Algorithms That Will Rule Us?--UPDATED

Update below.
Original post:

A post by Izabella Kaminska at FT Alphaville earlier today reminded me of a piece at Quanta I'd meant to link.
I'll be coming back to the Alphaville riff tomorrow; there are just so many ideas popping out of it that I can't do it during the work day. Here's one example, from the first three paragraphs:

Algorithmic discrimination
Conversations across the political spectrum are exploding into emotionally charged and heated rampages — many of them focused on calling out x, y or z as bigoted, racist or discriminatory.

Deeming x, y or z as either or the other is beyond the scope or remit of this blog. What we’re concerned with is how this fits into the fake news paradigm and why it ultimately jars with the sacred notion that people are supposed to be presumed innocent until proven guilty — and that guilt should be determined through a fair evaluation of actions rather than words. This practice is a hallowed facet of liberal representative democracy because once someone is accused of holding an unpopular view, it’s often impossible to wage a convincing defence outside of this formal process.

Proving a negative is largely futile. Proving you don’t secretly think x, y or z, meanwhile, is nigh on impossible unless you support a hitherto only seen in sci-fi literature scale of personal intrusion (thought police, ahem), undermining not just the sanctity of the privacy of one’s mind but also the scope and feel of what it means to be an individual....
Well, what she's pointing out is such a threat to representative government that Machiavelli, in his Discourses on the first ten books of Livy's history of the Roman Republic, mentions calumny (the making of false and defamatory statements about someone in order to damage their reputation; slander...O.E.D.) at least a hundred times, and in fact dedicates a chapter (XIII of Book I) to the distinction between accusation as a tool for finding the truth and calumny as a method of destruction:

XIII In proportion as accusations are useful in a republic, so are calumnies pernicious

As you can see, we haven't even gotten past the intro, much less mentioned algos, and we're already off into some seriously important stuff, so, by your leave, I'll do some cut-and-paste from Quanta, who in any event are smarter than I by orders of magnitude, and come back to Algo Discrimination mañana.

From Quanta Magazine, Nov. 23:

How to Force Our Machines to Play Fair
The computer scientist Cynthia Dwork takes abstract concepts like privacy and fairness and adapts them into machine code for the algorithmic age.
Theoretical computer science can be as remote and abstract as pure mathematics, but new research often begins in response to concrete, real-world problems. Such is the case with the work of Cynthia Dwork.

Over the course of a distinguished career, Dwork has crafted rigorous solutions to dilemmas that crop up at the messy interface between computing power and human activity. She is most famous for her invention in the early to mid-2000s of “differential privacy,” a set of techniques that safeguard the privacy of individuals in a large database. Differential privacy ensures, for example, that a person can contribute their genetic information to a medical database without fear that anyone analyzing the database will be able to figure out which genetic information is hers — or even whether she has participated in the database at all. And it achieves this security guarantee in a way that allows researchers to use the database to make new discoveries.
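[A quick aside, not from the Quanta piece: to make "differential privacy" a little more concrete, here's a minimal sketch of the textbook Laplace-noise approach, assuming a simple count query over a made-up toy database. The function and data are purely illustrative, not Dwork's actual systems.]

```python
import numpy as np

def laplace_count(records, predicate, epsilon):
    """Differentially private count: how many records satisfy `predicate`?

    Adding or removing any single record changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this one query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy "genetic" database: each record is (participant_id, has_marker)
database = [("a", True), ("b", False), ("c", True), ("d", True)]

# A researcher learns roughly how common the marker is, but the noisy answer
# looks almost the same whether or not any one participant is in the database.
print(laplace_count(database, lambda r: r[1], epsilon=0.5))
```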

Dwork’s latest work has a similar flavor to it. In 2011 she became interested in the question of fairness in algorithm design. As she observes, algorithms increasingly control the kinds of experiences we have: They determine the advertisements we see online, the loans we qualify for, the colleges that students get into. Given this influence, it’s important that algorithms classify people in ways that are consistent with commonsense notions of fairness. We wouldn’t think it’s ethical for a bank to offer one set of lending terms to minority applicants and another to white applicants. But as recent work has shown — most notably in the book “Weapons of Math Destruction,” by the mathematician Cathy O’Neil — discrimination that we reject in normal life can creep into algorithms.

Privacy and ethics are two questions with their roots in philosophy. These days, they require a solution in computer science. Over the past five years, Dwork, who is currently at Microsoft Research but will be joining the faculty at Harvard University in January, has been working to create a new field of research on algorithmic fairness. Earlier this month she helped organize a workshop at Harvard that brought together computer scientists, law professors and philosophers.

Quanta Magazine spoke with Dwork about algorithmic fairness, her interest in working on problems with big social implications, and how a childhood experience with music shaped the way she thinks about algorithm design today. An edited and condensed version of the interview follows.

QUANTA MAGAZINE: When did it become obvious to you that computer science was where you wanted to spend your time thinking?
CYNTHIA DWORK: I always enjoyed all of my subjects, including science and math. I also really loved English and foreign languages and, well, just about everything. I think that I applied to the engineering school at Princeton a little on a lark. My recollection is that my mother said, you know, this might be a nice combination of interests for you, and I thought, she’s right.

It was a little bit of a lark, but on the other hand it seemed as good a place to start as any. It was only in my junior year of college when I first encountered automata theory that I realized that I might be headed not for a programming job in industry but instead toward a Ph.D. There was a definite exposure I had to certain material that I thought was beautiful. I just really enjoyed the theory.

You’re best known for your work on differential privacy. What drew you to your present work on “fairness” in algorithms?
I wanted to find another problem. I just wanted something else to think about, for variety. And I had enjoyed the sort of social mission of the privacy work — the idea that we were addressing or attempting to address a very real problem. So I wanted to find a new problem and I wanted one that would have some social implications.

So why fairness?
I could see that it was going to be a major concern in real life.

How so?
I think it was pretty clear that algorithms were going to be used in a way that could affect individuals’ options in life. We knew they were being used to determine what kind of advertisements to show people. We may not be used to thinking of ads as great determiners of our options in life. But what people get exposed to has an impact on them. I also expected that algorithms would be used for at least some kind of screening in college admissions, as well as in determining who would be given loans.

I didn’t foresee the extent to which they’d be used to screen candidates for jobs and other important roles. So these things — what kinds of credit options are available to you, what sort of job you might get, what sort of schools you might get into, what things are shown to you in your everyday life as you wander around on the internet — these aren’t trivial concerns.

Your 2012 paper that launched this line of your research hinges on the concept of “awareness.” Why is this important?
One of the examples in the paper is: Suppose you had a minority group in which the smart students were steered toward math and science, and a dominant group in which the smart students were steered toward finance. Now if someone wanted to write a quick-and-dirty classifier to find smart students, maybe they should just look for students who study finance because, after all, the majority is much bigger than the minority, and so the classifier will be pretty accurate overall. The problem is that not only is this unfair to the minority, but it also has reduced utility compared to a classifier that understands that if you’re a member of the minority and you study math, you should be viewed as similar to a member of the majority who studies finance. That gave rise to the title of the paper, “Fairness Through Awareness,” meaning cross-cultural awareness....MORE
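And to make that steering example concrete, here's a toy sketch in Python. The group labels, fields of study and classifier names are all made up for illustration; this is just the quick-and-dirty proxy versus an "aware" one, not the formal metric-based framework from the Dwork et al. paper.

```python
from dataclasses import dataclass

@dataclass
class Student:
    group: str   # "minority" or "majority" (illustrative labels only)
    field: str   # e.g. "math", "science", "finance", "history"

def naive_classifier(s: Student) -> bool:
    """Quick-and-dirty proxy: 'smart' means 'studies finance'.

    Looks accurate overall because the majority group is much bigger,
    but it misses minority students who were steered toward math and science.
    """
    return s.field == "finance"

def aware_classifier(s: Student) -> bool:
    """Awareness-based proxy: a minority student in math/science is treated
    as similar to a majority student in finance, per the steering pattern
    described in the interview."""
    signal_fields = {"majority": {"finance"}, "minority": {"math", "science"}}
    return s.field in signal_fields.get(s.group, set())

students = [Student("majority", "finance"), Student("minority", "math")]
print([naive_classifier(s) for s in students])  # [True, False] -- minority student missed
print([aware_classifier(s) for s in students])  # [True, True]
```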
Update: "The Big Questions We'd Better Figure Out, Part 2: Algorithmic Discrimination and Empathy"