AI: Face recognition and AI ethics

This is another longform piece that merits your attention. I have included a summary below the blurb.

We’re now having much the same conversation about AI in general (or more properly machine learning) and especially about face recognition, which has only become practical because of machine learning. And, we’re worrying about the same things – we worry what happens if it doesn’t work and we worry what happens if it does work. We’re also, I think, trying to work out how much of this is a new problem, and how much of it we’re worried about, and why we’re worried.

Face recognition and AI ethics — Benedict Evans
  • People worried about databases, a lot, and wrote books about it, a lot.
    Specifically, we worried about two kinds of problem: 
    We worried that these databases would contain bad data or bad assumptions, and in particular that they might inadvertently and unconsciously encode the existing prejudices and biases of our societies and fix them into machinery.
  • We worried people would screw up. And, we worried about people deliberately building and using these systems to do bad things.
  • When good people use bad data, the system will make mistakes, and it may be in the hands of people who don’t have the training, processes, institutional structure or individual empowerment to recognise such a mistake and react appropriately.
  • Databases gave us a new way to express those problems on a different scale, and so now does machine learning.
  • Machine learning changes these from logic problems to statistics problems. Instead of writing down how you recognise a photo of X, you take a hundred thousand examples of X and a hundred thousand examples of not-X and use a statistical engine to generate (‘train’) a model that can tell the difference to a given degree of certainty. Then you give it a photo and it tells you whether it matched X or not-X and by what degree.
  • Instead of telling the computer the rules, the computer works out the rules based on the data and the answers (‘this is X, that is not-X’) that you give it (a toy sketch of this follows the list).
  • First, what exactly is in the training data – in your examples of X and not-X? One problem that can arise is that dermatologists tend to put rulers in photos of cancer, for scale – so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, the ruler might be a lot more statistically prominent than those small blemishes.
  • The structural thing to understand here is that the system has no understanding of what it’s looking at – it has no concept of skin or cancer or colour or gender or people or even images.
  • It’s just doing a statistical comparison of data sets.
  • So, again – what is your data set? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result?
  • You can see both of these issues coming together in a couple of recent publicity stunts: train a face recognition system on mugshots of criminals (and only criminals), and then take a photo of an honest and decent person (normally a politician) and ask if there are any matches, taking care to use a fairly low confidence level, and the system says YES!
  • To a computer scientist, this can look like sabotage – you deliberately use a skewed data set, deliberately set the confidence threshold too low for the use case and then (mis)represent a probabilistic result as YES WE HAVE A MATCH (the second sketch after this list shows how the threshold alone flips the answer).
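
To make the ‘statistics, not logic’ point concrete, here is a minimal sketch of training a binary classifier and asking it for a degree of certainty. Nothing in it comes from the article: the random arrays standing in for photos, and the choice of scikit-learn logistic regression as the ‘statistical engine’, are my own assumptions for illustration.

```python
# Toy sketch only: random vectors stand in for photos of X and not-X,
# and logistic regression stands in for the "statistical engine".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for "a hundred thousand examples of X and of not-X"
# (only a thousand each here, to keep the sketch quick).
x_examples = rng.normal(loc=1.0, scale=1.0, size=(1000, 64))
not_x_examples = rng.normal(loc=-1.0, scale=1.0, size=(1000, 64))
data = np.vstack([x_examples, not_x_examples])
labels = np.array([1] * 1000 + [0] * 1000)  # 1 = "this is X", 0 = "not-X"

# We never write down rules for recognising X; the model is fitted
# ("trained") from the examples and the answers we supplied.
model = LogisticRegression(max_iter=1000).fit(data, labels)

# Give it a new "photo": the answer is a degree of certainty, not a fact.
new_photo = rng.normal(loc=0.8, scale=1.0, size=(1, 64))
print(f"Probability this is X: {model.predict_proba(new_photo)[0, 1]:.2f}")
```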
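
And the threshold trick in the last two bullets can be shown with nothing but a list of made-up similarity scores: the same probabilistic output reads as ‘no match’ or as YES WE HAVE A MATCH depending purely on the confidence level you choose. The scores and thresholds below are invented for the sketch, not real face-recognition output.

```python
# Toy sketch only: invented similarity scores between one person's photo
# and a mugshot-only database. None of these is a confident match.
import numpy as np

scores = np.array([0.31, 0.44, 0.52, 0.58, 0.12, 0.47])

for threshold in (0.9, 0.5):
    matches = scores >= threshold
    verdict = "YES WE HAVE A MATCH" if matches.any() else "no match"
    print(f"threshold={threshold}: {matches.sum()} hits -> {verdict}")

# threshold=0.9: 0 hits -> no match
# threshold=0.5: 2 hits -> YES WE HAVE A MATCH
```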