In 2016, Michal Kosinski – a psychologist and data scientist at the Stanford University Graduate School of Business – was featured in that year's most widely shared German-language article on social media: a Das Magazin piece discussing his research on big data and recent populist victories (an English version is available online). According to the article, US President Donald Trump's election campaign and Brexit advocates both used big data culled from social media to target, and sway, voters. By mining potential voters' digital traces, including personal data, they could address people with highly specific messages, and voters were not even aware they were being targeted. The big data company directly involved in these election campaigns is called Cambridge Analytica. The person whose academic research the media linked to the technique: Stanford researcher Michal Kosinski, who shared the Das Magazin article on the homepage of his website.
Michal Kosinski: "Wishful thinking will not stop tornados from coming"
So were you linked to the Trump campaign’s victory?
I wasn’t involved. I developed an algorithm that worked in a similar way, but I didn’t share my tool with Cambridge Analytica. It’s not my method; I just showed that it’s possible to break into your head and infer your personality from your Facebook likes. I tried to warn people. It can be used for good purposes, but also for bad ones.
Could you please describe your method?
An algorithm could take a digital footprint of your Facebook “likes,” or the words you use in your tweets, or in your email, for instance. Then, the algorithm would look at millions of people doing millions of things and find really subtle connections between, for instance, your propensity to vote for a given candidate and your intelligence.
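The kind of pipeline described above can be sketched in a few lines. This is a minimal illustration, not Kosinski's actual model: it encodes each person's likes as a 0/1 vector over a tiny set of made-up pages and fits a plain logistic regression by gradient descent to predict a hypothetical binary trait. All page names, users, and labels are invented for the example.

```python
# Minimal sketch: predicting a binary trait from "like" vectors with
# logistic regression. Pages, users, and labels are all hypothetical.
import math

PAGES = ["page_a", "page_b", "page_c", "page_d"]  # made-up page names

def vectorize(likes):
    """Turn a set of liked pages into a 0/1 feature vector."""
    return [1.0 if p in likes else 0.0 for p in PAGES]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=500):
    """Stochastic gradient descent on the logistic loss; returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the logistic loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, likes):
    """Predicted probability that a user with these likes has the trait."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, vectorize(likes))) + b)

# Toy training data: users who like page_a tend to have the trait (label 1).
users = [({"page_a", "page_b"}, 1), ({"page_a"}, 1),
         ({"page_c"}, 0), ({"page_c", "page_d"}, 0)]
X = [vectorize(likes) for likes, _ in users]
y = [label for _, label in users]
w, b = train(X, y)
```

At real scale the same idea is applied to millions of users and likes, which is what allows the "really subtle connections" the interview mentions to surface statistically.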
And you are the first one to do this research linking psychometrics and big data?
I don’t think so, in fact. But using established personality tests in connection with big data is unique, as is doing this research and publishing it in the public domain.
Do you think you as a researcher have a responsibility for the information you give to the public?
I certainly have a responsibility for the information that I give to the public, and I also have a responsibility to warn the public about the risks that are out there. My not informing the general public doesn’t stop companies from using technologies that might not benefit people. But my warning people gives us the ability to have a discussion and change laws, or maybe accept the fact that these things are happening. Maybe we should think about how to change societies to protect them against such tactics.
How do you think people will adjust to the knowledge that their social media use is so revealing?
People don’t care. Come on, people talk about this, and in the end they just keep using Facebook and the internet and credit cards. There’s no other way.
Have you changed your ways on Facebook as a result of your research?
I keep my Facebook [profile] completely open. I believe that going forward there is going to be no privacy, so it’s way better to act and behave under the assumption that everything you do is public.
Would you say privacy is a lost cause?
It’s a lost cause. And I am not happy about it. It’s a bit like tornados. I don’t like tornados, I think that tornados should be illegal, and we should give everyone a choice of opting out from a tornado, but this wishful thinking will not stop tornados from coming. And the sooner we move from talking about outlawing tornados to discussing how we organise our society to make it a habitable place when tornados – the post-privacy world, in this metaphor – come, the better.
What are you working on now?
I am working on how we determine intimate human traits, like sexual orientation, from faces.
Does it work?
With perfect accuracy.
Aren’t you worried about the ethical implications of that?
Aren’t you worried about every single person having their profile picture up on the internet now, and not knowing that a computer vision specialist can do anything with that?
That is a different question.
Well, yes. I am totally worried, but as I said before, we are quickly moving towards the post-privacy age, so even if you somehow manage to clean your data and remove yourself from the internet I can snap a picture of your face with my phone. That could happen if I were a border guard, for instance. With that one picture, I could know everything about you.
In many countries, homosexuality is still a crime. Do you think your research might be used to target gay people?
Again, when I tell you people can break into your house through your door, I am not giving them the key, I am just telling you that your door is a weak link. I am not a computer scientist. I am not even a computer vision specialist. So if I can do it on my laptop, then a company whose only job is to look for faces and check them for hidden traits is doing it already. They are just not telling you they are doing it.
You address possible dangers stemming from algorithms and big data. How would you classify your research?
Well, in fact, what we were talking about in the beginning is not really my research, it’s just a side-effect of my research. My main interest is in what we can learn about political views, homosexuality, personality, intelligence, and other human traits from looking at our language, our Facebook likes, or our faces.
The face is a good proxy for your hormonal levels, for your health, for your genes, for your developmental history. Basically the face is a good proxy for many underlying processes. And it’s also very easily accessible, because you can go online and get billions of publicly available images. So for me, as a scientist, it’s interesting to see that by looking at faces, we can perhaps find links between hormones, genes, and personalities.
Isn’t that what used to be called phrenology, which held that by measuring the skull we could infer personality traits?
It used to be called phrenology and physiognomy, and it’s long been considered pseudoscience. Humans can recognise emotion [from facial expressions], but not always intimate personality traits. That might mean the traits are not on the face, or that human beings just didn’t evolve to read these traits. Algorithms can and do read these traits now.
What are some positive applications?
It means robots and computers can adjust to emotions. They could change the tone or the type of information they give you. For example: When a 45-year-old googles “jaguar” and a 12-year-old googles “jaguar,” they are probably not looking for the same thing. Cars could also adjust: they could put on relaxing music when they read that you are stressed, or control the speed or stop the car when they read you are not fit to drive.
You’re saying the algorithm is better at reading other people than humans are. How do you feel about that?
Machines are outcompeting humans in most of the things we do, and this process is going to continue. I think this is a much larger question than privacy in terms of how the relationship between us and artificial intelligence will look in the future.
Increasingly, we rely heavily on artificial intelligence (AI) – to fly our planes, manage our information networks, detect our illnesses, solve our societal problems, and so on. We have this symbiotic relationship with it, and we just cannot live without it. But soon, interestingly, AI can probably easily live without us.
Maybe not yet, but fast-forward a few years. AI will learn how to write its own code. It won’t need any input from a human being; a human will just have to press start. Soon maybe we won’t even be needed for that.
Where does that leave humans?
Well, it leaves us increasingly dependent on AI. We cannot step back and we are going even further because we know that AI is amazing at solving those problems.
Politicians make their decisions based on experts, and these experts use big data: machine learning models, or statistical models that look at gazillions of data points and tell politicians what to do. In other words, even decisions on how to run countries are being made by computers. For now they’re kind of hiding behind the politicians. And it’s great, because computers are way better at decision-making than humans are.
Michal Kosinski studies big data and psychometrics – the study of people’s psychological traits – as an assistant professor at the Stanford University Graduate School of Business. Before that he was director of the University of Cambridge Psychometrics Centre and worked as a researcher with Microsoft on machine learning. He holds two Master’s degrees: one in social psychology, with a focus on consumer behaviour, and one in psychology, specifically in psychometrics. Before gaining his PhD in psychology at the University of Cambridge, he founded his own consulting company and served as brand officer for a computer company.