The extraordinary social responsibility dilemma of Facebook
26 April 2018
We all know that Facebook has been under fire in recent weeks. We’ve seen Zuckerberg being quizzed (mostly incompetently, but the symbolism remains) in Congress. We’ve seen mainstream media outlets queueing up to bash the platform, and some advertisers taking fright as a result.
What we seem not to have noticed is that some sort of a line has been crossed. Not by Facebook, but by us. And that puts us in the position now where we have to ask - is it possible to uninvent the nuclear bomb that is influence via social media?
Two things have been going on at the same time over the last ten years. One, we have come to understand in much greater detail how programmable we are as human beings. Think you wander about the face of the planet exercising free will every time a choice confronts you? Think again. Our choices are governed by habits, and where those aren’t enough, a set of attitudes and beliefs that are strong enough for us to leap to conclusions about most things and then post-rationalise as best we can. The more we’ve come to understand how the mind works, the more manipulable we seem.
Marketers have known this in theory for decades, of course. But only in the last decade has our knowledge become so complete and profound.
Two - we have also progressed light years over the same decade in how we can analyse large chunks of data and identify patterns. As with the previous point, we’ve known about this for a long time. Ever since Tesco introduced Clubcard as a way of gathering data on real buying behaviours at a huge scale, we’ve learnt that the careful analysis of ‘big data’ can give insights into how we function that may become astonishingly prescient (for example, knowing that someone is pregnant before they do). But at least that was restricted to shopping behaviours in one store.
Facebook has brought these two together, and has been merrily chugging along over the last decade following the avenues of possibility with wide-eyed wonder and a pioneer’s spirit.
To be honest, I’m not much interested in the argument that Facebook is some profit-hungry monster that is cynically twisting us around its digital channels without a care to any harm that may be done.
Honestly, what I see is a company that is opening up a completely different model that, when pushed to scale, turns out to have all sorts of unforeseen implications and unintended consequences.
I don’t believe someone who’s insatiably profit-hungry decides to give away 99% of his fortune. I assume that Facebook is generally intended to be benign, but like all organisations it can make mistakes when faced with novel problems, can create perverse incentives for its own people, can mess things up in ways that have real consequences. And we notice every time it happens because they are under a micro-lens of scrutiny, in a way that most of us fortunately aren’t when we have our own mess-ups.
But it doesn’t matter. If you’re of a cynical mind, then you’ll find evidence to endorse your cynicism if you look hard enough. And since there are plenty of cynics amplifying every scrap of evidence onto front pages everywhere, you’ll have plenty of validation to hand. The point is this - whatever you think of the company concerned, the individuals who run it, and their motivations - it doesn’t change the nature of the dilemma.
Facebook has become the single best aggregator of data about millions upon millions of people. Lots of those people voluntarily gave information about their interests, their relationships, their friends, even their mindset and political / religious convictions. And since people love taking personality tests, third party agents also discovered that they could persuade people to hand over information that helped them analyse what people did, and how they could be influenced.
In the first six months after the Trump election, stories began to come out about how the really clever people behind the Trump campaign had used Facebook to understand the attitudes and priorities - the sensitive buttons - of different groups of people to the extent that they could accurately place you in a certain personality type and belief system based on what bands you liked, what food you enjoyed - all sorts of things that you wouldn’t think would be giving away anything profound about you. And so they ran very smartly targeted campaigns to those groups accentuating the issues or the angles that they knew would play best. And it worked.
And we read articles about this with a degree of admiration for how clever these systems had become, along with a fair dose of apprehension; I, for one, thought the logic of this situation was that the validity of democracy was severely undermined. But hardly anyone was talking in those terms - it just seemed to be the reality of what the technology could now do. And it was accepted that if it could be done, it would be done.
Oh, of course - Facebook executives were distinctly uncomfortable with the suggestion they may have been an unwitting conduit for a Trump win, since it wasn’t a fit for their own political preferences. But the embarrassment seemed to be one of ‘uh oh, what have we done?’ rather than one of ‘uh oh, this system presents massive ethical and moral challenges’.
Suddenly that changed. The fact that Cambridge Analytica had promised to destroy data and then simply failed to do so seemed to be the trigger moment that shifted the phenomenon of ‘influence via social media’ into the category of ‘corporate scandal’. And we all love one of those, so suddenly we gave ourselves permission to be outraged.
And before long there were screaming headlines pointing to things that we knew perfectly well were possible before, but now framed in a way that made them sound sinister and even downright evil. Facial recognition. Advert retargeting. Personality profiling. All things we’d known about and had up until this point failed to frighten many horses.
And once the functions that had been taken as accepted and legitimate were subjected to that harsher lens, things began to change pretty rapidly.
A couple of weeks ago, Instagram (part of Facebook) abruptly dismantled analytical functions that overnight broke a whole bunch of third party apps used by creators on that platform. Then some of the sophisticated functionality that Facebook offered to advertisers (for instance, being able to target adverts to people via Facebook that are on other third party lists) was announced as being withdrawn. Basic capabilities that had led the new generation of digital entrepreneurs to believe Facebook to be the most sophisticated ad platform of all time were being pulled. Because they work too well.
And this was all happening at the same time as the EU is introducing its GDPR rules, which provide a stricter framework of consent for how people’s personal information can be used. It will not be a surprise if, once these rules come into force on May 25th, there is a concerted attempt to use them to force Facebook into further restrictions on how it can operate.
The thing is - I’m not sure whether this genie is one that can actually be put back into the bottle. Facebook makes money by being the most attractive platform out there for advertisers. But it can only do so if it can retain the trust of its users - and if functionality that was previously uncontroversial (and therefore became embedded in its model) becomes socially unacceptable, it may find it impossible both to retain trust and satisfy advertisers - and could even fail on both fronts at once.
The definition of a dilemma is a situation where no solutions are available that don’t have some significant negative cost attached. Facebook is faced by a genuine ethical dilemma, and I have no idea how I would resolve it if I were Mark Zuckerberg. He will be all too aware, I imagine, of the case of Uber, which showed how a company seen as one of the new progressive challenger firms can quickly become widely perceived as tainted by irresponsibility. Regardless of the reality, once that has become a mainstream perception, it is likely to stick.
This may be a temporary flicker of concern. The last cri de coeur of the old fuddy-duddies who think that all this new technology is against the natural order of things. Once their positions get taken by the youngsters who grew up as the selfie (and the sexting) generation, a different attitude to privacy will evolve and progress will be resumed.
Ironically, it may be a successor to Facebook that gets to enjoy the fruits of that progress.