It’s time for us to take back control from computers, says Erik Spiekermann
It’s time for us to take back control from computers, which are being relied on to make decisions that, in the extreme, are costing innocent lives — regardless of the nature of ‘collateral damage’, says Erik Spiekermann. He set up MetaDesign and FontShop, and is a teacher, author, designer and partner at Edenspiekermann.

Computers are infallible. While this may not be what we experience at our desks every day, it is the premise behind collecting more and more data that is eventually supposed to enable machines to make decisions on their own, based purely on facts -- objectively, without moral considerations.
Take our little vacuum cleaner robot: it knows nothing about vacuum cleaning or about hygiene and hasn't a clue about where it actually works. But it does its job very well. Intelligence is not the reason for its effectiveness, but the sheer amassing of data: about the size and shape of the surface, the dust already collected, the position of furniture and other obstacles. It never learns and starts its routine every day as dumb as it was the day before. Fuzzy logic is not intelligence.
Machines like this little sucker are ubiquitous and have not just been gathering dust, but all sorts of information about you and me: 'We know where you are. We know where you've been. We more or less know what you're thinking about.' This was not something NSA chief Keith Alexander said, but Google's CEO, Eric Schmidt. As Joseph Weizenbaum (the MIT computer scientist who wrote the influential book Computer Power and Human Reason) pointed out, there is a difference between deciding and choosing. Deciding can eventually be programmed, while choice is the product of judgement, not calculation.
BAE Systems' latest drone, Taranis, named after the Celtic god of thunder, is well over budget and behind schedule and may not be operational before 2030. It is supposed to be a 'fully autonomous intelligent system', albeit controlled by a human operator. At least that is what BAE says on its website. The decision to bomb objects and thus kill people is based on a new paradigm in data collection: pattern-of-life analysis. This can encompass anything in an individual's life, from internet browsing habits to a record of choices made, in order to determine a statistical 'favourite'. At some point in the analysis a certain limit (imposed by whom?) will be crossed and a potential terrorist will be considered a real terrorist. They may just have been using the wrong SIM card in the wrong place, at the wrong time, but now the computer has certainty. And people have more faith in a computer than in a person, as Weizenbaum discovered.
'We've killed 4,700 [people],' Republican senator Lindsey Graham said of the 'war against terror'. 'Sometimes you hit innocent people, and I hate that, but we're at war and we've taken out some very senior members of Al-Qaeda.' Such collateral damage included twelve members of a wedding party in Yemen killed by a drone strike in 2013 that 'failed to comply with rules imposed by President Obama last year to protect civilians', as the official statement put it.
So far, Google et al only target our purses. But we know that the NSA and its buddies abroad (notably the UK) accumulate just as much data. And there is good reason to suspect that the gathering does not stop at their own servers but sucks information from all the 'clouds' out there. As far as the data is concerned, a shopper and a terrorist leave the same traces. A computer cannot choose between them, but still makes decisions.
We can't undo the data gathering to date, nor live without the internet, but we do need transparency, rules and clearly defined rights. The recent judgement by the European Court of Justice ordering Google to honour people's 'right to be forgotten' is a first step towards putting ourselves back in charge of our own decisions and choices.
