Futurism is a mug’s game: if you’re right, it seems banal; if you’re wrong, you look like Thomas Watson, the president of IBM, who reputedly declared in 1943 that there was room in the world “for maybe five computers”.
David Adams knew these risks when he wrote about the future of technology in the Guardian in 2004 – even citing that very prediction as an example of how technology forecasts can go awry.
And from our vantage point in 2020, Adams certainly did a better job than Watson. When he looked ahead to today, he avoided many of the pitfalls of technology prediction: no promises of flying cars, and no sci-fi tech such as teleportation or faster-than-light travel.
But in some ways, the predictions were overly pessimistic. Technology really has advanced in leaps and bounds in the past 16 years, nowhere more clearly than in AI. “Artificial intelligence brains simply cannot cope with change and unpredictable events,” wrote Adams, explaining why robots would be unlikely to interact with humans any time soon.
“Fundamentally, it’s just very difficult to get a robot to tell the difference between a picture of a tree and a real tree,” Paul Newman, then and now a robotics expert at Oxford University, told Adams. Happily, Newman proved his own pessimism unwarranted: in 2014 he co-founded Oxbotica, which has, one hopes, solved that problem, since it makes and sells driverless car technology to vehicle manufacturers around the world.
Quibbles over details aside, there are two key points at which the 2020 predictions fall apart: one about technology, the other about society.
“Gadget lovers could use a single keypad to operate their phone, PDA [tablet] and MP3 music player,” Adams wrote, “or combine the output of their watch, pager and radio into a single speaker.” The idea of greater convergence and connectivity between personal electronics was correct. But there was a very specific hole in this prediction: the smartphone.
After half a century of single-purpose consumer electronics, it was difficult to imagine how all-encompassing one device could become, but just three years after Adams published his piece, the iPhone launched and changed everything. Forget carrying a separate MP3 player; in the real 2020, people aren’t even carrying separate cameras, wallets or car keys.
Failing to foresee the smartphone is an oversight about the progress of technology. The other gap concerns how society would respond to those changes.
The 2004 predictions are, fundamentally, optimistic. Adams writes about biometric healthcare data being beamed to your doctor’s computer; about washing machines that automatically arrange their own servicing based on availability in your “electronic organiser”; and about radio-frequency identification (RFID) chips on your clothes that trigger customised adverts or programme your phone based on where you are. And through it all runs a sense of trust: these changes will be good, and the companies making them well-intentioned.
There is another possibility: that technology really does save the day, and then some. John Maeda, the chief experience officer at the digital consultancy Publicis Sapient, says that by 2050, “computational machines will have surpassed the processing power of all the living human brains on Earth.
“The cloud will also have absorbed the thinking of the many dead brains on Earth, too – and we all need to work together to survive. So I predict that we will see lasting cooperation between the human race and the computational machines of the future.”
This sort of thinking has come to be known as the singularity: the idea that there will be a point, perhaps even a singular moment in time, when the abilities of thinking machines outstrip those of their creators, and progress accelerates with dizzying results.
“If you interview AI researchers about when general AI – a machine that can do everything a human can do – will arrive, they think it’s about 50/50 whether it will be before 2050,” says Tom Chivers, the author of The AI Does Not Hate You.
“They also think that AGI” – artificial general intelligence – “can be hugely transformative – lots of them signed an open letter in 2015 saying ‘eradication of disease and poverty’ could be possible. But also,” he adds, citing a 2013 survey in the field, “on average they think there is about a 15% to 20% chance of a ‘very bad outcome [existential catastrophe]’, which means everyone dead.”
There is, perhaps, little point in dwelling on the 50% chance that AGI does develop. If it does, every other prediction we could make is moot, and this story, and perhaps humanity as we know it, will be forgotten. And if we assume that transcendentally brilliant artificial minds won’t be along to save or destroy us, and live according to that outlook, then what is the worst that could happen – we build a better world for nothing?