In September 1983, Lieutenant Colonel Stanislav Petrov chose not to believe what his computer was telling him, and probably saved the world. The system had assured him that the very convincing blips on the Soviet early-warning scopes were five incoming ICBMs, launched from the United States as a first strike.
The Soviet Union’s strategy was to order a massive counter-attack as soon as it detected such an attack. In reality, the blips were just high-altitude clouds. Five also seemed too small a number, Petrov reasoned, with which to start a surprise thermonuclear exchange.
Sometimes the computer generates a fiction, but why would we choose to believe it? That’s what a lot of us were wondering last week, watching ITV’s Mr Bates vs The Post Office. The drama has shocked Britain, leaving us astonished at how long the Post Office could persecute the innocent. But the scandal began with a computer-generated fiction, just like the one that confronted Petrov.
At Fujitsu, staff wrote poor-quality code that generated a false reality: widespread cash shortfalls at Post Office branches. Theft was the logical inference. The coders knew that they had generated a fiction, but their managers chose to promote the false story, and the Post Office’s management went along with it. The prosecutors and courts also chose to believe that the computer couldn’t be lying, so the postmasters must be.
To cap it all, the innocent were for years deprived of the reality-based evidence they needed to prove their innocence. Only in 2018 and 2019 did Mr Bates and his correspondents receive the proof they needed, when the Post Office was finally obliged to disclose records: specifically, the Known Error Log (KEL) and audit data. Both had been held centrally by Fujitsu all along.
It seems absurd that anyone would want to think that computers give us a truer reality than what we know and have experienced. But it’s a problem that’s more subtle and widespread than you might suppose.
For example, the study of human behaviour is now conducted through computer metaphors – the workings of our brains described in terms such as “information processing”. One of the greatest minds of the last century, John von Neumann, concluded in 1958 that the human nervous system was “prima facie digital” – so why not study a computer instead of a human subject, who may not even turn up on time?
When psychology professor Robert Epstein, a former editor of Psychology Today, challenged researchers at one of the leading institutes to come up with non-computational metaphors for the brain instead, they were completely stumped. “They saw the problem. But they couldn’t offer an alternative,” Epstein later reflected. Digital had become such a pervasive metaphor that it was the only metaphor that mattered.
Another reason we may trust computers too much is that we want them to perform magic. Our political realm is messy and dysfunctional, so perhaps technology can fix things that we can’t seem to fix ourselves? Things like poor productivity, or poor social relations.
The Left has been seduced by imagined socialist utopias many times. In Edward Bellamy’s 1888 book Looking Backward, describing life in the year 2000, a world of Deliveroo and Ocado-style deliveries awaited us, along with streaming media on demand – albeit of religious sermons.
In 2019, Aaron Bastani’s Fully Automated Luxury Communism offered a similar post-scarcity fantasy. From Harold Wilson’s “white heat of technology” to Tony Blair, leaders have sought to yoke themselves to new technology. But the very use of phrases like “information society” or “networked economy” casts us in a subservient role. They imply that we’re the nuisance if we get in their way, just like Mr Bates.
A more immediate problem is that the better systems become at impersonating us, the more likely people are to believe them. Generative AI poses this challenge today. So it’s worth remembering the response of the MIT professor Joseph Weizenbaum, who in the 1960s wrote a modest, interactive program called Eliza – one of the first chatbots, a robo-psychotherapist.
Weizenbaum was shocked when users believed that Eliza was really intelligent, and poured out their hearts to the software for many hours. The professor became a prominent popular voice warning about the dangers of over-trusting technology. We had forgotten, he wrote, how catastrophically computers fail, “when their rules are applied in earnest”.
Of course, in a different legal system, the Post Office persecutions could never have happened at all – the US discovery process would have exposed the truth much sooner. And some good may yet come from the scandal, if the standards for computer evidence are re-examined.
The legal presumption that the computer is correct must now be revised, Stephen Mason, a retired barrister and expert in electronic evidence, has argued for a decade. He and other barristers have warned that without the group action brought by Bates, the Horizon bugs would never have been revealed.
“The presumption has ruined too many lives already,” IT expert James Christie told Computer Weekly’s Karl Flinders last week. “It must go, the sooner the better.”
Petrov received no credit for disobeying his computer system, and was later reprimanded for keeping poor paperwork. Had he been recognised, Soviet officials reasoned, the bug would have been discovered, and the system’s designers punished. That would have been too embarrassing.