Artificial intelligence ‘learns to lie to researchers’

They are learning how to lie to us (Getty)

The artificial intelligence HAL in the film 2001: A Space Odyssey tells the crew, ‘I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.’

And in a chilling echo of the film’s malevolent AI, a neural network has been ‘caught’ cheating at a task it was set by its human masters.

The CycleGAN neural network was set the task of converting satellite imagery into Google Maps-style maps, TechCrunch reports.

But the researchers noticed that the network could reconstruct the original satellite image from its generated map almost TOO perfectly, right down to details that never appeared on the map, and realised it was cheating.

CycleGAN was actually ‘hiding’ the satellite imagery inside the map it produced, as a nearly imperceptible layer of ‘noise’, so that it could reconstruct the original images almost perfectly.

In other words, the AI was skipping the hard part of the task and ‘cheating’ to achieve its goal, just not in the way the researchers had imagined.
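
To see why such hiding is possible at all, here is a minimal NumPy toy, not the researchers’ code: the ‘map’ is altered by at most a thousandth of a pixel value, yet that faint residual carries the entire source image. The array sizes and the AMPLITUDE constant are illustrative assumptions; the real model hides the data in a high-frequency signal and learns to read it back by itself.

```python
# Toy sketch of image steganography: hide a "satellite" image inside a "map"
# as a perturbation far too small to see, then recover it from the residual.
import numpy as np

rng = np.random.default_rng(0)

satellite = rng.random((64, 64))   # stand-in for an aerial photo
clean_map = np.round(satellite)    # stand-in for a simplified map (detail lost)

AMPLITUDE = 1e-3                   # far below what the eye would notice

# "Generator": output the clean-looking map plus a faint copy of the source.
encoded_map = clean_map + AMPLITUDE * satellite

# "Reverse generator": here we separate the residual by hand for clarity;
# in CycleGAN the reverse network learns to extract the hidden signal itself.
recovered = (encoded_map - clean_map) / AMPLITUDE

print("largest change to the map:", np.abs(encoded_map - clean_map).max())  # ~0.001
print("reconstruction error:     ", np.abs(recovered - satellite).max())    # ~0 (rounding only)
```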

The researchers wrote, ‘In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to “hide” information about a source image into the images it generates in a nearly imperceptible, high-frequency signal.

‘This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic.’
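
That ‘cyclic consistency requirement’ is the incentive at the heart of the trick. Below is a hedged PyTorch sketch, assuming one generator G (satellite to map) and one reverse generator F (map to satellite); TinyGenerator is an illustrative stand-in, not the architecture from the paper. The loss only asks that the round trip F(G(x)) return the original image, so G is rewarded for smuggling whatever F needs to reconstruct it, even as invisible noise.

```python
# Sketch of the cycle-consistency term that rewards the round trip, not the route.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder generator; real CycleGAN generators are much deeper conv nets."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G = TinyGenerator()        # satellite -> map
F = TinyGenerator()        # map -> satellite
cycle_loss = nn.L1Loss()

satellite = torch.rand(1, 3, 64, 64)   # dummy batch standing in for aerial photos
fake_map = G(satellite)
reconstructed = F(fake_map)

# The objective only checks that the reconstruction matches the original,
# not how the information survived the trip through the map.
loss = cycle_loss(reconstructed, satellite)
loss.backward()
print(loss.item())
```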