Google's AIs learn how to encrypt their own messages
Neural networks could create encryption that becomes stronger as you hack it
A Google AI system learned how to devise its own encryption scheme, and to strengthen it after being attacked.
Google's deep learning unit, Brain, built two neural networks, 'Alice' and 'Bob', to test whether they could create their own encryption algorithm and communicate without a third, adversarial network, 'Eve', being able to read their messages.
Alice sent Bob encrypted messages consisting of 16 zeroes and ones, which Bob decrypted using a secret key the pair shared; Eve intercepted the traffic and tried to decrypt it without the key, according to New Scientist.
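The setup mirrors a classical shared-key scheme: Alice and Bob hold a secret that Eve never sees. The cipher the networks actually learned was not a simple XOR, but a one-time-pad sketch in Python illustrates why the shared key gives Bob an advantage Eve lacks (the names and values here are illustrative, not the networks' learned scheme):

```python
import secrets

N = 16  # message length in bits, matching the experiment

def random_bits(n):
    """Generate n random bits (0 or 1)."""
    return [secrets.randbits(1) for _ in range(n)]

# Alice and Bob share a secret key; Eve only ever sees the ciphertext.
key = random_bits(N)
plaintext = random_bits(N)

# Alice encrypts: XOR each plaintext bit with the corresponding key bit.
ciphertext = [p ^ k for p, k in zip(plaintext, key)]

# Bob decrypts with the shared key and recovers the message exactly.
decrypted = [c ^ k for c, k in zip(ciphertext, key)]
assert decrypted == plaintext

# Eve, lacking the key, can do no better than guess each bit at random.
```

The difference in the trained system is that Alice and Bob were never told to use XOR or any other cipher; they had to discover a workable transformation on their own.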
Alice was able to learn from the rounds in which Eve successfully decrypted her messages, altering the encryption so that Eve could not decode them the next time.
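This learn-from-attack loop can be pictured as competing loss functions: Alice and Bob are rewarded when Bob recovers the message and penalised when Eve does better than random guessing, while Eve is trained separately to minimise her own error. A minimal sketch, using illustrative error values rather than real network outputs, and a loss shape modelled on the kind of adversarial objective described in the underlying research rather than an exact reproduction:

```python
N = 16  # bits per message, as in the experiment

# Hypothetical reconstruction errors (bits wrong per message) after one
# training round; in the real system these come from the networks' outputs.
bob_error = 0.4   # Bob, holding the shared key, recovers almost every bit
eve_error = 7.6   # Eve hovers near random guessing (N / 2 = 8 bits wrong)

# Alice and Bob jointly minimise Bob's error while pushing Eve toward
# chance-level guessing: their loss grows as Eve's error drops below N / 2.
alice_bob_loss = bob_error + ((N / 2 - eve_error) ** 2) / (N / 2) ** 2

# Eve simply minimises her own reconstruction error.
eve_loss = eve_error
```

When Eve starts decrypting successfully, her error falls, Alice and Bob's loss rises, and gradient updates push them toward a new encoding Eve has not yet cracked, which is the "gets stronger when attacked" behaviour the article describes.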
By the end of the experiment, the neural networks had developed an encryption scheme strong enough that even Eve could not break it. The scheme the two networks devised is so complex that even the researchers struggle to understand how it works.
These findings could prove important in future: neural networks could help AI systems create encryption that learns and grows stronger as hackers try to break it, making it well suited to cybersecurity.