Google Brain, founded in 2011 by Jeff Dean, Greg Corrado, and Andrew Ng, is an artificial intelligence project built around open-ended machine learning that has captured headlines around the world. In 2012, only a year into the project, its network taught itself to recognize images of cats after being shown 10 million unlabeled images – a result that grabbed attention and earned a place in The New York Times. Google Brain thus combines open-ended machine learning with the vast scale of Google’s computing resources.
Google Brain, as the name suggests, is meant to replicate, as closely as possible, the functioning of a human brain – and the team behind it has been largely successful. In October 2016, the team ran an experiment in secure communication between three neural networks: Alice, Bob, and Eve. The goal was for Alice and Bob to communicate effectively – Alice encrypting her messages so that Bob could decrypt them correctly while Eve, an eavesdropper, could not read them. The study showed that after every round in which Alice and Bob failed to communicate properly, the next round brought a significant improvement in the two networks’ cryptographic abilities.
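The adversarial setup can be summarized by its training objectives: Bob is penalized for reconstruction errors, while Alice and Bob are jointly penalized both for Bob’s errors and for Eve doing any better than random guessing. The sketch below is a toy illustration of that objective structure, not the actual networks or training code from the experiment; the message length and function names here are assumptions chosen for clarity.

```python
N = 16  # toy message length in bits (an illustrative choice, not the experiment's)

def recon_error(p_true, p_guess):
    """Number of bit positions where a reconstruction differs from the plaintext."""
    return sum(a != b for a, b in zip(p_true, p_guess))

def alice_bob_loss(p_true, p_bob, p_eve, n_bits=N):
    """Joint objective for Alice and Bob: Bob should reconstruct the plaintext
    exactly, while Eve should do no better than chance (error near n_bits / 2).
    The penalty term is zero precisely when Eve is at chance level."""
    bob_err = recon_error(p_true, p_bob)
    eve_err = recon_error(p_true, p_eve)
    eve_penalty = ((n_bits / 2 - eve_err) ** 2) / (n_bits / 2) ** 2
    return bob_err + eve_penalty

plaintext = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]

# Ideal outcome: Bob recovers every bit...
bob_out = list(plaintext)
# ...while Eve is right only about half the time (here, exactly N/2 bits wrong).
eve_out = [b ^ 1 for b in plaintext[: N // 2]] + list(plaintext[N // 2 :])

print(alice_bob_loss(plaintext, bob_out, eve_out))  # -> 0.0 (perfect outcome)
```

During training, Alice and Bob descend on this joint loss while Eve separately minimizes her own reconstruction error – the same push-and-pull that drove the round-over-round improvement described above.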
One might think that cryptography is largely absent from everyday human communication, but nothing could be further from the truth. We communicate not only through words but also through gestures – waves, eye rolls, and sighs. Had it not been for the long years we spend in society undergoing socialization, we would never have learned to decode these signals – glances, hand taps, body positioning. Such gestures reach us in an encrypted form, predicated on the decoder’s ability to interpret them. Though this may seem basic to a human, there is a great degree of nuance involved in teaching a machine the same thing.
Google Brain also contributed to Google Translate. In September 2016, Google Neural Machine Translation (GNMT) was launched. The team then pioneered a multilingual GNMT system, which extended the original by enabling translation between multiple languages and thereby strengthened Google Translate as a whole.
It is little wonder, then, that Google Brain has received extensive coverage in Wired, The New York Times, MIT Technology Review, and other leading publications. It is a potentially integral step in the development of artificial intelligence, because at the heart of Google Brain lies a question central to AI: how, and how well, can the gap between human intelligence and artificial, machine intelligence be bridged? The answers the project offers seem very promising indeed.