Google Brain, started in 2011 by Jeff Dean, Greg Corrado, and Andrew Ng, is an artificial-intelligence research project that combines open-ended machine learning with the vast computing resources of Google. It captured headlines worldwide in 2012, only a year after its inception, when its network taught itself to recognize images of cats after being shown 10 million images, a result striking enough to earn a place in The New York Times.
Google Brain, as the name suggests, is meant to replicate, as closely as possible, the functioning of the human brain, and the team behind it has been largely successful in doing so. In October 2016, the team ran a basic simulation of communication between three AIs: Alice, Bob, and Eve. The goal was for Alice and Bob to communicate securely, with Alice encrypting her messages and Bob decrypting them correctly, while preventing Eve from eavesdropping. The study showed that after every round in which the pair failed to communicate properly, the next round brought a significant improvement in the cryptographic abilities of the two AIs.
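The setup the two networks converged on resembles classic symmetric-key cryptography: Alice and Bob share a secret that Eve lacks. The experiment itself used neural networks trained adversarially, but the underlying arrangement can be sketched with a simple one-time-pad-style XOR cipher (the code below is an illustration of that setup, not the networks from the study):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte;
    # applying the same key twice recovers the original message.
    return bytes(d ^ k for d, k in zip(data, key))

# Alice and Bob share a random key; Eve never sees it.
plaintext = b"meet at noon"
shared_key = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(plaintext, shared_key)   # Alice encrypts
decrypted = xor_bytes(ciphertext, shared_key)   # Bob decrypts

assert decrypted == plaintext
```

Eve, intercepting only the ciphertext, has no better strategy than guessing, because without the key every plaintext of the same length is equally likely. In the Google Brain experiment, Alice and Bob were not given a cipher like this; they gradually invented their own encryption scheme as Eve's eavesdropping penalized insecure rounds.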
Even though a normal person might think that cryptography is largely absent from everyday human communication, nothing could be further from the truth. We communicate not only through words but also through gestures: waves, eye rolls, and sighs. Had it not been for the long years we spend in society undergoing socialization, we would never have learned how to decode these signals. Such gestures reach us in an encrypted form, predicated on the decoder's ability to interpret them. This may seem basic to a human, but teaching a machine to do the same involves a great degree of nuance.
Google Brain also contributed to Google Translate. In September 2016, Google Neural Machine Translation (GNMT) was launched. The team then pioneered a multilingual GNMT system, which extended the original by enabling a single model to translate between multiple language pairs, bolstering Google Translate as a whole.
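One detail of the published multilingual GNMT work is how a single model is told which language to produce: an artificial token naming the target language is prepended to the source sentence, and the shared model learns to condition its output on it. A minimal sketch of that input convention (the function name is illustrative, not part of any Google API):

```python
def prepare_multilingual_input(sentence: str, target_lang: str) -> str:
    # Prepend an artificial target-language token, e.g. "<2es>" for Spanish,
    # so one shared translation model knows which language to emit.
    return f"<2{target_lang}> {sentence}"

print(prepare_multilingual_input("How are you?", "es"))
# <2es> How are you?
```

Because every language pair shares the same parameters, this scheme also enabled the reported "zero-shot" behavior: translating between pairs the model was never directly trained on.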
It takes little imagination to see why Google Brain has received extensive coverage in Wired, The New York Times, MIT Technology Review, and other leading publications. It is a potentially integral step in the development of artificial intelligence, because at the heart of Google Brain lies a question central to AI: how, and how well, can the gap between human intelligence and machine intelligence be bridged? The answers the project offers seem very promising indeed.