British computer scientist Geoffrey Hinton has left Google with a warning about the potential dangers of artificial intelligence, having spent his entire career driving the technology forward. Let’s take a look back at his life to find out why he became known as the “Godfather of AI”. From Sky News
“He is considered one of the most important figures in the history of artificial intelligence – a visionary leader who has helped to shape the future of AI.”
That’s the glowing assessment of British computer scientist Geoffrey Hinton provided by Google’s Bard, the technology giant’s nascent chatbot powered by systems that he helped pioneer.
“It is hard to see how you can prevent the bad actors from using it for bad things,” he told The New York Times, concerned both about the dangers of disinformation, fuelled by convincingly generated photos, videos, and stories, and the transformative impact of AI on the jobs market, potentially making many roles redundant.
Dr Hinton’s worrying outlook comes some five decades after he earned a degree in experimental psychology at the University of Cambridge and a PhD in AI at Edinburgh, followed by postdoctoral work in computer science at other leading universities on both sides of the Atlantic.
Born in Wimbledon in 1947, he was perhaps destined for such a path, given he hailed from a family of scientists including great-great-grandfather George Boole, a mathematician whose invention of Boolean algebra laid the foundations for modern computers; cousin Joan Hinton, a nuclear physicist who worked on the Manhattan Project, which produced the world’s first nuclear weapons during the Second World War; and father Howard Hinton, a respected scholar who became a fellow of the Royal Society, the world’s oldest scientific academy.
“Be an academic or be a failure,” Dr Hinton once recalled his mother having told him as a child – advice he certainly seemed to run with.
The ‘key breakthrough’
Dr Hinton himself was inducted into the Royal Society in 1998. By then, he had co-authored a landmark paper with David Rumelhart and Ronald Williams on the concept of backpropagation – a way of training artificial neural networks hailed as “the missing mathematical piece” needed to supercharge machine learning. It meant that rather than humans having to keep tinkering with a neural network to improve its performance, the network could learn to adjust itself.
This technique is key to the chatbots now used by millions of people every day, each based on a neural network architecture trained on massive amounts of text data to interpret prompts and generate responses.
ChatGPT itself is well aware of how vital backpropagation is to its development, describing it as a “key breakthrough” that “helps ChatGPT adjust its parameters so that its predictions (responses) become more accurate over time”.
Asked how backpropagation helps ChatGPT function, it says: “In essence, backpropagation is a way for ChatGPT to learn from its mistakes and improve its performance. With each iteration of the training process, ChatGPT becomes better at predicting the correct output for a given input.”
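To make that idea concrete, the sketch below shows the pattern backpropagation describes on a deliberately tiny example. It is a minimal, illustrative Python snippet using NumPy – the toy network, the XOR data and every variable name are assumptions chosen purely for illustration, not anything drawn from Dr Hinton’s paper or from ChatGPT itself. The loop does what the quote above describes: the network makes a prediction, measures its error, passes that error backwards through its layers, and nudges its weights so the next prediction is a little better.

```python
# Minimal, illustrative backpropagation sketch (not Dr Hinton's code):
# a tiny two-layer network learning XOR. All names and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem – two inputs, one target output per row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a small 2-4-1 network (sizes are arbitrary).
W1 = rng.normal(0.0, 1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # Forward pass: the network makes its prediction.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Measure the error between prediction and target (mean squared error).
    error = np.mean((output - y) ** 2)

    # Backward pass: use the chain rule to push the error back through
    # each layer, giving a gradient for every weight in the network.
    d_output = 2 * (output - y) / len(X) * output * (1 - output)
    dW2 = hidden.T @ d_output
    db2 = d_output.sum(axis=0, keepdims=True)
    d_hidden = d_output @ W2.T * hidden * (1 - hidden)
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0, keepdims=True)

    # Update: nudge every weight in the direction that reduces the error.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

    if step % 2000 == 0:
        print(f"step {step}: error {error:.4f}")

# After training, the predictions should be close to the targets [0, 1, 1, 0].
print(output.round(2))
```

Run repeatedly, this loop of predict, compare and correct is the “learning from mistakes” ChatGPT describes; modern chatbots apply it across billions of parameters rather than a handful, but the principle is the same.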
From ‘nonsense’ to success
Dr Hinton’s pioneering research didn’t stop there; instead, he would keep “popping up like Forrest Gump” at points in time that would prove crucial to where we are now with AI in 2023, a period of drastic technological advancement he recently compared to “the Industrial Revolution, or electricity… or maybe the wheel”.
A year after the publication of the backpropagation paper in 1986, Dr Hinton started a programme dedicated to machine learning at the University of Toronto. He continued to collaborate with like-minded colleagues and students, fascinated by how computers could be trained to think, see, and understand.
Dr Hinton told CBS News it was work sceptics once dismissed as “nonsense”. But 2012 brought another milestone, as he and two other researchers – including future OpenAI co-founder Ilya Sutskever – won a competition for building a computer vision system that could recognise hundreds of objects in pictures. Eleven years later, OpenAI’s latest version of GPT software boasts the same feature on a scale once impossible to imagine.
Along with graduate students Alex Krizhevsky and Sutskever, Dr Hinton founded DNNresearch to concentrate their joint work on machine learning. The success of their image recognition system, dubbed AlexNet, attracted the interest of search giant Google, which acquired the company in 2013.