The Journey Towards True Artificial Intelligence

In 1958, a young psychologist named Frank Rosenblatt unveiled the Perceptron, a program hailed as the first machine capable of having an original idea. Running on a five-ton IBM mainframe, the Perceptron was a simple neural network that could learn to distinguish between different punch cards. Despite its limited capabilities, Rosenblatt believed it marked the dawn of a new era. Yet nearly 70 years later, the human brain still has no serious rival. According to Professor Mark Girolami, chief scientist at the Alan Turing Institute, what we have today are “artificial parrots”: impressive machines, but not on par with human intelligence.
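To make the idea concrete, here is a minimal sketch of the perceptron learning rule in plain Python. It is an illustrative toy, not Rosenblatt's original punch-card setup: a single neuron learns the logical AND function by nudging its weights toward the correct answer after each mistake.

```python
# Minimal perceptron sketch (illustrative, not Rosenblatt's original system):
# a single neuron with a step activation learns the logical AND function.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            # step activation: fire if the weighted sum exceeds zero
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Rosenblatt's rule: adjust weights in the direction of the error
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# truth table for AND: only (1, 1) should produce 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # expect [0, 0, 0, 1]
```

The same rule works for any linearly separable problem; its famous limitation, exposed by Minsky and Papert, is that a single-layer perceptron cannot learn non-separable functions such as XOR.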

The history of artificial intelligence (AI) has many fathers. Alan Turing, the Bletchley Park codebreaker and founder of computer science, is one of them. In a 1948 report, Turing explored how machines could mimic intelligent behaviour, including the idea of a “thinking machine”. He proposed the Imitation Game, later known as the Turing test, as a way to determine whether a machine can pass as human in written exchanges. Turing also contributed to AI through his work on Bayesian statistics, which laid the groundwork for generative AI programs.

The term “artificial intelligence” didn’t appear until 1955, when computer scientist John McCarthy used it in a proposal for a summer school. McCarthy was optimistic about how quickly progress could be made, but the initial efforts fell short. Researchers nonetheless continued to develop programs and sensors that let computers perceive and respond to their environments. The field boomed in the 1970s, but funding cuts, and a narrowing focus on coding human expertise directly into computers, led to a decline in progress.

The breakthrough came in the 1980s with the development of multi-layered neural networks and the introduction of “backpropagation” as a way to train them. Thanks to more powerful processors and vast amounts of data, AI made significant strides in the 2000s. DeepMind, a company founded in 2010, achieved notable successes, including a program that learned to play Atari games and AlphaGo, which beat world Go champion Lee Sedol in 2016. The most recent breakthrough has come in the form of generative AI, which uses transformers to process and generate text. OpenAI’s ChatGPT, released in 2022, is a prime example of the power of generative AI.
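To give a flavour of what backpropagation does, here is a toy sketch (an illustration of the technique, not any particular historical system): a chain of two sigmoid neurons whose weight gradients are computed by applying the chain rule backwards through the network, then checked against a numerical estimate.

```python
# Backpropagation in miniature: two stacked sigmoid neurons.
# The "backward pass" applies the chain rule layer by layer to get
# the gradient of the loss with respect to each weight.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)   # hidden neuron
    y = sigmoid(w2 * h)   # output neuron
    return h, y

def loss(y, target):
    return 0.5 * (y - target) ** 2

def backprop(x, target, w1, w2):
    h, y = forward(x, w1, w2)
    # backward pass: chain rule, one layer at a time
    dL_dy = y - target               # derivative of the loss w.r.t. the output
    dy_dz2 = y * (1 - y)             # sigmoid derivative at the output neuron
    dL_dw2 = dL_dy * dy_dz2 * h      # gradient for the output weight
    dL_dh = dL_dy * dy_dz2 * w2      # error propagated back to the hidden neuron
    dh_dz1 = h * (1 - h)             # sigmoid derivative at the hidden neuron
    dL_dw1 = dL_dh * dh_dz1 * x      # gradient for the hidden weight
    return dL_dw1, dL_dw2

# sanity check: the analytic gradient should match a finite-difference estimate
x, t, w1, w2 = 0.5, 1.0, 0.8, -0.4
g1, g2 = backprop(x, t, w1, w2)
eps = 1e-6
num_g1 = (loss(forward(x, w1 + eps, w2)[1], t)
          - loss(forward(x, w1 - eps, w2)[1], t)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-8)  # analytic and numerical gradients agree
```

Repeating this gradient computation across many layers and millions of weights, and taking a small step against the gradient each time, is essentially how every modern network, transformers included, is trained.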

However, these advancements come at a cost. Training models like ChatGPT requires massive amounts of computing power and generates significant carbon emissions. While the possibilities for generative AI are immense, it is important to reserve AI for what is truly useful rather than waste resources. As Dr. Jonnie Penn of the University of Cambridge puts it, “Instead of over-engineering our society to run on AI all the time, every day, let’s use it for what is useful, and not waste our time where it’s not.”

In conclusion, the journey towards true artificial intelligence has been a long one. Despite significant advances, we are still far from building a machine that can rival the human brain. Even so, the progress made in AI has given us powerful tools that can be used for the betterment of humanity. As we move forward, it is important to use AI responsibly and to weigh the environmental impact of these advancements. With the right approach, we can continue to push the boundaries of what AI can achieve.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.