
From Theory to Reality: How AI Transformed the World

Updated: Jan 28




Artificial Intelligence (AI) is a broad field of computer science that aims to create machines or systems that can perform tasks that typically require human intelligence.


Early Concepts and Foundations (1940s–1950s)

The idea of “thinking machines” had long been a subject of speculation, and that speculation accelerated with the technological developments of WWII. At the beginning of the 1950s, the theoretical underpinnings of AI began to form. Pioneers like John von Neumann and Alan Turing moved computing from decimal logic to binary logic and formalised the architecture of the contemporary computer.


Turing raised the question of whether machines could think in his controversial paper Computing Machinery and Intelligence (1950) and proposed the Turing Test as an attempt to measure machine intelligence against human intelligence. The term “Turing test” is now used more generally to refer to behavioural tests for the presence of mind, thought, or intelligence in an entity, an idea prefigured in Descartes’ Discourse on the Method (1637). Applying this concept to machines was the starting point for the idea of machines imitating humans. Initial research centred on basic language-processing algorithms and machine translation, marking the beginning of Natural Language Processing.


The Concept of Artificial Intelligence and Early Enthusiasm (1956)

In the summer of 1956, at the Dartmouth Conference, John McCarthy, then at Dartmouth College, coined the term “Artificial Intelligence.” Early AI research was characterised by optimism and significant investment, focusing on symbolic methods and problem-solving. Enthusiasm for the topic fell back, however, because of the technological limitations of the era’s computers; in particular, their lack of memory set early predictions for the development of AI back by some 30 years.


The AI Winters and Introspection (Late 1970s, Late 1980s to Early 1990s)

AI experienced periods of stagnation and reduced funding, known as the “AI winters.” These were due to inflated expectations, technological limitations, and challenges in scaling AI methods. Towards the end of the 1970s, with the advent of the first microprocessors, AI research took off again, entering a ‘golden age’.


In 1972, Stanford University developed MYCIN, an expert system built on an inference engine and specialised in diagnosing blood infections and recommending prescription drugs. This rush of research and development stagnated again at the end of the 1980s, as developing and maintaining such systems became far too expensive and time-consuming. By 1990 the term Artificial Intelligence had become ‘taboo’ in academia and was replaced with “advanced computing.”


The Rise of Machine Learning and Big Data (Late 1990s–2000s)

In May 1997, IBM’s expert system Deep Blue defeated world chess champion Garry Kasparov in a six-game match. The victory gave hope to the furthering of AI research, but was still not enough to secure financing for this form of AI.


A resurgence in AI was then fuelled by the advent of Google, sudden mass access to the internet, the explosion of digital data (big data), and advancements in algorithms. Machine learning began to show remarkable capabilities. In 2003, Geoffrey Hinton of the University of Toronto, Yoshua Bengio of the University of Montreal, and Yann LeCun of New York University came together to bring neural networks up to date; experiments conducted simultaneously at Microsoft, Google, and IBM showed great strides and potential in deep learning algorithms.


Breakthroughs and Mainstream Adoption (2010s)

This era was marked by significant advancements in deep learning and neural networks, developments largely fuelled by the innovative use of graphics processing units (GPUs). These processors drastically improved the calculation speed and cost-efficiency of learning algorithms, leading to several noteworthy accomplishments. These accomplishments also underscored a paradigm shift away from hand-built expert systems and towards leveraging vast datasets for correlation and classification, enabling computers to uncover insights independently.
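
To make that shift concrete, here is a minimal, self-contained sketch of the data-driven approach, written in Python with NumPy purely for illustration (the dataset, model, and numbers are invented for this example and are not drawn from any system mentioned above). Instead of encoding expert rules by hand, a tiny classifier learns its decision boundary directly from labelled examples.

```python
# Illustrative only: a tiny logistic-regression classifier trained from data,
# standing in for the "learn correlations from vast datasets" paradigm.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labelled data: two clusters of 2-D points, labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 0.7, size=(200, 2)),
               rng.normal(+1.0, 0.7, size=(200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b, lr = np.zeros(2), 0.0, 0.1   # weights, bias, learning rate

for _ in range(500):               # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print("training accuracy:", np.mean(pred == y))
```

No expert wrote a rule such as “if both coordinates are positive, predict class 1”; the boundary emerges from the data, which is the essence of the machine-learning approach that GPUs made fast and cheap at scale.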


2011: IBM’s Watson gained fame by winning Jeopardy!, highlighting the potential of AI to understand and process natural language at a level competitive with human intelligence.


2012: Google X made headlines when its neural network learned to recognize cats in YouTube videos, demonstrating the capability of neural networks to identify and categorize images with high accuracy.


2016: Google DeepMind’s AlphaGo defeated world champion Lee Sedol at Go, a game noted for its complexity and the vast number of possible positions. This victory underscored the advanced strategic thinking and learning capabilities of AI systems.


2017:

• Sophia, a humanoid robot developed by Hanson Robotics, became the first robot to be granted citizenship by a country (Saudi Arabia) and the first non-human to receive a United Nations title, highlighting the growing societal and ethical considerations surrounding AI.

• Google researchers introduced the Transformer neural network architecture, which revolutionised how models process text, now underpins Large Language Models (LLMs), and facilitated advances in natural language understanding (a simplified sketch of its core attention step follows below).
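
For readers curious about what made the Transformer different, the sketch below shows its central operation, scaled dot-product attention, in plain NumPy. This is a deliberately stripped-down illustration of the mechanism described in the 2017 paper “Attention Is All You Need”; the toy embeddings and dimensions are invented for the example and bear no relation to any production model.

```python
# Simplified self-attention: each position in a sequence computes a weighted
# mix of every position's "value" vector, with weights given by a softmax
# over scaled dot products of "query" and "key" vectors.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, model_width)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # attention-weighted output

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                     # 4 toy token embeddings, width 8
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)                                  # -> (4, 8)
```

Because every position can attend to every other position in a single step, Transformers parallelise well on GPUs, which is a large part of why they displaced earlier recurrent architectures for language tasks.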


2018:

• OpenAI released GPT-1, equipped with 117 million model parameters, pushing the boundaries of language models in generating coherent and contextually relevant text.

• IBM, Airbus, and the German Aerospace Centre (DLR) developed Cimon, an AI-powered space robot designed to assist astronauts, showcasing AI’s utility in space exploration and support.


2019:

• Microsoft launched the Turing Natural Language Generation model, with 17 billion model parameters, further advancing the capabilities of AI in generating human-like text.

• A collaboration between Google AI and Langone Medical Centre resulted in a deep learning algorithm that outperformed radiologists in detecting lung cancer, illustrating AI’s potential to revolutionise medical diagnostics.


Current Trends (2020s)

With extensive research and experimentation in deep learning and significant developments in generative AI, AI is becoming an integral part of many industries. Advances in Large Language Models (LLMs) and Natural Language Processing (NLP), autonomous systems, and more personalised AI are leading to wider everyday use of AI.


2020:

• The University of Oxford developed Curial, an AI test for rapid COVID-19 detection in emergency rooms.

• OpenAI released GPT-3, with 175 billion model parameters for human-like text generation, marking a significant advancement in NLP.


2021: OpenAI introduced DALL-E, a text-to-image generator.


2022: OpenAI launched ChatGPT, offering a chat-based interface to GPT-3.5. Within five days the application had acquired over 1 million users.


2023: OpenAI introduced GPT-4, a multimodal LLM for text and image prompts.


Ethical and societal implications of AI, such as bias, privacy, and job displacement, remain key topics of discussion as experts in the field strive towards developing artificial general intelligence.
