What is Artificial Intelligence: Types, History, and Future

Technological developments in the era of the fourth industrial revolution are advancing rapidly and are widely used in everyday life. This technology can make many human activities more effective, efficient, and optimal.

Artificial Intelligence is one of the terms you hear most often in this era, in developed and developing countries alike.

Of course, various countries are competing to develop more advanced devices that can operate autonomously. Against this backdrop, Artificial Intelligence became a new breakthrough in IT for building products and software that can solve problems more intelligently, without waiting for human commands or instructions.

In this article, we will discuss artificial intelligence in more depth, from its definition, history, and types to how it works and what benefits it brings.

In addition, we will describe several examples of how this technology is implemented, to help you get to know it more closely.

A Brief History of AI

Artificial Intelligence (AI) has a long history dating back to the 1950s, when researchers first began exploring the concept of creating machines that could think and learn like humans.

Early AI research focused on developing “expert systems,” which were designed to mimic the decision-making abilities of human experts in specific fields.

In the 1980s and 1990s, AI experienced a resurgence of interest, driven by advances in computer technology and the growing availability of data.

Today, AI is used in a wide range of applications, from self-driving cars to virtual personal assistants.

The development process of AI (1943 – 1952)

1. 1943

The earliest work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943, when they proposed a mathematical model of the artificial neuron.
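The McCulloch-Pitts model can be sketched as a unit that fires when the weighted sum of its inputs reaches a fixed threshold. A minimal illustration (the weights and thresholds here are arbitrary, chosen only to realize a logic gate):

```python
# A McCulloch-Pitts neuron: fire (output 1) when the weighted
# sum of the inputs reaches a fixed threshold, else stay silent (0).
def mcp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND:
print(mcp_neuron([1, 1], [1, 1], 2))  # 1
print(mcp_neuron([1, 0], [1, 1], 2))  # 0
```

Lowering the threshold to 1 turns the same unit into a logical OR, which is why McCulloch and Pitts could argue that networks of such neurons can compute any logical function.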

2. 1949

Donald Hebb demonstrated an updating rule for modifying the strength of the connections between neurons. His rule is now known as Hebbian learning.
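Hebb's rule is often summarized as "neurons that fire together wire together": a connection strengthens in proportion to the joint activity of the neurons on both sides. A minimal sketch of the update (the learning rate and activity values are illustrative assumptions, not from the source):

```python
# Hebbian learning: each weight grows in proportion to the product of
# its input activity x_i and the output activity y.
def hebbian_update(weights, x, y, eta=0.1):
    # delta_w_i = eta * x_i * y
    return [w + eta * xi * y for w, xi in zip(weights, x)]

w = [0.0, 0.0]
for _ in range(3):  # input 0 is repeatedly active together with the output
    w = hebbian_update(w, x=[1.0, 0.0], y=1.0)
print(w)  # ~[0.3, 0.0] -- only the co-active connection strengthened
```

Note that only the connection whose input was active alongside the output grows; the idle connection stays at zero, which is the essence of Hebb's principle.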

3. 1950

Alan Turing was a British mathematician who pioneered machine intelligence research. In 1950, he published “Computing Machinery and Intelligence,” in which he proposed a test to check whether a machine can exhibit intelligent behavior on par with a human. It is now known as the Turing Test.


Birth of AI (1952 – 1956)

1. 1955

Allen Newell and Herbert A. Simon created “The first Artificial Intelligence program,” later named the “Logic Theorist.” This program proved 38 of the first 52 theorems in Principia Mathematica and found new, more elegant proofs for some of them.

2. 1956

American computer scientist John McCarthy coined the term “Artificial Intelligence” at the Dartmouth conference.

For the first time, AI was treated as an academic field. Around that time, high-level programming languages such as FORTRAN, LISP, and COBOL were being invented, and enthusiasm about AI was very high.

The golden era of AI enthusiasm (1956 – 1974)

1. 1966

Scientists emphasized developing algorithms that could solve mathematical problems. In 1966, Joseph Weizenbaum created the first chatbot, named ELIZA.

2. 1972

The first intelligent humanoid robot, named WABOT-1, was built in Japan.

First AI Winter phase (1974 – 1980)

The period from roughly 1974 to 1980 was the first AI Winter: a time when computer scientists faced a severe shortage of government funding for AI research. During an AI Winter, public interest in AI also wanes.

The boom era of AI (1980 – 1987)

After the first AI Winter, AI returned with “Expert Systems,” programs designed to mimic the decision-making ability of a human expert in a specific domain.

In 1980, the American Association for Artificial Intelligence held its first national conference, at Stanford University.

Second AI Winter phase (1987 – 1993)

The duration between 1987 and 1993 was the second phase of AI Winter.

Investors and governments again stopped spending on AI research because the costs were high and the results fell short of expectations, even though Expert Systems such as XCON had initially been very cost-effective.

AI rise era (1993 – 2011)

1. 1997

In 1997, IBM’s Deep Blue beat Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.

2. 2002

For the first time, AI was integrated into a consumer vacuum cleaner, the Roomba.

3. 2006

AI entered the business world in 2006, when companies such as Facebook, Twitter, and Netflix began using it.

Deep Learning, Big Data, and Artificial General Intelligence (2011 – Present)

1. 2011

IBM’s Watson won the quiz show Jeopardy!, in which it had to answer complex questions and puzzles. Watson proved that it could understand natural language and solve complex questions quickly.

2. 2012

Google launched the Android feature “Google Now,” which is able to provide predictive information to users.

3. 2014

The chatbot “Eugene Goostman” was claimed to have passed the Turing Test at a well-known competition.

4. 2018

IBM’s “Project Debater” debated complex topics with two champion debaters and performed very well. Google also demonstrated one of its AI programs, “Duplex,” a virtual assistant that phoned a salon to book an appointment; the person taking the call did not realize she was talking to an AI.

AI today

Today, the field of AI has advanced significantly and continues to evolve rapidly. AI is being used in a wide range of industries, such as healthcare, finance, transportation, and manufacturing.

Advancements in machine learning and deep learning have led to the development of powerful AI systems that can perform tasks such as image and speech recognition, natural language processing, and decision-making.

These technologies are being used to improve efficiency, accuracy, and decision-making in various industries.

Additionally, there is an increasing focus on developing AI systems that can work in collaboration with humans to augment human capabilities and improve overall productivity.

Types of AI

After covering the definition and history of artificial intelligence, we will now discuss its types.

1. Neural AI

The first type was prevalent among computer scientists in the 1980s. In Neural AI, knowledge is represented not as symbols but as artificial neurons, loosely modeled on a reconstructed brain.

The collected knowledge is broken down into smaller parts and then linked into groups. This is known as the bottom-up approach, because it works from the bottom up: the nervous system must be trained to accumulate experience and knowledge.

2. Neural Networks

The second type is the Neural Network: a system organized into layers that are connected to each other and run in simulation. The input layer sits at the top and serves the same function as a sensor.

Below it, two or more further layers are arranged hierarchically in a larger design. Each layer passes information along its connections and classifies it.
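The layered flow described above can be sketched as a tiny feed-forward network: a "sensor" input layer feeds a hidden layer, which feeds a single output unit. The weights and inputs here are arbitrary illustrative values, not a trained model:

```python
import math

# One layer: each output unit applies a sigmoid to the weighted
# sum of all its inputs plus a bias.
def layer(inputs, weights, biases):
    return [
        1 / (1 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                   # "sensor" input layer
hidden = layer(x, [[0.4, 0.2], [-0.3, 0.9]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)  # a single value in (0, 1), usable as a classification score
```

In a real network the weights would be learned from data (for example by backpropagation); the point here is only the hierarchical structure, with each layer transforming and passing information to the next.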

3. Symbol-Manipulating AI

The last type is an AI system that works with abstract symbols. This type requires many experiments and trials; the point of the experimental stage is to test the reconstructed human-intelligence system at a more structured, logical level.

The information obtained is then encoded in symbols that developers can read: the connections formed are abstract, and the conclusions drawn are logical.
