History of AI

1. Tracing History and Foundations (1950s):

The roots of artificial intelligence research reach back to the middle of the 20th century. In 1950, Alan Turing proposed the “Turing Test”, intended to assess whether a machine could exhibit human-like intelligence. The term “artificial intelligence” was coined by John McCarthy for the 1956 Dartmouth Conference, where the foundations and direction of the field were discussed.

2. Abstract Theories and Symbolic AI (1960s - 1970s):

In the field’s first decades, researchers relied on symbolic AI, in which machines manipulated explicit symbols to represent and process knowledge. Algorithms were largely written by hand, and systems typically performed well only within a narrow domain of expertise.

3. The Decline of Artificial Intelligence (1980s - 1990s):

In the 1980s, expert systems brought a commercial boom, but their high maintenance costs and inability to scale beyond narrow domains led to disappointment. By the late 1980s and early 1990s, funding for AI research was cut sharply, and interest in the field declined, a period often referred to as the “AI winter”.

4. Statistically Based Approaches and Machine Learning (2000s):

In the late 1990s and early 2000s, as data volumes grew and computing costs fell, machine learning, and eventually deep learning, came to the fore again. Machine learning algorithms could discover patterns in data and learn complex tasks such as image recognition, speech recognition, and language processing.

5. General and Strong Artificial Intelligence (2010s - present):

In recent years, AI research has reached a point where steps toward general and strong artificial intelligence have moved to the center of attention. General AI aims at machines that can serve a broad range of purposes rather than a single specific task. Key elements of this development include machine awareness, the modeling of emotional understanding, and the handling of ethical issues.