The history of AI

Artificial intelligence is based on the assumption that human thought, as a process, can somehow be recreated with computers. The idea has long-standing roots: philosophers in China, India and Greece all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed further by Aristotle, Euclid and European scholastic philosophers such as William of Ockham and Duns Scotus.

In the 17th century Gottfried Leibniz, Thomas Hobbes and René Descartes all explored the possibility that thought could be made as systematic as algebra or geometry. As Hobbes famously put it, “reason is nothing but reckoning”. Leibniz envisioned a universal language of reasoning that would make argumentation more like calculation.

In the 20th century the study of mathematical logic provided the breakthrough that made artificial intelligence seem both possible and plausible. In 1913 Bertrand Russell and Alfred North Whitehead completed their masterpiece, Principia Mathematica, which laid the foundation for the question “can all of mathematical reasoning be formalized?”. Many famous scholars set out to answer this question, and their answers led to two surprising and important conclusions: they proved that there were limits to what mathematical logic could accomplish, and (more importantly for AI) they proved that, within those limits, any form of mathematical reasoning could be recreated with computers.

The birth of artificial intelligence, 1952-1956

In 1956, the field of artificial intelligence was founded as an academic discipline after a handful of scientists from different fields started discussing the possibility of creating an artificial brain. Artificial intelligence became a field of its own right around the time when research had first shown that the human brain is an electrical network of neurons that signal in all-or-nothing impulses. Throughout the 1930s, 1940s and 1950s, several scientists worked on different projects, making discoveries that suggested an artificial brain might actually be possible to build.

In 1950 Alan Turing published his famous paper “Computing Machinery and Intelligence”, in which he speculated that it would be possible to create machines that think. Since “thinking”, according to Turing, was difficult to define, he devised his famous Turing test: if a machine could carry on a conversation over a teleprinter that was indistinguishable from a conversation with a human being, it was reasonable to say that the machine was “thinking”. In the years that followed, several machines were built to play games: one was made to play checkers, another to play chess, and they even succeeded in challenging a respectable amateur. The Turing test and Turing’s findings would become a cornerstone of the philosophy of artificial intelligence, and we will look at them a little more closely in the following chapter.

In 1956, which many consider the true birth year of artificial intelligence, the Dartmouth Conference was organized. It was attended by the foremost scholars in the field, and it was during this conference that AI gained its name, its mission and even its first successes.

The fruitful years, 1956-1974

The Dartmouth Conference took work on AI to a whole new level, and many successful programs were developed in the years that followed. One of the first paradigms was “reasoning as search”. Programs built on this idea would work towards a goal step by step, backtracking whenever they reached a dead end, as in the sketch below. The problem was that the number of possible paths was often astronomical, so researchers used heuristic rules to limit the search to the paths where a solution was most likely to be found.
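To make the idea concrete, here is a minimal sketch of reasoning as search in Python. The state graph, the state names and the solve helper are made up purely for illustration; this is not a reconstruction of any specific historical program.

```python
# A minimal sketch of "reasoning as search": depth-first search that works
# towards a goal step by step and backtracks at dead ends.
# The state graph below is hypothetical and exists only for illustration.

def solve(state, goal, moves, path=None):
    """Return a path from `state` to `goal`, or None if every branch fails."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    for next_state in moves.get(state, []):
        if next_state in path:                 # avoid looping back
            continue
        result = solve(next_state, goal, moves, path + [next_state])
        if result is not None:                 # this branch reached the goal
            return result
    return None                                # dead end: backtrack

moves = {
    "start": ["a", "b"],
    "a": ["dead end"],
    "b": ["c"],
    "c": ["goal"],
}

print(solve("start", "goal", moves))           # ['start', 'b', 'c', 'goal']
```

Real programs of the era added heuristics on top of this skeleton to prune the astronomical number of possible paths.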

An important goal in developing artificial intelligence was to get computers to communicate in natural language. Several successful projects were carried out, and this was when the concept of semantic nets was first used. Semantic nets are net-like structures in which nodes stand for words or concepts and links stand for the relations between them, which makes it possible to represent the meaning of sentences; a toy example follows below.
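The following toy example shows the basic shape of a semantic net as a set of labelled word-to-word links. The facts and the related helper are ours, for illustration only.

```python
# A toy semantic net: nodes are words/concepts, labelled links are relations.
# The facts and the `related` helper are illustrative only.

semantic_net = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("canary", "has_color", "yellow"),
]

def related(node, relation):
    """Return every node linked to `node` by `relation`."""
    return [obj for subj, rel, obj in semantic_net
            if subj == node and rel == relation]

print(related("canary", "is_a"))   # ['bird']
print(related("bird", "can"))      # ['fly']
```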

After the breakthroughs of the 1960s and early 1970s came a slump of roughly ten years in which development slowed down and no major steps forward were taken. This was due to problems such as limited computing power, a lack of funding and the absence of the common-sense knowledge and reasoning needed to teach a computer all the possible outcomes of a situation.

The 80s boom

The 1980s was a big decade for AI, with the most prominent development being the rise of artificial neural networks. Artificial neural networks, which mimic the neural networks of the animal brain, would pave the way for deep learning and reinforcement learning, both of which are looked at later in this book.

Intelligent agents

Fast forward almost 20 years and we arrive at the next major developments: Moore’s law and intelligent agents. Computing power had been a bottleneck for quite some time, but the problem was slowly being overcome, in line with Moore’s law, the observation that the speed and memory capacity of computers double roughly every two years as metal-oxide-semiconductor transistor counts double. It was during this period, in 1997, that the world-famous Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov.
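As a back-of-the-envelope illustration of that doubling (the function name and the numbers below are ours, not from the source), capacity grows by a factor of two for every two years that pass:

```python
# Back-of-the-envelope illustration of Moore's law: one doubling every two
# years compounds to a factor of 2 ** (years / 2).

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Relative capacity after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))   # 32.0   -> roughly 32x in a decade
print(moores_law_factor(20))   # 1024.0 -> roughly 1000x in two decades
```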

During the 1990s a new paradigm known as “intelligent agents” became widely accepted. The intelligent agent did not reach its modern form until researchers brought in concepts from decision theory and economics. When the economist’s definition of a “rational agent” was married to computer science’s notion of an object or module, the intelligent agent paradigm was complete. An intelligent agent is, in essence, a system that perceives its environment and takes actions that maximize its chances of success.
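A minimal sketch of that perceive-decide-act loop might look like the following. The pricing-flavoured environment, candidate actions and scoring function are all invented for illustration; nothing here comes from the source.

```python
# A minimal sketch of the intelligent-agent loop: perceive the environment,
# pick the action with the best expected outcome, act, repeat.
# The environment, candidate actions and scoring function are all made up.

import random

def perceive(environment):
    """Observe the current state of the toy environment."""
    return environment["demand"]

def expected_reward(demand, price_change):
    # Toy score: higher demand tolerates higher prices; big moves are penalised.
    return demand * (1 + price_change) - abs(price_change) * 50

def choose_action(demand, actions):
    """Pick the action with the highest expected reward in this state."""
    return max(actions, key=lambda a: expected_reward(demand, a))

environment = {"demand": 100}
actions = [-0.10, 0.0, 0.05, 0.10]       # candidate price changes

for step in range(3):
    demand = perceive(environment)
    action = choose_action(demand, actions)
    print(f"step {step}: demand={demand}, chosen price change={action:+.0%}")
    # Acting changes the environment; here demand simply drifts at random.
    environment["demand"] = max(0, demand + random.randint(-10, 10))
```

A real agent would replace the hand-written scoring function with a learned model of its environment.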

Deep learning

From 2011 until today, the key topics in artificial intelligence have been deep learning, big data and artificial general intelligence. By 2016, the New York Times reported that interest in AI had reached a “frenzy”. Computers were cheaper, huge datasets were easier to get hold of, and advanced machine learning techniques were being successfully applied to problems throughout the economy.

Deep learning refers to a branch of machine learning that models computation as a deep graph with many processing layers, as sketched below. It can capture much more complex relationships than its shallow counterparts. Language processing engines backed by smart search are now able to beat humans at answering general trivia questions, and some systems have even, somewhat controversially, proven very good at playing first-person shooter games.
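As a bare-bones illustration of “many processing layers”, each layer below simply transforms the output of the previous one. The layer sizes are our own choice and the weights are random; a real deep learning system learns its weights from data.

```python
# A bare-bones sketch of a deep stack of processing layers: each layer applies
# a linear transformation followed by a nonlinearity to the previous layer's
# output. Weights are random here; real systems learn them. Requires numpy.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One processing layer: linear map followed by a ReLU nonlinearity."""
    return np.maximum(0, inputs @ weights + biases)

# A "deep" stack of 4 layers mapping 8 input features down to 1 output score.
sizes = [8, 16, 16, 8, 1]
params = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, 8))          # a single example with 8 input features
for weights, biases in params:
    x = layer(x, weights, biases)    # each layer feeds the next
print(x)                             # the untrained network's output
```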

When it comes to big data, the capabilities of conventional software were simply not enough, so new, more powerful tools were designed to capture, manage and process massive amounts of data within a reasonable time frame. The key in big data processing is to make all the data available for analysis but to focus only on the data that is meaningful.

Artificial general intelligence refers to artificial intelligence that comes close to the human mind, in that it is capable of human-like thinking and may even exceed human intelligence. It is also known as “strong AI” or “full AI”, or described as the ability of a machine to perform “general intelligent action”, although some reserve the term “strong AI” for machines capable of experiencing consciousness.
