AI: The Path To a New Future

Aishani Singh
7 min read · Mar 1, 2021


If you’ve been keeping up with recent technological developments, as I have, then you’ve probably heard of AI, better known as artificial intelligence. You might not realize it, but AI is part of our daily lives. Whenever you talk with your Amazon Alexa or say “Hey, Siri” to your iPhone, you’ve had a basic introduction to AI. Even your “Daily Mix 1” and “Release Radar” playlists on Spotify rely on some form of AI. Artificial intelligence is clearly on the rise in today’s maturing society, but the concept of AI itself has wavered, and sometimes been flipped completely on its head, throughout the years. So what exactly is AI? How do you build it? How do you use it? And most importantly, how can AI help build us a path to a new future?

The Basics of AI

To answer the first question, AI can be summed up in one question of its own: “Can machines think?” Alan Turing, the famed mathematician and computer scientist, asked this in his 1950 paper “Computing Machinery and Intelligence,” in which he addresses the problem of artificial intelligence and proposes the Turing test, an attempt to define a standard by which a machine can be called “intelligent.”

AI, in its most basic form, is intelligence demonstrated by machines.

It’s a wide-ranging branch of computer science whose main goal is to build “smart” machines capable of completing tasks that usually require human intelligence. It’s based on the principle that human intelligence can be mimicked in a way machines can carry out, and at its core, it’s really about providing an answer to Turing’s question.

A Quick History of AI

In order to really understand the concept of AI and how it’s matured over the years, let’s travel to the first time AI was mentioned in documented history: 1308. In that year, Ramon Llull, philosopher and mathematician, published Ars generalis ultima (The Ultimate General Art), in which he “perfected” his method of using mechanical means to “create” new knowledge. This is remarkably close to the mainstream goal of AI today, which, as previously mentioned, is to build machines that are intelligent.

Four hundred years later, Jonathan Swift, in his book Gulliver’s Travels, described “the Engine” on the island of Laputa: “a Project for improving speculative Knowledge by practical and mechanical Operations.” Again, this is strikingly close to the modern idea of artificial intelligence, and it was arguably the first thoroughly developed fictional example of an AI.

However, arguably one of the most important years on our AI timeline is 1955, when two key events occurred that would revolutionize the field. The first came on August 31, when the term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study” of artificial intelligence, put forward by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The two-month study, which took place at Dartmouth College in the summer of 1956, is generally regarded as the birth of the field. The second major event happened in December, when Herbert Simon and Allen Newell developed the first artificial intelligence program, the Logic Theorist, which went on to prove 38 of the first 52 theorems in Principia Mathematica.

Continuing on, in 1958 John McCarthy invented the programming language LISP, which became the most popular language used in artificial intelligence. In 1964, Daniel Bobrow, while completing his PhD at MIT, developed STUDENT, a natural-language understanding computer program. In the following years, two notable AI systems were created: WABOT-1, a humanoid robot, and MYCIN, an expert system for diagnosing bacterial infections.

In 1984, the term “AI winter” arose, coined by Roger Schank and Marvin Minsky. The term referred to a period of steep decline in funding for, and interest in, AI, similar to the dot-com crash of 2000. The chain reaction would start with pessimism within the AI community, followed by negativity in the press, then the inevitable cutbacks in funding, and finally the end of serious research and expansion of the field.

Lastly, in 2008, Google released an app for Apple’s iPhone with a small but remarkable feature: speech recognition. Speech recognition had been a major goal in the field for years, but nobody had managed to push its accuracy above 80%. The app was a breakthrough in speech recognition and is generally considered a huge step forward for AI.

Narrow AI

As mentioned before, AI is a wide-ranging branch of computer science, so it’s important to differentiate the unique types of AI from each other. There are two main types: narrow AI (often just referred to as AI) and artificial general intelligence (AGI).

Narrow AI is designed to make a machine perform one single task. To put it simply, imagine you have a robot designed to play chess with you. If it plays chess, and nothing else, then your robot is a perfect example of narrow AI.

It’s also important to make the distinction between narrow AI and actual intelligence. Narrow AI is not conscious or emotionally aware the way humans are. Siri is a good example. When you talk with Siri, Siri doesn’t actively converse with you the way another human would. Instead, Siri does what it was built to do: process human language and give back the results we’re looking for. This is why, when you ask Siri a question, you can sometimes get a vague response with a garbled answer. But if you ask Siri what the forecast is, you’ll get a straight, direct answer. Knowing the weather outside falls within Siri’s range of machine intelligence, unlike a question such as, “Hey Siri, why do people love me?”
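This in-scope versus out-of-scope behavior can be sketched in a few lines of Python. This is a deliberately toy, hypothetical example (the canned answers and keywords are made up, and real assistants use statistical language models, not keyword lookup), but it captures why a narrow system answers some questions crisply and deflects everything else:

```python
# Toy sketch of a narrow-AI assistant: it only "understands" the
# intents it was explicitly built for (hypothetical example).
def narrow_assistant(query: str) -> str:
    # Hand-written intents: the assistant's entire "range of intelligence".
    intents = {
        "weather": "It is 72°F and sunny today.",  # canned, in-scope answer
        "time": "It is 3:00 PM.",
    }
    for keyword, answer in intents.items():
        if keyword in query.lower():
            return answer
    # Anything outside the narrow task gets a vague fallback,
    # much like Siri's non-answers to open-ended questions.
    return "Sorry, I don't know how to help with that."

print(narrow_assistant("What's the weather like?"))  # direct, in-scope answer
print(narrow_assistant("Why do people love me?"))    # out of scope: fallback
```

No matter how many intents you add, the system never steps outside the list it was given; that hard boundary is what makes it “narrow.”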

Every machine intelligence around us is narrow AI, from Alexa to Google Translate to even a self-driving car: narrow AI can demonstrate some form of intelligence, but not general human intelligence. This is why narrow AI is often referred to as “weak” AI in the field.

General AI (AGI)

AGI (artificial general intelligence) is what’s portrayed in books, movies, and the media, and it’s probably the first thing that comes to mind when we think of artificial intelligence.

AGI is artificial intelligence that fully mimics human intelligence. Media examples include TARS from the movie Interstellar and C-3PO from the Star Wars franchise. Basically, any machine that can plan, think, feel emotions, learn, and be creative and imaginative is an example of AGI. For instance, say you had a robot that could actively (actively being the key word) hold a conversation with you, feel what you feel, and come up with plans and stories off the top of its head. That robot would be a perfect example of artificial general intelligence, because it would be just like you: it could think, feel, and reason logically.

However, before machines can demonstrate human-level intelligence, they arguably must be sentient and conscious, which is why AGI has not been achieved yet. To actually reach AGI, computer hardware has to dramatically increase the number of calculations per second (cps) it can perform. Fugaku, a Japanese supercomputer, holds the world record at 0.54 exaflops (an exaflop is a quintillion, or 10^18, calculations per second). The human brain, however, is estimated to run at roughly one exaflop, which means current technology is still a ways off from matching it.

And now a brief introduction to the two subsets of AI.

Machine Learning

Machine learning is a subset of artificial intelligence. It’s the development of computer programs that improve automatically through experience. The process begins with observations or data, known as “training data,” which allow the computer to make predictions and/or decisions without being explicitly programmed to. There are three main types of machine learning: supervised learning, unsupervised learning, and reinforcement learning, all of which are used to teach computers to perform tasks where no exact algorithm is available.
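To make “learning from training data without being explicitly programmed” concrete, here is a minimal supervised-learning sketch in plain Python. The data and the one-parameter model are invented for illustration: the program is never told the rule “multiply by 2” — it discovers it by repeatedly nudging its parameter to shrink its error on labeled examples:

```python
# Minimal supervised learning: fit y = w * x to labeled examples
# by gradient descent (illustrative toy, not a real ML library).
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs

w = 0.0              # model parameter, to be learned from the data
learning_rate = 0.05

for _ in range(200):                    # repeated passes over the data
    for x, y in training_data:
        error = w * x - y               # how wrong the current model is
        w -= learning_rate * error * x  # nudge w to reduce that error

print(round(w, 2))  # prints 2.0 — the rule was learned, not programmed
```

Supervised learning uses labeled pairs like these; unsupervised learning would find structure in the inputs alone, and reinforcement learning would learn from rewards rather than labels.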

Deep Learning

Deep learning is a subset of machine learning based on artificial neural networks: computer systems loosely modeled on the biological neural networks that exist in an animal’s brain. Deep learning is essentially a class of machine learning algorithms that “uses multiple layers to progressively extract higher-level features from the raw input” (Wikipedia). In image processing, for example, lower layers might recognize edges, while higher layers recognize human-meaningful concepts such as digits and faces.
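The “multiple layers” idea can be shown with a tiny forward pass through a two-layer network. The weights below are made-up numbers, not trained values, and a real network would have thousands of them, but the structure is the point: each layer transforms the previous layer’s output into slightly higher-level features:

```python
# Toy two-layer neural network forward pass (illustrative weights only).
def relu(v):
    # Common activation: pass positives through, zero out negatives.
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    # One fully connected layer: each output is a weighted sum of all inputs.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [1.0, -2.0]                                       # raw input

# Layer 1 ("lower layer"): in an image model this might detect edges.
h = relu(dense(x, [[0.5, -0.3], [0.2, 0.8]], [0.1, 0.0]))

# Layer 2 ("higher layer"): combines layer-1 features into a final score,
# the way deeper layers combine edges into digits or faces.
y = dense(h, [[1.0, -1.0]], [0.0])

print(h, y)
```

Training a deep network means adjusting all those weights from data, exactly as in the machine learning example above, just with many more parameters stacked in layers.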


The concepts behind artificial intelligence are clearly complex, but it’s just as important to recognize the countless ways artificial intelligence can create new paths for our future. If we can continue our current progress in narrow AI and AGI, the future will be limitless in terms of what we as humans can do and achieve. While AI is still an emerging field right now, it is no doubt a key to unlocking the future our world holds.