The Push: Predicting the Future of Artificial Intelligence

When people think about the future of artificial intelligence (AI), they generally fall into one of two schools of thought. “The first one,” says guest speaker Dr. James Fan, “is that robots will start taking over the world and kill people, and the other generally says that AI is awesome. The reality is that AI researchers don’t think in either of those two ways. There is a big disconnect.”

On May 10, 2017, I went back to Galvanize for another event. The previous event I attended there, Big Data in Startups, was about how small businesses can think about their internal data structures and how to troubleshoot common data problems in startups. Brian Rogers from the Hike with a Data Scientist meetup was there as well, which shows how tightly-knit the community is.

The event was called AI and Life in 2030 (Mis)Interpreted. Compared to the previous event, it felt totally different, in a good way. It was part one of a three-part series, and I plan on attending all of them. Dr. Fan’s talk was a well-organized, college-style lecture. I’m going to use the structure of his lecture to format the things I learned:

  1. The History of AI
  2. Overview of the Stanford Report
  3. Defining AI
  4. Trends in AI

The History of AI

The inception of AI was at a conference at Dartmouth in 1956. There were no ground-breaking discoveries. Dr. Fan said, “the one outcome was the name ‘Artificial Intelligence’”. After three months of intense work, all the founders came up with was the name. Most of the men at the conference were deep believers in logic, meaning they held that intelligence arose from many logical rules. The dissenter in the group was Arthur Samuel. Samuel created the first self-learning checkers program that could beat a human. He did not use logic. Instead, he used a simple linear evaluation function that kept tweaking itself after every move.
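
To make the idea concrete, here is a minimal sketch in Python of a linear evaluation function that tweaks its own weights, in the spirit of Samuel’s player. The features, numbers, and update rule below are my own illustrative assumptions, not his actual program.

    # Minimal sketch of a self-tuning linear evaluation function,
    # in the spirit of Samuel's checkers player. The features and
    # update rule are illustrative assumptions, not Samuel's actual ones.

    def evaluate(features, weights):
        # Score a position as a weighted sum of its features.
        return sum(w * f for w, f in zip(weights, features))

    def update(weights, features, target, rate=0.01):
        # Nudge the weights so the evaluation of this position
        # moves toward the value it turned out to have later in play.
        error = target - evaluate(features, weights)
        return [w + rate * error * f for w, f in zip(weights, features)]

    # Hypothetical features: piece advantage, king advantage, mobility.
    weights = [0.0, 0.0, 0.0]
    position = [2.0, 1.0, 5.0]   # made-up description of one board state
    outcome = 1.0                # the position eventually led to a win

    weights = update(weights, position, outcome)
    print(weights)               # weights shift toward the winning position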

The Cold War era of the 1960s brought the need for machine translation from Russian to English, and the government heavily funded progress in AI. This was when the first preliminary robots were invented, including a little robot named Shakey, which had the same rudimentary parts as a modern self-driving car.

The period from 1974 to 1980 is considered the “first winter” of AI. Funding decreased dramatically because, while people had high hopes for AI, they had underestimated the difficulties involved in making progress with the new technology. Computers didn’t have enough processing power, and language systems were limited to vocabularies of about twenty words.

The 80s brought more advancements in AI, as Japan began the Fifth Generation Project, which built systems that elicited knowledge from human experts and encoded it as many hand-written rules. A 1992 article in the New York Times describes the day Japan decided to scrap the project.

The period of 1987 to 1993 was the “second winter” of AI. The system the Fifth Generation Project built was fragile: the web of rules it was based on was extremely susceptible to breaking. This failure disheartened the AI community, and few significant advancements were made.
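
To see why a large pile of hand-coded rules breaks so easily, here is a toy sketch of a rule-based classifier. This is my own illustration, not anything from the Fifth Generation Project: it answers confidently on the cases its authors anticipated and fails on everything else.

    # Toy rule-based classifier illustrating why hand-coded rule
    # systems are brittle: inputs the rule authors anticipated are
    # handled; anything else is mishandled or rejected outright.

    RULES = [
        (lambda a: a["lays_eggs"] and a["flies"], "bird"),
        (lambda a: a["lays_eggs"] and not a["flies"], "reptile"),
    ]

    def classify(animal):
        for condition, label in RULES:
            if condition(animal):
                return label
        raise ValueError("no rule covers this input")

    print(classify({"lays_eggs": True, "flies": True}))    # "bird"
    # A platypus lays eggs and doesn't fly, so the second rule
    # fires and wrongly calls it a reptile; a penguin breaks the
    # "birds fly" assumption the same way.
    print(classify({"lays_eggs": True, "flies": False}))   # "reptile" (wrong for a platypus)
    # And a live-bearing mammal matches no rule at all:
    # classify({"lays_eggs": False, "flies": False}) raises ValueError.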

1996 was the year the supercomputer Deep Blue first beat the chess world champion, Garry Kasparov, in a game; it won a full match against him the following year. In the early 2000s, an ambitious government mandate was released requiring that one third of military vehicles be autonomous by 2015. You can read part of it here.

DARPA then issued a Grand Challenge with a prize of one million dollars for a car that could drive itself across roughly 150 miles of desert. The first time they tried, there was a ditch fourteen miles in, and every single car that made it that far got stuck in it. In 2005, they repeated the challenge, and this time they moved the starting line further from the ditch to give the cars more of a chance. That time, several cars successfully completed the course.

In 2006, Dr. Fan joined a project to create a question-answering AI that could compete on Jeopardy!. Dr. Fan said, “when we asked people if we should enter, they told us, ‘Don’t ever do that.’ We started in 2007 and competed in 2011 and won. No one thought it was possible”.

When Dr. Fan summarized the progress of AI since its inception at Dartmouth in 1956, he quoted the economist Rüdiger Dornbusch: “The crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought.”

Overview of the Stanford Report

The Stanford report, titled Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence, concludes that there is no imminent threat to mankind in developing AI. Here is an excerpt:

“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly. Application design and policy decisions made in the near term are likely to have long-lasting influences on the nature and directions of such developments, making it important for AI researchers, developers, social scientists, and policymakers to balance the imperative to innovate with mechanisms to ensure that AI’s economic and social benefits are broadly shared across society.”

Defining AI

There is no precise definition of AI.

Trends in AI

Here are some of the areas where AI is already present in our everyday lives:

  • GPS navigation
  • Book and product recommendations
  • Game playing
  • Self-driving vehicles
  • Web search
  • Wikipedia
  • Internet of Things
  • Operations research
  • Tax filing software

If you’re interested in attending another event like this, and I recommend that you do, check out part two of the three-part series here.
