Intro to Artificial Intelligence
If I asked you to describe the first thing that pops into your head when you hear Artificial Intelligence (A.I.), what would it be? Do you imagine a world where machines take over the planet, like Skynet in The Terminator movie series? Or maybe something simpler, such as asking Siri on your phone or Alexa on your Echo speaker, “What will the weather be like today?”
Regardless of what first pops into your head, most of you probably imagine a machine that autonomously thinks and acts like a human. In reality, A.I. is a computing system that mimics human thought and decision-making: it perceives its environment and acts on its own to reach a goal.
The field of A.I. has been around since 1956, when a Dartmouth College research project described A.I. as human intelligence “so precisely described that a machine can be made to simulate it.” A.I. research spans reasoning, knowledge, planning, learning, and perception, and it draws on principles from statistics, probability, economics, mathematics, psychology, and neuroscience.
A.I., in its simplest form, is gathering data points, recognizing a pattern, and then acting on that pattern to achieve a desired result. If you are on Facebook, you already see the results of this in the ads on your news feed. Facebook has so many data points on you and your interests that when a company purchases an advertisement, Facebook knows whom to show the ad to. Then, as the ad runs over time, Facebook fine-tunes the audience toward similar people who will also find the ad relevant. Afterwards, if the company chooses to run another ad, it can select what is called a “lookalike audience”: brand-new people who share similar traits (data points) with existing customers.
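To make that "data points → pattern → action" idea concrete, here is a tiny sketch of how a lookalike audience could be built. This is not Facebook's actual algorithm; the interest scores and the similarity threshold are invented for illustration. The idea is simply to measure how alike two people's data points are and act on that.

```python
# Toy "lookalike audience" sketch: hypothetical interest vectors,
# NOT Facebook's real algorithm.
import math

def cosine_similarity(a, b):
    """Score how alike two interest profiles are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Each person is a vector of interest scores: [sports, cooking, tech]
existing_customer = [0.9, 0.1, 0.8]
candidates = {
    "alice": [0.8, 0.2, 0.9],   # similar interests to the customer
    "bob":   [0.1, 0.9, 0.0],   # very different interests
}

# Act on the pattern: show the ad only to people who "look like"
# the existing customer (threshold chosen arbitrarily here).
lookalikes = [name for name, profile in candidates.items()
              if cosine_similarity(existing_customer, profile) > 0.8]
print(lookalikes)  # ['alice']
```

A real system would compare thousands of data points across millions of users, but the principle is the same: similarity in the data drives the action.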
Cybersecurity is going through a renaissance right now thanks to advances in A.I. Cloud-hosted security programs can now detect irregular patterns in data traffic that signal new types of viruses or intrusions practically on the fly. Within a couple of hours, a new zero-day virus can be automatically stopped in its tracks, protecting the rest of the computer network from infection, because the security system is continually learning. Stopping a threat like this used to take days, if not longer, was very labor intensive, and that's assuming it was detected in the first place.
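The core of "detecting irregular patterns" can be sketched with a simple statistical test. Real security systems learn far richer patterns than this; the traffic numbers and threshold below are made up, and the z-score check stands in for much more sophisticated anomaly-detection models.

```python
# Minimal anomaly-detection sketch: flag traffic that sits far outside
# the historical norm. Numbers and threshold are illustrative only.
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Return True if new_value deviates more than `threshold`
    standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z_score = abs(new_value - mean) / stdev
    return z_score > threshold

# Bytes per minute observed during normal operation (made-up data).
normal_traffic = [100, 110, 95, 105, 98, 102, 99, 101]

print(is_anomalous(normal_traffic, 104))   # False: ordinary fluctuation
print(is_anomalous(normal_traffic, 500))   # True: sudden spike, worth investigating
```

The "continually learning" part of a real system comes from feeding each new observation back into the model of what normal looks like, so the baseline keeps up with legitimate changes in traffic.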
The biggest change we will see in our day-to-day lives in the next couple of years will be self-driving cars. Autonomous vehicles have been getting a lot of mainstream media attention thanks to leaps in technology by Google, Tesla, and Uber. Believe it or not, experiments in this field date back to the 1920s. Virginia and Washington, D.C. have allowed autonomous-car testing on public roads since 2015. The A.I. involved in driverless vehicles includes sensors that detect the vehicle's environment, tracking of other objects around the vehicle, GPS positioning, and visual object recognition.
The computer in the car must interpret all of those data points and continually make its own decisions to get from point A to point B, all while monitoring the traffic around it, watching for unexpected obstacles such as a pedestrian or a child running into the street after a ball, and following the general rules of the road.
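That continual interpret-then-decide cycle can be sketched as a tiny sense-decide-act loop. Real autonomous vehicles fuse many sensors and run machine-learning models; the distances, speeds, and rules below are invented purely to show the shape of the decision-making.

```python
# Highly simplified sense-decide-act sketch for a driverless car.
# All thresholds and rules here are illustrative assumptions, not a
# real vehicle's control logic.

def decide(obstacle_distance_m, speed_limit_kph, current_speed_kph):
    """Pick one action from the current sensor readings."""
    if obstacle_distance_m < 10:
        # Pedestrian, ball, or other object dangerously close ahead.
        return "emergency_brake"
    if current_speed_kph > speed_limit_kph:
        # Follow the general rules of the road.
        return "slow_down"
    return "cruise"

print(decide(obstacle_distance_m=5,   speed_limit_kph=50, current_speed_kph=40))  # emergency_brake
print(decide(obstacle_distance_m=100, speed_limit_kph=50, current_speed_kph=60))  # slow_down
print(decide(obstacle_distance_m=100, speed_limit_kph=50, current_speed_kph=45))  # cruise
```

An actual car re-runs a far more elaborate version of this decision many times per second, which is why the interpretation of sensor data has to be both fast and reliable.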
As computing power increases, Artificial Intelligence will continue to make huge strides. Honestly, who wouldn't prefer to sleep through their morning commute, or stream a show that Netflix's A.I. recommends, rather than fight road-rage-inducing traffic?
Bring it on!