March 5, 2020
Some argue that intelligence is just a description of mechanical behaviour that we haven’t decoded (yet).
The argument runs along these lines: If a child were to describe how a thermostat works, she might say: ‘it feels cold and decides to turn on the heating, or it feels warm and so turns down the heating.’
She uses what we call intentional language to describe what we know is purely mechanical. The thermostat, of course, feels nothing and decides nothing.
So too, the argument goes, with all intelligent behaviour: it only looks intelligent because we are so close to it and don’t properly understand it.
Step far enough back, apply enough analytical power, and the patterns and rules will emerge, allowing one to perfectly describe and predict what we do.
Whether you subscribe to the idea that behaviour can appear intelligent when it actually isn’t is a question central to AI.
In 1997, Garry Kasparov, then a chess Grandmaster and the reigning world champion, was challenged to a six-game match against an IBM machine called Deep Blue.
Deep Blue won, just. Its approach, often called GOFAI (Good Old-Fashioned Artificial Intelligence), was essentially massive parallel processing combined with pruning of a search tree to find the highest-scoring move.
As it evaluated each possible branch of gameplay, the moment it found that a branch could do no better than a move it had already evaluated, it discarded that branch from further analysis.
Again there is no real intelligence here; instead, there is a vast computation of the possibilities available within the rules of chess and an algorithm to make the search for an optimal move more efficient.
Once this is understood, there is no mystery to Deep Blue.
So what has happened in the 20 or so years since that victory of a machine over a person? There have been huge leaps in computational power and miniaturisation, great advances in programming feedback, and a veritable explosion of freely available data.
These three things have enabled software developers to write code that effectively learns from experience and improves its responses in real time based on that learning. In effect, they have found a way to simulate giving semantic context to programmes.
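“Learning from experience” can be made concrete with a very small example: an online perceptron that adjusts its weights after every example it sees, so its answers improve as more data arrives. The data here is an invented toy problem (the logical AND of two inputs), chosen purely for illustration.

```python
# A minimal sketch of online learning: a perceptron that updates its
# weights after each example, improving its responses as it goes.
# The training data is an invented toy problem (logical AND).

def train_online(examples, epochs=10, lr=0.1):
    """Learn weights one example at a time, correcting after each mistake."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = label - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x[0]     # nudge weights toward the answer
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy "experience": the label is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_online(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Nothing here understands what AND means; the weights simply drift until the mistakes stop, which is the point of the thermostat analogy above.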
In 2012, Google software managed to teach itself to identify images of cats (as opposed to dogs) by observing ten million YouTube stills over three days.
While that sounds impressive, it is a task well within the reach of a two-year-old human. And just as the thermostat didn’t actually feel or decide anything, so too with modern AI.
At Brewin IT, we’ve got hands-on experience and skills in a number of areas of AI.
We have, of course, written a few chatbots. We’ve given them a semantic context to know about, we’ve trained them and they’re good at doing what we’ve designed them to do.
Our most recent experiment used AI to understand large sources of text data, such as handbooks or policies, and to provide fast, easy answers to relevant questions.
The idea is to apply our learning in this space to create solutions that help support people with their day-to-day work.
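The details of that system aren’t public, but the core retrieval idea can be sketched simply: score each passage of a handbook against a question and return the closest match. This toy version uses bag-of-words vectors and cosine similarity; the handbook snippets and the question are invented for illustration.

```python
# A minimal sketch of answering questions from a body of text:
# score each passage against the question using cosine similarity
# of bag-of-words vectors, and return the best match.
# The handbook snippets below are invented for illustration.

import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_answer(question, passages):
    """Return the passage most similar to the question."""
    q = Counter(tokens(question))
    return max(passages, key=lambda p: cosine(q, Counter(tokens(p))))

# Invented handbook snippets, for illustration only.
handbook = [
    "Annual leave requests must be submitted two weeks in advance.",
    "Expense claims require a receipt and manager approval.",
    "The office is closed on public holidays.",
]
print(best_answer("What do expense claims require?", handbook))
# → Expense claims require a receipt and manager approval.
```

A production system would add far more (stemming, embeddings, a trained ranking model), but the shape of the problem — match a question to the most relevant passage — is the same.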
Similarly, by utilising anonymised data captured in our anti-money-laundering (AML) process, we have built algorithms that predict high-risk prospects.
In building these, we exploit a range of tools and techniques common within the world of AI. These include natural language processing and supervised learning for chatbots, and classification and regression techniques such as Decision Trees, Naive Bayes and Neural Networks, applied to data sets to generate algorithms for predictive analytics.
– Advertising feature –