Thursday 15 May 2014

Exploring Artificial Intelligence - Can Machines Think?

Artificial Intelligence is a topic of both Computer Science and Philosophy, and it asks what really makes us human, and whether we are just complex biological machines. Alan Turing first asked the question in his 1950 paper, Computing Machinery and Intelligence. That paper is much more accessible than his earlier paper on computability theory, On Computable Numbers, with an Application to the Entscheidungsproblem. Computable numbers are the real numbers that can be calculated to any desired precision by a terminating algorithm, and they are countable, since there are only countably many algorithms.
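
To make that idea concrete, here is a minimal Python sketch of what "computable" means here; the function and the error bound are my own illustration, not anything from Turing's paper:

```python
from fractions import Fraction

def sqrt2_to_precision(precision: int) -> Fraction:
    """Return a rational x with |x - sqrt(2)| < 10**-precision.

    sqrt(2) is a computable number: for any requested precision,
    this terminating procedure (Newton's method on x*x = 2, in
    exact rational arithmetic) produces an approximation.
    """
    eps = Fraction(1, 10 ** precision)
    x = Fraction(3, 2)  # starting guess, already above sqrt(2)
    # For x >= 1 we have |x - sqrt(2)| <= |x*x - 2| / 2.
    while abs(x * x - 2) >= 2 * eps:
        x = (x + 2 / x) / 2  # Newton iteration
    return x

print(float(sqrt2_to_precision(10)))  # 1.4142135624
```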

This blog post won't be long, and I'll probably conclude with a link to my OneDrive account, which has a copy of Alan Turing's paper. I'm mostly going to be talking about the philosophy of Artificial Intelligence.

There is one concept which has been of great interest to philosophers since the beginning of civilization, and that is consciousness: the ability to be aware of our own existence. This concept of consciousness is what separates us from being a very complex biological machine. In essence, our behavior is typically determined through learning and our own past experiences. Yes, we can incorporate this concept of learning into machines and measure how well they perform while learning, but a machine will never be aware of its own existence naturally like a human. Unless, that is, we find some method of enabling the machine to be aware of its own existence; however, that would be cheating, since it would still be relying on predetermined behaviors created by us.
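
Here is a tiny Python sketch of that point about learning; the task and the update rule are my own illustration. The machine's performance measurably improves with experience, yet every step follows a fixed rule we wrote in advance:

```python
def train(samples, steps=1000, lr=0.01):
    """Learn a single weight w so that w * x approximates y.

    Plain gradient descent: behavior improves with experience,
    but only by mechanically applying a predetermined update rule.
    """
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # how wrong the current guess is
            w -= lr * error * x    # fixed gradient-descent step
    return w

samples = [(1, 3), (2, 6), (3, 9)]  # examples of the rule y = 3x
print(round(train(samples), 3))     # ~3.0: measurable, but not aware
```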

A machine is usually regarded as being intelligent if it is able to imitate a human successfully (that is, it is able to play the imitation game from Turing's paper); in other words, a human is unable to distinguish the machine from another person. Turing outlines in his paper the similarities between a person and a computer. However, can a machine think? What is thinking? Computer programs are based upon a defined set of rules which can't be broken, unless the rules weren't correctly defined to begin with. So is the machine thinking about a problem, or just following a defined set of rules?
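
The imitation game itself is just a protocol, and it can be sketched in a few lines of Python; the judge and player interfaces below are hypothetical stand-ins of my own, not anything specified in the paper:

```python
import random

def imitation_game(ask, guess, human, machine, rounds=5):
    """One simplified trial of the imitation game.

    ask(transcript) -> question, guess(transcript) -> "A" or "B",
    and the two players are hypothetical callables. The judge talks
    to both players over a text-only channel, then guesses which
    label hides the machine. Returns True if the machine fooled it.
    """
    players = {"A": machine, "B": human}
    if random.random() < 0.5:        # hide which label is which
        players = {"A": human, "B": machine}
    transcript = []
    for _ in range(rounds):
        question = ask(transcript)
        for label, player in players.items():
            transcript.append((label, question, player(question)))
    return players[guess(transcript)] is not machine
```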

Behaviorists will suggest that all thoughts are based on a defined set of rules, and that these rules give the false impression of free will and thought. This doesn't explain consciousness, though, or what exactly creates it. I believe machines will be able to solve mathematical problems and, in some sense, think, but they won't ever possess that strange and rather beautiful thing we call consciousness. After all, most robots aren't yet able to move around objects intelligently.

At the moment, most machines can only give Boolean-valued answers to decision problems; they aren't able to think about another kind of answer, just give a simple "Yes" or "No" based upon some algorithm. There are learning algorithms, though I don't believe you can consider that true thought. There is a popular idea that a machine can only do what it is told to do, and this seems to be very much true for almost all machines as of now.
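
A decision problem in this sense is just a yes-or-no question about an input, and the algorithm that settles it is purely mechanical. A small Python example of my own (trial division for primality):

```python
def is_prime(n: int) -> bool:
    """Decider for the decision problem "is n prime?".

    The machine doesn't ponder what primality means; it follows
    the trial-division rule and emits a Boolean answer.
    """
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(97))   # True
print(is_prime(100))  # False
```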

I would suggest reading the paper Computing Machinery and Intelligence, which is available here.

It would be nice to hear some of my readers' thoughts on the concept, and any ideas they may have about Machine Learning.
