The objective of producing a machine capable of performing actions comparable to humans has long been an intriguing and aggressively pursued ideal. The area of computer science known as Artificial Intelligence (AI) encompasses the many methods used to make computers (machines) acquire human attributes such as the ability to learn continually and adaptively. While there is a plethora of AI models, methods, and programming languages, there is one particularly interesting branch of AI that we will focus on during Summer Institute: Artificial Neural Networks, or ANNs.
Rather than trying to model high-level intelligence, as many AI methods do, the originators of the ANN method took inspiration from the human brain and modelled a very low level of the mechanics of human intelligence. In 1943 Warren McCulloch and Walter Pitts published a paper describing a simplified model, using electrical circuits, that demonstrated how neurons in the human brain might function *. Since that time the McCulloch and Pitts model has been improved upon and used as the basis for many more sophisticated and capable ANNs. These ANNs have been implemented mostly in software, but have also been made into ANN microchips, which offer tremendous increases in speed. Despite the staggering variation among ANN algorithms now in use, they share the ability to allow a computer to adaptively and continually "learn" from examples. Some of the algorithms are powerful enough that they are currently in commercial use in several industries.
In the ANN project we will study, and implement in software, the most widely used ANN training algorithm, Back Propagation, in a multi-layer neuron structure. We will employ a method called supervised learning to "teach" our ANN to recognize many variations of a small set of example patterns. The general structure of the network is displayed in Figure 1. There are three layers in this type of ANN: the input layer, the hidden layer, and the output layer. The input layer is made up of one or more neurons that collectively represent the information in a particular pattern of a training set. The hidden layer also consists of one or more neurons; its purpose, simplistically, is to transform the information from the input layer to prepare it for the output layer. The output layer, which has one or more neurons, uses the input it receives from the hidden layer (a transformation of the input layer) to produce an output value for the entire network. This output is used to interpret the training and classification results of the network. The neurons between the input and hidden layers, and between the hidden and output layers, are connected by weights (the lines in Figure 1).
The type of "learning" employed here is similar to human learning in that the ANN is shown a small set of examples, say each upper-case letter of the alphabet typed in Times font, multiple times until it can tell each letter from all of the others. If this training is completed successfully, the ANN will not only be able to recognize each training letter correctly, it will also be able to classify variations of each training letter correctly! For instance, a properly trained ANN can recognize the letters in Figures 2 and 3 as the letter 'W'. The power of ANNs is seen vividly here because the ANN was only trained using the typed letter in Figure 2; however, the ANN "learned" enough about the different patterns of the upper-case letters in the alphabet that it was also able to classify misshapen letters.
Through this project students will be introduced to a type of machine learning through the popular framework of ANNs. Students will learn the basics of ANN structure and interpretation as they pertain to the multi-layer neuron model and the Back Propagation training algorithm. Students will then use their new-found knowledge to produce a working implementation, in C or C++, of an ANN (using the methods and structures described above) that can correctly perform pattern classification.
The field of ANNs is very large and can be, at first, overwhelming. If you think you will pick this project, please visit the ANN FAQ to learn a little more about ANNs before Summer Institute begins. The ANN FAQ is an excellent resource, but a very broad and technical one. With that in mind, it is suggested that you simply browse the FAQ and read what interests you, especially in the Part 1: Introduction and Part 2: Learning sections.
* Neurons in the brain communicate via electrical impulses, forming large groupings of neurons, or neural networks. These networks are considered to be the learning mechanism in humans that helps us achieve our level of intelligence.
Nick Watts is the group leader for the Neural Networks project. His office is in OSC, cubicle 420-27, phone 292-6066.
For assistance, write firstname.lastname@example.org or call 614-292-0890.