Re: Perceptron in Euphoria
- Posted by Alfredo Brand <abrand at ALUMINA.COM.CO> Nov 29, 1998
-----Original Message-----
From: Ralf Nieuwenhuijsen <nieuwen at XS4ALL.NL>
To: EUPHORIA at LISTSERV.MUOHIO.EDU <EUPHORIA at LISTSERV.MUOHIO.EDU>
Date: Sunday, November 29, 1998, 8:42
Subject: Re: Perceptron in Euphoria

>>my first contribution to this list.
>>If you can think it, you can make it.
>>After several days of programming I finally got it;
>>enjoy it.
>>brand_01.e is the main program.
>
>Looks nice, but it's all Spanish. What does it do?

The perceptron is a program that learns concepts: it can learn to respond
with True (1) or False (0) to the inputs we present to it, by repeatedly
"studying" examples. The perceptron is a single-layer neural network whose
weights and bias can be trained to produce the correct target vector when
presented with the corresponding input vector. The training technique used
is called the perceptron learning rule. The perceptron generated great
interest because of its ability to generalize from its training vectors and
to work with randomly distributed connections. Perceptrons are especially
suited to simple problems in pattern classification.

The perceptron computes its output using the following test:

    P * W + b > 0

where P is the input vector presented to the network, W is the vector of
weights, and b is the bias.

[attachment: perceptron2.gif]

The Learning Rule

The perceptron is trained to respond to each input vector with a
corresponding target output of either 0 or 1. The learning rule has been
proven to converge on a solution in finite time if a solution exists. It can
be summarized in the following two equations, applied for all i:

    W(i) = W(i) + [T - A] * P(i)
    b    = b + [T - A]

where W is the vector of weights, P is the input vector presented to the
network, T is the correct result that the neuron should have produced, A is
the actual output of the neuron, and b is the bias.

Training

Vectors from a training set are presented to the network one after another.
If the network's output is correct, no change is made. Otherwise, the
weights and bias are updated using the perceptron learning rule. An entire
pass through all of the input training vectors is called an epoch. When such
a pass occurs without error, training is complete. At that point any input
training vector may be presented to the network and it will respond with the
correct output. If a vector P not in the training set is presented, the
network tends to exhibit generalization, responding with an output similar
to the targets of training vectors close to the previously unseen input P.

Limitations

Perceptron networks have several limitations. First, the output of a
perceptron can take on only one of two values (True or False). Second,
perceptrons can only classify linearly separable sets of vectors: if a
straight line or plane can be drawn to separate the input vectors into their
correct categories, the input vectors are linearly separable and the
perceptron will find the solution. If the vectors are not linearly
separable, learning will never reach a point where all vectors are
classified properly. The most famous example of the perceptron's inability
to solve problems with linearly nonseparable vectors is the boolean
exclusive-or problem. (A short Euphoria sketch of the output rule, the
learning rule, and the training loop follows below.)
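
To make this concrete, here is a minimal, self-contained Euphoria sketch of
the ideas above. It is not the code from brand_01.e: the routine names
(output, epoch, train), the sample data, and the epoch cap are my own
illustration. The cap exists only so the non-separable XOR case terminates
instead of looping forever.

-- perceptron sketch (illustrative, not brand_01.e)

sequence w   -- weight vector, one weight per input
atom b       -- bias

-- A = (P * W + b > 0) for a single input vector p
function output(sequence p)
    atom s
    s = b
    for i = 1 to length(p) do
        s = s + p[i] * w[i]
    end for
    return s > 0    -- comparing atoms yields 1 (True) or 0 (False)
end function

-- one epoch: present every training vector once; on each mistake
-- apply W(i) = W(i) + [T - A] * P(i) and b = b + [T - A];
-- return the number of errors made during the pass
function epoch(sequence inputs, sequence targets)
    integer errors, a, d
    errors = 0
    for k = 1 to length(inputs) do
        a = output(inputs[k])
        d = targets[k] - a
        if d != 0 then
            errors = errors + 1
            for i = 1 to length(w) do
                w[i] = w[i] + d * inputs[k][i]
            end for
            b = b + d
        end if
    end for
    return errors
end function

-- run epochs until one is error-free; the max_epochs cap is my
-- addition so that non-separable data does not loop forever
function train(sequence inputs, sequence targets, integer max_epochs)
    for n = 1 to max_epochs do
        if epoch(inputs, targets) = 0 then
            return n   -- converged: this pass made no errors
        end if
    end for
    return 0           -- no solution found within the cap
end function

sequence patterns
patterns = {{0,0}, {0,1}, {1,0}, {1,1}}
integer n

-- AND is linearly separable: the rule converges
w = {0, 0}
b = 0
n = train(patterns, {0,0,0,1}, 100)
printf(1, "AND learned after %d epochs\n", n)

-- XOR is not linearly separable: the rule never converges
w = {0, 0}
b = 0
if train(patterns, {0,1,1,0}, 100) = 0 then
    puts(1, "XOR never learned (not linearly separable)\n")
end if

Running this, AND converges after a handful of epochs, while XOR exhausts
the cap every time, which is exactly the linear-separability limitation
described above.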