NeuralNet Programming Project

Artificial Intelligence 2012

Haoqing Geng 

Xuli Song 

Stanley Cheng 


Problem Definition

In this assignment we implement a neural network trained with the back propagation algorithm, adjusting the weights to minimize the error rate. We are given two data sets: we train the program on the training set and then test it on the testing set. Programs like this are useful in the field of AI, since the same approach can support tasks ranging from medicine to manufacturing.


Method and Implementation

We were given skeleton code by Zhiqiang Ren, and the group added multiple methods to minimize the error rate. We created a training method that takes the input vectors, the output vectors, and a learning rate r, and builds arrays of inputs, results, and targets. For the output layer, beta is the target minus the evaluated result; for any other layer, we get each node's outgoing connections and compute beta as the summation of the downstream betas times their connection weights. A second loop then computes the delta weight for each connection, after which we sum up all the weight changes.

We also created a separate error-rate method for the lenses case, since the given one only accepts two outputs, and added setBeta and getBeta accessors for the beta value, plus a setDeltaWeight method to set the change of weight. The inputs for the lenses test are normalized, and because the lenses data has three output arguments, we normalize the outputs as well. Finally, if the final error is over 0.44 after training, a while loop re-runs the training as a way to give the program correct training; the lenses case uses a similar loop. The program continues on to the test case after the training is complete.
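The beta and delta-weight computation described above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes a single hidden layer with sigmoid activations, and the function and variable names (train_step, w_hidden, w_out) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, w_hidden, w_out, r):
    """One back-propagation step for a single-hidden-layer sigmoid net
    (a sketch of the beta/delta-weight scheme described in the report)."""
    # Forward pass.
    h = sigmoid(w_hidden @ x)   # hidden activations
    y = sigmoid(w_out @ h)      # network output

    # Output-layer beta: target minus the evaluated result.
    beta_out = target - y
    # Hidden-layer beta: summation of each downstream beta times its
    # outgoing connection weight (scaled by the sigmoid derivative).
    beta_hidden = w_out.T @ (beta_out * y * (1 - y))

    # Delta weights: learning rate * beta * derivative * incoming value;
    # the in-place += accumulates (sums up) the weight changes.
    w_out += r * np.outer(beta_out * y * (1 - y), h)
    w_hidden += r * np.outer(beta_hidden * h * (1 - h), x)
    return y
```

Repeating this step on a training example drives the output toward its target, which is the behavior the training loop relies on.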



We ran the training data for both the Credit Approval data set and the Lenses data set, and had the program run the test after the training was complete.
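As noted above, the lenses inputs are normalized before training. The report does not say which normalization is used; the sketch below assumes a simple per-column min-max scaling to [0, 1], and the function name normalize_columns is hypothetical.

```python
def normalize_columns(rows):
    """Min-max scale each input column to [0, 1] (one plausible scheme)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in rows
    ]
```

Scaling every attribute into the same range keeps no single input from dominating the weighted sums, which is why normalization helps the net train.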


Learning rate: Above are the trials of changing the learning rate with our program. Generally, when the learning rate is under 0.5, we get a relatively stable, good error rate. When it is over 0.5, we can sometimes still get a good error rate, but the result is not stable. The most common bad results are 0.45 and 0.54, which indicate that the output of our net is all 0s or all 1s. Setting the learning rate under 0.5 makes the results more stable, and we further addressed the instability by re-running training in a while loop (with an upper bound on attempts) based on the results displayed above.
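The retraining guard described above can be sketched like this. It is an illustration of the idea, not the project's code: train_fn and eval_fn are hypothetical callables standing in for the program's train and test methods, and the 0.44 threshold is the value quoted in the report.

```python
def train_until_stable(train_fn, eval_fn, threshold=0.44, max_attempts=20):
    """Re-train from fresh random weights until the final error drops
    below `threshold`. The attempt cap is the upper bound that keeps
    the while-loop idea finite when every run happens to be unstable."""
    best_net, best_err = None, float("inf")
    for _ in range(max_attempts):
        net = train_fn()          # trains a fresh net and returns it
        err = eval_fn(net)        # final error on the training data
        if err < best_err:
            best_net, best_err = net, err
        if err < threshold:       # stable run found; stop retrying
            break
    return best_net, best_err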

Initial weights and threshold: We tried initializing all weights to 0.1 instead of random numbers, and the result was always bad. We also tried initializing the threshold to 0.1, which did not change the result much. This indicates that our train method can take care of the threshold change on its own, but random weight initialization makes a big difference.
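A minimal sketch of the random initialization, assuming small uniform weights; the function name and the 0.5 scale are our choices, not taken from the project. The symmetry argument explains the observation above: with every weight equal to 0.1, all hidden nodes receive identical updates and learn the same feature, so the net never differentiates them.

```python
import random

def init_weights(n_in, n_out, scale=0.5):
    """Small random initial weights break the symmetry that a constant
    0.1 initialization creates between hidden nodes."""
    return [[random.uniform(-scale, scale) for _ in range(n_in)]
            for _ in range(n_out)]
```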

Training iterations: For our net, according to the table, 200 is a good number of training iterations. Below 200, the more you train, the better the error rate gets.




The results displayed above show the error and the error rate.



This project has both strengths and weaknesses. Our results show that the error rate is very low, though not perfect; the use of back propagation and normalization dramatically decreased it. We believe some lenses cases failed due to overfitting, which we attribute to the small sample set. Overall, the project was a success.



This is a really cool project. We believe that using these training methods on real-world test cases could help the idea blossom into other fields ranging from medicine to manufacturing: applying it could expedite the manufacturing of goods and catch tumors in cells more quickly than a human being. If the program could be trained without knowing the final results in advance, it would be extremely useful in the future.



Haoqing Geng

Stanley Cheng

Xuli Song