Open source Artificial Neural Network Computing Library FANN Learning Note 1



Machine learning is very popular these days, and neural networks are one of the more important machine learning algorithms. I have recently put some effort into learning the basics, and I am writing up some study notes along the way.

There are many textbooks on the basic theory of artificial neural networks. I am reading one titled "Introduction to Artificial Neural Networks". I chose this book mainly because it is relatively thin; a book that is too thick is really hard to chew through. It is also written in plain language, which makes it suitable for getting started.

While reading, I also searched the Internet for artificial neural network library code. FANN looked quite good, so I settled on learning how to use this library.

FANN is an open-source artificial neural network library implemented in C. Because it is written in standard C, it places few demands on the operating system and can run on just about any platform. The library also supports fixed-point arithmetic, so on CPUs without a floating-point unit it can run much faster than libraries that only support floating-point operations.
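As an aside, the fixed-point workflow, as I understand it from the FANN documentation, is to train in floating point, save the network with fann_save_to_fixed, and then execute it from a program built against fixedfann.h, where fann_type becomes an integer type. A minimal sketch of the execution side (the file name is just an example):

#include <stdio.h>
#include <fixedfann.h>   /* fixed-point variant: fann_type is an integer */

int main(void)
{
    /* The training program (floating point) saved the network with
       fann_save_to_fixed(ann, "q:\\and_fixed.net"). */
    struct fann *ann = fann_create_from_file("q:\\and_fixed.net");
    unsigned int multiplier;
    fann_type input[2];
    fann_type *calc_out;

    if (ann == NULL)
        return 1;

    /* Inputs must be scaled up by the fixed-point multiplier,
       and outputs scaled back down. */
    multiplier = fann_get_multiplier(ann);
    input[0] = 1 * multiplier;
    input[1] = 1 * multiplier;

    calc_out = fann_run(ann, input);
    printf("and test (1,1), %f\n", calc_out[0] / (float) multiplier);

    fann_destroy(ann);
    return 0;
}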

Although FANN is written in pure C, its framework follows object-oriented ideas and the interface is very well designed. It comes with fairly detailed documentation and is easy to use. Bindings are available for more than 20 programming languages and environments, such as C#, Java, Delphi, Python, PHP, Perl, Ruby, JavaScript, MATLAB, R, and so on.

The following is a very simple example. We use a neural network to simulate the AND operation on two Boolean variables. Our training data is placed in a file named "and.data".

The contents are as follows:

4 2 1
0 0
0
0 1
0
1 0
0
1 1
1

Here, the first line "4 2 1" indicates that our training set contains 4 training samples, each with 2 inputs and 1 output. The input and output values follow, alternating line by line.
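As an aside, you do not have to write this file by hand. If I remember correctly, FANN 2.2 and later can also build a training set in memory with fann_create_train; a minimal sketch (make_and_data is just an illustrative helper name):

#include <doublefann.h>

/* Build the 4-sample AND truth table in memory instead of reading a file.
   Assumes FANN 2.2+, where fann_create_train is available. */
struct fann_train_data *make_and_data(void)
{
    struct fann_train_data *data = fann_create_train(4, 2, 1);
    unsigned int i;

    for (i = 0; i < 4; i++) {
        data->input[i][0]  = (fann_type)(i >> 1);              /* first input bit */
        data->input[i][1]  = (fann_type)(i & 1);               /* second input bit */
        data->output[i][0] = (fann_type)((i >> 1) & (i & 1));  /* AND of the two */
    }
    return data;
}

The returned pointer can be passed to fann_train_on_data just like data read from a file, and freed afterwards with fann_destroy_train.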

We know that the AND operation can be implemented with a single-layer perceptron. That means it can be implemented with a 2-layer neural network (1 input layer, 1 output layer).
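To see why, recall the textbook example: a single threshold neuron with both weights equal to 1 and a threshold of 1.5 computes AND, i.e. y = θ(x1 + x2 - 1.5), where θ(s) is 1 for s > 0 and 0 otherwise. The sum x1 + x2 exceeds 1.5 only for the input (1, 1), so y reproduces the AND truth table exactly. With that in mind, the complete training program looks like this: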

#include <stdio.h>
#include <doublefann.h>

const unsigned int num_input = 2;
const unsigned int num_output = 1;
const unsigned int num_layers = 2;
const unsigned int num_neurons_hidden = 1;
const float desired_error = (const float) 0.0001;
const unsigned int max_epochs = 1000;
const unsigned int epochs_between_reports = 10;

int main(int argc, char *argv[])
{
    struct fann *ann;
    struct fann_train_data *data;
    fann_type input[2];
    fann_type *calc_out;

    printf("Creating network.\n");
    /* fann_create_standard is variadic: it reads num_layers layer sizes,
       so with num_layers = 2 only num_input and num_neurons_hidden are used. */
    ann = fann_create_standard(num_layers, num_input, num_neurons_hidden, num_output);

    data = fann_read_train_from_file("q:\\and.data");

    printf("Training network.\n");
    fann_train_on_data(ann, data, max_epochs, epochs_between_reports, desired_error);

    printf("Testing network. %f\n", fann_test_data(ann, data));

    /* fann_save(ann, "q:\\and_float.net"); */

    input[0] = 0; input[1] = 0;
    calc_out = fann_run(ann, input);
    printf("and test (%f,%f), %f\n", input[0], input[1], calc_out[0]);

    input[0] = 0; input[1] = 1;
    calc_out = fann_run(ann, input);
    printf("and test (%f,%f), %f\n", input[0], input[1], calc_out[0]);

    input[0] = 1; input[1] = 0;
    calc_out = fann_run(ann, input);
    printf("and test (%f,%f), %f\n", input[0], input[1], calc_out[0]);

    input[0] = 1; input[1] = 1;
    calc_out = fann_run(ann, input);
    printf("and test (%f,%f), %f\n", input[0], input[1], calc_out[0]);

    fann_destroy_train(data);
    fann_destroy(ann);
    return 0;
}
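For reference, on a system with FANN installed, a program like this should build with something along the lines of gcc and_train.c -ldoublefann (the source file name here is just an example); the float and fixed-point variants of the library link as -lfloatfann and -lfixedfann instead.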

The program is very simple. Note the one line that I commented out:

fann_save(ann, "q:\\and_float.net");

This call saves the trained neural network to a file for later use. In fact, for most neural network applications, training the network and using it are separate phases. Training a neural network is usually time-consuming, but once it has been trained you can use it again and again; there is no need to retrain it before each use.

Once the state of the network has been saved, it is convenient to use later, like this:

    struct fann *ann;
    ann = fann_create_from_file("q:\\and_float.net");
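Put together, a minimal sketch of such a separate "use" program, assuming the network was saved to q:\\and_float.net as above:

#include <stdio.h>
#include <doublefann.h>

int main(void)
{
    /* Load the previously trained network; no training step is needed. */
    struct fann *ann = fann_create_from_file("q:\\and_float.net");
    fann_type input[2] = { 1, 1 };
    fann_type *calc_out;

    if (ann == NULL)
        return 1;

    calc_out = fann_run(ann, input);
    printf("and test (%f,%f), %f\n", input[0], input[1], calc_out[0]);

    fann_destroy(ann);
    return 0;
}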

Back to the training program: running it produces the following output:

Creating Network.
Training Network.
Max epochs 1000. Desired error:0.0001000000.
Epochs 1. Current error:0.2458046824. Bit fail 4.
Epochs 10. Current error:0.1185930669. Bit fail 2.
Epochs 20. Current error:0.0314348340. Bit fail 0.
Epochs 30. Current error:0.0145781031. Bit fail 0.
Epochs 40. Current error:0.0055742161. Bit fail 0.
Epochs 50. Current error:0.0024458752. Bit fail 0.
Epochs 60. Current error:0.0015401742. Bit fail 0.
Epochs 70. Current error:0.0008485225. Bit fail 0.
Epochs 80. Current error:0.0004349701. Bit fail 0.
Epochs 90. Current error:0.0002433052. Bit fail 0.
Epochs 100. Current error:0.0001541690. Bit fail 0.
Epochs 103. Current error:0.0000989792. Bit fail 0.
Testing network. 0.000095
and test (0.000000,0.000000), 0.000000
and test (0.000000,1.000000), 0.012569
and test (1.000000,0.000000), 0.007818
and test (1.000000,1.000000), 0.987361

You can see that the fit is very good. We also know that a single-layer perceptron cannot express the XOR operation, since XOR is not linearly separable. You can verify this with an experiment here: change the operation to XOR. The training data becomes:

4 2 1
0 0
0
0 1
1
1 0
1
1 1
0

The program needs essentially no changes. The result of running it looks like this:

Creating Network.
Training Network.
Max epochs 1000. Desired error:0.0001000000.
Epochs 1. Current error:0.2500112057. Bit fail 4.
Epochs 10. Current error:0.2502280176. Bit fail 4.
Epochs 20. Current error:0.2500012517. Bit fail 4.
Epochs 30. Current error:0.2500000000. Bit fail 4.
Epochs 40. Current error:0.2500000000. Bit fail 4.
Epochs 50. Current error:0.2500000000. Bit fail 4.
Epochs 60. Current error:0.2500000000. Bit fail 4.
Epochs 70. Current error:0.2500000000. Bit fail 4.
Epochs 80. Current error:0.2500000000. Bit fail 4.
Epochs 90. Current error:0.2500000000. Bit fail 4.
Epochs 100. Current error:0.2500000298. Bit fail 4.
Epochs 110. Current error:0.2500000000. Bit fail 4.
Epochs 120. Current error:0.2500000000. Bit fail 4.
Epochs 130. Current error:0.2500000000. Bit fail 4.
Epochs 140. Current error:0.2499999851. Bit fail 4.
Epochs 150. Current error:0.2500000000. Bit fail 4.
Epochs 160. Current error:0.2500000000. Bit fail 4.
Epochs 170. Current error:0.2500000000. Bit fail 4.
Epochs 180. Current error:0.2500000000. Bit fail 4.
Epochs 190. Current error:0.2500000000. Bit fail 4.
Epochs 200. Current error:0.2500000000. Bit fail 4.
Epochs 210. Current error:0.2500000000. Bit fail 4.
Epochs 220. Current error:0.2500000000. Bit fail 4.
Epochs 230. Current error:0.2500000000. Bit fail 4.
Epochs 240. Current error:0.2500000000. Bit fail 4.
Epochs 250. Current error:0.2500000000. Bit fail 4.
Epochs 260. Current error:0.2500000000. Bit fail 4.
Epochs 270. Current error:0.2500000000. Bit fail 4.
Epochs 280. Current error:0.2500000000. Bit fail 4.
Epochs 290. Current error:0.2500000000. Bit fail 4.
Epochs 300. Current error:0.2500000000. Bit fail 4.
Epochs 310. Current error:0.2500000000. Bit fail 4.
Epochs 320. Current error:0.2500000000. Bit fail 4.
Epochs 330. Current error:0.2500000000. Bit fail 4.
Epochs 340. Current error:0.2500000000. Bit fail 4.
Epochs 350. Current error:0.2500000000. Bit fail 4.
Epochs 360. Current error:0.2500000000. Bit fail 4.
Epochs 370. Current error:0.2500000000. Bit fail 4.
Epochs 380. Current error:0.2500000000. Bit fail 4.
Epochs 390. Current error:0.2500000000. Bit fail 4.
Epochs 400. Current error:0.2500000000. Bit fail 4.
Epochs 410. Current error:0.2500000298. Bit fail 4.
Epochs 420. Current error:0.2500000000. Bit fail 4.
Epochs 430. Current error:0.2500000000. Bit fail 4.
Epochs 440. Current error:0.2500000000. Bit fail 4.
Epochs 450. Current error:0.2499999851. Bit fail 4.
Epochs 460. Current error:0.2500000000. Bit fail 4.
Epochs 470. Current error:0.2500000000. Bit fail 4.
Epochs 480. Current error:0.2500000000. Bit fail 4.
Epochs 490. Current error:0.2500000000. Bit fail 4.
Epochs 500. Current error:0.2500000000. Bit fail 4.
Epochs 510. Current error:0.2500000000. Bit fail 4.
Epochs 520. Current error:0.2500000000. Bit fail 4.
Epochs 530. Current error:0.2500000000. Bit fail 4.
Epochs 540. Current error:0.2500000000. Bit fail 4.
Epochs 550. Current error:0.2500000000. Bit fail 4.
Epochs 560. Current error:0.2500000000. Bit fail 4.
Epochs 570. Current error:0.2500000000. Bit fail 4.
Epochs 580. Current error:0.2500000000. Bit fail 4.
Epochs 590. Current error:0.2500000000. Bit fail 4.
Epochs 600. Current error:0.2500000000. Bit fail 4.
Epochs 610. Current error:0.2500000000. Bit fail 4.
Epochs 620. Current error:0.2500000000. Bit fail 4.
Epochs 630. Current error:0.2500000000. Bit fail 4.
Epochs 640. Current error:0.2500000000. Bit fail 4.
Epochs 650. Current error:0.2500000000. Bit fail 4.
Epochs 660. Current error:0.2500000000. Bit fail 4.
Epochs 670. Current error:0.2500000000. Bit fail 4.
Epochs 680. Current error:0.2500000000. Bit fail 4.
Epochs 690. Current error:0.2500000000. Bit fail 4.
Epochs 700. Current error:0.2500000000. Bit fail 4.
Epochs 710. Current error:0.2500000000. Bit fail 4.
Epochs 720. Current error:0.2500000298. Bit fail 4.
Epochs 730. Current error:0.2500000000. Bit fail 4.
Epochs 740. Current error:0.2500000000. Bit fail 4.
Epochs 750. Current error:0.2500000000. Bit fail 4.
Epochs 760. Current error:0.2499999851. Bit fail 4.
Epochs 770. Current error:0.2500000000. Bit fail 4.
Epochs 780. Current error:0.2500000000. Bit fail 4.
Epochs 790. Current error:0.2500000000. Bit fail 4.
Epochs 800. Current error:0.2500000000. Bit fail 4.
Epochs 810. Current error:0.2500000000. Bit fail 4.
Epochs 820. Current error:0.2500000000. Bit fail 4.
Epochs 830. Current error:0.2500000000. Bit fail 4.
Epochs 840. Current error:0.2500000000. Bit fail 4.
Epochs 850. Current error:0.2500000000. Bit fail 4.
Epochs 860. Current error:0.2500000000. Bit fail 4.
Epochs 870. Current error:0.2500000000. Bit fail 4.
Epochs 880. Current error:0.2500000000. Bit fail 4.
Epochs 890. Current error:0.2500000000. Bit fail 4.
Epochs 900. Current error:0.2500000000. Bit fail 4.
Epochs 910. Current error:0.2500000000. Bit fail 4.
Epochs 920. Current error:0.2500000000. Bit fail 4.
Epochs 930. Current error:0.2500000000. Bit fail 4.
Epochs 940. Current error:0.2500000000. Bit fail 4.
Epochs 950. Current error:0.2500000000. Bit fail 4.
Epochs 960. Current error:0.2500000000. Bit fail 4.
Epochs 970. Current error:0.2500000000. Bit fail 4.
Epochs 980. Current error:0.2500000000. Bit fail 4.
Epochs 990. Current error:0.2500000000. Bit fail 4.
Epochs 1000. Current error:0.2500000000. Bit fail 4.
Testing network. 0.250000
XOR Test (0.000000,0.000000), 0.500042
XOR Test (0.000000,1.000000), 0.500058
XOR Test (1.000000,0.000000), 0.500064
XOR Test (1.000000,1.000000), 0.500080

You can see that the training does not converge. If you change the neural network to 3 layers, it gains the ability to express XOR.

Only two lines of the program need to change:

const unsigned int num_layers = 3;
const unsigned int num_neurons_hidden = 2;

These two lines change the network to 3 layers, with 2 neurons in the hidden layer. Since fann_create_standard reads num_layers layer sizes from its argument list, the existing call now uses all three sizes (2, 2, 1). Running again produces the following results:

Creating Network.
Training Network.
Max epochs 1000. Desired error:0.0001000000.
Epochs 1. Current error:0.2500394285. Bit fail 4.
Epochs 10. Current error:0.2503248155. Bit fail 4.
Epochs 20. Current error:0.2500005960. Bit fail 4.
Epochs 30. Current error:0.2500000596. Bit fail 4.
Epochs 40. Current error:0.2500004768. Bit fail 4.
Epochs 50. Current error:0.2487613261. Bit fail 4.
Epochs 60. Current error:0.2448616028. Bit fail 4.
Epochs 70. Current error:0.2345527709. Bit fail 4.
Epochs 80. Current error:0.1928165257. Bit fail 2.
Epochs 90. Current error:0.0843421519. Bit fail 1.
Epochs 100. Current error:0.0168493092. Bit fail 0.
Epochs 110. Current error:0.0046176752. Bit fail 0.
Epochs 120. Current error:0.0023012348. Bit fail 0.
Epochs 130. Current error:0.0014233238. Bit fail 0.
Epochs 140. Current error:0.0008981032. Bit fail 0.
Epochs 150. Current error:0.0005876040. Bit fail 0.
Epochs 160. Current error:0.0003150564. Bit fail 0.
Epochs 170. Current error:0.0001641331. Bit fail 0.
Epochs 175. Current error:0.0000839768. Bit fail 0.
Testing network. 0.000091
XOR Test (0.000000,0.000000), 0.011072
XOR Test (0.000000,1.000000), 0.993215
XOR Test (1.000000,0.000000), 0.992735
XOR Test (1.000000,1.000000), 0.011975

The results are very good.
