BP Neural Network and Its Application in Teaching Quality Evaluation


These study notes reflect my own understanding; if there are errors anywhere, please point them out so we can improve together. Thank you!

Before neural-network methods, teaching quality was evaluated only by simple processing of the teaching indicators, such as averaging them or summing them with manually assigned weights, so the results carried a great deal of subjectivity. This paper builds a teaching quality evaluation model based on a BP neural network. Teaching evaluation indicators are obtained through investigation and analysis, their quantified data serve as the network's input, and the network is trained so that its actual output approaches the expected teaching-effect output. The expected output is compared with the actual output, and when the error reaches the desired minimum, training is considered successful. A successfully trained network yields accurate weights and thresholds and can then process a new set of teaching evaluation indicators to produce a teaching quality evaluation result. Applied to teaching quality evaluation, this method overcomes the subjective factors that experts introduce during evaluation, obtains satisfactory evaluation results, and has wide applicability.

1. Introduction to the BP Neural Network

A BP (back-propagation) neural network is an artificial neural network proposed by Rumelhart and McClelland of the University of California; it is a network trained by the error back-propagation algorithm. Its learning rule is gradient descent: the weights and thresholds of the network are adjusted repeatedly through backward propagation of the error, so that the sum of squared errors of the network is minimized. In essence, it is a dynamic information-processing system composed of a large number of information-processing units coupled together extensively.
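Written out, this learning rule is the standard delta rule for sigmoid units; here $\eta$ is the learning rate (the code in Section 5 calls it l1 and l2), $t_j$ the expected output, and $o_j$ the actual output:

$$E = \frac{1}{2}\sum_j (t_j - o_j)^2, \qquad \Delta w_{ji} = -\eta\,\frac{\partial E}{\partial w_{ji}}$$

$$\delta_j = o_j(1 - o_j)(t_j - o_j), \qquad w_{ji} \leftarrow w_{ji} + \eta\,\delta_j\,o_i$$

with the hidden-layer deltas obtained by propagating the output deltas backwards: $\delta_i = o_i(1 - o_i)\sum_j w_{ji}\,\delta_j$.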
A BP neural network involves two processes: forward propagation of information and backward propagation of errors. In forward propagation, the data of the teaching evaluation indicators enter the network through the input layer, are processed by the hidden layer, and produce a result at the output layer. If the actual output differs from the expected output, the error is propagated backwards, layer by layer from the output layer through the hidden layer back to the input layer, and the weights and thresholds are adjusted along the way. The two processes alternate in a cycle until the error reaches the desired minimum, at which point the network is considered successfully trained. The trained network can then process new teaching quality indicators and give an accurate evaluation of teaching quality.
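To make the forward step concrete, here is a minimal, self-contained C sketch of one sigmoid layer, using the same weighted-sum-minus-threshold convention as the full program in Section 5 (the function names and the numbers in main are illustrative only):

#include <math.h>
#include <stdio.h>

/* Sigmoid activation, as used throughout the full program in Section 5. */
double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* One forward step: n_in inputs feed n_out sigmoid units. w is row-major,
 * w[j*n_in+i] is the weight from input i to unit j; th[j] is unit j's
 * threshold. The net input is (weighted sum - threshold). */
void forward_layer(int n_in, int n_out, const double *in,
                   const double *w, const double *th, double *out) {
    for (int j = 0; j < n_out; j++) {
        double net = 0.0;
        for (int i = 0; i < n_in; i++)
            net += w[j * n_in + i] * in[i];   /* weighted sum */
        out[j] = sigmoid(net - th[j]);        /* squash to (0,1) */
    }
}

int main(void) {
    /* toy layer: 2 inputs, 1 unit; the numbers are illustrative only */
    double in[2] = {1.0, 0.5}, w[2] = {0.4, -0.2}, th[1] = {0.1}, out[1];
    forward_layer(2, 1, in, w, th, out);
    printf("unit output: %f\n", out[0]);  /* sigmoid(0.4*1.0 - 0.2*0.5 - 0.1) */
    return 0;
}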
The logical structure of the BP neural network is shown below:

[Figure: logical structure diagram of the BP neural network]
2. Application of the BP Neural Network in a Teaching Quality Evaluation Model

Teaching evaluation indicators (each indicator is scored on a range of 0-10):

X1: Sets an example, influencing students through his or her own conduct;
X2: Assigns an appropriate amount of homework, corrects it carefully, and answers questions patiently;
X3: Stimulates students' interest and inspires creative thinking;
X4: The teacher's dress, manner, and mental state;
X5: Teaching attitude and teaching skill;
X6: Lectures are focused and clear;
X7: Can express complex problems clearly;
X8: Guides students to explore and solve problems;
X9: Pays attention to classroom interaction and teacher-student exchange;
X10: Makes full use of modern teaching methods.
(1) Determination of the number of input layer neurons
According to our survey of teaching evaluation indicators, there are 10 indicators in total. These 10 indicators serve as the input neurons of the model, so the number of input-layer neurons is n = 10.

(2) Determination of the number of neurons in the output layer
We take the evaluation result as the output of the network, so the number of output-layer neurons is m = 1.
(3) Determination of the number of hidden layers in the network
The hidden part can consist of one layer or several layers. Following the theory above, we choose a single hidden layer for the teaching quality evaluation model.
(4) Determination of the number of hidden layer neurons
In general, the number of hidden-layer neurons is determined according to the convergence performance of the network. With too few hidden neurons the network may fail to train, or may not be powerful enough; with too many, learning takes too long and the error is not necessarily any better, so choosing an appropriate number of hidden neurons is a real problem. Usually the number is determined by trial: train with different candidate counts and compare the error between the network's output and the expected output for each (a sketch of this procedure follows). In this paper, based on relevant experience, we set the number of hidden-layer neurons to s = 8.
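A minimal sketch of this trial-and-compare procedure is given below; train_and_report_error is a hypothetical callback standing in for one complete training run, since the program in Section 5 fixes the hidden size at compile time with the X2 macro:

/* Hypothetical sketch: train a fresh network for each candidate hidden
 * size in [lo, hi] and keep the size with the smallest final error.
 * train_and_report_error(h) is assumed to train a network with h hidden
 * neurons and return its final total error; it is not in the original
 * program, which would need to be generalized to supply it. */
int best_hidden_size(int lo, int hi,
                     double (*train_and_report_error)(int hidden)) {
    int best = lo;
    double best_err = train_and_report_error(lo);
    for (int h = lo + 1; h <= hi; h++) {
        double err = train_and_report_error(h);
        if (err < best_err) { best_err = err; best = h; }
    }
    return best;  /* this paper settles on 8, based on experience */
}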
After that, all of the evaluation-indicator data, together with known good teaching quality evaluation results, are fed into the network for training. We take the learning rate to be 0.5 and the target minimum error to be 0.00001. When training finishes we obtain the appropriate weights and thresholds, and indicator data collected in later surveys are then processed with these weights and thresholds to produce the corresponding teaching quality evaluation result.

3. Analysis of Evaluation Results

The scores of the teaching indicators obtained from the questionnaire are as follows:

[Table: questionnaire scores of the 10 teaching indicators for the 8 samples]
The 8 samples of the 10 teaching indicators are saved in a TXT document and read into the input layer of the network. After 5,116 training iterations the error falls to the set minimum, and the final weights and thresholds are obtained. Using the trained network to process a new data record (5.5 7.5 4 5 8 4.5 7 8 8.5 6) yields an actual teaching quality output of 6.901607.
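For reference, the reading loop in the Section 5 program (Huoqushuju()) treats the TXT file as one whitespace-separated stream of numbers: first the 8 x 10 input values, sample by sample, then the 8 expected outputs. Schematically, with x[k][i] denoting sample k's score on indicator i and d[k] its expected evaluation (the actual numbers come from the questionnaire):

x[1][1]  x[1][2]  ...  x[1][10]      sample 1, indicators X1..X10
  ...
x[8][1]  x[8][2]  ...  x[8][10]      sample 8
d[1]  d[2]  ...  d[8]                expected outputs, one per sample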

4. Conclusion

Because of its highly nonlinear function mapping and its self-adaptive, self-learning ability, the BP neural network model effectively overcomes the shortcomings of traditional teaching quality evaluation methods, reduces the human factors involved in determining indicator weights, and achieves higher precision. After the training described above, we find that the error between the output of the BP neural network model and the true value is relatively small, and the performance meets the requirements of practical application. In addition, the accuracy of the network's output depends on the number of training samples: the more training samples there are, the closer the evaluated teaching effect comes to the actual evaluation value.

In short, using a BP neural network to build a teaching quality evaluation model can provide a useful reference for teaching administration departments seeking scientific teaching quality evaluation solutions.

5. C Language Code Implementation and Detailed Explanation of the Neural Network
// Bp.cpp : defines the entry point of the console application.
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <conio.h>
#include <time.h>

#define X  8    // number of samples
#define X1 10   // number of input-layer neurons
#define X2 8    // number of hidden-layer neurons
#define X3 1    // number of output-layer neurons
#define Y  20   // number of weight-adjustment records

double w1[X2][X1]; // weights, input layer to hidden layer
double w2[X3][X2]; // weights, hidden layer to output layer
double y[X2];      // thresholds of the hidden layer
double y2[X3];     // thresholds of the output layer
double p[X1];      // copy of the current sample's inputs, for easy reference
double t[X3];      // copy of the current sample's expected output (sigmoid-squashed)
double t1[X3];     // raw expected output
double yci[X2];    // input values of the hidden layer
double yco[X2];    // output values of the hidden layer
double sci[X3];    // input values of the output layer
double sco[X3];    // output values of the output layer
double em[X];      // total error of the k-th sample
double dao1[X3];   // output-layer gradient term (gradient descent)
double dao2[X2];   // hidden-layer gradient term: dao2 = yco' * sum(dao1 * w2)
double zsco[X3];   // de-squashed final output
double l1;         // learning rate, hidden layer to output layer
double l2;         // learning rate, input layer to hidden layer
FILE *fp;

struct xuexishuju {          // sample data
    double shuru[X1];        // input neuron data
    double qiwangshuchu[X3]; // expected output data
} xuexishuju[X];

struct quanzhi {        // weights
    double qz1[X2][X1]; // input layer to hidden layer
    double qz2[X3][X2]; // hidden layer to output layer
} quanzhi[Y];

struct yuzhi {      // thresholds
    double yz1[X2]; // hidden layer
    double yz2[X3]; // output layer
} yuzhi[Y];

// Read the sample data from file
int Huoqushuju() {
    int i = 0, j = 0, k = 0;
    double data;
    fp = fopen("input data.txt", "r");
    if (fp == NULL) {
        printf("Sorry! The file won't open!");
        getch();  // pause the screen, wait for a key press
        exit(1);
    }
    while (fscanf(fp, "%lf", &data) != EOF) {  // read one number at a time
        j++;
        if (j <= (X * X1)) {                   // first X*X1 numbers: the inputs
            if (i < X1) xuexishuju[k].shuru[i] = data;
            if (k == (X - 1) && i == (X1 - 1)) { k = 0; i = -1; }
            if (i == (X1 - 1)) { k++; i = -1; }
        } else if ((j > X * X1) && (j <= (X * X1 + X * X3))) {  // then the expected outputs
            if (i < X3) xuexishuju[k].qiwangshuchu[i] = data;
            if (k == (X - 1) && i == (X3 - 1)) { k = 0; i = -1; }
            if (i == (X3 - 1)) { k++; i = -1; }
        }
        i++;
    }
    fclose(fp);
    printf("\nSample data read successfully!");
    printf("\nThe sample data are:");
    for (k = 0; k < X; k++) {
        for (i = 0; i < X1; i++)
            printf("\ninput data[%d][%d]=%f", k, i, xuexishuju[k].shuru[i]);
        for (j = 0; j < X3; j++)
            printf("\nexpected data[%d][%d]=%f", k, j, xuexishuju[k].qiwangshuchu[j]);
    }
    printf("\nStarting the computation...\n");
    getch();
    return 1;
}

// Initialize the weights and thresholds with random values
int Chushihuaquanyu() {
    int a1, a2, a3, a4, a5, a6;
    for (a1 = 0; a1 < X2; a1++)            // input-to-hidden weights
        for (a2 = 0; a2 < X1; a2++) {
            w1[a1][a2] = (double)((rand() / 32767.0) * 2 - 1);  // random value in [-1, 1]
            printf("initial input-to-hidden weight w1[%d][%d]=%f\n", a1, a2, w1[a1][a2]);
        }
    for (a3 = 0; a3 < X3; a3++)            // hidden-to-output weights
        for (a4 = 0; a4 < X2; a4++) {
            w2[a3][a4] = (double)((rand() / 32767.0) * 2 - 1);
            printf("initial hidden-to-output weight w2[%d][%d]=%f\n", a3, a4, w2[a3][a4]);
        }
    for (a5 = 0; a5 < X2; a5++) {          // hidden-layer thresholds
        y[a5] = (double)((rand() / 32767.0) * 2 - 1);
        printf("initial hidden-layer threshold y[%d]=%f\n", a5, y[a5]);
    }
    for (a6 = 0; a6 < X3; a6++) {          // output-layer thresholds
        y2[a6] = (double)((rand() / 32767.0) * 2 - 1);
        printf("initial output-layer threshold y2[%d]=%f\n", a6, y2[a6]);
    }
    return 1;
}

// Copy the k-th sample's inputs into p[] for easy reference
int Zaishurup(int k) {
    for (int i = 0; i < X1; i++)
        p[i] = xuexishuju[k].shuru[i];
    return 1;
}

// Copy the k-th sample's expected output, squashed by the sigmoid
int Zaishurut(int k) {
    for (int j = 0; j < X3; j++) {
        t1[j] = xuexishuju[k].qiwangshuchu[j];
        t[j] = 1.0 / (1.0 + exp(-t1[j]));
    }
    return 1;
}

// Forward pass, input layer to hidden layer
int Ru_yin_quan() {
    double sum;
    for (int j = 0; j < X2; j++) {
        sum = 0.0;
        for (int i = 0; i < X1; i++)
            sum += w1[j][i] * p[i];              // weighted sum
        yci[j] = sum - y[j];                     // hidden-layer input
        yco[j] = 1.0 / (1.0 + exp(-yci[j]));     // hidden-layer output (sigmoid)
    }
    return 1;
}

// Forward pass, hidden layer to output layer
int Yin_chu_quan() {
    double sum;
    for (int j = 0; j < X3; j++) {
        sum = 0.0;
        for (int i = 0; i < X2; i++)
            sum += w2[j][i] * yco[i];            // weighted sum of hidden OUTPUTS (the transcribed listing used yci here, a typo)
        sci[j] = sum - y2[j];                    // output-layer input
        sco[j] = 1.0 / (1.0 + exp(-sci[j]));     // output-layer output (sigmoid)
    }
    return 1;
}

// Total error and output-layer gradient for sample k
int Wc_chu_yin(int k) {
    double e[X3];
    double fange = 0.0;
    for (int j = 0; j < X3; j++) {
        e[j] = t[j] - sco[j];
        fange += e[j] * e[j];
        dao1[j] = sco[j] * (1 - sco[j]) * e[j];  // delta = f'(net) * error
    }
    em[k] = fange / 2;                           // squared error of sample k
    return 1;
}

// Back-propagate the error to the hidden layer
int Wc_yin_ru() {
    double summ;
    for (int i = 0; i < X2; i++) {
        summ = 0.0;
        for (int j = 0; j < X3; j++)
            summ += dao1[j] * w2[j][i];
        dao2[i] = summ * yco[i] * (1 - yco[i]);
    }
    return 1;
}

// Record the weights after each change so the next pass can refer to them
int baocunw(int k) {
    for (int i = 0; i < X2; i++)
        for (int j = 0; j < X1; j++)
            quanzhi[k].qz1[i][j] = w1[i][j];
    for (int ii = 0; ii < X3; ii++)
        for (int jj = 0; jj < X2; jj++)
            quanzhi[k].qz2[ii][jj] = w2[ii][jj];
    return 1;
}

// Record the thresholds after each change
int baocuny(int k) {
    for (int i = 0; i < X2; i++) yuzhi[k].yz1[i] = y[i];
    for (int j = 0; j < X3; j++) yuzhi[k].yz2[j] = y2[j];
    return 1;
}

// Update the hidden-to-output weights and thresholds
int Xin_chu_yin() {
    for (int k = 0; k < X3; k++) {
        for (int j = 0; j < X2; j++)
            w2[k][j] = w2[k][j] + l1 * dao1[k] * yco[j];  // descent step (the transcribed listing subtracted here, which ascends the gradient)
        y2[k] = y2[k] - l1 * dao1[k];                     // threshold enters the net as (sum - y2), hence the opposite sign
    }
    return 1;
}

// Update the input-to-hidden weights and thresholds
int Xin_yin_ru() {
    for (int j = 0; j < X2; j++) {
        for (int i = 0; i < X1; i++)
            w1[j][i] = w1[j][i] + l2 * dao2[j] * p[i];    // the transcribed listing read w2[j][i] here, an out-of-bounds typo
        y[j] = y[j] - l2 * dao2[j];
    }
    return 1;
}

// Save the final weights to a txt file
void Baocunquan() {
    FILE *fp = fopen("saved weights.txt", "w");
    if (fp == NULL) {
        printf("Sorry! The file won't open!\n");
        getch();
        exit(1);
    }
    fprintf(fp, "The final weight data are as follows:\n");
    fprintf(fp, "input-to-hidden weights:\n");
    for (int i = 0; i < X2; i++)
        for (int j = 0; j < X1; j++)
            fprintf(fp, "w1[%d][%d]=%f\n", i, j, w1[i][j]);
    fprintf(fp, "\nhidden-to-output weights:\n");
    for (int ii = 0; ii < X3; ii++)
        for (int jj = 0; jj < X2; jj++)
            fprintf(fp, "w2[%d][%d]=%f\n", ii, jj, w2[ii][jj]);
    fprintf(fp, "\n");
    fclose(fp);
    printf("Final weights saved successfully!\n");
    getch();
}

// Save the final thresholds to a txt file
void Baocunyu() {
    FILE *fp = fopen("saved thresholds.txt", "w");
    if (fp == NULL) {
        printf("Sorry! The file won't open!\n");
        getch();
        exit(1);
    }
    fprintf(fp, "The final threshold data are as follows:\n");
    fprintf(fp, "hidden-layer thresholds:\n");
    for (int i = 0; i < X2; i++)
        fprintf(fp, "y[%d]=%f\n", i, y[i]);
    fprintf(fp, "\noutput-layer thresholds:\n");
    for (int j = 0; j < X3; j++)
        fprintf(fp, "y2[%d]=%f\n", j, y2[j]);
    fprintf(fp, "\n");
    fclose(fp);
    printf("Final thresholds saved successfully!\n");
    getch();
}

int main() {
    double wc1 = 0.00001;  // target minimum error
    double wc2;            // actual error of the current epoch
    int s1 = 20000;        // maximum number of training epochs
    int s2 = 0;            // actual number of training epochs
    l1 = 0.5;              // learning rates
    l2 = 0.5;
    Huoqushuju();          // read the sample data
    Chushihuaquanyu();     // initialize weights and thresholds randomly
    do {
        int k;
        ++s2;
        wc2 = 0;
        for (k = 0; k < X; k++) {
            Zaishurup(k);    // load sample inputs
            Zaishurut(k);    // load sample expected output
            Ru_yin_quan();   // forward pass: input to hidden
            Yin_chu_quan();  // forward pass: hidden to output
            Wc_chu_yin(k);   // total error and output-layer gradient
            Wc_yin_ru();     // back-propagate to the hidden layer
            baocunw(k);      // record the weights for the next pass
            baocuny(k);      // record the thresholds for the next pass
            Xin_chu_yin();   // update hidden-to-output weights and thresholds
            Xin_yin_ru();    // update input-to-hidden weights and thresholds
            wc2 += em[k];    // accumulate the error over all X samples
        }
        printf("epoch %d error = %f\t", s2, wc2);
        printf("target error = %f\n", wc1);
        if (s2 > s1) {
            printf("More than 20,000 epochs; press any key to leave the loop!\n");
            getch();
            break;
        }
    } while (wc2 > wc1);
    printf("Trained for %d epochs!\n", s2);
    Baocunquan();  // save the final weights to a txt file
    Baocunyu();    // save the final thresholds to a txt file
    printf("The neural network has been trained.\n");
    printf("Now use the trained weights and thresholds on a new set of indicator data!\n");
    printf("Please enter a new set of teaching indicator data:\n");
    double sc[X1];
    for (int a = 0; a < X1; a++)
        scanf("%lf", &sc[a]);
    double sum1, sum2;
    for (int j = 0; j < X2; j++) {          // input to hidden
        sum1 = 0.0;
        for (int i = 0; i < X1; i++)
            sum1 += w1[j][i] * sc[i];
        yci[j] = sum1 - y[j];
        yco[j] = 1.0 / (1.0 + exp(-yci[j]));
    }
    for (int jj = 0; jj < X3; jj++) {       // hidden to output
        sum2 = 0.0;
        for (int ii = 0; ii < X2; ii++)
            sum2 += w2[jj][ii] * yco[ii];
        sci[jj] = sum2 - y2[jj];
        sco[jj] = 1.0 / (1.0 + exp(-sci[jj]));
    }
    for (int b = 0; b < X3; b++) {
        zsco[b] = log(sco[b]) - log(1 - sco[b]);  // invert the sigmoid applied to the targets in Zaishurut()
        printf("Teaching quality evaluated from the new indicator data: %f\n", zsco[b]);
    }
    return 0;
}
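Two portability notes on the listing above: stdafx.h and conio.h are Visual Studio specifics, so on other compilers the stdafx.h include should be dropped and getch() replaced with getchar() (or the pauses removed). Also, dividing rand() by 32767.0 assumes RAND_MAX is 32767, which holds for MSVC but not for every C library; dividing by (double)RAND_MAX is the portable way to obtain initial values in [-1, 1].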






