Neural Network Model

**I. Neural Network Model**

Research on neural network models began in the 1940s. As a cross-discipline, the neural network is a model that humans constructed to implement certain functions based on their understanding of the brain and its networks of neurons. After nearly 70 years of development, the neural network model has become a typical example of machine learning. Instead of assuming any probability distribution, it imitates the functions of the human brain to perform abstract operations. A neural network imitates human thinking through mathematical algorithms and is a typical machine learning technique in data mining.

Neural networks are an abstract computing model of the human brain. There are tens of billions of neurons in the human brain that process information; these neurons are interconnected with each other, and it is through these connections that the human brain produces precise logical thinking. The "neural network" in data mining is likewise composed of a large number of artificial neurons (micro processing units) operating in parallel. It has the ability to learn from empirical knowledge by adjusting its connection strengths, and to apply that knowledge.

To put it simply, a neural network takes multiple inputs, passes them through non-linear models with weighted interconnections between them (the weighting is carried out on the connections between layers), and finally produces an output model. Specifically, a neural network is a set of interconnected input/output units in which each connection is associated with a weight. In the learning phase, the weights of these connections are adjusted so that the network predicts the correct class labels of input observations. An artificial neural network can therefore be understood as an information processing system formed by a large number of neural elements through a wide range of connections, obtained by sampling, simplifying, and simulating the brain.
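The "interconnected input/output units, each connection associated with a weight" idea can be sketched as a single artificial neuron. This is a minimal illustration, not code from the text; the `Neuron` class and the choice of a sigmoid squashing function are assumptions for demonstration.

```python
import math

def sigmoid(x):
    """S-shaped squashing function mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    """One artificial neuron: a weighted combination of inputs plus a bias,
    passed through a non-linear conversion function."""
    def __init__(self, weights, bias):
        self.weights = weights  # one adjustable weight per input connection
        self.bias = bias

    def forward(self, inputs):
        # weighted sum over all incoming connections, then non-linear squashing
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return sigmoid(total)

n = Neuron([0.5, -0.3], 0.1)
print(n.forward([1.0, 2.0]))  # weighted sum is 0.0, so sigmoid gives 0.5
```

Learning then amounts to adjusting `weights` and `bias` until the outputs match the desired class labels.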

**II. Principles of the Neural Network Model**

The structure of artificial neural network models falls roughly into two categories: forward (feed-forward) networks and feedback networks.

Specifically, a forward network is one in which signals travel only from the input end to the output end, with no feedback; a feedback network is one in which, in addition to data propagating from the input end to the output end, loops or feedback connections exist. Schematics of the two network types are shown in the accompanying figure.

In the preceding typical structure, **a neural network takes multiple non-linear models and weighted interconnections between them to generate an output model. Specifically, the multivariate input layer consists of the independent variables; these independent variables are weighted and combined as they pass to the intermediate layer, which is the hidden layer. The hidden layer mainly contains non-linear functions, such as conversion (transfer) functions or squashing functions.** The hidden layer is the so-called black box: almost no one can explain how the non-linear functions in the hidden layer combine in all circumstances, which makes this a typical case of computer thinking replacing human thinking.
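The input-layer / hidden-layer / output-layer structure described above can be sketched as a forward pass through a tiny feed-forward network. This is an illustrative sketch with made-up weights, assuming tanh as the hidden layer's squashing function.

```python
import math

def tanh_layer(inputs, weights, biases):
    # each hidden unit forms a weighted combination of all inputs,
    # then applies a non-linear conversion function (here tanh)
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, W1, b1, W2, b2):
    hidden = tanh_layer(x, W1, b1)  # the hidden ("black box") layer
    # output layer: weighted combination of the hidden activations
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# two input variables, two hidden units, one output
W1 = [[0.4, -0.2], [0.3, 0.8]]; b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]; b2 = [0.0]
print(forward([1.0, 0.5], W1, b1, W2, b2))
```

Data flows strictly from input to output with no loops, which is what makes this a forward network rather than a feedback network.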

Because a neural network has a large-scale parallel structure and processes information in parallel, it has good self-adaptability, good self-organization, and high fault tolerance, as well as strong learning, memory, and recognition capabilities. Neural networks have been widely applied in many fields, such as signal processing, pattern recognition, expert systems, and prediction systems. Currently, the most popular neural network algorithm is backpropagation. This algorithm learns on a multilayer feed-forward neural network, which is composed of one input layer, one or more hidden layers, and one output layer. Neural networks are mainly used in data-driven operations: as an important technical support for classification and prediction, they have broad application prospects in user segmentation, behavior prediction, and marketing response.
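The backpropagation idea mentioned above can be sketched in its simplest form: a single sigmoid unit trained by gradient descent, where the error is propagated back through the activation to adjust the weights. The training task (learning logical AND) and all parameter values are illustrative assumptions, not from the text.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=5000, lr=1.0):
    """Gradient descent on squared error for one sigmoid unit."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # error gradient propagated back through the sigmoid derivative
            delta = (out - target) * out * (1.0 - out)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

# logical AND as a tiny, linearly separable classification task
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
for x, t in data:
    out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    print(x, t, round(out, 2))
```

Full backpropagation applies this same chain-rule update layer by layer through the hidden layers of a multilayer feed-forward network.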

**III. Five Factors Affecting the Neural Network Model**

The main characteristic of neural networks is that their knowledge and results cannot be interpreted. No one knows how the non-linear functions of the hidden layer combine the independent variables, so the logical relationships a neural network captures are often invisible in applications. However, this shortcoming does not prevent the technology from being widely applied in data-driven operations. It can even be argued that precisely because the results are unexplainable, we are more likely to discover new, previously unrecognized rules and relationships. When modeling with neural network techniques, the following five factors have a significant impact on the modeling results:

**The number of layers.** For given input and output layers, the number of hidden layers is very important, both theoretically and practically. Although there are no fixed rules, experienced data analysts usually need to experiment to find a satisfactory model structure.
**The number of input variables in each layer.** Too many independent variables may lead to over-fitting: the model appears very stable when it is built, but once new data are used, its predictions differ greatly from the actual results, and the model loses its value and significance for prediction. Therefore, before using neural networks for modeling, it is important to select and streamline the input variables.
**The type of connection.** In a neural network model, input variables can be combined in different ways: forward, backward, or in parallel. Different combinations may affect the model results differently.
**The degree of connection.** Within each layer, the elements can be fully or partially connected to the elements of the adjacent layer. Partial connection can reduce the risk of over-fitting, but may also weaken the predictive capability of the model.
**The conversion function.** A conversion function, also known as an activation function or squashing function, squeezes input values ranging from negative infinity to positive infinity into a very small output range. This kind of non-linear functional relationship contributes to the stability and reliability of the model. The criterion for selecting a conversion function is simple: choose the function that provides the best results in the shortest time. Common conversion functions include the threshold function, the hyperbolic tangent function, and the S-shaped (sigmoid) function.

**IV. Design Principles of Forward Network Models**

The learning process of most neural network models is to continuously adjust the connection weights so that the total error reaches its minimum absolute value. For example, the design principles of a common forward network model are as follows:

**The number of hidden layers.** Theoretically, two layers are enough; in practice, one hidden layer is often sufficient.
**The input variables in each layer.** The variables in the input layer are determined by the specific analysis background. The number of hidden nodes is often set to the square root of the product of the number of input and output nodes. The input variables should be kept as simple as possible, following the principle of preferring fewer variables.
**The degree of connection.** Generally, full connection between adjacent layers is chosen.
**The conversion function.** The S-shaped (logistic) function is usually the first choice, because it provides the best fit in the shortest time.
**Model training.** Training should be sufficient, but over-fitting must be avoided.
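One reading of the sizing rule above (hidden nodes as the square root of the product of the input and output node counts) can be written as a small helper. The function name is an illustrative assumption.

```python
import math

def hidden_size(n_inputs, n_outputs):
    """Heuristic hidden-layer size: sqrt(inputs * outputs), at least 1."""
    return max(1, round(math.sqrt(n_inputs * n_outputs)))

print(hidden_size(9, 4))   # sqrt(36) -> 6 hidden nodes
print(hidden_size(10, 1))  # sqrt(10) -> about 3 hidden nodes
```

Like the other design principles, this is a starting point for experimentation rather than a hard rule.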

**V. Advantages of Neural Networks**


**VI. Disadvantages and Precautions of the Neural Network Model**
