CNTK v2.2.0 provides a C# API to build, train, and evaluate CNTK models. This section gives an overview of the CNTK C# API. C# training examples are available in the CNTK GitHub repository.
Using the C#/.NET Managed API to Build Deep Neural Networks
The CNTK C# API provides basic operations in the CNTKLib namespace. A CNTK operation takes one or two input variables with the necessary parameters and produces a CNTK function. A CNTK function maps input data to output. A CNTK function can also be treated as a variable and passed as input to another CNTK operation. With this mechanism, a deep neural network can be built from basic CNTK operations through chaining and composition. As an example:
    private static Function CreateLogisticModel(Variable input, int numOutputClasses)
    {
        Parameter bias = new Parameter(new int[] { numOutputClasses }, DataType.Float, 0);
        Parameter weights = new Parameter(new int[] { numOutputClasses, input.Shape[0] }, DataType.Float,
            CNTKLib.GlorotUniformInitializer(
                CNTKLib.DefaultParamInitScale,
                CNTKLib.SentinelValueForInferParamInitRank,
                CNTKLib.SentinelValueForInferParamInitRank, 1));
        var z = CNTKLib.Plus(bias, CNTKLib.Times(weights, input));
        Function logisticClassifier = CNTKLib.Sigmoid(z, "LogisticClassifier");
        return logisticClassifier;
    }
CNTKLib.Plus, CNTKLib.Times, and CNTKLib.Sigmoid are basic CNTK operations. The input argument can be a CNTK variable representing data features, or another CNTK function. This code builds a simple computational network whose parameters are adjusted during the training phase to make a decent multi-class classifier.
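Because every operation returns a function that can in turn be used as the input of another operation, layers can be stacked by composition. The following sketch, a hypothetical two-layer variant of the model above (not part of the original examples), illustrates this:

    // Hypothetical sketch: chain two fully connected sigmoid layers.
    // The Function returned by the first layer is passed directly as
    // the input of the second, illustrating composition.
    private static Function CreateTwoLayerModel(Variable input, int hiddenDim, int numOutputClasses)
    {
        Parameter w1 = new Parameter(new int[] { hiddenDim, input.Shape[0] }, DataType.Float,
            CNTKLib.GlorotUniformInitializer());
        Parameter b1 = new Parameter(new int[] { hiddenDim }, DataType.Float, 0);
        Function hidden = CNTKLib.Sigmoid(CNTKLib.Plus(b1, CNTKLib.Times(w1, input)), "hidden");

        Parameter w2 = new Parameter(new int[] { numOutputClasses, hiddenDim }, DataType.Float,
            CNTKLib.GlorotUniformInitializer());
        Parameter b2 = new Parameter(new int[] { numOutputClasses }, DataType.Float, 0);
        // hidden is a Function, but it is accepted wherever a Variable is expected.
        return CNTKLib.Sigmoid(CNTKLib.Plus(b2, CNTKLib.Times(w2, hidden)), "output");
    }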
The CNTK C# API provides options to build convolutional neural networks (CNNs) and recurrent neural networks (RNNs). For example, to build a 2-layer CNN image classifier:
    var convParams1 = new Parameter(
        new int[] { kernelWidth1, kernelHeight1, numInputChannels, outFeatureMapCount1 },
        DataType.Float, CNTKLib.GlorotUniformInitializer(convWScale, -1, 2), device);
    var convFunction1 = CNTKLib.ReLU(CNTKLib.Convolution(
        convParams1, input,
        new int[] { 1, 1, numInputChannels }));
    var pooling1 = CNTKLib.Pooling(convFunction1, PoolingType.Max,
        new int[] { poolingWindowWidth1, poolingWindowHeight1 },
        new int[] { hStride1, vStride1 }, new bool[] { true });

    var convParams2 = new Parameter(
        new int[] { kernelWidth2, kernelHeight2, outFeatureMapCount1, outFeatureMapCount2 },
        DataType.Float, CNTKLib.GlorotUniformInitializer(convWScale, -1, 2), device);
    var convFunction2 = CNTKLib.ReLU(CNTKLib.Convolution(
        convParams2, pooling1,
        new int[] { 1, 1, outFeatureMapCount1 }));
    var pooling2 = CNTKLib.Pooling(convFunction2, PoolingType.Max,
        new int[] { poolingWindowWidth2, poolingWindowHeight2 },
        new int[] { hStride2, vStride2 }, new bool[] { true });

    var imageClassifier = TestHelper.Dense(pooling2, numClasses, device, Activation.None, "ImageClassifier");
An example of building an RNN with long short-term memory (LSTM) is also provided.
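For orientation, the core of an LSTM cell can be written with the same basic operations. The following is a simplified, illustrative sketch of a single LSTM step; it is not the shipped LSTM example, and the parameter shapes, helper names, and the omission of recurrence wiring (CNTKLib.PastValue) are simplifying assumptions made here:

    // Illustrative sketch of one LSTM step built from basic CNTKLib
    // operations. Real implementations also wire up the recurrence
    // with CNTKLib.PastValue; that is omitted here.
    private static Tuple<Function, Function> LSTMCell(
        Variable input, Variable prevOutput, Variable prevCellState,
        int cellDim, DeviceDescriptor device)
    {
        // Creates a fresh linear projection into the cell dimension.
        // (A real LSTM shares one projection per gate.)
        Func<Variable, Function> linear = x =>
        {
            var w = new Parameter(new int[] { cellDim, NDShape.InferredDimension },
                DataType.Float, CNTKLib.GlorotUniformInitializer(), device);
            var b = new Parameter(new int[] { cellDim }, DataType.Float, 0, device);
            return CNTKLib.Plus(b, CNTKLib.Times(w, x));
        };

        // Forget, input, and output gates, plus the candidate cell value.
        Function forgetGate = CNTKLib.Sigmoid(CNTKLib.Plus(linear(input), linear(prevOutput)));
        Function inputGate = CNTKLib.Sigmoid(CNTKLib.Plus(linear(input), linear(prevOutput)));
        Function outputGate = CNTKLib.Sigmoid(CNTKLib.Plus(linear(input), linear(prevOutput)));
        Function candidate = CNTKLib.Tanh(CNTKLib.Plus(linear(input), linear(prevOutput)));

        // New cell state and cell output.
        Function cellState = CNTKLib.Plus(
            CNTKLib.ElementTimes(forgetGate, prevCellState),
            CNTKLib.ElementTimes(inputGate, candidate));
        Function output = CNTKLib.ElementTimes(outputGate, CNTKLib.Tanh(cellState));
        return Tuple.Create(output, cellState);
    }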
Data Preparation with C#/.NET
CNTK provides data reading facilities for training, and the CNTK C# API exposes them. The readers can take data in various preprocessed formats, and data loading and batching are done efficiently. For example, suppose we have the following data in CNTK text format in a file named Train.ctf:
    |features 3.854499 4.163941 |labels 1.000000
    |features 1.058121 1.204858 |labels 0.000000
    |features 1.870621 1.284107 |labels 0.000000
    |features 1.134650 1.651822 |labels 0.000000
    |features 5.420541 4.557660 |labels 1.000000
    |features 6.042731 3.375708 |labels 1.000000
    |features 5.667109 2.811728 |labels 1.000000
    |features 0.232070 1.814821 |labels 0.000000
A CNTK data source is created in this way:
    var minibatchSource = MinibatchSource.TextFormatMinibatchSource(
        Path.Combine(DataFolder, "Train.ctf"), streamConfigurations,
        MinibatchSource.InfinitelyRepeat, true);
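The streamConfigurations argument tells the reader how to map the named streams in the file to inputs. A minimal sketch matching the Train.ctf shown above (two feature values and one label value per sample; the dimensions are read off that file, not taken from the original text):

    // Map the "features" and "labels" streams of Train.ctf to dense
    // inputs of dimension 2 and 1 respectively.
    var streamConfigurations = new StreamConfiguration[]
    {
        new StreamConfiguration("features", 2),
        new StreamConfiguration("labels", 1)
    };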
Minibatch data can then be retrieved and used during training:
    var minibatchData = minibatchSource.GetNextMinibatch(minibatchSize, device);
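The returned map associates each stream with a MinibatchData object that can be fed directly to a trainer. A sketch, assuming featureVariable, labelVariable, and the trainer are set up as in the training section below:

    // Look up the streams by name and bind them to the model inputs.
    var featureStreamInfo = minibatchSource.StreamInfo("features");
    var labelStreamInfo = minibatchSource.StreamInfo("labels");

    var arguments = new Dictionary<Variable, MinibatchData>
    {
        { featureVariable, minibatchData[featureStreamInfo] },
        { labelVariable, minibatchData[labelStreamInfo] }
    };
    trainer.TrainMinibatch(arguments, device);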
Using the C#/.NET Managed API to Train Deep Neural Networks
Stochastic gradient descent (SGD) is a way to optimize model parameters with minibatches of training data. CNTK supports many SGD variants that are commonly seen in the deep learning literature. They are exposed through the CNTK C# API:
- SGDLearner - a CNTK built-in SGD learner
- MomentumSGDLearner - a CNTK built-in momentum SGD learner
- FSAdaGradLearner - a variation of the AdaGrad learner
- AdamLearner - an Adam learner
- AdaGradLearner - an adaptive gradient learner
- RMSPropLearner - an RMSProp learner
- AdaDeltaLearner - an AdaDelta learner
For a general overview of the different learning optimizers, see Stochastic gradient descent.
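As an illustration of the learner API, a momentum SGD learner can be created much like the plain SGD learner used in the snippet below. This sketch assumes model is the Function being trained; the schedule values are illustrative only:

    // Sketch: create a momentum SGD learner for a model's parameters.
    // The learning rate and momentum time constant are illustrative.
    var learningRatePerSample = new CNTK.TrainingParameterScheduleDouble(0.02, 1);
    var momentumTimeConstant = CNTKLib.MomentumAsTimeConstantSchedule(256);
    var parameterLearners = new List<Learner>()
    {
        Learner.MomentumSGDLearner(model.Parameters(), learningRatePerSample,
            momentumTimeConstant, /*unitGainMomentum = */true)
    };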
A CNTK trainer is used to conduct minibatch training. Here is a C# snippet for minibatch training:
    // build a learning model
    var featureVariable = Variable.InputVariable(new int[] { inputDim }, DataType.Float);
    var labelVariable = Variable.InputVariable(new int[] { numOutputClasses }, DataType.Float);
    var classifierOutput = CreateLinearModel(featureVariable, numOutputClasses, device);
    var loss = CNTKLib.CrossEntropyWithSoftmax(classifierOutput, labelVariable);
    var evalError = CNTKLib.ClassificationError(classifierOutput, labelVariable);

    // prepare for training
    var learningRatePerSample = new CNTK.TrainingParameterScheduleDouble(0.02, 1);
    var parameterLearners = new List<Learner>()
        { Learner.SGDLearner(classifierOutput.Parameters(), learningRatePerSample) };
    var trainer = Trainer.CreateTrainer(classifierOutput, loss, evalError, parameterLearners);

    int minibatchSize = 64;
    int numMinibatchesToTrain = 1000;

    // train the model
    for (int minibatchCount = 0; minibatchCount < numMinibatchesToTrain; minibatchCount++)
    {
        Value features, labels;
        GenerateValueData(minibatchSize, inputDim, numOutputClasses, out features, out labels, device);
        trainer.TrainMinibatch(
            new Dictionary<Variable, Value>() { { featureVariable, features }, { labelVariable, labels } }, device);
        TestHelper.PrintTrainingProgress(trainer, minibatchCount, 50);
    }
This code uses a CNTK built-in SGD learner with a learning rate of 0.02 per sample; the learner optimizes the model's parameters. The trainer is created with the learner, a loss function, and an evaluation function. During each training iteration, a minibatch of data is fed to the trainer to update the model parameters. Training loss and evaluation error are displayed by a helper method during training.
In this code, we generate two classes of statistically separable labels and features. In other, more realistic examples, public test data are loaded with a CNTK MinibatchSource.
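A sketch of what the GenerateValueData helper used above could look like (the Gaussian-style class separation and all constants here are assumptions for illustration; the shipped example's helper may differ):

    // Sketch: generate statistically separable classes and wrap them
    // in CNTK Value batches. The class means are illustrative only.
    private static void GenerateValueData(int sampleCount, int inputDim, int numOutputClasses,
        out Value featureValue, out Value labelValue, DeviceDescriptor device)
    {
        var random = new Random(0);
        var features = new float[sampleCount * inputDim];
        var oneHotLabels = new float[sampleCount * numOutputClasses];
        for (int i = 0; i < sampleCount; i++)
        {
            int label = random.Next(numOutputClasses);
            oneHotLabels[i * numOutputClasses + label] = 1;
            for (int d = 0; d < inputDim; d++)
            {
                // Shift each class's samples around a different mean.
                features[i * inputDim + d] = (float)(random.NextDouble() + label * 3.0);
            }
        }
        featureValue = Value.CreateBatch<float>(new int[] { inputDim }, features, device);
        labelValue = Value.CreateBatch<float>(new int[] { numOutputClasses }, oneHotLabels, device);
    }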
Using the C#/.NET Managed API to Evaluate Deep Neural Networks
The C# API provides an evaluation API for model evaluation. Most of the training examples perform model evaluation after training.
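In outline, evaluation loads a trained model, binds input data to the model's input variable, runs Evaluate, and reads the output back. A minimal sketch, assuming a single-input model; modelFile and inputData are placeholders, not names from the original examples:

    // Sketch: load a trained model and evaluate one batch of inputs.
    Function model = Function.Load(modelFile, device);
    Variable inputVar = model.Arguments[0];
    Variable outputVar = model.Output;

    var inputDataMap = new Dictionary<Variable, Value>()
        { { inputVar, Value.CreateBatch<float>(inputVar.Shape, inputData, device) } };
    var outputDataMap = new Dictionary<Variable, Value>() { { outputVar, null } };

    model.Evaluate(inputDataMap, outputDataMap, device);

    // One IList<float> per sample in the batch.
    IList<IList<float>> outputData = outputDataMap[outputVar].GetDenseData<float>(outputVar);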
Getting Started with the C# Training Examples
After reading this overview, you can proceed with the C# training examples in two ways: work with the CNTK source from GitHub, or work with the CNTK examples using the CNTK NuGet package for Windows.
Work with the CNTK source code
- Follow this page to set up CNTK on Windows
- Build CNTK.sln with Visual Studio
- Prepare the sample data
- Run the examples in CNTKLibraryCSTrainingTest.csproj as an end-to-end test
Work with the CNTK examples using CNTK NuGet
- Download the CNTK C# training examples
- Prepare the sample data.
- Build and run the examples