Using CNTK with the C#/.NET API


CNTK v2.2.0 provides a C# API to build, train, and evaluate CNTK models. This section gives an overview of the CNTK C# API. C# training examples are available in the CNTK GitHub repository.

Build a deep neural network with the C#/.NET managed API

The CNTK C# API exposes basic operations through the CNTKLib namespace. A CNTK operation takes one or two input variables with required parameters and produces a CNTK function. A CNTK function maps input data to output, and can itself be treated as a variable and passed as input to another CNTK operation. Through this mechanism, a deep neural network can be built by chaining and composing basic CNTK operations. For example:

private static Function CreateLogisticModel(Variable input, int numOutputClasses)
{
    Parameter bias = new Parameter(new int[] { numOutputClasses }, DataType.Float, 0);
    Parameter weights = new Parameter(new int[] { input.Shape[0], numOutputClasses }, DataType.Float,
        CNTKLib.GlorotUniformInitializer(
            CNTKLib.DefaultParamInitScale,
            CNTKLib.SentinelValueForInferParamInitRank,
            CNTKLib.SentinelValueForInferParamInitRank, 1));
    var z = CNTKLib.Plus(bias, CNTKLib.Times(weights, input));
    Function logisticClassifier = CNTKLib.Sigmoid(z, "LogisticClassifier");
    return logisticClassifier;
}

CNTKLib.Plus, CNTKLib.Times, and CNTKLib.Sigmoid are basic CNTK operations. The input parameter can be a CNTK variable representing data features, or another CNTK function. This code constructs a simple computation network whose parameters are adjusted during the training phase to produce a decent multi-class classifier.
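For instance, the factory method above can be applied to an input variable directly (a minimal sketch; the feature dimension and class count here are illustrative):

var featureVariable = Variable.InputVariable(new int[] { 2 }, DataType.Float);
Function classifier = CreateLogisticModel(featureVariable, 2);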

The CNTK C# API also provides the building blocks for convolutional neural networks (CNN) and recurrent neural networks (RNN). For example, a 2-layer CNN image classifier can be constructed as:

var convParams1 = new Parameter(
    new int[] { kernelWidth1, kernelHeight1, numInputChannels, outFeatureMapCount1 },
    DataType.Float, CNTKLib.GlorotUniformInitializer(convWScale, -1, 2), device);
var convFunction1 = CNTKLib.ReLU(CNTKLib.Convolution(
    convParams1, input,
    new int[] { 1, 1, numInputChannels }));
var pooling1 = CNTKLib.Pooling(convFunction1, PoolingType.Max,
    new int[] { poolingWindowWidth1, poolingWindowHeight1 }, new int[] { hStride1, vStride1 }, new bool[] { true });

var convParams2 = new Parameter(
    new int[] { kernelWidth2, kernelHeight2, outFeatureMapCount1, outFeatureMapCount2 },
    DataType.Float, CNTKLib.GlorotUniformInitializer(convWScale, -1, 2), device);
var convFunction2 = CNTKLib.ReLU(CNTKLib.Convolution(
    convParams2, pooling1,
    new int[] { 1, 1, outFeatureMapCount1 }));
var pooling2 = CNTKLib.Pooling(convFunction2, PoolingType.Max,
    new int[] { poolingWindowWidth2, poolingWindowHeight2 }, new int[] { hStride2, vStride2 }, new bool[] { true });

var imageClassifier = TestHelper.Dense(pooling2, numClasses, device, Activation.None, "ImageClassifier");

An example of building an RNN with long short-term memory (LSTM) is also provided.
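A full LSTM cell takes more code than fits here, but the core recurrence pattern in the C# API builds the step with a placeholder for the previous state and then replaces the placeholder with a delayed (PastValue) copy of the output. The following is a minimal sketch of a plain (non-LSTM) recurrent step using that pattern; inputDim, hiddenDim, and device are assumed to be defined elsewhere:

Variable input = Variable.InputVariable(new int[] { inputDim }, DataType.Float, "input");
// Placeholder standing in for the previous hidden state; wired up below.
Variable dh = Variable.PlaceholderVariable(new int[] { hiddenDim }, input.DynamicAxes);
Parameter W = new Parameter(new int[] { hiddenDim, inputDim }, DataType.Float,
    CNTKLib.GlorotUniformInitializer(), device);
Parameter H = new Parameter(new int[] { hiddenDim, hiddenDim }, DataType.Float,
    CNTKLib.GlorotUniformInitializer(), device);
// One recurrent step: h(t) = tanh(W * x(t) + H * h(t-1))
Function h = CNTKLib.Tanh(CNTKLib.Plus(CNTKLib.Times(W, input), CNTKLib.Times(H, dh)));
// Close the loop: replace the placeholder with the delayed output.
h.ReplacePlaceholders(new Dictionary<Variable, Variable> { { dh, CNTKLib.PastValue(h) } });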

Prepare training data with C#/.NET

CNTK provides data preparation utilities for training, and the CNTK C# API exposes them. They accept data in various preprocessed formats, and data loading and batching are efficient. For example, suppose we have data in the CNTK text format in a file named "Train.ctf":

|features 3.854499 4.163941 |labels 1.000000
|features 1.058121 1.204858 |labels 0.000000
|features 1.870621 1.284107 |labels 0.000000
|features 1.134650 1.651822 |labels 0.000000
|features 5.420541 4.557660 |labels 1.000000
|features 6.042731 3.375708 |labels 1.000000
|features 5.667109 2.811728 |labels 1.000000
|features 0.232070 1.814821 |labels 0.000000

A CNTK data source is created like this:

var minibatchSource = MinibatchSource.TextFormatMinibatchSource(
    Path.Combine(DataFolder, "Train.ctf"), streamConfigurations,
    MinibatchSource.InfinitelyRepeat, true);
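Here streamConfigurations describes the streams contained in the file. A minimal sketch, with the stream names and dimensions inferred from the sample data above (two feature values and one label value per sample):

var streamConfigurations = new StreamConfiguration[]
{
    new StreamConfiguration("features", 2),
    new StreamConfiguration("labels", 1)
};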

Minibatch data can then be retrieved efficiently and used during training:

var minibatchData = minibatchSource.GetNextMinibatch(minibatchSize, device);
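The returned minibatch maps each stream to its data. The following is a sketch of feeding it to a trainer; featureVariable, labelVariable, and trainer are as in the training code below, and the stream names match the configuration above:

var featureStreamInfo = minibatchSource.StreamInfo("features");
var labelStreamInfo = minibatchSource.StreamInfo("labels");
var arguments = new Dictionary<Variable, MinibatchData>
{
    { featureVariable, minibatchData[featureStreamInfo] },
    { labelVariable, minibatchData[labelStreamInfo] }
};
trainer.TrainMinibatch(arguments, device);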

Train deep neural networks with the C#/.NET managed API

Stochastic gradient descent (SGD) is a way to optimize model parameters using minibatches of training data. CNTK supports many SGD variants commonly seen in the deep learning literature. They are exposed through the CNTK C# API (a construction sketch follows the list):

  • SGDLearner - the built-in CNTK SGD learner
  • MomentumSGDLearner - the built-in CNTK momentum SGD learner
  • FSAdaGradLearner - a variant of the AdaGrad learner
  • AdamLearner - the Adam learner
  • AdaGradLearner - the adaptive gradient (AdaGrad) learner
  • RMSPropLearner - the RMSProp learner
  • AdaDeltaLearner - the AdaDelta learner
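Each learner is constructed from the model parameters plus learner-specific hyperparameters. As an illustration (a sketch only; model stands for the network function and the schedule values are arbitrary), a momentum SGD learner can be created like this:

var learningRate = new TrainingParameterScheduleDouble(0.01, 1); // per-sample learning rate
var momentum = new TrainingParameterScheduleDouble(0.9, 1);      // per-sample momentum
var learner = Learner.MomentumSGDLearner(model.Parameters(), learningRate, momentum, true); // true: unit-gain momentum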

For a general overview of the different learning optimizers, see the Wikipedia article on stochastic gradient descent.

The CNTK trainer is used for minibatch training. The following C# snippet performs minibatch training:

// build a learning model
var featureVariable = Variable.InputVariable(new int[] { inputDim }, DataType.Float);
var labelVariable = Variable.InputVariable(new int[] { numOutputClasses }, DataType.Float);
var classifierOutput = CreateLinearModel(featureVariable, numOutputClasses, device);
var loss = CNTKLib.CrossEntropyWithSoftmax(classifierOutput, labelVariable);
var evalError = CNTKLib.ClassificationError(classifierOutput, labelVariable);

// prepare for training
var learningRatePerSample = new CNTK.TrainingParameterScheduleDouble(0.02, 1);
var parameterLearners =
    new List<Learner>() { Learner.SGDLearner(classifierOutput.Parameters(), learningRatePerSample) };
var trainer = Trainer.CreateTrainer(classifierOutput, loss, evalError, parameterLearners);

int minibatchSize = 64;
int numMinibatchesToTrain = 1000;

// train the model
for (int minibatchCount = 0; minibatchCount < numMinibatchesToTrain; minibatchCount++)
{
    Value features, labels;
    GenerateValueData(minibatchSize, inputDim, numOutputClasses, out features, out labels, device);
    trainer.TrainMinibatch(
        new Dictionary<Variable, Value>() { { featureVariable, features }, { labelVariable, labels } }, device);
    TestHelper.PrintTrainingProgress(trainer, minibatchCount, 50);
}

This code uses the built-in CNTK SGD learner with a per-sample learning rate of 0.02 to optimize the model parameters. The trainer is created with the learner, a loss function, and an evaluation function. During each training iteration, a minibatch of data is fed to the trainer to update the model parameters, and a helper method prints the training loss and evaluation error.
In this code we generate two classes of statistically separable labels and features. In other, more realistic examples, public test data are loaded with a CNTK MinibatchSource.

Evaluate deep neural networks with the C#/.NET managed API

The C# API provides an evaluation API for model evaluation, and most of the training examples perform model evaluation after training.
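As a minimal sketch (assuming classifierOutput, featureVariable, and device from the training snippet above; the testFeatures values are illustrative), an evaluation pass looks like this:

// A single two-dimensional test sample (illustrative values).
float[] testFeatures = new float[] { 5.0f, 4.0f };
var inputDataMap = new Dictionary<Variable, Value>
{
    { featureVariable, Value.CreateBatch<float>(featureVariable.Shape, testFeatures, device) }
};
var outputDataMap = new Dictionary<Variable, Value> { { classifierOutput.Output, null } };
classifierOutput.Evaluate(inputDataMap, outputDataMap, device);
// Read back the dense output for each sample in the batch.
var results = outputDataMap[classifierOutput.Output].GetDenseData<float>(classifierOutput.Output);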

Get started with the C# training examples

After reading this overview, you can continue with the C# training examples in two ways: work with the CNTK source from GitHub, or work with the CNTK examples using the CNTK NuGet package for Windows.

Work with the CNTK source code
  • Follow these steps to set up CNTK on Windows
  • Build CNTK.sln with Visual Studio
  • Prepare the sample data
  • Run the examples in CNTKLibraryCSTrainingTest.csproj as an end-to-end test
Work with the CNTK examples using the CNTK NuGet package
  • Download the CNTK C# training examples
  • Prepare the sample data
  • Build and run the examples
