Go Language Development (8): Go Program Testing and Performance Tuning




One, Go Language Automated Testing Framework



1. Introduction to the Automated Testing Framework



The testing package in the Go standard library provides a framework for unit testing (functional testing) and a common approach to performance testing (stress testing), which makes automated testing easy.
Go test code is placed in files ending in _test.go. Go tests are divided into unit tests and performance tests. Unit test functions must begin with Test, and the character that follows cannot be a lowercase letter; performance test functions must begin with Benchmark, and the character that follows likewise cannot be lowercase. For readability, Test or Benchmark is usually followed by the name of the function under test. Test code normally lives in the same directory as the code it tests.



2. Unit Test



Go unit test functions must start with Test, and the character that follows Test cannot be lowercase.
Add.go file:


package add

func add(a,b int)int{
   return a + b
}


Unit Test Cases:


package add

import "testing"

func TestAdd(t *testing.T) {
   sum := add(1, 2)
   if sum == 3 {
      t.Logf("add(1,2) == %d", sum)
   } else {
      t.Errorf("add(1,2) == %d, expected 3", sum)
   }
}


In the test above, the test data is mixed with the test logic. Based on Go's features and engineering practice, the table-driven test style was developed: the test data set is stored in a slice, decoupling the test data from the test logic.
Table-Driven testing:


package add

import "testing"

func TestAdd(t *testing.T) {
   // define test data
   tests := []struct{ a, b, c int }{
      {3, 4, 7},
      {5, 12, 17},
      {8, 15, 23},
      {12, 35, 47},
      {30000, 40000, 70000},
   }
   // test logic
   for _, tt := range tests {
      if actual := add(tt.a, tt.b); actual != tt.c {
         t.Errorf("add(%d,%d) got %d; expected %d", tt.a, tt.b, actual, tt.c)
      }
   }
}


Advantages of table-driven testing:
A. Test data is separated from test logic
B. Clear error messages
C. Individual cases can fail without stopping the others
D. Go makes table-driven tests easy to implement
To perform the test:
go test
The results are as follows:


[user@localhost test]$ go test -v
=== RUN   TestAdd
--- PASS: TestAdd (0.00s)
PASS
ok      _/home/user/GoLang/test 0.001s


3. Performance test



A performance test is a stress test (BMT: benchmark testing).
Performance Test Cases:


func BenchmarkAdd(t *testing.B) {
   // Reset the timer
   t.ResetTimer()
   for i := 0; i < t.N; i++ {
      add(1, 2)
   }
}


The complete test code is as follows:


package add

import "testing"

func TestAdd(t *testing.T) {
   // define test data
   tests := []struct{ a, b, c int }{
      {3, 4, 7},
      {5, 12, 17},
      {8, 15, 23},
      {12, 35, 47},
      {30000, 40000, 70000},
   }
   // test logic
   for _, tt := range tests {
      if actual := add(tt.a, tt.b); actual != tt.c {
         t.Errorf("add(%d,%d) got %d; expected %d", tt.a, tt.b, actual, tt.c)
      }
   }
}

func BenchmarkAdd(t *testing.B) {
   // Reset the timer
   t.ResetTimer()
   for i := 0; i < t.N; i++ {
      add(1, 2)
   }
}


To perform the test:
go test -bench=.
The results are as follows:


[user@localhost test]$ go test -bench=.
goos: linux
goarch: amd64
BenchmarkAdd-4      2000000000           0.38 ns/op
PASS
ok      _/home/user/GoLang/test 0.803s


4. Code Coverage test



Test coverage measures the degree to which a package's code is executed when the package's test cases run.
For coverage statistics, go test supports three coverage modes, set through the -covermode parameter (an example command follows this list):
A. set: the default mode; records only whether each statement has been executed
B. count: records how many times each statement was executed
C. atomic: records how many times each statement was executed and remains correct under concurrent execution
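For example, to record execution counts rather than just whether a statement ran, the mode can be passed explicitly; the output file name covprofile below simply matches the one used later in this section:

go test -covermode=count -coverprofile=covprofile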
Perform coverage testing:
go test -cover
The results are as follows:


[user@localhost test]$ go test -cover
PASS
coverage: 100.0% of statements
ok      _/home/user/GoLang/test 0.001s


Execute the command to generate code coverage test information:
go test -coverprofile=covprofile
View the covprofile file information:




Convert generated code coverage test information to HTML format:
go tool cover -html=covprofile -o coverage.html
Use your browser to view the coverage.html file.



Two, go tool pprof Performance Analysis Tool



1. go tool pprof Introduction



Go has three built-in profilers: CPU, memory, and block. They let a program sample data at runtime and write it to profile files. The go tool pprof tool then analyzes a sampled file interactively and produces highly readable output.
The special tools invoked as go tool ... are stored in the Go tools directory, $GOROOT/pkg/tool/$GOOS_$GOARCH/. The pprof tool is not written in Go but in Perl; because Perl can read and run source code directly, the pprof tool's source file is stored directly in the Go tools directory.
Since pprof is written in Perl, Perl must be installed in the current environment in order to run the go tool pprof command.
The go tool pprof command parses a given profile and lets you access its information interactively. The profile alone is not enough, however: the source of the information in the profile, the executable file of the command source file, is also required. The executable is obtained by compiling the command source file of the Go program.



2. Profile Sampling Files



In Go, three kinds of profiles containing runtime data can be generated through the standard library packages runtime and runtime/pprof: CPU profiles, memory profiles, and block (blocking) profiles.
A. CPU Profile
The CPU clock frequency (clock speed) is the rate at which a CPU core operates; its basic unit is the hertz (Hz), and the reciprocal of the clock frequency is the clock cycle. During one clock cycle the CPU executes one instruction: at a frequency of 1 kHz one instruction is executed every millisecond, at 1 MHz one every microsecond, and at 1 GHz one every nanosecond.
By default, the Go runtime samples CPU usage 100 times per second, that is, once every 10 milliseconds. This frequency is high enough to produce useful data without halting the system, and 100 is easy to convert. Each CPU usage sample records the program counters on the current goroutine's stack; from these samples you can analyze which code is the most computationally expensive or CPU-intensive. You can start recording CPU usage with the following code.


func startCPUProfile() {
   if *cpuProfile != "" {
      f, err := os.Create(*cpuProfile)
      if err != nil {
         fmt.Fprintf(os.Stderr, "Can not create cpu profile output file: %s",
            err)
         return
      }
      if err := pprof.StartCPUProfile(f); err != nil {
         fmt.Fprintf(os.Stderr, "Can not start cpu profile: %s", err)
         f.Close()
         return
      }
   }
}


The startCPUProfile function first creates the file that will hold the CPU usage records (the CPU profile); its absolute path is given by the value of *cpuProfile. The file instance is then passed to pprof.StartCPUProfile as a parameter. If pprof.StartCPUProfile does not return an error, recording has started. pprof.StartCPUProfile only starts recording when the absolute path of the CPU profile is valid.
If you want to stop recording CPU usage at some point, call the following function:


func stopCPUProfile() {
   if *cpuProfile != "" {
      pprof.StopCPUProfile() // write the recorded profile information to the specified file
   }
}


The function above contains no code that explicitly writes the CPU profile. Once CPU usage recording has been started, the runtime writes the sampled data to the CPU profile 100 times per second. pprof.StopCPUProfile sets the CPU sampling frequency to zero to stop sampling, and it does not return until all CPU usage records have been written to the CPU profile, guaranteeing the integrity of the file.
B. Memory Profile
Memory profiles store memory usage during program execution, that is, heap allocations made while the program runs. The Go runtime records heap allocations throughout the run of the user program; whenever the number of bytes allocated on the heap since the last sample has grown enough, the profiler takes another sample. Memory usage recording can be turned on with the following function:


func startMemProfile() {
   if *memProfile != "" && *memProfileRate > 0 {
      runtime.MemProfileRate = *memProfileRate
   }
}


Turning on memory usage recording is that easy. In startMemProfile, the subsequent operation is performed only when the values of *memProfile and *memProfileRate are valid. *memProfile is the absolute path of the memory profile; *memProfileRate is the profiler's sampling interval, in bytes. Assigning this value to the int variable runtime.MemProfileRate means the profiler takes a sample after every allocation of that many bytes. In fact, even if runtime.MemProfileRate is never assigned, memory usage sampling still takes place; it starts when the user program starts and continues until the program ends. The default value of runtime.MemProfileRate is 512 * 1024, i.e. 512 KB. Sampling is only cancelled when 0 is explicitly assigned to runtime.MemProfileRate.
By default, the sampled memory usage data is kept only in runtime memory; saving it to a file is up to the developer. The code that stops sampling and saves the data is as follows:


func stopMemProfile() {
   if *memProfile != "" {
      f, err := os.Create(*memProfile)
      if err != nil {
         fmt.Fprintf(os.Stderr, "Can not create mem profile output file: %s", err)
         return
      }
      if err = pprof.WriteHeapProfile(f); err != nil {
         fmt.Fprintf(os.Stderr, "Can not write %s: %s", *memProfile, err)
      }
      f.Close()
   }
}


The stopMemProfile function stops sampling memory usage; its only job is to save the sampled data to the memory profile. Inside it, pprof.WriteHeapProfile is called with the file instance representing the memory profile as its parameter. If pprof.WriteHeapProfile does not return an error, the data has been written to the memory profile.
A program that samples memory usage assumes the sampling interval is constant for the whole run and equal to the current value of runtime.MemProfileRate. Therefore the memory sampling interval should be changed only once in a Go program, and as early as possible, for example at the beginning of the main function of the command source file.
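A minimal sketch of that recommendation, setting the rate once at the top of main; the value 512 * 1024 bytes is only illustrative (it matches the default mentioned above):

package main

import "runtime"

func main() {
   // Set the memory profiling rate once, as early as possible.
   // 512 * 1024 bytes is only an illustrative value.
   runtime.MemProfileRate = 512 * 1024

   // ... rest of the program ...
}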
C. Block Profile
The block profile stores records of goroutine blocking events in the user program. The code to turn on block sampling is as follows:


func startBlockProfile() {
   if *blockProfile != "" && *blockProfileRate > 0 {
      runtime.SetBlockProfileRate(*blockProfileRate)
   }
}


In startBlockProfile, when the values of *blockProfile and *blockProfileRate are valid, the sampling interval for goroutine blocking events is set. *blockProfile is the absolute path of the block profile; *blockProfileRate is the profiler's sampling interval, in number of events. The only argument of runtime.SetBlockProfileRate is an int meaning that the profiler samples once for every that-many goroutine blocking events. If runtime.SetBlockProfileRate is never called, the sampling interval is 1; that is, by default every goroutine blocking event is sampled. The runtime's sampling of goroutine blocking events also runs through the whole lifetime of the user program. However, if runtime.SetBlockProfileRate is used to set the sampling interval to 0 or a negative number, sampling is cancelled.
The goroutine blocking event records kept in runtime memory can be saved to a specified file before the program ends. The code is as follows:


func stopBlockProfile() {
   if *blockProfile != "" && *blockProfileRate >= 0 {
      f, err := os.Create(*blockProfile)
      if err != nil {
         fmt.Fprintf(os.Stderr, "Can not create block profile output file: %s", err)
         return
      }
      if err = pprof.Lookup("block").WriteTo(f, 0); err != nil {
         fmt.Fprintf(os.Stderr, "Can not write %s: %s", *blockProfile, err)
      }
      f.Close()
   }
}


After the block profile file is created, the stopBlockProfile function retrieves the goroutine blocking records kept in runtime memory via pprof.Lookup("block") and calls the WriteTo method on the returned record to write them to the file.



3. Profiling Use Cases



A. Benchmark tests
Use go test -bench . -cpuprofile prof.cpu to generate a sample file along with the benchmark, then analyze the sample file with go tool pprof [binary] prof.cpu.
B. Web services
If the application is a web service, add the import _ "net/http/pprof" to the code file that starts the HTTP service; the profiling endpoints are then enabled automatically, which helps the developer analyze sampling results directly. You can open http://localhost:port/debug/pprof/ in a browser to see the current state of the web service, including CPU usage and memory usage.
C. Applications
If the Go program is a standalone application, you cannot use the net/http/pprof package and need to use runtime/pprof instead: sample runtime information with pprof.StartCPUProfile and pprof.StopCPUProfile (or the memory-sampling and block-sampling interfaces), then analyze the sample files with go tool pprof.
D. Service processes
If the Go program is not a web server but a long-running service process, you can still use net/http/pprof: import net/http/pprof and open another goroutine to listen on a port (a complete sketch follows the code below).


go func() {
   log.Println(http.ListenAndServe("localhost:6666", nil))
}()
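Putting the import and the listener together, a minimal sketch of a long-running process that exposes the pprof endpoints might look like the following; the port 6666 is only illustrative:

package main

import (
   "log"
   "net/http"
   _ "net/http/pprof" // registers the /debug/pprof/ handlers on the default mux
)

func main() {
   // Expose the pprof endpoints on a separate goroutine so the
   // main work of the process is not affected.
   go func() {
      log.Println(http.ListenAndServe("localhost:6666", nil))
   }()

   // ... the service's real work would run here ...
   select {} // block forever in this sketch
}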


4. pprof Usage



Write a simple application that samples CPU information using pprof.StartCPUProfile and pprof.StopCPUProfile.


package main

import (
   "flag"
   "fmt"
   "log"
   "os"
   "runtime/pprof"
)

// Fibonacci sequence
func Fibonacci() func() int {
   back1, back2 := 1, 1
   return func() int {
      // Reassign
      back1, back2 = back2, (back1 + back2)
      return back1
   }
}

func count() {
   a := 0
   for i := 0; i < 10000000000; i++ {
      a = a + i
   }
}

var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")

func main() {
   flag.Parse()
   if *cpuprofile != "" {
      f, err := os.Create(*cpuprofile)
      if err != nil {
         log.Fatal(err)
      }
      pprof.StartCPUProfile(f)
      defer f.Close()
   }
   fibonacci := Fibonacci()
   for i := 0; i < 100; i++ {
      fmt.Println(fibonacci())
   }
   count()

   defer pprof.StopCPUProfile()
}


When sampling runtime information, you can specify different sampling parameters:
--cpuprofile: the path where the CPU profile is saved.
--blockprofile: the path where the block profile is saved.
--blockprofilerate: a value n meaning one sample is taken for every n goroutine blocking events.
--memprofile: the path where the memory profile is saved.
--memprofilerate: a value n meaning one sample is taken for every n bytes of heap memory allocated.
Run the Go program to sample CPU information:
go run fibonacci.go --cpuprofile=profile.cpu
Analyze the CPU sample file profile.cpu:
go tool pprof profile.cpu

If the Go program is very simple, for example only the Fibonacci() call (with the count() call commented out), pprof.StartCPUProfile may not record any samples, and nothing useful is printed.
The top command lists the top 10 entries by default; it can be followed by a number to limit how many entries are listed.
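Inside the interactive go tool pprof prompt, a few commonly used commands are shown below; the function name count refers to the sample program above, and the web command additionally requires Graphviz to be installed:

(pprof) top          show the functions that consume the most CPU time
(pprof) top 5        limit the listing to five entries
(pprof) list count   show per-line costs for functions matching "count"
(pprof) web          render a call graph as SVG and open it in the browser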



Three, go-torch Performance Analysis Tool



1. go-torch Introduction



go-torch is an open source flame graph generation tool from Uber for Go programs. It collects stack traces, organizes them into a flame graph, and presents the result to developers visually. go-torch is built on Brendan Gregg's FlameGraph tool and generates an intuitive image from which it is easy to analyze how much CPU time each Go function consumes.



2. FlameGraph Installation



git clone https://github.com/brendangregg/FlameGraph.git
sudo cp FlameGraph/flamegraph.pl /usr/local/bin
To verify that FlameGraph is installed successfully, run flamegraph.pl -h in a terminal.



3. go-torch Installation



go get -v github.com/uber/go-torch
go-torch is installed into the bin directory of the first directory listed in GOPATH.



4. go-wrk Stress Testing



Install the go-wrk stress testing tool:
go get -v github.com/adjust/go-wrk
Simulate a high-concurrency scenario of 10,000 requests over 35 seconds:
go-wrk -d 35 -n 10000 http://localhost:port/demo



5. go-torch Usage



While the web service is under stress testing, use go-torch to generate a sample file:
go-torch -u http://localhost:port -t 30
go-torch prints the following when sampling completes:
Writing svg to torch.svg
torch.svg is the flame graph that go-torch generates automatically; opened in a browser it looks as follows:

The y-axis of the flame graph shows the depth of the call stack, and the x-axis shows the proportion of samples in which a method appears; the wider a frame, the more CPU time the method takes.
From the flame graph you can clearly see which method calls take a long time, then repeatedly fix the code, resample, and keep optimizing.



Four, Go Program Performance Optimization



1. Memory optimization


A. Combine small objects into a struct and allocate them in one go, reducing the number of memory allocations (see the sketch after this list).
The Go runtime uses a memory-pool mechanism with spans of 4 KB each, and it maintains a cache. The cache holds an array of lists indexed from 0 to n; each element of the array is a linked list, each node of a list is an available memory block, and all blocks in the same list are the same size, while blocks in different lists differ in size. In other words, each array element stores memory blocks of one fixed size, and different elements store different sizes. The cache therefore caches memory objects of various sizes; when memory is requested, the cached block whose size is closest to the request is allocated, and when the cache is not enough, allocation falls back to spanAlloc.
B. Allocate enough buffer space in one go, and reuse buffers where appropriate (see the sketch after this list).
Protocol encoding and decoding requires frequent operations on []byte; bytes.Buffer or another byte-buffer object can be used.
bytes.Buffer and similar types avoid dynamically requesting memory while growing by pre-allocating a large enough buffer, which reduces the number of allocations. Reusing the byte-buffer object appropriately should also be considered.
C. Create slices and maps with make, specifying a capacity based on the estimated size (see the sketch after this list).
Slices and maps are unlike arrays: they have no fixed size and grow dynamically as elements are added.
A slice initially refers to an underlying array; when its capacity is insufficient during append, it is grown automatically:
If the new size is more than twice the current capacity, the capacity grows to the new size;
Otherwise the following is repeated: if the current capacity is less than 1024, it doubles; otherwise it grows by 1/4 of the current capacity each time, until the capacity reaches or exceeds the new size.
Map growth is more complex; each growth doubles the previous capacity. The map structure contains buckets and oldbuckets to implement incremental growth:
Normally buckets is used directly and oldbuckets is empty;
During growth, oldbuckets is not empty and buckets is twice the size of oldbuckets.
Therefore it is recommended to specify a capacity based on the estimated size at creation time.
D. Avoid allocating many temporary objects on long call stacks.
The default size of a goroutine's call stack is 4 KB (changed to 2 KB in Go 1.7). With the contiguous-stack mechanism, when stack space is insufficient the Go runtime grows it automatically:
When stack space runs out, the stack doubles; variables on the old stack are copied to the new stack space, and pointers to those variables are updated to the new addresses.
Stack space can also be shrunk: when the GC finds that less than 1/4 of the stack is in use, it halves the stack.
For example, if the final stack size is 2 MB, in the extreme case there will be about 10 stack-growth operations, which degrades performance.
Therefore it is recommended to control the depth of call stacks and the complexity of functions rather than doing all logic in one goroutine; if long call stacks are needed, consider pooling goroutines to avoid the stack-space changes caused by frequently creating goroutines.
E. Avoid the frequent creation of temporary objects.
Go's GC causes a stop-the-world pause, i.e. the whole program is paused. In Go 1.8 the worst-case GC pause is about 100 µs, but the pause time depends on the number of temporary objects: the more temporary objects, the longer the pause and the higher the CPU consumption.
Therefore it is recommended to minimize the number of temporary objects to help the GC: use local variables as much as possible, combine multiple local variables into a larger struct or array to reduce the number of objects scanned, and reuse memory where possible.
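A minimal sketch illustrating points B and C above: pre-sizing a bytes.Buffer and creating slices and maps with an estimated capacity. The sizes used here are only illustrative:

package main

import (
   "bytes"
   "fmt"
)

func main() {
   // C: specify an estimated capacity when creating slices and maps,
   // so that appending and inserting does not trigger repeated re-allocation.
   users := make([]string, 0, 1024)
   index := make(map[string]int, 1024)
   for i := 0; i < 1024; i++ {
      name := fmt.Sprintf("user-%d", i)
      users = append(users, name)
      index[name] = i
   }

   // B: allocate a large enough buffer once and reuse it.
   var buf bytes.Buffer
   buf.Grow(64 * 1024) // pre-allocate 64 KB (illustrative size)
   for _, name := range users {
      buf.WriteString(name)
      buf.WriteByte('\n')
   }
   fmt.Println(buf.Len(), cap(users), len(index))
}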


2. Concurrency Optimization



A. Use a goroutine pool for highly concurrent task processing (see the sketch after this list).
Although goroutines are lightweight, frequently creating goroutines to handle highly concurrent lightweight tasks is not very efficient: creating too many goroutines affects the Go runtime's scheduling of goroutines and increases GC cost, and under high concurrency, if calls block and back up, a large backlog of goroutines in a short time may crash the program.
B. Avoid calling synchronous system interfaces at high concurrency.
Goroutines are implemented by simulating asynchronous operations with synchronous code.
Network I/O, locks, channels, time.Sleep, and syscalls based on asynchronous calls into the underlying system do not block the Go runtime's thread scheduling.
Local file I/O, syscalls based on synchronous calls into the underlying system, and cgo calls into C libraries that perform I/O or otherwise block will create new scheduling threads.
Network I/O can rely on asynchronous mechanisms such as epoll (or kqueue), but some system functions offer no asynchronous mechanism; for example, in the common POSIX API, file operations are synchronous. Although there are open source projects such as fileepoll that simulate asynchronous file operations, Go's syscalls still rely on the underlying operating-system API, and when the system API is not asynchronous, Go does no asynchronous handling.
Therefore it is recommended to isolate goroutines that make synchronous calls into a controlled set of goroutines rather than calling them directly from a large number of goroutines.
C. Avoid contention on a shared object at high concurrency.
In traditional multithreaded programming, performance often hits an inflection point when concurrency conflicts occur among 4 to 8 threads. Go recommends not communicating by sharing memory; creating goroutines is very easy, and when a large number of goroutines share the same mutex, the same inflection point appears.
Therefore it is recommended to make goroutines as independent and conflict-free as possible; if goroutines do conflict, use partitioning to control the number of goroutines running concurrently and reduce the number of goroutines contending on the same mutex.
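A minimal worker-pool sketch for point A, assuming the tasks are simple function calls; the pool size and task count are only illustrative:

package main

import (
   "fmt"
   "sync"
)

func main() {
   const workers = 4           // illustrative pool size
   tasks := make(chan int, 64) // buffered task queue

   var wg sync.WaitGroup
   // Start a fixed set of goroutines instead of one goroutine per task.
   for w := 0; w < workers; w++ {
      wg.Add(1)
      go func(id int) {
         defer wg.Done()
         for t := range tasks {
            // ... handle the task; here we just print it ...
            fmt.Printf("worker %d handled task %d\n", id, t)
         }
      }(w)
   }

   for i := 0; i < 100; i++ {
      tasks <- i
   }
   close(tasks)
   wg.Wait()
}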



3. Other Optimizations



A. Avoid cgo, or reduce the number of cgo calls.
Go can call C library functions, but Go has a garbage collector and Go stacks grow dynamically, so the two cannot interoperate seamlessly. Before C code executes, the Go environment must create a new call stack for C, copy stack variables to the C call stack, and copy them back when the call finishes. The call cost is high: the call contexts of Go and C, and the mapping between their call stacks, must be maintained. Compared with a direct Go call, a simple cgo call can be 2 or even 3 orders of magnitude more expensive.
Therefore it is recommended to avoid cgo where possible and, when it is unavoidable, to reduce the number of calls that cross the Go/C boundary.
B. Reduce conversions between []byte and string; prefer []byte for string processing.
Go's string type is immutable. In Go, []byte and string have different underlying structures, and converting between them copies the value, so unnecessary conversions should be minimized.
Therefore it is recommended to use []byte as much as possible for operations such as string concatenation.
C. For string concatenation, prefer bytes.Buffer (see the sketch after this list).
The string type is immutable, so concatenation creates a new string. Common ways to concatenate strings in Go are:
the + operator: causes multiple object allocations and value copies;
fmt.Sprintf: parses its arguments dynamically and is not very efficient;
strings.Join: internally appends to a []byte;
bytes.Buffer: can pre-allocate a size, reducing object allocation and copying.
Therefore it is recommended, when performance matters, to prefer bytes.Buffer with a pre-allocated size; fmt.Sprintf can simplify the conversion and concatenation of different types.
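A minimal comparison sketch of the approaches above; bytes.Buffer with a pre-allocated size avoids the repeated copying that + concatenation causes. The inputs and sizes are only illustrative:

package main

import (
   "bytes"
   "fmt"
   "strings"
)

func main() {
   parts := []string{"alpha", "beta", "gamma", "delta"}

   // + concatenation: allocates and copies on every step.
   s1 := ""
   for _, p := range parts {
      s1 += p
   }

   // strings.Join: a single allocation sized from the inputs.
   s2 := strings.Join(parts, "")

   // bytes.Buffer: pre-allocate, append, then convert to string once.
   var buf bytes.Buffer
   buf.Grow(len(s2)) // illustrative pre-allocation
   for _, p := range parts {
      buf.WriteString(p)
   }
   s3 := buf.String()

   fmt.Println(s1 == s2, s2 == s3)
}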



Five, Go Program Documentation Generation



1. Introduction to the go doc Tool



The go doc tool extracts the first line of the comment on each top-level declaration, and the related comments of each object, from Go program and package files, and generates the corresponding documentation.
godoc can also act as a web server for browsing documentation online.



2. Viewing Documentation in the Terminal



go doc package: shows the documentation comments for a package; for example, go doc fmt displays the documentation comments of the fmt package.
go doc package/subpackage: shows the documentation comments for a sub-package; for example, go doc container/list.
go doc package function: shows the documentation comment for a function in a package; for example, go doc fmt Printf shows the usage notes for fmt.Printf().



3. Browsing Documentation Online



godoc can start a web-based online API documentation service; run the following at the command line:
godoc -http=:6666
After the web service starts, open http://127.0.0.1:6666 in a browser to see the pages served by the local documentation server.

In testing, Google Chrome was unable to access the web service opened by godoc.



4. Generating Documentation



The go documentation tool lets developers generate documentation automatically from their own code, as long as certain rules are followed.
The Add.go file is as follows:


/*
add function will be add a and b.
return a+b
  */
package add

// add
func add(a,b int)int{
   return a + b
}


The resulting text document is as follows:


[user@localhost add]$ go doc
package add // import "add"

add function will be add a and b. return a+b

[user@localhost add]$ 




The go doc tool extracts the first line of the comment on each top-level declaration in the package files. If a function is unexported (its name begins with a lowercase letter), the go doc tool hides it.


/*
add function will be add a and b.
return a+b
  */
package add

// add
func Add(a,b int)int{
   return a + b
}


The resulting document is as follows:


[user@localhost add]$ go doc
package add // import "add"

add function will be add a and b. return a+b

func Add(a, b int) int
[user@localhost add]$ 




5. Adding Examples to the Documentation



The steps for adding example code to Go documentation are as follows:
A. The example code must be stored in a separate file (conventionally named example_test.go) or in a test code file.
B. In the example code file, define a function named Example that takes no arguments.
C. The expected output of the example is written as comments, starting with a // Output: line followed by one comment line per line of output.


package add

import (
    "fmt"
)

func Example(){
    sum := add(1,2)
    fmt.Println(sum)
    // Output:
    // 3
}


The resulting documentation is as follows:

