During development we constantly design data structures and algorithms to organize and access data. In some cases we can use hard coding, or a code generator, to produce a set of functions that return data conditionally; in other cases we can store the data in a slice and find it by traversing; and in still other cases we can store the data in a map and index it by key. What is the performance cost of these access methods at different data volumes? Let's run a comparative experiment.
We know that a Go map guarantees roughly constant lookup time regardless of size, while a slice scan or hard coding certainly cannot cope with large data volumes. So there is no large-data experiment here; we only test small data volumes.
Experiment Code (GitHub):
package labs06

import "testing"

type BigStruct struct {
	C01, C02, C03, C04, C05, C06, C07, C08, C09, C10 int
	C11, C12, C13, C14, C15, C16, C17, C18, C19, C20 int
	C21, C22, C23, C24, C25, C26, C27, C28, C29, C30 int
}

// Loop1: forward for loop over a value slice.
func Loop1(a []BigStruct) int {
	for i := 0; i < len(a); i++ {
		if a[i].C30 == 3 {
			return i
		}
	}
	return -1
}

// Loop2: backward for loop over a value slice.
func Loop2(a []BigStruct) int {
	for i := len(a) - 1; i >= 0; i-- {
		if a[i].C30 == 1 {
			return i
		}
	}
	return -1
}

// Loop3: map lookup by key.
func Loop3(a map[int]BigStruct) int {
	return a[2].C30
}

// Loop4: range loop over a pointer slice.
func Loop4(a []*BigStruct) int {
	for i, x := range a {
		if x.C30 == 3 {
			return i
		}
	}
	return -1
}

// Loop5: hard coding via a switch.
func Loop5(a []BigStruct) int {
	switch {
	case a[0].C01 == 3:
		return 0
	case a[1].C01 == 3:
		return 1
	case a[2].C01 == 3:
		return 2
	}
	return -1
}

func Benchmark_Loop1(b *testing.B) {
	var a = make([]BigStruct, 3)
	a[0].C30 = 1
	a[1].C30 = 2
	a[2].C30 = 3
	for i := 0; i < b.N; i++ {
		Loop1(a)
	}
}

func Benchmark_Loop2(b *testing.B) {
	var a = make([]BigStruct, 3)
	a[0].C30 = 1
	a[1].C30 = 2
	a[2].C30 = 3
	for i := 0; i < b.N; i++ {
		Loop2(a)
	}
}

func Benchmark_Loop3(b *testing.B) {
	var a = make(map[int]BigStruct, 3)
	a[0] = BigStruct{C30: 1}
	a[1] = BigStruct{C30: 2}
	a[2] = BigStruct{C30: 3}
	for i := 0; i < b.N; i++ {
		Loop3(a)
	}
}

func Benchmark_Loop4(b *testing.B) {
	var a = make([]*BigStruct, 3)
	a[0] = &BigStruct{C30: 1}
	a[1] = &BigStruct{C30: 2}
	a[2] = &BigStruct{C30: 3}
	for i := 0; i < b.N; i++ {
		Loop4(a)
	}
}

func Benchmark_Loop5(b *testing.B) {
	var a = make([]BigStruct, 3)
	a[0].C30 = 1
	a[1].C30 = 2
	a[2].C30 = 3
	for i := 0; i < b.N; i++ {
		Loop5(a)
	}
}
Test results:
dada-imac:labs dada$ go test -test.bench="." labs06
testing: warning: no tests to run
PASS
Benchmark_Loop1	500000000	5.73 ns/op
Benchmark_Loop2	500000000	5.72 ns/op
Benchmark_Loop3	50000000	68.0 ns/op
Benchmark_Loop4	500000000	4.92 ns/op
Benchmark_Loop5	500000000	4.40 ns/op
ok	labs06	15.970s
Conclusion: in cost per lookup, hard coding < range loop over a pointer slice < for loop over a value slice, but they are all within the same order of magnitude, so the choice depends on the situation. The map, however, is an order of magnitude slower, so for small data volumes it should be used sparingly.