This post describes how to define arrays in Python, shared here for your reference.
Python has no built-in array data structure, but the list behaves much like an array. For example:
a=[0,1,2]
Here a[0] is 0, a[1] is 1, and a[2] is 2. This raises a question: what if array a should hold 0 through 999? That can be done with a = range(0, 1000), or more simply a = range(1000) (note that in Python 3, range returns a lazy object, so use a = list(range(1000)) to get an actual list). If you want a list of length 1000 whose initial values are all 0, use a = [0 for x in range(0, 1000)].
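A minimal sketch of the one-dimensional cases above, written for Python 3 (where range must be wrapped in list() to get a real list):

```python
# a holds 0, 1, 2, ..., 999
a = list(range(1000))

# 1000 elements, all initialized to 0
zeros = [0 for x in range(1000)]

print(a[0], a[999])      # first and last element: 0 999
print(len(zeros), zeros[0])  # 1000 0
```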
The following is the definition of a two-dimensional array:
Direct definition:
a=[[1,1],[1,1]]
This defines a 2*2 two-dimensional array whose elements are all 1.
Indirect definition:
a=[[0 for x in range(10)] for y in range(10)]
This defines a 10*10 two-dimensional array whose elements are initially 0.
There is an even simpler way to write such a two-dimensional array:
b = [[0]*10]*10
This also defines a 10*10 two-dimensional array whose elements are initially 0.
Comparing it with a=[[0 for x in range(10)] for y in range(10)]: print a==b prints True.
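A quick check of that equality claim (== compares lists element by element, so the two definitions look identical at this point):

```python
a = [[0 for x in range(10)] for y in range(10)]
b = [[0] * 10] * 10
print(a == b)  # True: element-wise, the two are indistinguishable
```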
However, after substituting b's definition for a's, a program that used to run correctly started failing. Careful analysis reveals the difference:
After a[0][0]=1, only a[0][0] is 1; all the other elements are still 0.
After b[0][0]=1, b[0][0], b[1][0], and so on up to b[9][0] are all 1.
The reason is that the ten inner one-dimensional lists in b are all the same reference, i.e. they all point to the same object.
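The difference described above can be demonstrated directly; the `is` operator confirms that every "row" of b is the same list object:

```python
a = [[0 for x in range(10)] for y in range(10)]  # 10 independent rows
b = [[0] * 10] * 10                              # 10 references to ONE row

a[0][0] = 1
b[0][0] = 1

print(a[1][0])       # 0 -- the other rows of a are untouched
print(b[1][0])       # 1 -- every "row" of b is the same list
print(b[0] is b[9])  # True -- identical object
```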
So b = [[0]*10]*10 does not match our conventional notion of a two-dimensional array.
One more test: defining c=[0]*10 has the same effect as c=[0 for x in range(10)], without the shared-reference problem above. The difference is that c multiplies a value type (an int), while the earlier b multiplies a reference type (a one-dimensional list), so every copy in b refers to the same inner list (borrowing the value-type and reference-type terminology from C#; the analogy may not be exact).
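A short sketch of why multiplying a list of ints is safe while multiplying a list of lists is not: assigning c[0] = 1 rebinds that slot to a new object, whereas d[0][0] = 1 mutates a shared inner list in place.

```python
c = [0] * 10    # ten slots, each referring to the int 0
c[0] = 1        # rebinds slot 0 to a new int; other slots keep 0
print(c[1])     # 0

d = [[0]] * 10  # ten slots, all referring to the SAME inner list
d[0][0] = 1     # mutates that shared list in place
print(d[1][0])  # 1 -- the change is visible through every slot
```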