Code flow:
1. Read the data from the file.
2. Convert the data into matrix form.
3. Process the matrix.
The specific Python code is as follows:
- The file path needs to be set correctly.
- String processing.
- Conversion of the string array to an integer array: nums = [int(x) for x in nums]
- Construction of the matrix: matrix = np.array(nums) (see the short sketch after this list)
- The NumPy module has a great advantage in matrix processing.
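As a quick illustration of how these two calls build the matrix, here is a minimal sketch with made-up values: np.array starts the matrix from the first parsed line, and np.c_ (used in the listing below) appends each later line as a new column, which is why the code afterwards transposes the result.

import numpy as np

# two made-up rows standing in for two parsed lines of the file
row_a = [0, 0, 0, 1]
row_b = [1, 0, 1, 0]

matrix = np.array(row_a)       # the first line starts the matrix
matrix = np.c_[matrix, row_b]  # each later line is appended as a column
print(matrix)                  # shape (4, 2): every file line is a column
print(matrix.transpose())      # transposing restores the row orientation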
Code listing:
# -*- coding: utf-8 -*-
import numpy as np

def readFile(path):
    # open the file (note the path)
    f = open(path)
    # process the file line by line
    first_ele = True
    for data in f.readlines():
        # remove the trailing line break '\n' from each line
        data = data.strip('\n')
        # split the line on spaces
        nums = data.split(" ")
        # add the line to the matrix
        if first_ele:
            # convert the strings to integer data
            nums = [int(x) for x in nums]
            # start the matrix with the first line
            matrix = np.array(nums)
            first_ele = False
        else:
            nums = [int(x) for x in nums]
            # np.c_ joins each later line to the matrix as a new column
            matrix = np.c_[matrix, nums]
    dealMatrix(matrix)
    f.close()

def dealMatrix(matrix):
    # some basic processing
    print("transpose the matrix")
    matrix = matrix.transpose()
    print(matrix)
    print("matrix trace")
    print(np.trace(matrix))

# test
if __name__ == '__main__':
    readFile("matrix")
The contents of the matrix file are as follows:
0 0 0 1
1 0 1 0
1 0 1 1
1 1 -1 1
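As an aside, for a small whitespace-separated file like this one, NumPy's loadtxt can do the read-and-convert step in a single call. A minimal sketch, assuming the same file name and layout as above (loadtxt already returns the rows in their original orientation, so no transpose is needed):

import numpy as np

# read the whitespace-separated file straight into an integer array
matrix = np.loadtxt("matrix", dtype=int)
print(matrix)
print(np.trace(matrix))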
Constructing an m*n matrix in Python:
- Generate the matrix by way of a list (array).
- This approach does not apply to sparse matrices (sparse matrices are not constructed like this).
- Note: if the volume of data is particularly large, this method loads the entire matrix into memory; once the number of rows reaches 10000+, it is best to consider a sparse matrix instead, otherwise a MemoryError is easy to hit (a short sparse-matrix sketch follows the related code below).
- The operations available on sparse matrices should also be considered.
Related code:
def fixed_matrix(row, col):
    return [[0 for i in range(col)] for j in range(row)]
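For the large-data case mentioned above, a sparse matrix keeps only the non-zero entries in memory. The following sketch assumes SciPy is available; the name fixed_sparse_matrix simply mirrors fixed_matrix and is illustrative, not part of the original code.

import numpy as np
from scipy import sparse

def fixed_sparse_matrix(row, col):
    # lil_matrix stores only non-zero entries, so a large, mostly-zero
    # matrix does not need to fit densely in memory
    return sparse.lil_matrix((row, col), dtype=np.int64)

m = fixed_sparse_matrix(10000, 10000)
m[0, 1] = 1          # assign individual cells as needed
print(m.nnz)         # number of stored non-zero entries
csr = m.tocsr()      # convert to CSR for fast arithmetic and row slicing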