or N * 1 matrices. Therefore, addition and multiplication of vectors follow the same rules as the corresponding matrix operations.
Inner Product Operation:
A = [1+5i, 2, 3+6i, 7-2i];
B = [2-1i, 4+3i, 3-1i, 6];
S = sum(conj(B) .* A)   % inner product of A and B
S = A * B'              % same inner product: B' is the conjugate transpose
S = dot(B, A)           % same inner product: dot conjugates its first argument
6. Linear Equations
% Solution of linear equations
A = [1, 2, 3; 1, 4, 9; 1, 8, 27];
B = [5, -2, 6]';
X = inv(A) * B   % works, but not efficient
X = A \ B        % preferred
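As a quick check (a minimal sketch; A and B are the matrices defined above), the backslash solution can be verified through the residual norm:

A = [1, 2, 3; 1, 4, 9; 1, 8, 27];
B = [5, -2, 6]';
X = A \ B;           % solves A*X = B without explicitly forming inv(A)
norm(A * X - B)      % residual; should be near 0 up to rounding error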
7. Similarity Simplification and Decomposition of Matrices
, then the covariance matrix has non-zero entries only on its diagonal: since the components of X are independent, the covariance between x_m and x_n is 0 for m ≠ n. The covariance matrix is also symmetric. See the wiki: http://zh.wikipedia.org/wiki/%E5%8D%8F%E6%96%B9%E5%B7%AE%E7%9F%A9%E9%98%B5
2. Code implementation. The pseudocode is as follows (excerpted from Machine Learning in Action):

'''
@author: Garvin
'''
from numpy import *
import matplotlib.pyplot as plt

def loadDataSet(fileName, delim='\t'):
    fr = open(fileName)
%--------------------------------------------------------------------------
%% calculate the sample covariance
R = cov(Z', 1);               % the 1 means divide by N when computing the covariance
%% whitening of Z
[U, D, ~] = svd(R, 'econ');   % eig also works: [U, D] = eig(R);
%% whitening matrix
T = U * inv(sqrt(D)) * U';    % the inverse square root of the covariance matrix; the inv call here is not too time-consuming
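For context, a minimal self-contained sketch of the same whitening recipe (the data matrix Z, its size, and the injected correlation are assumptions for illustration, not taken from the excerpt above):

% assumed toy data: 3 correlated signals, 1000 samples
rng(0);
Z = randn(3, 1000);
Z(2, :) = Z(1, :) + 0.5 * Z(2, :);   % make the rows correlated
R = cov(Z', 1);                      % sample covariance (divide by N)
[U, D, ~] = svd(R, 'econ');
T = U * inv(sqrt(D)) * U';           % inverse square root of R
Zw = T * Z;                          % whitened data
cov(Zw', 1)                          % approximately the identity matrix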
Principle of Principal Component Analysis and its Python implementation. Preface: this article mainly draws on Andrew Ng's machine learning course handout, which I translated, adding a Python demo to deepen understanding. It introduces a dimensionality-reduction algorithm, Principal Components Analysis (PCA). The goal of this method is to find a subspace in which the data is approximately concentrated; as to how to find this subspace, th
sampled.
3. Some basic array functions:
zeros(m): m-by-m all-zeros matrix
zeros(m,n): m-by-n all-zeros matrix
eye(m): m-by-m identity matrix
Matrix operations:
Left division \: for AX = B, X = inv(A)*B is written X = A\B
Right division /: for XA = B, X = B*inv(A) is written X = B/A
In operations between a matrix and a constant, the constant can normally appear only as a divisor.
The inverse of a matrix (AB = BA = E, the identity matrix) can be obtained with the function inv. To
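A minimal sketch of the two division operators (the matrices are arbitrary illustrations):

A = [2 0; 0 4];
B = [6; 8];
X = A \ B      % left division solves A*X = B, so X = inv(A)*B = [3; 2]
C = [6 8];
Y = C / A      % right division solves Y*A = C, so Y = C*inv(A) = [3 2]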
clc;
clear all;
A = xlsread('C:\Users\d e l l\documents\matlab\problem four\problem-Two.xls', 'c34:af61');
a = size(A, 1);
b = size(A, 2);
for i = 1:b
    SA(:, i) = (A(:, i) - mean(A(:, i))) / std(A(:, i));   % standardize each column
end
CM = corrcoef(SA);
[V, D] = eig(CM);
for j = 1:b
    DS(j, 1) = D(b+1-j, b+1-j);                  % eigenvalues, largest first
end
for i = 1:b
    DS(i, 2) = DS(i, 1) / sum(DS(:, 1));         % contribution ratio of each component
    DS(i, 3) = sum(DS(1:i, 1)) / sum(DS(:, 1));  % cumulative contribution ratio
end
t = 0.85;
for k = 1:b
    if DS(k, 3) >= t
        com_num = k;   % assumed continuation: keep the first k components reaching the threshold; the original listing is cut off here
        break;
    end
end
covariance matrix;
2. Compute the eigenvalues of S and sort them in descending order;
3. Select the eigenvectors corresponding to the first n eigenvalues to form the transformation matrix E = [e1, e2, ..., en];
4. Finally, each original feature vector X can be converted into an n-dimensional new feature vector Y:
Y = E' * (X - m)   (E' is the transpose of E and m is the mean vector)
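A minimal MATLAB sketch of the four steps above (the data X, its size, and the use of k for the number of retained components are assumptions for illustration; X holds one sample per column):

X = randn(5, 100);                     % assumed toy data: 5 dimensions, 100 samples
k = 2;                                 % number of components to keep
m = mean(X, 2);                        % mean vector
S = cov(X');                           % 1. covariance matrix
[E, L] = eig(S);
[~, idx] = sort(diag(L), 'descend');   % 2. sort eigenvalues, largest first
E = E(:, idx(1:k));                    % 3. transformation matrix of the top-k eigenvectors
Y = E' * (X - m);                      % 4. Y = E'(X - m), the k-by-100 projected data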
Finally, you have to do it yourself for it to really stick: I did it with Python numpy. If you do it in C, it is fine to look things up. It's too t.
pinv(X)
Returns the pseudo-inverse B of matrix X.
norm(X, p)
Returns the norm of the matrix or vector X; the second argument p specifies which type of norm to compute.
cond(X, p)
Returns the condition number of matrix X in the p-norm; p = 2 gives the 2-norm condition number.
[V, D] = eig(X)
Computes the eigenvalues and eigenvectors of the matrix X: the columns of V are the eigenvectors and D is a diagonal matrix of the eigenvalues, satisfying X*V = V*D.
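A short sketch exercising the four functions just listed (the matrix is an arbitrary example):

X = [4 1; 2 3];
B = pinv(X)          % pseudo-inverse; equals inv(X) here because X is invertible
n2 = norm(X, 2)      % 2-norm (largest singular value)
c2 = cond(X, 2)      % condition number in the 2-norm
[V, D] = eig(X);     % columns of V are eigenvectors, diag(D) the eigenvalues
X * V - V * D        % approximately the zero matrix, confirming X*V = V*D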
Matrix Creation
I. Matrix Definition
Example: >> A = [1 2 3; 4 5 6; 7 8 9]
1. The matrix is enclosed in square brackets [].
2. Elements in the same row are separated by spaces or commas.
3. Rows are separated by semicolons.
4. When entering a matrix directly, a carriage return can be used in place of the semicolon.
II. Assigning Values to Matrix Elements
1. Assign values to individual matrix elements.
Example: >> X(5) = abs(X(1))
2. A large matrix can use a small matrix as an element.
Example: >> A = [A; 11 12 13]
weight vector. Note: in this case, it is possible to use the eigenvector of the maximum eigenvalue (the largest characteristic root) as the weight vector, presumably in order to retain as much of the information in the original data as possible (not sure...).
3. Consistency Testing
The consistency test; in full detail this also involves the combined consistency test.
III. MATLAB Implementation
I first searched for material and found this code; it is very clear, so it is posted directly here.
clc; clear;
A = [1 1.2 1.5 1.5; 0.833 1 1.2 1.2; 0.667 0.833
., 2.])
Merge arrays
Use the vstack and hstack functions in numpy:
The code is as follows:
>>> a = np.ones((2, 2))
>>> b = np.eye(2)
>>> print np.vstack((a, b))
[[ 1.  1.]
 [ 1.  1.]
 [ 1.  0.]
 [ 0.  1.]]
>>> print np.hstack((a, b))
[[ 1.  1.  1.  0.]
 [ 1.  1.  0.  1.]]
Check whether these two functions involve the shallow-copy problem:
The code is as follows:
>>> c = np.hstack((a, b))
>>> print c
[[ 1.  1.  1.  0.]
 [ 1.  1.  0.  1.]]
>>> a[1, 1] = 5
>>> b[1, 1] = 5
>>> print c
[[ 1.  1.  1.  0.]
 [ 1.  1.  0.  1.]]
We can see that c does not change after a and b are modified, so hstack (and likewise vstack) copy the data into a new array rather than sharing memory with their inputs.
eigenvectors of P, each sample yielding a (k,1) vector; together these form the projection matrix Z (k x n) of X on the k eigenvectors.
5. Reconstruct XX from Z and compare it with X to compute the reconstruction error.
4. MATLAB implementation of PCA
[V, E] = eig(cov(X'));
[E, index] = sort(diag(E), 'descend');
V = V(:, index);
meanX = mean(X')';
P = V(:, 1:k);
[r, c] = size(X);
Y = P' * (X - repmat(meanX, 1, c));   % project the centered data onto the top-k eigenvectors
[r, c] = size(Y);
XX = P * Y + repmat(meanX, 1, c);     % reconstruct the data from the projection
5. PCA mai
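Step 5 mentions the reconstruction error but the listing stops before computing it; one common choice (an assumption, not taken from the original) is the Frobenius norm of the difference:

err = norm(X - XX, 'fro')   % reconstruction error between the original data and the rebuilt XX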
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        // Create and set the number of array elements
        NSMutableArray *arr1 = [NSMutableArray arrayWithCapacity:7];
        // Copying an array
        NSArray *arr2 = @[@"Mon", @"Tue", @"Wed", @"Thu", @"Fri", @"Sat", @"Sun"];
        NSMutableArray *arr3 = [NSMutableArray arrayWithArray:arr2];
        // Add an element to the array
        [arr3 addObject:@"Eig"];
        // Inserts an element according
Working from right to left, the borrow bits and partial results build up as follows:

borrows: 11111 → 111111 → 1111111 → 11111111
result:  10101 → 110101 → 0110101 → 10110101

giving the completed subtraction

   11111111
   00000000
 - 01001011
 ----------
   10110101

(01001011 is 75, and 10110101 is the two's-complement representation of -75, so 0 - 75 comes out correctly.)
If we wanted, we could go further, but there would be no point: inside a computer, the result of this computation would be assigned to an eight-bit variable, so any bits beyond the eighth would be discarded. With the fact
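A quick MATLAB check of that eight-bit result (typecast reinterprets the stored bits without changing them):

x = int8(0) - int8(75);            % 8-bit two's-complement subtraction
dec2bin(typecast(x, 'uint8'), 8)   % '10110101', matching the hand computation above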
image y_ij (i = 1, 2, ..., c; j = 1, 2, ..., n_i), the mean of the class-i projected feature vectors is m_i = (1/n_i) * sum_{j=1}^{n_i} y_ij. Within the projection space, the nearest-neighbor classification rule is: if the sample y satisfies ||y - y_kl|| = min_{i,j} ||y - y_ij||, then y is assigned to the class k of that nearest training sample.
At the same time, the minimum-distance classification rule is: if the sample y satisfies ||y - m_k|| = min_i ||y - m_i||, then y is assigned to class k.
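A minimal sketch of the minimum-distance rule (the variable names are assumptions: M is d-by-c with column i holding the class mean m_i, and y is a d-by-1 projected sample):

dists = vecnorm(M - y, 2, 1);   % distance from y to every class mean
[~, k] = min(dists);            % assign y to the class k with the nearest mean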
The full code is as follows:
allsamples = [];
global pathname; global Y; global X; global P; global train_num; global m; global n;
m = 112;            % rows
n = 92;             % columns
train_num = 200;
Gt = zeros(n, n);
pathname = 'C:\
-power method. Basically, I could follow this idea and write it out smoothly. I wrote the code myself and put it into my power-method routine (this is one of the reasons I later gave up on my own inverse power method). The algorithm I wrote is usable for exercises, and matrices of modest size show no problem, but it is unreliable for large-scale matrices; so this is just a record of my work.
Later, I found that the problem I was solving was not finding the eigenvalues of a general la
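For reference, a minimal power-iteration sketch of the kind described (this is the textbook method, not the author's lost code):

function [lambda, v] = powermethod(A, tol, maxit)
% Power iteration: estimate the dominant eigenvalue and eigenvector of A.
    v = rand(size(A, 1), 1);
    v = v / norm(v);
    lambda = 0;
    for it = 1:maxit
        w = A * v;
        lambda_new = v' * w;            % Rayleigh quotient estimate (||v|| = 1)
        v = w / norm(w);
        if abs(lambda_new - lambda) < tol
            lambda = lambda_new;
            return
        end
        lambda = lambda_new;
    end
end

For example, [lambda, v] = powermethod([2 1; 1 3], 1e-10, 500) converges to the dominant eigenvalue (5 + sqrt(5))/2, about 3.618.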