B =
   -0.0971   -0.0178    0.0636
    1.3591    1.5820   -1.5266
    0.0149   -0.0178    0.0636
    1.1351    1.0487   -0.8905
   -0.0971   -0.0178    0.0636
   -0.0971   -0.0178    0.0636
   -0.9932   -0.8177   -0.8905
   -0.9932   -0.8177    1.0178
   -1.6653   -1.8842    1.9719
   -1.2173   -1.3509    1.6539
    2.0312    1.8486   -1.5266
    0.6870    0.5155   -0.2544
   -0.0971   -0.0178    0.0636
    0.0149   -0.0178    0.0636
    0.0149   -0.0178    0.0636

The coefficients of the principal components and their respective variances are calculated by finding the eigenvectors of the sample covariance matrix:

>> [V D] =
a = imread(strcat('\foreigners\Desktop\orl\s', num2str(i), '\', num2str(j), '.bmp'));
% imshow(a);
b = a(1:112*92);               % b is a 1×n row vector, n = 10304; pixels are taken column by column (top to bottom, then left to right)
b = double(b);
allsamples = [allsamples; b];  % allsamples is an M×n matrix; each row holds one image, M = 200
end
end
samplemean = mean(allsamples); % mean image, 1×n
for i = 1:200
xmean(i,:) = allsamples(i,:) - samplemean;
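The pipeline above (mean-centering, sample covariance, eigendecomposition) can be sketched in plain Python for 2-D toy data; the data values below are made up for illustration:

```python
import math

# hypothetical 2-D samples (rows of a tiny "allsamples")
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
n = len(data)

# mean-centering (the xmean step)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centered = [(x - mx, y - my) for x, y in data]

# sample covariance matrix entries
cxx = sum(x * x for x, _ in centered) / (n - 1)
cyy = sum(y * y for _, y in centered) / (n - 1)
cxy = sum(x * y for x, y in centered) / (n - 1)

# eigenvalues of the symmetric 2x2 covariance matrix: these are the
# principal-component variances (the D that eig returns in MATLAB)
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
disc = math.sqrt(tr * tr / 4.0 - det)
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc   # lam1 >= lam2 >= 0
```

The trace of the covariance matrix equals the total variance, so lam1 + lam2 must match cxx + cyy, which makes a convenient sanity check.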
Sort in descending order: -sort(-y), or fliplr(sort(y)).
find: locates the positions (indices, not values) of the elements of a vector or matrix that satisfy a user-specified condition or expression. Example: y = [-1 2 -3 4]; s = y(find(y < 0)) picks out the negative elements.
ones: one = ones(r, c). Creates an r×c matrix of ones.
zeros: zer = zeros(r, c). Creates an r×c matrix of zeros.
magic: magic(n). Generates a magic square: an n×n matrix in which the sum of the elements of every row, every column, and both main diagonals is the same.
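For comparison, the same idioms in plain Python (the MATLAB names above are the originals; these equivalents are only illustrative):

```python
y = [-1, 2, -3, 4]

# Descending sort: negate, sort ascending, negate back (mirrors -sort(-y)),
# which gives the same result as sorting in reverse.
desc = [-v for v in sorted(-v for v in y)]

# find-equivalent: indices (not values) of elements matching a condition.
idx = [i for i, v in enumerate(y) if v < 0]   # positions of negative elements

# ones / zeros: r×c matrices filled with a constant.
r, c = 2, 3
ones_rc = [[1] * c for _ in range(r)]
zeros_rc = [[0] * c for _ in range(r)]
```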
going on here? Well, first of all, (i + (i >> 4)) & 0x0F0F0F0F does exactly the same as the previous line, except it adds the adjacent four-bit bit counts together to give the bit counts of each eight-bit block (i.e. byte) of the input. (Here, unlike the previous line, we can get away with moving the & mask outside the addition, since we know that the eight-bit bit count can never exceed 8, and therefore fits inside four bits without overflowing.) Now
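The full sequence of masking steps, as a runnable sketch (this is the standard 32-bit SWAR popcount; the function name is illustrative):

```python
def popcount32(i):
    """Count set bits in a 32-bit integer using the SWAR masking steps."""
    i &= 0xFFFFFFFF
    i = i - ((i >> 1) & 0x55555555)                  # 2-bit pair counts
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)   # 4-bit counts
    i = (i + (i >> 4)) & 0x0F0F0F0F                  # 8-bit (per-byte) counts
    return ((i * 0x01010101) & 0xFFFFFFFF) >> 24     # sum the four bytes
```

The final multiply-and-shift adds the four byte counts together and leaves the total in the top byte.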
beta()        generates samples from the beta distribution; both shape parameters must be greater than 0
chisquare()   generates samples from the chi-square distribution
gamma()       generates samples from the gamma distribution
uniform()     generates samples uniformly distributed in [0, 1)
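Python's standard-library random module offers similar samplers, which lets these distributions be demonstrated without NumPy (the chi-square construction uses the fact that chi-square with k degrees of freedom is gamma(k/2, 2)):

```python
import random

random.seed(42)  # reproducibility

b = random.betavariate(2.0, 5.0)    # beta: both shape parameters must be > 0
g = random.gammavariate(3.0, 1.0)   # gamma sample
u = random.uniform(0.0, 1.0)        # uniform on [0, 1]
k = 4
chi2 = random.gammavariate(k / 2.0, 2.0)  # chi-square with k degrees of freedom
```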
The above are the common numpy.random functions (2.1.c.1). numpy.linalg functions and properties:
Function                                  Description
intersect1d/union1d/setdiff1d/setxor1d    Set operations on arrays (intersection, union, difference, symmetric difference)

File input and output functions:
loadtxt/savetxt    Read/write an array from/to a text file
save/load          Save an array to, or read it back from, a binary disk file (.npy)
savez              Save multiple arrays to one compressed file (.npz)

Linear algebra functions (linalg):
dot    Matrix inner product, e.g. XᵀX
qr     QR decomposition
inv    Matrix inverse
svd    Singular value decomposition
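In NumPy these live in numpy.linalg (numpy.dot, numpy.linalg.qr/inv/svd). As a minimal illustration of what dot and inv compute, here is the 2×2 case in plain Python (the matrix values are made up):

```python
def matmul2(A, B):
    """2x2 matrix product (what numpy.dot does in general)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """2x2 matrix inverse via the adjugate formula."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

A = [[4.0, 7.0], [2.0, 6.0]]
I = matmul2(A, inv2(A))   # should be (numerically) the identity matrix
```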
Direct search methods are generally needed when the objective function of a problem is difficult to express analytically or its derivatives are hard to obtain. Because these methods are intuitive and easy to understand, they are often used in practice.
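As a minimal sketch of a direct search (a simple coordinate search with step halving, not Powell's full method; the quadratic objective is a made-up example):

```python
def direct_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Derivative-free minimization: probe each coordinate, halve the step
    when no probe improves the current point."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for k in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[k] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step /= 2.0
    return x, fx

# toy objective with minimum at (1, -2)
xs, fv = direct_search(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2, [0.0, 0.0])
```

Powell's method improves on this by building new search directions from the progress made in previous sweeps, which accelerates convergence on ill-conditioned problems.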
Powell's method: basic search, accelerated search, adjusted search.
For the specific steps, see page 54. Solving unconstrained extremum problems in MATLAB:
Symbolic Solution:
% AHP: calculation of the maximum eigenvalue and the consistency index
function Q=AHP(A)
[m,n]=size(A);
Ri=[0 0 0.58 0.90 1.12 1.24 1.32 1.41 1.45 1.49 1.51];  % random index (RI) table
R=rank(A);             % rank of the judgment matrix
[V,D]=eig(A);          % eigen-decomposition: columns of V are eigenvectors, D is the diagonal eigenvalue matrix
Tz=max(D);
B=max(Tz);             % maximum eigenvalue
[row,col]=find(D==B);  % position of the maximum eigenvalue
C=V(:,col);            % corresponding eigenvector
Ci=(B-n)/(n-1);        % consistency index CI
Cr=Ci/Ri(n);           % consistency ratio CR = CI/RI
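A pure-Python sketch of the same consistency computation, with power iteration standing in for eig (the judgment matrix below is a made-up, perfectly consistent example built from weights (4, 2, 1)):

```python
def ahp_consistency(A, iters=100):
    """Largest eigenvalue (power iteration for a positive matrix) and CI."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)              # valid normalization for positive matrices
        v = [x / lam for x in w]
    ci = (lam - n) / (n - 1)      # CI = (lambda_max - n) / (n - 1)
    return lam, ci

# consistent judgment matrix: a_ij = w_i / w_j with w = (4, 2, 1)
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
lam, ci = ahp_consistency(A)  # lambda_max = n = 3 and CI = 0 when consistent
```

The consistency ratio is then CR = CI / RI(n), using the random-index table Ri from the MATLAB code above; judgments are conventionally accepted when CR < 0.1.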
poly: for a matrix a, poly(a) returns the coefficients of its characteristic polynomial, det(lambda*eye(size(a)) - a); for a vector v, poly(v) generates the polynomial that has the elements of v as its roots.
roots: roots(p) finds the roots of the polynomial p.
Example:
clear
clc
a=[1 2 3;4 5 6;7 8 0];
p=poly(a)   % characteristic polynomial |λE−a|
r=roots(p)  % eigenvalues, obtained as the roots of the characteristic polynomial above

Results:
p =
1.0000 -6.0000 -72.0000 -27.0000
r =
   12.1229
   -5.7345
   -0.3884
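By Vieta's formulas the three roots of λ³ − 6λ² − 72λ − 27 sum to 6, so the full root set is approximately 12.1229, −5.7345, −0.3884. A quick pure-Python sanity check that each reported root makes the polynomial (nearly) vanish:

```python
# characteristic polynomial from the example: p(x) = x^3 - 6x^2 - 72x - 27
def p(x):
    return x**3 - 6 * x**2 - 72 * x - 27

roots = (12.1229, -5.7345, -0.3884)
residuals = [abs(p(r)) for r in roots]   # each should be close to zero
```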
Recent exposure to LDA (linear discriminant analysis), LFDA (local Fisher discriminant analysis), FLDA (Fisher linear discriminant analysis), MMDA (multi-modal discriminant analysis) and other feature-extraction methods shows that all of them involve the same problem: the Fisher criterion (Fisher discriminant criterion), which requires minimizing the within-class scatter while maximizing the between-class scatter. The problem is described as shown in the figure:
This leads to the generalized eigenvalue problem Sb·w = λ·Sw·w, where Sb is the between-class scatter matrix and Sw the within-class scatter matrix.
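For the two-class case the Fisher direction has the closed form w = Sw⁻¹(m1 − m2); a pure-Python 2-D sketch on made-up data:

```python
# two made-up 2-D classes
X1 = [(1.0, 2.0), (2.0, 3.0), (3.0, 3.0)]
X2 = [(6.0, 5.0), (7.0, 7.0), (8.0, 6.0)]

def mean(X):
    n = len(X)
    return (sum(x for x, _ in X) / n, sum(y for _, y in X) / n)

def scatter(X, m):
    """Class scatter: sum of outer products of deviations from the mean."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in X:
        dx, dy = x - m[0], y - m[1]
        s[0][0] += dx * dx; s[0][1] += dx * dy
        s[1][0] += dy * dx; s[1][1] += dy * dy
    return s

m1, m2 = mean(X1), mean(X2)
s1, s2 = scatter(X1, m1), scatter(X2, m2)
Sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]  # within-class scatter

# w = Sw^{-1} (m1 - m2), using the 2x2 adjugate inverse
det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
d = (m1[0] - m2[0], m1[1] - m2[1])
w = ((Sw[1][1] * d[0] - Sw[0][1] * d[1]) / det,
     (-Sw[1][0] * d[0] + Sw[0][0] * d[1]) / det)

# projecting onto w separates the two classes
p1 = [w[0] * x + w[1] * y for x, y in X1]
p2 = [w[0] * x + w[1] * y for x, y in X2]
```

For this two-class case the generalized eigenproblem has rank one, which is why the single direction w suffices.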