Application of SVD in sparse representation


SVD: Singular Value Decomposition

Before introducing SVD, it helps to review two related concepts: orthogonal matrices and unitary matrices.

If A^T * A = A * A^T = I, the n-order real matrix A is called an orthogonal matrix. The unitary matrix is the generalization of the orthogonal matrix to the field of complex numbers, satisfying A^H * A = A * A^H = I, where A^H is the conjugate transpose.
The necessary and sufficient condition for a matrix to be orthogonal (resp. unitary) is A^T = A^(-1) (resp. A^H = A^(-1)). In other words, the transpose of an orthogonal matrix, and the conjugate transpose of a unitary matrix, equals its inverse.

For any m x n matrix A, a singular value decomposition exists:

A = U * S * V^T

Here U is an m x m orthogonal matrix, V is an n x n orthogonal matrix, and S is an m x n diagonal matrix whose nonzero diagonal entries, the singular values, are arranged from large to small; their number r is the rank of the matrix, and each singular value is the square root of a nonzero eigenvalue of A^T * A. Singular value decomposition (SVD) is thus an orthogonal matrix decomposition method.
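The decomposition above can be checked with a short numpy sketch (the 2 x 3 matrix A here is an arbitrary example of mine, not from the text):

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [1.0, 4.0, 0.0]])          # 2 x 3, rank 2

# Reduced SVD: U is 2x2, s holds the singular values, Vt is 2x3.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back sorted from largest to smallest.
assert np.all(np.diff(s) <= 0)

# A is reconstructed exactly from the three factors.
assert np.allclose(A, U @ np.diag(s) @ Vt)

# U and V have orthonormal columns.
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))
```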

Singular value decomposition is derived from the eigenvalue decomposition of a square matrix, A*x = lambda*x, and the geometric meaning of the eigenvalue follows from it. In image processing, the square matrix A on the left of this equation can be regarded as a transformation matrix and x as a point in space; the geometric meaning is that, after the transformation, the point still lies on the same line through the origin.
So an eigenvector of a square matrix is a vector that, after this particular transformation, keeps its direction and only has its length scaled.
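A minimal numpy illustration of this geometric meaning, using a hypothetical symmetric 2 x 2 matrix:

```python
import numpy as np

# Hypothetical transformation matrix (my own example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

for lam, v in zip(eigvals, eigvecs.T):
    # A*v stays on the line through the origin spanned by v,
    # only stretched by the eigenvalue: A v = lambda v.
    assert np.allclose(A @ v, lam * v)
```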

In sparse representation, the dictionary atoms are updated with K-SVD. Specifically:
1 The first step is to initialize the dictionary and compute the coefficient matrix. Before the coefficient matrix can be updated, the dictionary must be initialized, since the sparse representation coefficients can only be found once a dictionary exists. Dictionary initialization methods: select some vectors directly from the training samples to build the dictionary, or construct it by some other predefined method. With the initialized dictionary, the sparse representation coefficients can be solved with the OMP (orthogonal matching pursuit) algorithm; of course, many other methods can solve this step as well.
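As a rough illustration of this sparse-coding step, here is a simplified greedy OMP sketch of my own (not the toolbox's OMP.m). It assumes unit-norm atoms; the orthonormal toy dictionary makes exact recovery of a 2-sparse signal guaranteed:

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP sketch: pick k atoms, refit by least squares each step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection: least-squares fit on the chosen support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

# Toy dictionary with orthonormal atoms.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
D = Q[:, :5]
y = 2.0 * D[:, 0] - 1.5 * D[:, 2]   # a 2-sparse signal

x = omp(D, y, k=2)
assert np.allclose(D @ x, y)        # exact recovery with 2 atoms
```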

2 Update the atoms in the dictionary one by one.
The update starts from the objective:

|| Y - D*X ||_F^2 = || E_k - d_k * x_T^k ||_F^2,  where  E_k = Y - sum_{j != k} d_j * x_T^j

Here Y is the training sample matrix, D is the pre-initialized dictionary, X is the sparse representation coefficient matrix obtained with that dictionary, d_k is the atom to be updated, and x_T^k is the row of X that multiplies d_k. As for dimensions: Y is m x n, where m (the number of rows) is the dimension of one sample vector in Y and n is the number of samples; D is m x a, where m is the dimension of an atom and a is the number of atoms in the dictionary; X is a x n, where a is the number of rows of X and n the number of columns. Each column of X holds the coefficients of the atoms selected from dictionary D to represent the corresponding column (sample) of Y.

Finally, with the formula in place, the goal is to minimize || E_k - d_k * x_T^k ||_F, in which both the column atom d_k and the corresponding coefficient row x_T^k are unknown, so E_k is decomposed directly with SVD:
the left singular vector corresponding to the largest singular value is taken as the new column atom of the dictionary, and the largest singular value times the corresponding right singular vector is taken as the new coefficient row.
This updates one atom and its sparse representation coefficients; repeating this step until all the atoms have been processed completes one round of dictionary updating, which is how the so-called sparse representation is trained.

It is important to note that if you apply SVD directly to E_k in the formula above, the SVD finds the rank-1 matrix nearest to E_k, but the resulting coefficient row x_T^k will generally not be sparse: its nonzero positions will not match those of the x_T^k being updated. The intuitive solution is to keep only the nonzero entries: since the coefficients from the previous sparse-coding step are already sparse, restrict E_k and x_T^k to the columns where x_T^k is nonzero, and then the SVD update preserves the sparsity of x_T^k. The corresponding MATLAB code embodies this:

function [betterDictionaryElement,CoefMatrix,NewVectorAdded] = I_findBetterDictionaryElement(Data,Dictionary,j,CoefMatrix,numCoefUsed)
if (length(who('numCoefUsed'))==0)
    numCoefUsed = 1;
end
% Only row j of the sparse matrix is related to the j-th atom: Dictionary is
% m x a and CoefMatrix is a x n, and only row j of CoefMatrix multiplies
% column j of the dictionary. Restricting to its nonzero positions is what
% maintains sparsity.
relevantDataIndices = find(CoefMatrix(j,:)); % the data indices that use the j-th dictionary element
if (length(relevantDataIndices)<1) % the atom is unused: replace it with the worst-represented sample
    ErrorMat = Data - Dictionary*CoefMatrix;
    ErrorNormVec = sum(ErrorMat.^2);
    [d,i] = max(ErrorNormVec);
    betterDictionaryElement = Data(:,i); % ErrorMat(:,i);
    % betterDictionaryElement = betterDictionaryElement./sqrt(betterDictionaryElement'*betterDictionaryElement);
    betterDictionaryElement = betterDictionaryElement.*sign(betterDictionaryElement(1));
    CoefMatrix(j,:) = 0;
    NewVectorAdded = 1;
    return;
end
NewVectorAdded = 0;
% Take only the relevantDataIndices columns of CoefMatrix (and of Data),
% because only those columns involve atom j, and zero row j so that the
% contribution of the atom being improved is excluded from the error.
tmpCoefMatrix = CoefMatrix(:,relevantDataIndices);
tmpCoefMatrix(j,:) = 0; % the coefficients of the element we now improve are not relevant
% errors is the E_k above, except that only the columns of Y (Data) related
% to the j-th atom are selected.
errors = (Data(:,relevantDataIndices) - Dictionary*tmpCoefMatrix); % the error we want to minimize with the new element
% The better dictionary element and the values of beta are found using SVD,
% because we would like to minimize ||errors - beta*element||_F^2, that is,
% to approximate the matrix 'errors' with a rank-1 matrix. This is done
% using the largest singular value:
% svds(errors,1) keeps only the largest singular value and its left and
% right singular vectors (a second argument of 2 would keep the top 2, and
% so on). Dimensions: betterDictionaryElement is m x 1, singularValue is
% 1 x 1, and betaVector is |relevantDataIndices| x 1, so that
%   errors = betterDictionaryElement * singularValue * betaVector'.
% singularValue*betaVector' is the coefficient row of the new atom at the
% relevantDataIndices positions; writing it back only at those positions is
% what keeps the representation sparse.
[betterDictionaryElement,singularValue,betaVector] = svds(errors,1);
CoefMatrix(j,relevantDataIndices) = singularValue*betaVector'; % *signOfFirstElem
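Assuming the shapes used above (Y is m x n, D is m x a, X is a x n), the same restricted rank-1 update can be sketched in numpy; update_atom is a hypothetical helper of mine, not part of the toolbox:

```python
import numpy as np

def update_atom(Y, D, X, k):
    """One K-SVD atom update, restricted to the nonzero support of row k."""
    omega = np.flatnonzero(X[k, :])        # samples that actually use atom k
    if omega.size == 0:
        return D, X                        # nothing to update here
    # Residual with atom k removed, restricted to the support columns.
    Xk = X[:, omega].copy()
    Xk[k, :] = 0.0
    Ek = Y[:, omega] - D @ Xk
    # Best rank-1 approximation of the restricted residual.
    U, s, Vt = np.linalg.svd(Ek, full_matrices=False)
    D = D.copy(); X = X.copy()
    D[:, k] = U[:, 0]                      # new atom: first left singular vector
    X[k, omega] = s[0] * Vt[0, :]          # new coefficients on the support only
    return D, X

# Synthetic exact-sparse data (my own toy setup).
rng = np.random.default_rng(1)
m, n, a = 6, 10, 4
D = rng.standard_normal((m, a)); D /= np.linalg.norm(D, axis=0)
X = np.where(rng.random((a, n)) < 0.4, rng.standard_normal((a, n)), 0.0)
Y = D @ X + 0.01 * rng.standard_normal((m, n))

err_before = np.linalg.norm(Y - D @ X)
D2, X2 = update_atom(Y, D, X, k=0)
err_after = np.linalg.norm(Y - D2 @ X2)
assert err_after <= err_before + 1e-12     # the update never increases the error
```

The inequality holds because the old pair (d_k, x_T^k) is itself a rank-1 candidate on the support columns, and the SVD picks the best such candidate.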

Then alternate the two steps above: in each iteration, recompute the sparse coefficients and update every atom of the dictionary, repeating until the set number of iterations is complete. The dictionary atoms and the sparse representation coefficients for the training data Y are then both obtained. The complete KSVD code:

function [Dictionary, output] = KSVD(Data, param)
% =========================================================================
%                           K-SVD algorithm
% =========================================================================
% The K-SVD algorithm finds a dictionary for linear representation of
% signals. Given a set of signals, it searches for the best dictionary that
% can sparsely represent each signal. Detailed discussion on the algorithm
% and possible applications can be found in "K-SVD: An Algorithm for
% Designing Overcomplete Dictionaries for Sparse Representation", written
% by M. Aharon, M. Elad, and A.M. Bruckstein, and appeared in the IEEE
% Trans. on Signal Processing, Vol. 54, No. 11, pp. 4311-4322, November 2006.
% =========================================================================
% INPUT ARGUMENTS:
% Data    an nXN matrix that contains N signals (Y), each of dimension n.
% param   structure that includes all required parameters for the K-SVD
%         execution. Required fields are:
%   K, ...                the number of dictionary elements to train.
%   numIteration, ...     number of iterations to perform.
%   errorFlag ...         if =0, a fixed number of coefficients is used for
%                         the representation of each signal; in that case
%                         param.L must be specified as the number of
%                         representing atoms. If =1, an arbitrary number of
%                         atoms represents each signal until a specific
%                         representation error is reached; in that case
%                         param.errorGoal must be specified as the allowed
%                         error.
%   preserveDCAtom ...    if =1 the first atom of the dictionary is set to
%                         be constant and never changes. This might be
%                         useful when working with natural images (in this
%                         case only param.K-1 atoms are trained).
%   (optional, see errorFlag) L, ...         maximum coefficients to use in
%                         the OMP coefficient calculations.
%   (optional, see errorFlag) errorGoal, ... allowed representation error
%                         for each signal.
%   InitializationMethod, ... method to initialize the dictionary; one of:
%                         * 'DataElements' (initialization by the signals
%                           themselves), or
%                         * 'GivenMatrix' (initialization by a given matrix
%                           param.initialDictionary).
%   (optional, see InitializationMethod) initialDictionary, ... if the
%                         initialization method is 'GivenMatrix', this is
%                         the matrix that will be used.
%   (optional) TrueDictionary, ... if specified, in each iteration the
%                         difference between this dictionary and the
%                         trained one is measured and displayed.
%   displayProgress, ...  if =1 progress information is displayed. If
%                         param.errorFlag==0 the average representation
%                         error (RMSE) is displayed; if param.errorFlag==1
%                         the average number of required coefficients per
%                         signal is displayed.
% =========================================================================
% OUTPUT ARGUMENTS:
%  Dictionary   the extracted dictionary of size nX(param.K).
%  output       struct that contains information about the current run. It
%               may include the following fields:
%    CoefMatrix the final coefficients matrix (Data should equal
%               approximately Dictionary*output.CoefMatrix).
%    ratio      if the true dictionary was defined (in synthetic
%               experiments), a vector of length param.numIteration with
%               the detection ratio in each iteration.
%    totalerr   the total representation error after each iteration
%               (defined only if param.displayProgress=1 and
%               param.errorFlag==0).
%    numCoef    a vector of length param.numIteration with the average
%               number of coefficients needed per signal in each iteration
%               (defined only if param.displayProgress=1 and
%               param.errorFlag==1).
% =========================================================================

if (~isfield(param,'displayProgress'))
    param.displayProgress = 0;
end
totalerr(1) = 99999;
if (isfield(param,'errorFlag')==0)
    param.errorFlag = 0;
end
if (isfield(param,'TrueDictionary'))
    displayErrorWithTrueDictionary = 1;
    ErrorBetweenDictionaries = zeros(param.numIteration+1,1);
    ratio = zeros(param.numIteration+1,1);
else
    displayErrorWithTrueDictionary = 0;
    ratio = 0;
end
if (param.preserveDCAtom>0)
    FixedDictionaryElement(1:size(Data,1),1) = 1/sqrt(size(Data,1));
else
    FixedDictionaryElement = [];
end

% coefficient calculation method is OMP with a fixed number of coefficients
if (size(Data,2) < param.K)
    disp('Size of data is smaller than the dictionary size. Trivial solution...');
    Dictionary = Data(:,1:size(Data,2));
    return;
elseif (strcmp(param.InitializationMethod,'DataElements'))
    Dictionary(:,1:param.K-param.preserveDCAtom) = Data(:,1:param.K-param.preserveDCAtom);
elseif (strcmp(param.InitializationMethod,'GivenMatrix'))
    Dictionary(:,1:param.K-param.preserveDCAtom) = param.initialDictionary(:,1:param.K-param.preserveDCAtom);
end

% reduce the components in Dictionary that are spanned by the fixed elements
if (param.preserveDCAtom)
    tmpMat = FixedDictionaryElement \ Dictionary;
    Dictionary = Dictionary - FixedDictionaryElement*tmpMat;
end

% normalize the dictionary
Dictionary = Dictionary*diag(1./sqrt(sum(Dictionary.*Dictionary)));
Dictionary = Dictionary.*repmat(sign(Dictionary(1,:)),size(Dictionary,1),1); % multiply by the sign of the first element

totalErr = zeros(1,param.numIteration);

% the K-SVD algorithm starts here
for iterNum = 1:param.numIteration
    % find the coefficients
    if (param.errorFlag==0)
        CoefMatrix = OMP([FixedDictionaryElement,Dictionary],Data,param.L);
    else
        CoefMatrix = OMPerr([FixedDictionaryElement,Dictionary],Data,param.errorGoal);
        param.L = 1;
    end

    replacedVectorCounter = 0;
    rPerm = randperm(size(Dictionary,2));
    for j = rPerm
        [betterDictionaryElement,CoefMatrix,addedNewVector] = I_findBetterDictionaryElement(Data,...
            [FixedDictionaryElement,Dictionary],j+size(FixedDictionaryElement,2),...
            CoefMatrix,param.L);
        Dictionary(:,j) = betterDictionaryElement;
        if (param.preserveDCAtom)
            tmpCoef = FixedDictionaryElement\betterDictionaryElement;
            Dictionary(:,j) = betterDictionaryElement - FixedDictionaryElement*tmpCoef;
            Dictionary(:,j) = Dictionary(:,j)./sqrt(Dictionary(:,j)'*Dictionary(:,j));
        end
        replacedVectorCounter = replacedVectorCounter + addedNewVector;
    end

    if (iterNum>1 & param.displayProgress)
        if (param.errorFlag==0)
            output.totalerr(iterNum-1) = sqrt(sum(sum((Data-[FixedDictionaryElement,Dictionary]*CoefMatrix).^2))/prod(size(Data)));
            disp(['Iteration ',num2str(iterNum),' Total error is: ',num2str(output.totalerr(iterNum-1))]);
        else
            output.numCoef(iterNum-1) = length(find(CoefMatrix))/size(Data,2);
            disp(['Iteration ',num2str(iterNum),' Average number of coefficients: ',num2str(output.numCoef(iterNum-1))]);
        end
    end

    if (displayErrorWithTrueDictionary)
        [ratio(iterNum+1),ErrorBetweenDictionaries(iterNum+1)] = I_findDistanseBetweenDictionaries(param.TrueDictionary,Dictionary);
        disp(strcat(['Iteration ',num2str(iterNum),' ratio of restored elements: ',num2str(ratio(iterNum+1))]));
        output.ratio = ratio;
    end

    Dictionary = I_clearDictionary(Dictionary,CoefMatrix(size(FixedDictionaryElement,2)+1:end,:),Data);

    if (isfield(param,'waitBarHandle'))
        waitbar(iterNum/param.counterForWaitBar);
    end
end

output.CoefMatrix = CoefMatrix;
Dictionary = [FixedDictionaryElement,Dictionary];

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  findBetterDictionaryElement
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [betterDictionaryElement,CoefMatrix,NewVectorAdded] = I_findBetterDictionaryElement(Data,Dictionary,j,CoefMatrix,numCoefUsed)
if (length(who('numCoefUsed'))==0)
    numCoefUsed = 1;
end
relevantDataIndices = find(CoefMatrix(j,:)); % the data indices that use the j-th dictionary element
if (length(relevantDataIndices)<1) % (length(relevantDataIndices)==0)
    ErrorMat = Data - Dictionary*CoefMatrix;
    ErrorNormVec = sum(ErrorMat.^2);
    [d,i] = max(ErrorNormVec);
    betterDictionaryElement = Data(:,i); % ErrorMat(:,i);
    % betterDictionaryElement = betterDictionaryElement./sqrt(betterDictionaryElement'*betterDictionaryElement);
    betterDictionaryElement = betterDictionaryElement.*sign(betterDictionaryElement(1));
    CoefMatrix(j,:) = 0;
    NewVectorAdded = 1;
    return;
end
NewVectorAdded = 0;
tmpCoefMatrix = CoefMatrix(:,relevantDataIndices);
tmpCoefMatrix(j,:) = 0; % the coefficients of the element we now improve are not relevant
errors = (Data(:,relevantDataIndices) - Dictionary*tmpCoefMatrix); % the error we want to minimize with the new element
% The better dictionary element and the values of beta are found using SVD,
% because we would like to minimize ||errors - beta*element||_F^2, that is,
% to approximate the matrix 'errors' with a rank-1 matrix. This is done
% using the largest singular value.
[betterDictionaryElement,singularValue,betaVector] = svds(errors,1);
CoefMatrix(j,relevantDataIndices) = singularValue*betaVector'; % *signOfFirstElem

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  findDistanseBetweenDictionaries
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [ratio,totalDistances] = I_findDistanseBetweenDictionaries(original,new)
% first, make every column in original start with a positive value
catchCounter = 0;
totalDistances = 0;
for i = 1:size(new,2)
    new(:,i) = sign(new(1,i))*new(:,i);
end
for i = 1:size(original,2)
    d = sign(original(1,i))*original(:,i);
    distances = sum((new-repmat(d,1,size(new,2))).^2);
end
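For reference, the whole alternation can be sketched compactly in numpy. This is an illustrative re-implementation under assumptions of my own (function names omp and ksvd, exact-sparse synthetic data), not the MATLAB toolbox above:

```python
import numpy as np

def omp(D, y, L):
    """Greedy OMP sketch: pick up to L atoms, refit by least squares."""
    support, x, r = [], np.zeros(D.shape[1]), y.copy()
    for _ in range(L):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0; x[support] = coef
        r = y - D @ x
    return x

def ksvd(Y, a, L, n_iter):
    """Alternate OMP sparse coding and per-atom rank-1 SVD updates."""
    rng = np.random.default_rng(0)
    D = Y[:, rng.choice(Y.shape[1], a, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)         # initialize from data columns
    for _ in range(n_iter):
        X = np.column_stack([omp(D, Y[:, i], L) for i in range(Y.shape[1])])
        for k in range(a):
            omega = np.flatnonzero(X[k, :])
            if omega.size == 0:
                continue                   # (real K-SVD replaces unused atoms)
            Xk = X[:, omega].copy(); Xk[k, :] = 0.0
            Ek = Y[:, omega] - D @ Xk
            U, s, Vt = np.linalg.svd(Ek, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, omega] = s[0] * Vt[0, :]  # update only on the support
    return D, X

# Exact 2-sparse synthetic training data.
rng = np.random.default_rng(2)
D0 = rng.standard_normal((8, 12)); D0 /= np.linalg.norm(D0, axis=0)
X0 = np.zeros((12, 50))
for i in range(50):
    idx = rng.choice(12, 2, replace=False)
    X0[idx, i] = rng.standard_normal(2)
Y = D0 @ X0

D, X = ksvd(Y, a=12, L=2, n_iter=5)
rel_err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
assert rel_err < 1.0   # the learned representation beats the zero baseline
```

On exact-sparse data like this the fit typically improves quickly; practical uses add an error-based stopping rule, as param.errorGoal does in the toolbox.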
