SOM (Self-Organizing Map) is an unsupervised learning neural network. Below is a recently written simple application that uses a SOM to compress and restore images. Two things are left as TODOs: 1. Find time to summarize the concept of SOM and its learning process, and optimize the algorithm. 2. Re-implement the code in Python and C++ as a programming exercise ...
The training process is generally as follows:
Decompose the image into blocks to form the input vectors x = {x_i}. The competition layer is a single layer, i.e. the competition layer is also the output layer. For each x_i, compute its distance to every competing node and find the minimum; the node corresponding to the minimum distance wins the competition and earns the right to modify its weights. (A neighborhood of the winning node is also defined, so the winning node and its neighbors all get to modify their weight values.)
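To make the competition-and-update step concrete, here is a minimal MATLAB sketch of one training step for a single 16-element patch vector x. The variable names W, a and nq mirror the code later in the post, but the sketch itself is illustrative (with dummy data) rather than copied from it:

W = rand(16, 512) * 255;    % codebook: 512 codewords of length 16 (dummy values)
x = rand(16, 1) * 255;      % one 4x4 patch written as a 16x1 vector (dummy values)
a = 0.3;                    % learning rate
nq = 2;                     % neighborhood radius
d = zeros(1, size(W, 2));
for j = 1:size(W, 2)
    d(j) = norm(x - W(:, j));                 % distance to every competing node
end
[dmin, win] = min(d);                         % the node with minimum distance wins
for j = max(1, win-nq):min(size(W, 2), win+nq)
    W(:, j) = W(:, j) + a * (x - W(:, j));    % winner and its neighbors move towards x
end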
Image compression and transmission process:
After training, the weight matrix is kept as the codebook. For a new image, we compute the distance between each image patch and every competition-layer node; the weight vector of the node with the minimum distance can be taken as an approximate representation of that patch. When transmitting the image, however, the weight vector itself is not sent; only its index in the codebook is transmitted, which is what compresses the image for transmission.
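A minimal sketch of this encoding step, assuming W is the trained 16-by-512 codebook and X holds the 16-by-4096 patch vectors of the image to be sent (dummy data is used here so the snippet runs on its own):

W = rand(16, 512) * 255;                      % trained codebook (dummy values here)
X = rand(16, 4096) * 255;                     % patch vectors of the new image (dummy values)
q = zeros(1, size(X, 2));
for t = 1:size(X, 2)
    d = zeros(1, size(W, 2));
    for j = 1:size(W, 2)
        d(j) = norm(X(:, t) - W(:, j));       % distance of patch t to codeword j
    end
    [dmin, q(t)] = min(d);                    % keep only the index of the nearest codeword
end
% q (4096 indices into the codebook) is what gets transmitted, not the pixel values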
At the receiving end:
Assuming the codebook (the weight vectors) has been saved at the receiver, for each received index we look up the corresponding weight vector in the codebook and place it at the corresponding position in the image; this completes the recovery of the image.
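A minimal sketch of the recovery step, continuing from the encoding sketch above (so W and q are already defined); the patch elements are unpacked row by row, matching the way the patches are packed in the code below:

Y = W(:, q);                                  % 16x4096: look up the codeword for every index
blocks = cell(64, 64);                        % 64x64 grid of 4x4 patches
for i = 1:numel(q)
    blk = zeros(4, 4);
    for a = 1:4
        for b = 1:4
            blk(a, b) = Y((a-1)*4 + b, i);    % undo the row-wise packing of each patch
        end
    end
    blocks{i} = blk;
end
rec = cell2mat(blocks);                       % the recovered 256x256 image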
Quality evaluation of the algorithm:
Quality is usually described with PSNR (peak signal-to-noise ratio); a result is generally considered good above 30 dB. Since only the simplest version of the algorithm is implemented here, the PSNR is only about 21 dB, so the reconstructed picture quality is not very good.
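Concretely, PSNR is computed from the mean squared error with a peak value of 255 for 8-bit images; a minimal sketch, assuming im is the original 256x256 image and rec the reconstruction, both as double matrices:

err = im - rec;
mse = sum(err(:).^2) / numel(im);             % mean squared error
PSNR = 10 * log10(255^2 / mse);               % in dB; above 30 is usually considered good
disp(PSNR);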
Main.m
clear, clc;
%------------------------------------------------------------
% represent the image by vectors
%------------------------------------------------------------
foo = pwd;
path = fullfile(foo, 'image');
cd(path);
im = imread('lena.bmp');        % lena.bmp is used for training
cd(foo);
im = double(im);
% divide the 256*256 image into 4*4 blocks
n = 4; m = 4;
block_n = n*ones(1, 256/n);     % block_n = [4,4,...,4], 64 entries
block_m = m*ones(1, 256/m);
im_block = mat2cell(im, block_n, block_m);
%im_block = reshape(im_block, 1, 4096);
X = ones(16, 4096);
for i = 1:4096
    tmp = cell2mat(im_block(i));
    %X(:,i) = reshape(tmp, 16, 1);
    for a = 1:4
        for b = 1:4
            X((a-1)*4+b, i) = tmp(a, b);
        end
    end
end
[l, k] = size(X);
%------------------------------------------------------------
% codebook design
%------------------------------------------------------------
N = 512;
d = ones(1, N);
a = 0.3;
dq = ones(1, 4096);
q = ones(1, 4096);
t = 1;
W = randint(16, 512, [0, 255]); % random initialization of the 16x512 codebook
                                % (randi([0,255],16,512) in newer MATLAB)
pw = ones(16, 512);             % W of the previous step
iter = 10;                      % number of training passes
while iter > 0 && (norm(pw - W, 2) > exp(-10))
    % note: t is not reset here, so only the first pass actually updates W
    while (t <= k)              % start to train the SOM
        for j = 1:N
            d(j) = dist(X(:,t), W(:,j));
        end
        [D, ix] = min(d);
        %disp(sprintf('%d', ix(1)));
        dq(t) = D;              % record the min{d_j} and the winning label
        q(t) = ix;
        pw = W;                 % record the current W
        W = update(W, q(t), X(:,t), t, a);      % update W through the index
        %W = update2(W, q(t), X(:,t), t, a, d); % update W through the distances
        t = t + 1;
    end
    iter = iter - 1;
    a = a * 0.95;
end
if pw == W                      % norm(pw-W,2) is the 2-norm of the matrix
    disp('Training done!!!');
else
    disp('Iterations run out!!!');
end
%------------------------------------------------------------
% testing
%------------------------------------------------------------
foo = pwd;
path = fullfile(foo, 'image');
cd(path);
im_1 = imread('cr.bmp');
im_2 = imread('hs4.bmp');
im_3 = imread('lena.bmp');
cd(foo);
im_1 = double(im_1);
% im_2 = double(im_2);
% im_3 = double(im_3);
test(im_1, W);
% test(im_2, W);
% test(im_3, W);
Dist.m
function [d] = dist(x, y)
% compute the Euclidean distance between two column vectors
m = size(x, 1);
n = size(y, 1);
if m ~= n
    disp('Different dimensions!!!');
end
s = 0;
for i = 1:n
    s = s + (x(i) - y(i))^2;
end
d = sqrt(s);
Update.m
function [W] = update(W, q, x, t, a)
% update the values in W: the winner q and its index neighbors move towards x
% (t is unused but kept to match the call in Main.m)
[m, n] = size(W);
nq = 2;     % neighborhood radius (the value is missing in the original text; 2 is an assumed placeholder)
for j = q-nq:q+nq
    if j <= 0 || j > n
        continue
    end
    for i = 1:m
        W(i, j) = W(i, j) + a * (x(i) - W(i, j));
    end
end
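Main.m also references a commented-out update2 that updates W "through the distances"; its source is not included in the post. Purely as an assumption of what such a variant could look like (the Gaussian weighting and the value of sigma are not from the original), keeping the same call signature:

function [W] = update2(W, q, x, t, a, d)
% assumed sketch: every codeword moves towards x with a strength that decays
% with its index distance from the winner q (t and d are unused here, kept
% only to match the call in Main.m)
[m, n] = size(W);
sigma = 2;                                    % assumed neighborhood width
for j = 1:n
    h = exp(-((j - q)^2) / (2 * sigma^2));    % Gaussian neighborhood function
    W(:, j) = W(:, j) + a * h * (x - W(:, j));
end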
Test.m
function [res] = test(im, W)
% test the image data compression based on the SOM network
% divide the 256*256 image into 4*4 blocks
n = 4; m = 4;
block_n = n*ones(1, 256/n);     % block_n = [4,4,...,4], 64 entries
block_m = m*ones(1, 256/m);
im_block = mat2cell(im, block_n, block_m);
%im_block = reshape(im_block, 1, 4096);
X = ones(16, 4096);
for i = 1:4096
    tmp = cell2mat(im_block(i));
    %X(:,i) = reshape(tmp, 16, 1);
    for a = 1:4
        for b = 1:4
            X((a-1)*4+b, i) = tmp(a, b);
        end
    end
end
[l, k] = size(X);
dq = ones(1, 4096);
q = ones(1, 4096);
t = 1;
N = 512;
while (t <= k)
    for j = 1:N
        d(j) = dist(X(:,t), W(:,j));
    end
    [D, ix] = min(d);
    dq(t) = D;                  % record the min{d_j} and the winning label
    q(t) = ix;
    t = t + 1;
end
%----------------------------------------------------------------
% recover the image
%----------------------------------------------------------------
Y = zeros(l, k);
i = 1;
while (i <= k)
    tmp = q(i);
    for j = 1:16
        Y(j, i) = W(j, tmp);
    end
    i = i + 1;
end
res = ones(256, 256);
block_n = 4*ones(1, 256/4);     % block_n = [4,4,...,4], 64 entries
block_m = 4*ones(1, 256/4);
res = mat2cell(res, block_n, block_m);
for i = 1:k
    %temp = reshape(Y(:,i), 4, 4);
    for a = 1:4
        for b = 1:4
            temp(a, b) = Y((a-1)*4+b, i);
        end
    end
    res{i} = temp;
end
res = cell2mat(res);
%Y = imresize(Y, [256, 256]);
figure; imshow(res, [0, 255]);
tmp = 0;
for m = 1:256
    for n = 1:256
        tmp = tmp + (im(m,n) - res(m,n))^2;
    end
end
mse = tmp / (256*256);
a = 255*255 / mse;
PSNR = 10 * log10(a);
disp(sprintf('%f', PSNR));
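To run the demo: put the training image (lena.bmp) and the test images (cr.bmp, hs4.bmp) in an image subfolder next to the scripts and run Main.m; it trains the codebook W and then calls test(im_1, W), which displays the reconstructed image and prints the PSNR.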