
Demo: Singular Value Decomposition

Developer: sonots
First Edition: 04/2005
Last Modified: 05/2006
Language: MATLAB

This page is written in both English and Japanese.

Abstract

This is a note, rather than a project, on using the svd function.

Figure: lena.png (raw), lenasvd10.png (10%), lenasvd05.png (5%), lenasvd02.png (2%).

Source Codes

% doSvd: Apply Singular Value Decomposition to a data matrix as a
%  feature extraction method.
%  [S] = doSvd(I, R) returns the vector of singular values S (the
%  diagonal of the S in USV') of the matrix I, truncated to the
%  leading R percent of the coefficients (e.g., 5).
%
%  See also svd
%     http://www.mathworks.com/access/helpdesk/help/techdoc/ref/svd.html
%
% Author        : Naotoshi Seo
% First Edition : April, 2005
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [S] = doSvd(I, R)
 % Testing
 % I = imread('lena.png');
 I = double(I);
 if nargin < 2
     R = 100;
 end

 S = svd(I);
 DIM = floor(length(S)*(R*0.01));
 S = S(1:DIM);
end
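For readers without MATLAB, the same truncation can be sketched in Python with NumPy (the name `do_svd` is mine, not part of the original page):

```python
import numpy as np

def do_svd(I, R=100):
    """Return the leading R percent of the singular values of matrix I."""
    S = np.linalg.svd(np.asarray(I, dtype=float), compute_uv=False)
    dim = int(len(S) * R / 100)   # keep R percent of the coefficients
    return S[:dim]
```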

An experiment in data compression by SVD

Source Codes

function doSvdTest(percent)
 I = imread('lena.png');
 [m n c] = size(I);
 I = double(I);
 R = I(:, :, 1);
 G = I(:, :, 2);
 B = I(:, :, 3);
 [RU, RS, RV] = svd(R);
 % TIPS: called with one output, svd(R) returns diag(RS), i.e., the vector of singular values
 [GU, GS, GV] = svd(G);
 [BU, BS, BV] = svd(B);
 RdS = diag(RS);
 GdS = diag(GS);
 BdS = diag(BS);
 D = length(RdS);
 DIM = floor(D*(percent*0.01));
 RdS(DIM+1:end) = 0; % reduce dimensions
 GdS(DIM+1:end) = 0;
 BdS(DIM+1:end) = 0;
 RSR = sparse(1:D, 1:D, RdS, m, n); % reconstruct into matrix
 GSR = sparse(1:D, 1:D, GdS, m, n);
 BSR = sparse(1:D, 1:D, BdS, m, n);
 RR = RU * RSR * RV'; % recover
 GR = GU * GSR * GV';
 BR = BU * BSR * BV';
 IR = cat(3, RR, GR, BR);
 IR = uint8(IR);
 imshow(IR);
 imwrite(IR, sprintf('lenasvd%02d.png', percent), 'PNG');
end
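The per-channel truncate-and-reconstruct step above can be sketched in NumPy as follows. The function name `svd_compress` is my own, and the sketch works on any H x W x 3 array rather than reading lena.png from disk:

```python
import numpy as np

def svd_compress(img, percent):
    """Reconstruct an H x W x 3 image, keeping only the leading
    `percent` percent of singular values in each color channel."""
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        U, s, Vt = np.linalg.svd(img[:, :, c], full_matrices=False)
        dim = int(len(s) * percent / 100)
        s[dim:] = 0.0                 # reduce dimensions
        out[:, :, c] = (U * s) @ Vt   # recover the channel
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

Keeping 100% of the singular values reconstructs the image exactly (up to rounding), which is a handy sanity check.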

Results

Figure: lena.png (raw), lenasvd10.png (10%), lenasvd05.png (5%), lenasvd02.png (2%).

Although 2% looks insufficient visually, it is usually enough for the purpose of pattern recognition.

Comparison between SVD and Eigenvalues

Source Codes

 I = imread('lena.png');
 R = double(I(:, :, 1));
 S = svd(R);
 E = eig(R);
 disp(E(1:8))
 disp(S(1:8))
 disp(abs(E(1:8)))
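The relation is easiest to verify on a symmetric matrix, where the singular values equal the absolute eigenvalues exactly; for a general matrix such as the lena channel they only roughly agree. A NumPy sketch (the random symmetric matrix is my construction, not from the original experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 8))
A = (A + A.T) / 2   # symmetrize: now svd(A) equals |eig(A)| exactly

s = np.linalg.svd(A, compute_uv=False)          # sorted descending
e = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
print(s)
print(e)
```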

Results

eig(R), first 8 values:

  1.0e+004 *

   4.6061          
  -0.3949          
   0.3881          
  -0.2019          
   0.1280 + 0.0465i
   0.1280 - 0.0465i
   0.0308 + 0.1138i
   0.0308 - 0.1138i

svd(R), first 8 values:

  1.0e+004 *

    4.6625
    0.5603
    0.4303
    0.3522
    0.2949
    0.2900
    0.2060
    0.1942

abs(eig(R)), first 8 values:

  1.0e+004 *

    4.6061
    0.3949
    0.3881
    0.2019
    0.1362
    0.1362
    0.1179
    0.1179

Discussion

The results are almost the same (this is also clear from the math). It means both have almost the same power as feature extraction methods.

However, eig requires a square matrix as input while svd does not, so svd is more flexible. On the other hand, eig is computationally cheaper than svd (which is natural, since computing an SVD involves an eigenvalue computation and more).
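The connection the discussion alludes to can be made precise: for any real matrix A, the singular values are the square roots of the eigenvalues of A^T A. A one-line derivation from the decomposition A = U S V^T:

```latex
A = U \Sigma V^{\mathsf{T}}
\quad\Longrightarrow\quad
A^{\mathsf{T}} A
= V \Sigma^{\mathsf{T}} U^{\mathsf{T}} U \Sigma V^{\mathsf{T}}
= V \left( \Sigma^{\mathsf{T}} \Sigma \right) V^{\mathsf{T}}
```

so the eigenvalues of A^T A are the squared singular values, i.e. sigma_i(A) = sqrt(lambda_i(A^T A)). In particular, for a symmetric matrix (A = A^T) this gives sigma_i(A) = |lambda_i(A)|, which is why the svd and abs(eig) columns above nearly coincide.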
