Dear Yoshiki, dear Ed,
thank you for your replies, over which I have been meditating for a weekend.
I have tried to implement a singular value decomposition with U and V always
square (and possibly of different sizes). I was following a worked example
from a textbook to do this.
The idea is the following.
Assume we want to calculate the SVD of matrix a (not necessarily square).
I was considering a=[[10 5 -10],[2 -11 10]] as an example, so no issues with
64-bit precision should arise here.
I'll describe the procedure and mark the point up to which I could follow it
(to Float precision).
It was suggested to calculate b = a*a.transpose and to get the eigenvalues
of b: lambda_1 = 360 and lambda_2 = 90.
The singular values of matrix a are then the square roots of these numbers,
i.e. sigma_1 = 6*sqrt(10) and sigma_2 = 3*sqrt(10).
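To make the step concrete, here is a small sketch of it; note that I'm using
Ruby's plain stdlib Matrix class here rather than the GSL bindings I actually
use, just so the numbers are easy to check:

  require 'matrix'

  a = Matrix[[10.0, 5.0, -10.0], [2.0, -11.0, 10.0]]
  b = a * a.transpose                          # => Matrix[[225, -135], [-135, 225]]
  eig = b.eigen                                # stdlib eigen-decomposition of b
  p eig.eigenvalues                            # expect 360.0 and 90.0 (order is up to the library)
  p eig.eigenvalues.map { |l| Math.sqrt(l) }   # expect 6*sqrt(10) ~ 18.97 and 3*sqrt(10) ~ 9.49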
One may then calculate normalized eigenvectors for lambda_1:
[-1/sqrt(2) 1/sqrt(2)] and for lambda_2: [1/sqrt(2) 1/sqrt(2)];
these two vectors form the columns of the matrix V.
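As a quick cross-check that these really are unit eigenvectors of b (again
only a stdlib sketch, with the vectors written out by hand):

  require 'matrix'

  b  = Matrix[[225.0, -135.0], [-135.0, 225.0]]   # a * a.transpose from above
  v1 = Vector[-1.0, 1.0] * (1.0 / Math.sqrt(2))   # eigenvector for lambda_1 = 360
  v2 = Vector[ 1.0, 1.0] * (1.0 / Math.sqrt(2))   # eigenvector for lambda_2 = 90
  p b * v1 - v1 * 360.0                           # expect (numerically) the zero vector
  p b * v2 - v2 * 90.0                            # expect (numerically) the zero vector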
Now, to calculate U, one uses the fact that
a.transpose*v.column(0) = sigma_1*u.column(0)
and a.transpose*v.column(1) = sigma_2*u.column(1).
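In sketch form (stdlib Matrix/Vector again, with v1, v2 and the sigmas
hard-coded from the values above), that step reads:

  require 'matrix'

  a  = Matrix[[10.0, 5.0, -10.0], [2.0, -11.0, 10.0]]
  v1 = Vector[-1.0, 1.0] * (1.0 / Math.sqrt(2))
  v2 = Vector[ 1.0, 1.0] * (1.0 / Math.sqrt(2))
  s1 = Math.sqrt(360.0)                    # sigma_1 = 6*sqrt(10)
  s2 = Math.sqrt(90.0)                     # sigma_2 = 3*sqrt(10)
  u1 = (a.transpose * v1) * (1.0 / s1)     # expect [-2, -4, 5] / sqrt(45)
  u2 = (a.transpose * v2) * (1.0 / s2)     # expect [ 2, -1, 0] / sqrt(5)
  p u1
  p u2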
This gives two vectors, but as U is 3x3, I need a third vector, which is
constructed using
Gram-Schmidt orthogonalization.
In the textbook, this third vector is determined to be
[1/3 2/3 2/3].transpose, giving U as
[[-2/sqrt(45) 2/sqrt(5) 1/3], [-4/sqrt(45) -1/sqrt(5) 2/3], [5/sqrt(45) 0 2/3]].
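My construction of that third column looks roughly like the following sketch
(stdlib Vector instead of my actual code, with u1 and u2 hard-coded from the
values above):

  require 'matrix'

  u1 = Vector[-2.0, -4.0, 5.0] * (1.0 / Math.sqrt(45))
  u2 = Vector[ 2.0, -1.0, 0.0] * (1.0 / Math.sqrt(5))
  w  = Vector[rand, rand, rand]            # random start vector (redraw if it lies in span(u1, u2))
  w  = w - u1 * u1.inner_product(w) - u2 * u2.inner_product(w)
  u3 = w * (1.0 / Math.sqrt(w.inner_product(w)))   # normalize the remainder
  p u3                                     # expect +/- [1/3, 2/3, 2/3]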
I tried to construct this third column of U from a random start vector, but I
got something like [0.304 0.657 0.657] instead, and determinants for U
between 0.96 and 1.03, where they should be exactly +/-1 (up to rounding)
every time.
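For reference, the kind of checks I'm running against U look roughly like
this (again only a stdlib sketch using the textbook values, not my GSL-based
code):

  require 'matrix'

  u1 = Vector[-2.0, -4.0, 5.0] * (1.0 / Math.sqrt(45))
  u2 = Vector[ 2.0, -1.0, 0.0] * (1.0 / Math.sqrt(5))
  u3 = Vector[ 1.0,  2.0, 2.0] * (1.0 / 3.0)          # textbook third column
  u  = Matrix.columns([u1.to_a, u2.to_a, u3.to_a])
  p u.determinant                                     # expect +1 or -1 (up to rounding)
  p u.transpose * u                                   # expect the 3x3 identity

  a     = Matrix[[10.0, 5.0, -10.0], [2.0, -11.0, 10.0]]
  v     = Matrix.columns([[-1.0, 1.0], [1.0, 1.0]]) * (1.0 / Math.sqrt(2))
  sigma = Matrix[[Math.sqrt(360.0), 0.0, 0.0], [0.0, Math.sqrt(90.0), 0.0]]
  p v * sigma * u.transpose                           # expect a itself (up to rounding)

If U is truly orthogonal, u.transpose * u has to come out as the identity to
Float precision, which is the property my random-vector version apparently
violates.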
Up to the calculation of U, all the results I got agreed with the textbook
example to Float precision.
So, I've checked my Gram-Schmidt implementation, but I can't find any errors
in it, and I can reproduce some worked-out examples from textbooks.
So my guess was that maybe the sigmas I obtained from the GSL eigenvalue
calculation might be to blame, but you tell me that that's not the case.
Best regards,
Axel