Above is the solution from MATLAB's built-in function.
Below is a neural-network program implemented in the MATLAB language.
Although the eigenvalues and eigenvectors look quite different from those returned by MATLAB's built-in function, the way to verify the result is to check
the difference between V*e*transpose(V) and the original matrix.
Judging from that check, the result is quite good.
Below is the result of the C version.
Below are the M (MATLAB) version and the C version of the code, respectively.
function [Eigen,V]=eigenvalue(A)
assert(mean2(A-transpose(A))==0,'the input should be a symmetrical matrix');
learning_rate=0.00255;
punish_rate=1;
epoch=65000;
[m,n]=size(A);
Eigen=eye(m,n);
V=eye(m,n);
%initialize X with Gaussian white noise
X=randn(m,n); %normally distributed
for i=1:epoch
    B=transpose(V)*X;
    E2=X-V*B;
    U=Eigen*B;
    E1=A*X-V*U;
    V=V+learning_rate*(E1*transpose(U)+punish_rate*E2*transpose(B));
    Eigen=Eigen+learning_rate*transpose(V)*E1*transpose(B);
    for mi=1:m %zero the off-diagonal entries so Eigen stays diagonal
        for ni=1:n
            if mi~=ni
                Eigen(mi,ni)=0;
            end
        end
    end
end
The code is not very efficient; with so many loops, the C version takes several minutes to run.
#include <assert.h>
#include "matrix.h"

Vect matrix_eigenvalue(const Matrix& mat, /*out*/ Matrix& eigenvector)
{
    assert(matrix_is_symmetrical(mat));
    int size=mat.size();
    Matrix V(size); //eigenvector estimate
    Vect vv(size);
    for (int i=0;i<size;i++){
        V[i]=vv;
    }
    //initialize the eigenvalue matrix and the eigenvector matrix with the identity matrix
    matrix_identify(V);
    Matrix eigen_mat=V;
    //initialize the stimulus with zero-mean Gaussian white noise
    Matrix X=V;
    matrix_rand_normal(X);
    double learning_rate=0.00255;
    double punish_rate=1;
    int epoch=65000;
    Matrix error_1;
    Matrix error_2;
    Matrix U;
    Matrix B;
    while (epoch-->0){
        B=matrix_transpose(V)*X;
        error_2=X-V*B;
        U=eigen_mat*B;
        error_1=mat*X-V*U;
        V=V+learning_rate*(error_1*matrix_transpose(U)+punish_rate*error_2*matrix_transpose(B));
        eigen_mat=eigen_mat+learning_rate*matrix_transpose(V)*error_1*matrix_transpose(B);
        //diagonalize: zero the off-diagonal entries
        for (int i=0;i<size;i++){
            for (int j=0;j<size;j++){
                if (i!=j){
                    eigen_mat[i][j]=0;
                }
            }
        }
    }
    eigenvector=V;
    Vect diagonal(size);
    for (int i=0;i<size;i++){
        diagonal[i]=eigen_mat[i][i];
    }
    return diagonal;
}
Posted 2010-07-04 19:20:00