I want to use Numpy to calculate eigenvalues and eigenvectors. Here is my code:
import numpy as np
from numpy import linalg as LA

lapl = np.array([
    [2, -1, -1, 0, 0, 0],
    [-1, 3, 0, -1, 0, -1],
    [-1, 0, 2, -1, 0, 0],
    [0, -1, -1, 3, -1, 0],
    [0, 0, 0, -1, 2, -1],
    [0, -1, 0, 0, -1, 2]
])

w, v = LA.eigh(lapl)
print('Eigenvalues:', np.round(w, 0))
print('Eigenvectors:', np.round(v, 2))
Here is the result:
Eigenvalues: [0. 1. 2. 3. 3. 5.]
Eigenvectors: [[ 0.41  0.5   0.41 -0.46  0.34  0.29]
 [ 0.41  0.    0.41  0.53  0.23 -0.58]
 [ 0.41  0.5  -0.41 -0.07 -0.57 -0.29]
 [ 0.41  0.   -0.41  0.53  0.23  0.58]
 [ 0.41 -0.5  -0.41 -0.46  0.34 -0.29]
 [ 0.41 -0.5   0.41 -0.07 -0.57  0.29]]
However, when I run the same matrix in Wolfram Alpha, I am getting a different result - eigenvalues are the same, but the eigenvectors are different:
v1 = (1, -2, -1, 2, -1, 1)
v2 = (0, -1, 1, -1, 0, 1)
v3 = (1, -1, 0, -1, 1, 0)
v4 = (1, 1, -1, -1, -1, 1)
v5 = (-1, 0, -1, 0, 1, 1)
v6 = (1, 1, 1, 1, 1, 1)

Why am I getting a different result? What should I do in Python to get the same result as produced by Alpha?
I'm trying to get the eigenvalues and eigenvectors from a square matrix with the following commands:

import numpy as np

A = np.matrix([
    [5, 2, 0],
    [3, 1, -5],
    [11, 4, -4]
])

λ, U = np.linalg.eig(A)
print('Starting matrix:\n', A)
print('\nEigenvalues:\n', λ)
print('\nEigenvectors:\n', U)

The eigenvalues coincide with other software, but the eigenvectors do not, which makes me wonder what I'm doing wrong.

If you need to reproduce eigenvectors from other software, the simple answer is that you are probably making the wrong sort of comparison. Note that the eigenvectors are stored in the columns, not the rows. The usual difference in reported eigenvectors is the sign, i.e., vectors from one algorithm may differ from those of another algorithm by being multiplied by -1. Somewhat apart from that, the use of np.matrix is discouraged; plain old arrays are much preferred. Also note that the returned eigenvectors are normalized to unit length:

In [6]: A = np.array([
   ...:     [5, 2, 0],
   ...:     [3, 1, -5],
   ...:     [11, 4, -4]
   ...: ])

In [7]: e, v = np.linalg.eig(A)

In [8]: np.sqrt(np.sum(v * v, axis=0))
Out[8]: array([1., 1., 1.])
Note that by scaling the vectors, even the sign can change. That's why positive and negative elements might get flipped. Considering these points, the results are actually pretty close. The singularity of the Laplacian shouldn't matter, since there is only one zero eigenvalue, so the corresponding eigenvector is unique (up to sign and length).
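To make that comparison concrete, here is a small sketch (not from any of the answers above) that normalizes Wolfram Alpha's integer eigenvectors, reorders them by eigenvalue, and checks them against the columns returned by eigh, allowing for a sign flip:

import numpy as np
from numpy import linalg as LA

lapl = np.array([
    [2, -1, -1, 0, 0, 0],
    [-1, 3, 0, -1, 0, -1],
    [-1, 0, 2, -1, 0, 0],
    [0, -1, -1, 3, -1, 0],
    [0, 0, 0, -1, 2, -1],
    [0, -1, 0, 0, -1, 2]
])
w, v = LA.eigh(lapl)  # eigenvalues in ascending order: 0, 1, 2, 3, 3, 5

# Wolfram Alpha's unnormalized eigenvectors, reordered to ascending eigenvalue.
wolfram = np.array([
    [1, 1, 1, 1, 1, 1],     # eigenvalue 0 (v6)
    [-1, 0, -1, 0, 1, 1],   # eigenvalue 1 (v5)
    [1, 1, -1, -1, -1, 1],  # eigenvalue 2 (v4)
    [0, -1, 1, -1, 0, 1],   # eigenvalue 3 (v2)
    [1, -1, 0, -1, 1, 0],   # eigenvalue 3 (v3)
    [1, -2, -1, 2, -1, 1],  # eigenvalue 5 (v1)
], dtype=float)

for i, x in enumerate(wolfram):
    x = x / LA.norm(x)  # scale to unit length, as eigh does
    col = v[:, i]       # NumPy stores the eigenvectors in the columns of v
    same = np.allclose(col, x) or np.allclose(col, -x)
    print('column', i, 'equal up to sign:', same)

# The two vectors for the repeated eigenvalue 3 need not agree individually;
# only the plane they span is determined, so those two comparisons may be False.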
The NumPy documentation for numpy.linalg.eig describes it as follows: compute the eigenvalues and right eigenvectors of a square array. This is implemented using the _geev LAPACK routines, which compute the eigenvalues and eigenvectors of general square arrays. See also scipy.linalg.eig, a similar function in SciPy that also solves the generalized eigenvalue problem, and numpy.linalg.eigh, which computes the eigenvalues and eigenvectors of a real symmetric or complex Hermitian (conjugate symmetric) array.
>>> from numpy import linalg as LA
>>> w, v = LA.eig(np.diag((1, 2, 3)))
>>> w; v
array([1., 2., 3.])
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

>>> w, v = LA.eig(np.array([[1, -1], [1, 1]]))
>>> w; v
array([1.+1.j, 1.-1.j])
array([[0.70710678+0.j        , 0.70710678-0.j        ],
       [0.        -0.70710678j, 0.        +0.70710678j]])

>>> a = np.array([[1, 1j], [-1j, 1]])
>>> w, v = LA.eig(a)
>>> w; v
array([2.+0.j, 0.+0.j])
array([[ 0.        +0.70710678j,  0.70710678+0.j        ],  # may vary
       [ 0.70710678+0.j        , -0.        +0.70710678j]])

>>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]])
>>> # Theor. e-values are 1 +/- 1e-9
>>> w, v = LA.eig(a)
>>> w; v
array([1., 1.])
array([[1., 0.],
       [0., 1.]])
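For comparison (this example is not part of the quoted documentation), numpy.linalg.eigh, which the original question uses, is the routine intended for symmetric or Hermitian arrays; it returns the eigenvalues in ascending order and orthonormal eigenvectors in the columns:

import numpy as np
from numpy import linalg as LA

# A small symmetric matrix with eigenvalues 1 and 3.
a = np.array([[2., -1.],
              [-1., 2.]])
w, v = LA.eigh(a)

print(w)                                # [1. 3.] -- ascending order
print(np.allclose(a @ v, v * w))        # True: a @ v[:, i] == w[i] * v[:, i]
print(np.allclose(v.T @ v, np.eye(2)))  # True: the columns are orthonormal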
Though the methods we introduced so far look complicated, the actual calculation of the eigenvalues and eigenvectors in Python is fairly easy. The main built-in function in Python to solve the eigenvalue/eigenvector problem for a square array is the eig function in numpy.linalg. Let's see how we can use it.
import numpy as np
from numpy.linalg import eig

a = np.array([
    [0, 2],
    [2, 3]
])

w, v = eig(a)
print('E-value:', w)
print('E-vector', v)

E-value: [-1.  4.]
E-vector [[-0.89442719 -0.4472136 ]
 [ 0.4472136  -0.89442719]]
a = np.array([
    [2, 2, 4],
    [1, 3, 5],
    [2, 3, 4]
])

w, v = eig(a)
print('E-value:', w)
print('E-vector', v)

E-value: [ 8.80916362  0.92620912 -0.73537273]
E-vector [[-0.52799324 -0.77557092 -0.36272811]
 [-0.604391    0.62277013 -0.7103262 ]
 [-0.59660259 -0.10318482  0.60321224]]
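As a quick check that is not part of the original example, the column convention can be confirmed by verifying that a @ v[:, i] equals w[i] * v[:, i] for each column:

import numpy as np
from numpy.linalg import eig

a = np.array([
    [2, 2, 4],
    [1, 3, 5],
    [2, 3, 4]
])
w, v = eig(a)

# Each *column* of v is an eigenvector of a for the matching entry of w.
for i in range(len(w)):
    print(np.allclose(a @ v[:, i], w[i] * v[:, i]))  # True for every column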
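The code that produced the output below is not included in the excerpt; a minimal sketch that would reproduce it with numpy.linalg.eig (variable names are guesses):

import numpy as np

# Hypothetical reconstruction of the missing example code.
m = np.array([[1, -2],
              [1, 3]])

w, v = np.linalg.eig(m)

print(m)  # the input matrix
print(w)  # complex eigenvalues 2 +/- 1j
print(v)  # eigenvectors in the columns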
Output:
[[ 1 -2]
 [ 1  3]]
[2.+1.j 2.-1.j]
[[ 0.81649658+0.j          0.81649658-0.j        ]
 [-0.40824829-0.40824829j -0.40824829+0.40824829j]]