Numpy | Linear Algebra

The Linear Algebra module of NumPy offers various methods to apply linear algebra to NumPy arrays.
With it, one can compute:

  • the rank, determinant, trace, etc. of an array
  • eigenvalues of matrices
  • matrix and vector products (dot, inner, outer, etc.), matrix exponentiation
  • solutions of linear or tensor equations, and much more!
# Importing numpy as np
import numpy as np

A = np.array([[6, 1, 1],
              [4, -2, 5],
              [2, 8, 7]])

# Rank of a matrix
print("Rank of A:", np.linalg.matrix_rank(A))

# Trace of matrix A
print("\nTrace of A:", np.trace(A))

# Determinant of a matrix
print("\nDeterminant of A:", np.linalg.det(A))

# Inverse of matrix A
print("\nInverse of A:\n", np.linalg.inv(A))

print("\nMatrix A raised to power 3:\n",
           np.linalg.matrix_power(A, 3))

Output:

Rank of A: 3

Trace of A: 11

Determinant of A: -306.0

Inverse of A:
 [[ 0.17647059 -0.00326797 -0.02287582]
 [ 0.05882353 -0.13071895  0.08496732]
 [-0.11764706  0.1503268   0.05228758]]

Matrix A raised to power 3:
 [[336 162 228]
 [406 162 469]
 [698 702 905]]
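
As a quick sanity check (a sketch that is not part of the original snippet, reusing the same matrix A), the inverse can be verified by multiplying it back with A, which should give the identity matrix up to round-off:

# Hedged sketch: verify that A @ inv(A) is (numerically) the identity
import numpy as np

A = np.array([[6, 1, 1],
              [4, -2, 5],
              [2, 8, 7]])

A_inv = np.linalg.inv(A)

# np.allclose tolerates small floating-point round-off
print(np.allclose(A @ A_inv, np.eye(3)))   # True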

 

Matrix Eigenvalue Functions

numpy.linalg.eigh(a, UPLO='L') : This function returns the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. It returns two objects: a 1-D array containing the eigenvalues of a, and a 2-D square array or matrix (depending on the input type) whose columns are the corresponding eigenvectors.


# Python program explaining
# eigh() function

import numpy as np
from numpy import linalg as geek

# Creating a Hermitian array using
# the array function
a = np.array([[1, -2j], [2j, 5]])

print("Array is :", a)

# calculating eigenvalues and eigenvectors
# using the eigh() function
c, d = geek.eigh(a)

print("Eigen values are :", c)
print("Eigen vectors are :", d)

Output :

Array is : [[ 1.+0.j,  0.-2.j],
                [ 0.+2.j,  5.+0.j]]

Eigen values are : [ 0.17157288,  5.82842712]

Eigen vectors are : [[-0.92387953+0.j , -0.38268343+0.j ],
       [ 0.00000000+0.38268343j,  0.00000000-0.92387953j]]

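As a hedged aside (not part of the original example), the meaning of the two returned objects can be checked numerically: a @ d should equal d with each column scaled by the corresponding eigenvalue in c.

# Sketch of a verification, assuming the arrays a, c, d from the example above
import numpy as np
from numpy import linalg as geek

a = np.array([[1, -2j], [2j, 5]])
c, d = geek.eigh(a)

# a @ d scales each eigenvector column by its eigenvalue, i.e. d * c
print(np.allclose(a @ d, d * c))   # True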
 
numpy.linalg.eig(a) : This function is used to compute the eigenvalues and right eigenvectors of a square array.


# Python program explaining
# eig() function

import numpy as np
from numpy import linalg as geek

# Creating a diagonal array using
# the diag function
a = np.diag((1, 2, 3))

print("Array is :", a)

# calculating eigenvalues and eigenvectors
# using the eig() function
c, d = geek.eig(a)

print("Eigen values are :", c)
print("Eigen vectors are :", d)

Output :

Array is : [[1  0  0],
                 [0  2  0],
                 [0  0  3]]

Eigen values are : [ 1  2  3]

Eigen vectors are : [[ 1  0  0],
                 [  0  1  0],
                 [  0  0  1]]

 

Function : Description

  • linalg.eigvals() : Compute the eigenvalues of a general matrix.
  • linalg.eigvalsh(a[, UPLO]) : Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
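
A minimal, hedged sketch of these two helpers, reusing the Hermitian matrix from the eigh() example above; both return only the eigenvalues, without the eigenvectors:

# Hedged sketch: eigvals() / eigvalsh() return eigenvalues only
import numpy as np

a = np.array([[1, -2j], [2j, 5]])

# general routine; may return values with (near-)zero imaginary parts
print("eigvals  :", np.linalg.eigvals(a))
# Hermitian/symmetric routine; returns real values in ascending order
print("eigvalsh :", np.linalg.eigvalsh(a))
# both should be approximately [0.1716, 5.8284], matching the eigh() output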

Matrix and vector products

numpy.dot(vector_a, vector_b, out = None) : Returns the dot product of vectors a and b. It can handle 2-D arrays, but it treats them as matrices and performs matrix multiplication. For N-dimensional arrays, it is a sum product over the last axis of a and the second-to-last axis of b:

dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]) 

Code #1:


# Python Program illustrating
# numpy.dot() method

import numpy as geek

# Scalars
product = geek.dot(5, 4)
print("Dot Product of scalar values  : ", product)

# Complex numbers
vector_a = 2 + 3j
vector_b = 4 + 5j

product = geek.dot(vector_a, vector_b)
print("Dot Product  : ", product)

Output:

Dot Product of scalar values  :  20
Dot Product  :  (-7+22j)

 
How Code #1 works ?

vector_a = 2 + 3j
vector_b = 4 + 5j

now dot product = 2(4 + 5j) + 3j(4 + 5j)
                = 8 + 10j + 12j - 15
                = -7 + 22j
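
Code #1 only uses scalars; the claim above that dot() treats 2-D arrays as matrices can be illustrated with a short sketch (the matrices here are illustrative, not from the original article):

# Hedged sketch: numpy.dot() on 2-D arrays performs matrix multiplication
import numpy as geek

a = geek.array([[1, 2],
                [3, 4]])
b = geek.array([[5, 6],
                [7, 8]])

# [[1*5 + 2*7, 1*6 + 2*8],
#  [3*5 + 4*7, 3*6 + 4*8]]  ->  [[19, 22], [43, 50]]
print(geek.dot(a, b))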

 
numpy.vdot(vector_a, vector_b) : Returns the dot product of vectors a and b. If the first argument is complex, the complex conjugate of the first argument is used for the calculation of the dot product (this is where vdot() differs from dot()). It can handle multi-dimensional arrays, but it flattens them first.

Code #1:


# Python Program illustrating
# numpy.vdot() method

import numpy as geek

# Complex numbers
vector_a = 2 + 3j
vector_b = 4 + 5j

product = geek.vdot(vector_a, vector_b)
print("Dot Product  : ", product)

Output :

Dot Product  :  (23-2j)

 
How Code #1 works ?

vector_a = 2 + 3j
vector_b = 4 + 5j

As per the method, take the conjugate of vector_a, i.e. 2 - 3j

now dot product = 2(4 + 5j) - 3j(4 + 5j)
                = 8 + 10j - 12j + 15
                = 23 - 2j
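
The flattening behaviour mentioned above can be sketched as follows (illustrative matrices, not from the original article); unlike dot(), vdot() on 2-D inputs simply takes the dot product of the flattened arrays:

# Hedged sketch: vdot() flattens multi-dimensional inputs
import numpy as geek

a = geek.array([[1, 2],
                [3, 4]])
b = geek.array([[5, 6],
                [7, 8]])

# 1*5 + 2*6 + 3*7 + 4*8 = 70
print(geek.vdot(a, b))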

Function : Description

  • matmul() : Matrix product of two arrays.
  • inner() : Inner product of two arrays.
  • outer() : Compute the outer product of two vectors.
  • linalg.multi_dot() : Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order.
  • tensordot() : Compute the tensor dot product along specified axes for arrays >= 1-D.
  • einsum() : Evaluate the Einstein summation convention on the operands.
  • einsum_path() : Evaluate the lowest-cost contraction order for an einsum expression by considering the creation of intermediate arrays.
  • linalg.matrix_power() : Raise a square matrix to the (integer) power n.
  • kron() : Kronecker product of two arrays.
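
A brief, hedged sketch of a few of the products listed above, on small illustrative arrays:

# Hedged sketch of some of the product functions listed above
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
M = np.array([[1, 2], [3, 4]])

print(np.inner(u, v))            # 1*4 + 2*5 + 3*6 = 32
print(np.outer(u, v))            # 3x3 matrix of pairwise products
print(np.matmul(M, M))           # matrix product, same as M @ M
print(np.einsum('i,i->', u, v))  # Einstein-summation form of the inner product
print(np.kron(np.eye(2), M))     # 4x4 block-diagonal Kronecker product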

Solving equations and inverting matrices

numpy.linalg.solve() : Solve a linear matrix equation, or a system of linear scalar equations. It computes the “exact” solution, x, of the well-determined, i.e., full-rank, linear matrix equation ax = b.

# Python Program illustrating
# numpy.linalg.solve() method

import numpy as np

# Creating an array using array
# function
a = np.array([[1, 2], [3, 4]])

# Creating an array using array
# function
b = np.array([8, 18])

print(("Solution of linear equations:", 
      np.linalg.solve(a, b)))

Output:

Solution of linear equations: [ 2.  3.]
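
As a hedged check (not in the original snippet), the returned solution can be substituted back into ax = b:

# Sketch of a verification, assuming a and b from the example above
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([8, 18])

x = np.linalg.solve(a, b)
print(np.allclose(a @ x, b))   # True: x really satisfies ax = b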

 
numpy.linalg.lstsq() : Return the least-squares solution to a linear matrix equation. It solves the equation ax = b by computing a vector x that minimizes the Euclidean 2-norm || b – ax ||^2. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.

# Python Program illustrating
# numpy.linalg.lstsq() method


import numpy as np
import matplotlib.pyplot as plt

# x co-ordinates
x = np.arange(0, 9)
A = np.array([x, np.ones(9)])

# linearly generated sequence
y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24]
# obtaining the parameters of the regression line
w = np.linalg.lstsq(A.T, y, rcond=None)[0]

# plotting the line
line = w[0]*x + w[1] # regression line
plt.plot(x, line, 'r-')
plt.plot(x, y, 'o')
plt.show()

Output:

[Plot of the data points with the fitted regression line]

Function : Description

  • numpy.linalg.tensorsolve() : Solve the tensor equation ax = b for x.
  • numpy.linalg.inv() : Compute the (multiplicative) inverse of a matrix.
  • numpy.linalg.pinv() : Compute the (Moore-Penrose) pseudo-inverse of a matrix.
  • numpy.linalg.tensorinv() : Compute the ‘inverse’ of an N-dimensional array.
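
A minimal, hedged sketch of pinv() on a non-square matrix, where an ordinary inverse does not exist (the matrix is illustrative, not from the original article):

# Hedged sketch: pinv() handles non-square (or singular) matrices
import numpy as np

a = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])   # 3x2, so np.linalg.inv(a) is not defined

a_pinv = np.linalg.pinv(a)

# for a full-column-rank matrix, pinv(a) @ a is the 2x2 identity
print(np.allclose(a_pinv @ a, np.eye(2)))   # True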

Special Functions

numpy.linalg.det() : Compute the determinant of an array.

# Python Program illustrating
# numpy.linalg.det() method

import numpy as np

# creating an array using 
# array method
A = np.array([[6, 1, 1],
              [4, -2, 5],
              [2, 8, 7]])


print(("\nDeterminant of A:"
     , np.linalg.det(A)))

Output:

Determinant of A: -306.0

 
numpy.trace() : Return the sum along the diagonals of the array. If a is 2-D, the sum along its diagonal with the given offset is returned, i.e., the sum of elements a[i, i+offset] for all i. If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-arrays whose traces are returned. The shape of the resulting array is the same as that of a with axis1 and axis2 removed.

# Python Program illustrating
# numpy.trace() method

import numpy as np

# creating an array using 
# array method
A = np.array([[6, 1, 1],
              [4, -2, 5],
              [2, 8, 7]])


print("\nTrace of A:", np.trace(A))


Output:

Trace of A: 11
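
The offset parameter mentioned in the description can be sketched as follows, reusing the same matrix A (a hedged illustration, not part of the original snippet):

# Hedged sketch: trace() with a diagonal offset
import numpy as np

A = np.array([[6, 1, 1],
              [4, -2, 5],
              [2, 8, 7]])

# offset=1 sums the diagonal just above the main one: 1 + 5 = 6
print(np.trace(A, offset=1))

# offset=-1 sums the diagonal just below the main one: 4 + 8 = 12
print(np.trace(A, offset=-1))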

 

Function : Description

  • numpy.linalg.norm() : Matrix or vector norm.
  • numpy.linalg.cond() : Compute the condition number of a matrix.
  • numpy.linalg.matrix_rank() : Return the matrix rank of an array using the SVD method.
  • numpy.linalg.cholesky() : Cholesky decomposition.
  • numpy.linalg.qr() : Compute the QR factorization of a matrix.
  • numpy.linalg.svd() : Singular Value Decomposition.
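
A short, hedged sketch of norm() and svd(), reusing the matrix A from the earlier examples:

# Hedged sketch of norm() and svd()
import numpy as np

A = np.array([[6, 1, 1],
              [4, -2, 5],
              [2, 8, 7]])

print("Frobenius norm of A:", np.linalg.norm(A))

# Singular Value Decomposition: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True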


