
Last Minute Notes – Engineering Mathematics


Matrices


A matrix represents a collection of numbers arranged in rows and columns. The elements of a matrix are enclosed in parentheses or brackets.
A matrix with 9 elements is shown below.

        [ 1  2  3 ]
    M = [ 4  5  6 ]
        [ 7  8  9 ]

This matrix [M] has 3 rows and 3 columns. Each element of matrix [M] can be referred to by its row and column number; for example, a23 = 6 (row 2, column 3).

Order of a Matrix :
The order of a matrix is defined in terms of its number of rows and columns.
Order of a matrix = No. of rows × No. of columns
Therefore matrix [M] is a matrix of order 3 × 3.

Transpose of a Matrix : 
The transpose [M]^T of an m × n matrix [M] is the n × m matrix obtained by interchanging the rows and columns of [M].
If A = [aij]m×n, then A^T = [bij]n×m, where bij = aji.

Properties of transpose of a matrix:

  1. (A^T)^T = A
  2. (A + B)^T = A^T + B^T
  3. (kA)^T = k A^T, where k is a scalar
  4. (AB)^T = B^T A^T

Singular and Nonsingular Matrix:

  1. Singular Matrix: A square matrix is said to be singular if its determinant is zero, i.e. |A| = 0.
  2. Nonsingular Matrix: A square matrix is said to be non-singular if its determinant is non-zero, i.e. |A| ≠ 0.



Square Matrix: A square matrix has as many rows as it has columns, i.e. no. of rows = no. of columns.
Symmetric Matrix: A square matrix is said to be symmetric if its transpose is equal to the matrix itself, i.e. A^T = A.
Skew-symmetric Matrix: A skew-symmetric (or antisymmetric) matrix is a square matrix whose transpose equals its negative, i.e. A^T = -A.
Diagonal Matrix: A diagonal matrix is a matrix in which all the entries outside the main diagonal are zero. The term usually refers to square matrices.
Identity Matrix: A square matrix in which all the elements of the principal diagonal are ones and all other elements are zeros. The identity matrix is denoted by I.
Orthogonal Matrix: A square matrix is said to be orthogonal if AA^T = A^T A = I.
Idempotent Matrix: A matrix is said to be idempotent if A^2 = A.
Involutory Matrix: A matrix is said to be involutory if A^2 = I.
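
These definitions are easy to verify numerically. Below is a minimal sketch, assuming NumPy is available; the matrices are small illustrative examples:

    import numpy as np

    S = np.array([[1, 2], [2, 3]])     # symmetric: S^T = S
    Q = np.array([[0, 1], [-1, 0]])    # orthogonal: Q Q^T = I
    P = np.array([[1, 0], [0, 0]])     # idempotent: P^2 = P

    print(np.array_equal(S.T, S))              # True
    print(np.allclose(Q @ Q.T, np.eye(2)))     # True
    print(np.array_equal(P @ P, P))            # True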

Adjoint of a square matrix:

The adjoint (also called the adjugate) of a square matrix A, written Adj A, is the transpose of its cofactor matrix: if Cij is the cofactor of aij, then Adj A = [Cij]^T.

Properties of Adjoint:

  1. A (Adj A) = (Adj A) A = |A| In, where In is the n × n identity matrix
  2. Adj(AB) = (Adj B)(Adj A)
  3. |Adj A| = |A|^(n-1)
  4. Adj(kA) = k^(n-1) Adj(A)
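
A 2 × 2 worked case makes the definition concrete:

    A = [ a  b ]        Adj A = [  d  -b ]
        [ c  d ]                [ -c   a ]

so that A (Adj A) = (ad - bc) I = |A| I, in line with property 1 above.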


Inverse of a square matrix:

    A^-1 = (Adj A) / |A|

Here |A| should not be equal to zero, meaning matrix A should be non-singular.


Properties of inverse:

1. (A^-1)^-1 = A
2. (AB)^-1 = B^-1 A^-1
3. Only a non-singular square matrix can have an inverse.
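
A quick numerical sketch (NumPy assumed; the matrices are arbitrary non-singular examples) confirming A A^-1 = I and property 2:

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    B = np.array([[1.0, 2.0], [0.0, 1.0]])

    A_inv = np.linalg.inv(A)
    print(np.allclose(A @ A_inv, np.eye(2)))                             # True
    print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ A_inv))   # True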

Trace of a matrix :
Let A = [aij]n×n be a square matrix of order n. The sum of its diagonal elements is called the trace of the matrix, denoted by tr(A): tr(A) = a11 + a22 + a33 + ... + ann. Remember that the trace of a matrix is also equal to the sum of the eigen values of the matrix. For example, for

    A = [ 1  2 ]
        [ 3  4 ]

tr(A) = 1 + 4 = 5.

Properties of trace of matrix: 
Let A and B be any two square matrices of order n, then 
 

  1. tr(kA) = k tr(A) where k is a scalar.
  2. tr(A+B) = tr(A)+tr(B)
  3. tr(A-B) = tr(A)-tr(B)
  4. tr(AB) = tr(BA)
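
A one-line sketch (NumPy assumed; matrices arbitrary) checking the cyclic property 4, tr(AB) = tr(BA):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 1]])
    print(np.trace(A @ B) == np.trace(B @ A))   # True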


Solution of a system of linear equations:
A system of linear equations can have three kinds of possible solutions:

  1. A unique solution
  2. An infinite number of solutions
  3. No solution
Rank of a matrix: The rank of a matrix is the number of non-zero rows in its row-reduced form, or equivalently the maximum number of linearly independent rows (which equals the maximum number of linearly independent columns).
Let A be any m×n matrix; it has square sub-matrices of different orders. A matrix is said to be of rank r if it satisfies the following properties:

  1. It has at least one square sub-matrix of order r with a non-zero determinant.
  2. The determinants of all square sub-matrices of order (r+1) or higher are zero.

Rank is denoted by ρ(A).
If A is a non-singular matrix of order n, then the rank of A is n, i.e. ρ(A) = n.

Properties of rank of a matrix:

  1. If A is a null matrix, then ρ(A) = 0, i.e. the rank of a null matrix is zero.
  2. If In is the n×n identity matrix, then ρ(In) = n.
  3. For any m×n matrix A, ρ(A) ≤ min(m, n); thus ρ(A) ≤ m and ρ(A) ≤ n.
  4. ρ(An×n) = n if |A| ≠ 0.
  5. If ρ(A) = m and ρ(B) = n, then ρ(AB) ≤ min(m, n).
  6. If A and B are square matrices of order n, then ρ(AB) ≥ ρ(A) + ρ(B) - n (Sylvester's inequality).
  7. If Am×1 is a non-zero column matrix and B1×n is a non-zero row matrix, then ρ(AB) = 1.
  8. The rank of a skew-symmetric matrix cannot be equal to one.
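
Ranks can be computed directly; a minimal sketch (NumPy assumed; the matrix is illustrative) that also checks property 3:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [2, 4, 6],    # row 2 = 2 * row 1, so the rank drops
                  [1, 0, 1]])
    r = np.linalg.matrix_rank(A)
    print(r)                    # 2
    print(r <= min(A.shape))    # True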


System of homogeneous linear equations AX = 0

  1. X = 0 is always a solution, i.e. all the unknowns equal zero. (This is called the trivial solution.)
  2. If ρ(A) = number of unknowns, the system has a unique solution (only the trivial one).
  3. If ρ(A) < number of unknowns, the system has an infinite number of solutions.

System of non-homogeneous linear equations AX = B

  1. If ρ[A : B] ≠ ρ(A), no solution.
  2. If ρ[A : B] = ρ(A) = number of unknowns, a unique solution.
  3. If ρ[A : B] = ρ(A) < number of unknowns, an infinite number of solutions.

Here ρ[A : B] is the rank of the augmented matrix [A : B] in the Gauss-elimination representation of AX = B.
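
A minimal sketch (NumPy assumed; the system is an illustrative one) classifying a system by comparing ρ(A) with ρ[A : B]:

    import numpy as np

    # x + y = 2,  2x + 2y = 4  -> dependent equations
    A = np.array([[1, 1], [2, 2]])
    B = np.array([[2], [4]])
    aug = np.hstack([A, B])     # the augmented matrix [A : B]

    rA = np.linalg.matrix_rank(A)
    rAB = np.linalg.matrix_rank(aug)
    if rAB != rA:
        print("no solution")
    elif rA == A.shape[1]:
        print("unique solution")
    else:
        print("infinite solutions")   # printed for this system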
There are two states of a system of linear equations:

  1. Consistent: the system has at least one solution (unique or infinitely many).
  2. Inconsistent: the system has no solution.
Linear dependence and linear independence of vectors:

Linear Dependence: A set of vectors X1, X2, ..., Xr is said to be linearly dependent if there exist r scalars k1, k2, ..., kr, not all zero, such that k1X1 + k2X2 + ... + krXr = 0.

Linear Independence: A set of vectors X1, X2, ..., Xr is said to be linearly independent if k1X1 + k2X2 + ... + krXr = 0 implies k1 = k2 = ... = kr = 0.
How to determine linear dependence or independence?
Let X1, X2, ..., Xr be the given vectors. Construct a matrix with the given vectors as its rows, then apply the rank test below (a code sketch follows the list).

  1. If the rank of the matrix of the given vectors is less than the number of vectors, then the vectors are linearly dependent.
  2. If the rank of the matrix of the given vectors is equal to the number of vectors, then the vectors are linearly independent.
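
A short sketch of that rank test (NumPy assumed; the vectors are illustrative):

    import numpy as np

    vectors = np.array([[1, 0, 2],
                        [0, 1, 1],
                        [1, 1, 3]])   # row 3 = row 1 + row 2
    rank = np.linalg.matrix_rank(vectors)
    if rank < len(vectors):
        print("linearly dependent")   # printed here
    else:
        print("linearly independent")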


 

Eigen Value and Eigen Vector


An eigen vector of a matrix A is a non-zero vector X such that when X is multiplied with matrix A, the direction of the resulting vector AX remains the same as that of X.

Mathematically, the above statement can be represented as:
 

AX = λX

where A is a square matrix, λ is an eigen value and X is an eigen vector corresponding to that eigen value.

Here, we can see that AX is parallel to X. So, X is an eigen vector.

Method to find eigen vectors and eigen values of any square matrix A 
We know that,

AX = λX

=> AX – λX = 0

=> (A – λI) X = 0 …..(1)

Equation (1) has a non-zero solution X only if (A – λI) is singular. That means,

|A – λI| = 0 …..(2)

(2) is known as characteristic equation of the matrix.

The roots of the characteristic equation are the eigen values of the matrix A.

Now, to find the eigen vectors, we simply put each eigen value into (1) and solve it by Gaussian elimination, that is, convert (A – λI)X = 0 to row echelon form and solve the linear system of equations thus obtained.
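
A compact sketch (NumPy assumed; the matrix is an illustrative diagonal example) that finds eigen values and eigen vectors with numpy.linalg.eig and verifies AX = λX:

    import numpy as np

    A = np.array([[2.0, 0.0], [0.0, 3.0]])
    eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigen vectors

    for lam, x in zip(eigvals, eigvecs.T):
        print(np.allclose(A @ x, lam * x))          # True for each pair
    print(np.isclose(eigvals.sum(), np.trace(A)))   # sum of eigen values = trace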

Some important properties of eigen values:

  1. The sum of the eigen values of a matrix is equal to its trace.
  2. The product of the eigen values of a matrix is equal to its determinant.
  3. A and A^T have the same eigen values.
  4. The eigen values of a triangular (or diagonal) matrix are its diagonal elements.
  5. If λ is an eigen value of A, then λ^k is an eigen value of A^k, and 1/λ is an eigen value of A^-1 (for non-singular A).
 

Probability

Probability refers to the extent of occurrence of an event. When an event occurs, like throwing a ball or picking a card from a deck, there must be some probability associated with that event.

Basic Terminologies:

  1. Random Experiment: an experiment whose outcome cannot be predicted with certainty, e.g. tossing a coin.
  2. Sample Space: the set of all possible outcomes of a random experiment, denoted by S.
  3. Event: any subset of the sample space.
Probability of an Event – If there are a total of p possible outcomes associated with a random experiment and q of them are favourable to the event A, then the probability of event A is denoted by P(A) and is given by

    P(A) = q/p

The probability of the non-occurrence of event A is P(A') = 1 – P(A).


Theorems:

  1. Addition theorem: P(A ∪ B) = P(A) + P(B) – P(A ∩ B).
  2. If A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B).
  3. P(A') = 1 – P(A).
Total Law of Probability – Let S be the sample space associated with a random experiment and E1, E2, ..., En be n mutually exclusive and exhaustive events associated with the random experiment. If A is any event which occurs with E1 or E2 or ... or En, then
 

P(A) = P(E1)P(A/E1) + P(E2)P(A/E2) + ... +  P(En)P(A/En)



Conditional Probability 

Conditional probability P(A | B) indicates the probability of event ‘A’ happening given that event B has happened.

    P(A | B) = P(A ∩ B) / P(B),  provided P(B) > 0


Product Rule: 
Derived from the definition of conditional probability above by multiplying both sides with P(B):

   P(A ∩ B) = P(B) * P(A|B) 


Bayes’s formula

    P(A | B) = P(B | A) P(A) / P(B)

With the notation of the total law above, for each i:

    P(Ei | A) = P(Ei) P(A | Ei) / [P(E1)P(A | E1) + ... + P(En)P(A | En)]
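
A tiny sketch (pure Python; the numbers are made up for illustration) applying the total law and Bayes's formula to a two-urn setup:

    # Urn E1 is picked with prob 0.5, urn E2 with prob 0.5.
    # P(red | E1) = 0.8, P(red | E2) = 0.4.
    p_e = [0.5, 0.5]
    p_red_given_e = [0.8, 0.4]

    # Total law: P(red) = sum over i of P(Ei) * P(red | Ei)
    p_red = sum(pe * pr for pe, pr in zip(p_e, p_red_given_e))
    print(p_red)                               # 0.6

    # Bayes: P(E1 | red) = P(E1) * P(red | E1) / P(red)
    print(p_e[0] * p_red_given_e[0] / p_red)   # 0.666...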


Random Variables
A random variable is a function which maps the sample space to the set of real numbers. The purpose is to get an idea about the result of a particular situation where we are given the probabilities of the different outcomes.

Discrete Probability Distribution – If the probabilities are defined on a discrete random variable, one which can only take a discrete set of values, then the distribution is said to be a discrete probability distribution. 

Continuous Probability Distribution – If the probabilities are defined on a continuous random variable, one which can take any value between two numbers, then the distribution is said to be a continuous probability distribution. 

Cumulative Distribution Function –
Similar to the probability density function, the cumulative distribution function F(x) of a real-valued random variable X, evaluated at x, is the probability that X takes a value less than or equal to x, i.e. F(x) = P(X ≤ x).
For a discrete random variable,

    F(x) = Σ over all t ≤ x of P(X = t)

For a continuous random variable with density f,

    F(x) = ∫ from -∞ to x of f(t) dt

 

Uniform Probability Distribution


The Uniform Distribution, also known as the Rectangular Distribution, is a type of continuous probability distribution.
It has a continuous random variable restricted to a finite interval [a, b], and its probability density function is constant over this interval.
The uniform probability density function is defined as:

    f(x) = 1 / (b - a)   for a ≤ x ≤ b
    f(x) = 0             otherwise

Expectation: the mean of the distribution, represented as E[X].
For the uniform distribution,

    E[X] = (a + b) / 2

Variance:
For the uniform distribution,

    Var(X) = (b - a)^2 / 12
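
As a quick check, the mean follows in one line from the definition of expectation:

    E[X] = ∫ from a to b of x/(b - a) dx = (b^2 - a^2) / (2(b - a)) = (a + b)/2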



Exponential Distribution
For a positive real number λ, the probability density function of an exponentially distributed random variable X is given by:

    f(x) = λ e^(-λx)   for x ≥ 0
    f(x) = 0           for x < 0

For this distribution, E[X] = 1/λ and Var(X) = 1/λ^2.



Binomial Distribution:
For n independent trials, each succeeding with probability p, the probability of exactly k successes is:

    P(X = k) = C(n, k) p^k (1 - p)^(n - k)

Mean = np, where p is the probability of success.
Variance = np(1 - p).


Poisson Distribution:
For a positive real number λ, a Poisson-distributed random variable X takes the values k = 0, 1, 2, ... with probability:

    P(X = k) = e^(-λ) λ^k / k!

Mean = Variance = λ.
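
A small pure-Python sketch (the parameter λ = 3 is chosen for illustration) verifying numerically that the probabilities sum to 1 and the mean equals λ:

    from math import exp, factorial

    lam = 3.0

    def pmf(k):
        # Poisson probability P(X = k) = e^(-lam) * lam^k / k!
        return exp(-lam) * lam**k / factorial(k)

    # truncate the infinite sums; terms beyond k = 100 are negligible for lam = 3
    total = sum(pmf(k) for k in range(100))
    mean = sum(k * pmf(k) for k in range(100))
    print(round(total, 6), round(mean, 6))   # 1.0 3.0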




 

Calculus:



Limits, Continuity and Differentiability 

Existence of Limit – The limit of a function f(x) at x = a exists only when its left hand limit and right hand limit exist and are equal, i.e.

    lim (x → a-) f(x) = lim (x → a+) f(x) = lim (x → a) f(x)
 

Some Common Limits –

  1. lim (x → 0) sin(x)/x = 1
  2. lim (x → 0) tan(x)/x = 1
  3. lim (x → 0) (e^x - 1)/x = 1
  4. lim (x → 0) (a^x - 1)/x = ln(a)
  5. lim (x → 0) ln(1 + x)/x = 1
  6. lim (x → 0) (1 + x)^(1/x) = e
  7. lim (x → a) (x^n - a^n)/(x - a) = n a^(n-1)


L’Hospital Rule –
If the given limit lim (x → a) f(x)/g(x) is of the form 0/0 or ∞/∞, i.e. both f(x) and g(x) tend to 0 or both tend to ∞ as x → a, then the limit can be solved by L’Hospital Rule.
If the limit is of the form described above, then the L’Hospital Rule says that:

    lim (x → a) f(x)/g(x) = lim (x → a) f'(x)/g'(x)

where f'(x) and g'(x) are obtained by differentiating f(x) and g(x).
If after differentiating the indeterminate form still exists, the rule can be applied repeatedly until the form is resolved.
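
For instance, lim (x → 0) (e^x - 1)/x is of the form 0/0; one application of the rule gives lim (x → 0) e^x / 1 = 1, matching the table of common limits above.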

Continuity
A function is said to be continuous over a range if its graph is a single unbroken curve over that range.
Formally,
A real valued function f is said to be continuous at a point x = a in its domain if lim (x → a) f(x) exists and is equal to f(a).
If a function f is continuous at x = a, then

    lim (x → a) f(x) = f(a)

Functions that are not continuous are said to be discontinuous.

Differentiability
The derivative of a real valued function f(x) with respect to x is the function f'(x), defined as:

    f'(x) = lim (h → 0) [f(x + h) - f(x)] / h

A function is said to be differentiable if the derivative of the function exists at all points of its domain. For checking the differentiability of a function at a point x = a, the limit

    lim (h → 0) [f(a + h) - f(a)] / h

must exist.

If a function is differentiable at a point, then it is also continuous at that point.
Note – A function being continuous at a point does not imply that the function is also differentiable at that point. For example, f(x) = |x| is continuous at x = 0 but it is not differentiable at that point.

Lagrange’s Mean Value Theorem

Suppose f(x) is a function satisfying two conditions:

1) f(x) is continuous in the closed interval a ≤ x ≤ b

2) f(x) is differentiable in the open interval a < x < b

Then according to Lagrange’s Theorem, there exists at least one point ‘c’ in the open interval (a, b) such that:

    f'(c) = (f(b) - f(a)) / (b - a)
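
A quick worked instance: for f(x) = x^2 on [1, 3], (f(3) - f(1)) / (3 - 1) = (9 - 1)/2 = 4, and f'(c) = 2c = 4 gives c = 2, which lies in (1, 3) as the theorem promises.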



Rolle’s Mean Value Theorem

Suppose f(x) is a function satisfying three conditions:

1) f(x) is continuous in the closed interval a ≤ x ≤ b

2) f(x) is differentiable in the open interval a < x < b

3) f(a) = f(b)

Then according to Rolle’s Theorem, there exists at least one point ‘c’ in the open interval (a, b) such that:

    f'(c) = 0

Indefinite Integrals 


Fundamental Integration Formulas –

  1. ∫x^n dx = x^(n+1)/(n+1) + C, for n ≠ -1
  2. ∫(1/x) dx = ln|x| + C
  3. ∫e^x dx = e^x + C
  4. ∫a^x dx = a^x/ln(a) + C
  5. ∫sin(x) dx = -cos(x) + C
  6. ∫cos(x) dx = sin(x) + C
  7. ∫sec^2(x) dx = tan(x) + C
  8. ∫cosec^2(x) dx = -cot(x) + C
  9. ∫sec(x)tan(x) dx = sec(x) + C
  10. ∫cosec(x)cot(x) dx = -cosec(x) + C
  11. ∫cot(x) dx = ln|sin(x)| + C
  12. ∫tan(x) dx = ln|sec(x)| + C
  13. ∫sec(x) dx = ln|sec(x) + tan(x)| + C
  14. ∫cosec(x) dx = ln|cosec(x) - cot(x)| + C



Definite Integrals:
Definite integrals are the extension of indefinite integrals: a definite integral has limits [a, b] and gives the area under a curve bounded between those limits.

    ∫ from a to b of f(x) dx

denotes the area under the curve f(x) bounded between a and b, where a is the lower limit and b is the upper limit.

Note: If f is a continuous function defined on the closed interval [a, b] and F is an antiderivative of f, then

    ∫ from a to b of f(x) dx = F(b) - F(a)

Here, the function f needs to be well defined and continuous in [a, b].
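
As a worked instance of the note above: for f(x) = x^2 on [0, 1], an antiderivative is F(x) = x^3/3, so

    ∫ from 0 to 1 of x^2 dx = F(1) - F(0) = 1/3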
 

