# Last Minute Notes – Engineering Mathematics


### Matrices

A matrix is a rectangular arrangement of numbers in rows and columns. The elements of a matrix are enclosed in parentheses or brackets.

A matrix with 9 elements is shown below:

M = [ 1 2 3 ; 4 5 6 ; 7 8 9 ]   (rows separated by semicolons)

This matrix [M] has 3 rows and 3 columns. Each element of matrix [M] can be referred to by its row and column number. For example, a_{23} = 6 (row 2, column 3).

**Order of a Matrix :**

The order of a matrix is defined in terms of its number of rows and columns.

Order of a matrix = No. of rows × No. of columns

Therefore Matrix [M] is a matrix of order 3 × 3.

**Transpose of a Matrix :**

The transpose [M]^{T} of an m x n matrix [M] is the n x m matrix obtained by interchanging the rows and columns of [M].

If A = [a_{ij}]_{m×n}, then A^{T} = [b_{ij}]_{n×m}, where b_{ij} = a_{ji}.

**Properties of transpose of a matrix:**

- (A^{T})^{T} = A
- (A+B)^{T} = A^{T} + B^{T}
- (AB)^{T} = B^{T}A^{T}
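The reversal in the last property, (AB)^{T} = B^{T}A^{T}, is easy to check numerically. A minimal sketch with plain nested lists (no external libraries):

```python
# Verify (AB)^T == B^T A^T for small matrices represented as nested lists.

def transpose(M):
    """Interchange the rows and columns of M."""
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    """Multiply compatible matrices A (m x n) and B (n x p)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```

Note that transposing AB using A^{T}B^{T} (without reversing the order) would fail for non-commuting matrices such as these.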

**Singular and Nonsingular Matrix:**

- Singular Matrix: A square matrix is said to be singular if its determinant is zero, i.e. |A| = 0.
- Nonsingular Matrix: A square matrix is said to be non-singular if its determinant is non-zero, i.e. |A| ≠ 0.

**Square Matrix:** A square matrix has as many rows as it has columns, i.e. no. of rows = no. of columns.

**Symmetric Matrix:** A square matrix is said to be symmetric if its transpose is equal to the original matrix, i.e. A^{T} = A.

**Skew-symmetric Matrix:** A skew-symmetric (or antisymmetric) matrix is a square matrix whose transpose equals its negative, i.e. A^{T} = -A.

**Diagonal Matrix:** A matrix in which the entries outside the main diagonal are all zero. The term usually refers to square matrices.

**Identity Matrix:** A square matrix in which all the elements of the principal diagonal are ones and all other elements are zeros. The identity matrix is denoted by I.

**Orthogonal Matrix:** A matrix is said to be orthogonal if AA^{T} = A^{T}A = I.

**Idempotent Matrix:** A matrix is said to be idempotent if A^{2} = A.

**Involutory Matrix:** A matrix is said to be involutory if A^{2} = I.

**Adjoint of a square matrix:** The adjoint of a square matrix A, written Adj A, is the transpose of the matrix of cofactors of A.

**Properties of Adjoint:**

- A(Adj A) = (Adj A)A = |A| I_{n}
- Adj(AB) = (Adj B).(Adj A)
- |Adj A| = |A|^{n-1}
- Adj(kA) = k^{n-1} Adj(A)

**Inverse of a square matrix:**

A^{-1} = (Adj A) / |A|

Here |A| should not be equal to zero, i.e. matrix A must be non-singular.
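For a 2×2 matrix the adjoint is simply [[d, -b], [-c, a]], so the inverse formula A^{-1} = (Adj A)/|A| can be sketched directly, using `fractions.Fraction` for exact arithmetic:

```python
# A minimal sketch of A^{-1} = (Adj A) / |A| for a 2x2 matrix.

from fractions import Fraction

def inverse_2x2(A):
    """Invert a non-singular 2x2 matrix via its adjoint."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    adj = [[d, -b], [-c, a]]  # adjoint = transpose of the cofactor matrix
    return [[Fraction(x, det) for x in row] for row in adj]

A = [[4, 7], [2, 6]]
inv = inverse_2x2(A)
# A * A^{-1} should be the identity matrix
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```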

**Properties of inverse:**

1. (A^{-1})^{-1} = A

2. (AB)^{-1} = B^{-1}A^{-1}

3. Only a non-singular square matrix can have an inverse.

**Trace of a matrix :**

Let A = [a_{ij}]_{n×n} be a square matrix of order n. The sum of its diagonal elements is called the trace of the matrix, denoted tr(A): tr(A) = a_{11} + a_{22} + a_{33} + … + a_{nn}. Remember that the trace of a matrix is also equal to the sum of the eigen values of the matrix.

**Properties of trace of matrix:**

Let A and B be any two square matrices of order n, then

- tr(kA) = k tr(A) where k is a scalar.
- tr(A+B) = tr(A)+tr(B)
- tr(A-B) = tr(A)-tr(B)
- tr(AB) = tr(BA)
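The last property is the most commonly tested one: tr(AB) = tr(BA) even when AB ≠ BA. A quick check with plain nested lists:

```python
# Verify tr(AB) == tr(BA) for a pair of non-commuting matrices.

def trace(M):
    """Sum of the diagonal elements."""
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
assert matmul(A, B) != matmul(B, A)                 # AB and BA differ...
assert trace(matmul(A, B)) == trace(matmul(B, A))   # ...but traces agree
# tr(A+B) = tr(A) + tr(B)
assert trace([[a + b for a, b in zip(r1, r2)]
              for r1, r2 in zip(A, B)]) == trace(A) + trace(B)
```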

**Solution of a system of linear equations:**

Linear equations can have three kind of possible solutions:

- No Solution
- Unique Solution
- Infinite Solution

**Rank of a matrix:** Rank of matrix is the number of non-zero rows in the row reduced form or the maximum number of independent rows or the maximum number of independent columns.

Let A be any mxn matrix and it has square sub-matrices of different orders. A matrix is said to be of rank r, if it satisfies the following properties:

- It has at least one square sub-matrices of order r who has non-zero determinant.
- All the determinants of square sub-matrices of order (r+1) or higher than r are zero.

*Rank is denoted as P(A).*

If A is a non-singular matrix of order n, then rank of A = n, i.e. P(A) = n.

**Properties of rank of a matrix:**

- If A is a null matrix then P(A) = 0, i.e. the rank of a null matrix is zero.
- If I_{n} is the n×n unit matrix then P(I_{n}) = n.
- For any m×n matrix A, P(A) ≤ min(m, n). Thus P(A) ≤ m and P(A) ≤ n.
- P(A_{n×n}) = n if |A| ≠ 0.
- If P(A) = m and P(B) = n then P(AB) ≤ min(m, n).
- If A and B are square matrices of order n then P(AB) ≥ P(A) + P(B) − n (Sylvester's inequality).
- If A_{m×1} is a non-zero column matrix and B_{1×n} is a non-zero row matrix then P(AB) = 1.
- The rank of a skew-symmetric matrix cannot be equal to one.

**System of homogeneous linear equations AX = 0**.

- X = 0 is always a solution, i.e. all the unknowns equal zero. (This is called the trivial solution.)
- If P(A) = number of unknowns: unique (trivial) solution.
- If P(A) < number of unknowns: infinitely many solutions.

**System of non-homogeneous linear equations AX = B**.

- If P[A:B] ≠ P(A): no solution.
- If P[A:B] = P(A) = number of unknowns: unique solution.
- If P[A:B] = P(A) < number of unknowns: infinitely many solutions.

Here P[A:B] is the rank of the augmented matrix [A:B] of the system AX = B, obtained during Gaussian elimination.
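The rank tests for AX = B can be sketched with a small Gaussian elimination over exact fractions (a minimal illustration, not a production solver):

```python
# Classify AX = B by comparing P(A), P([A:B]) and the number of unknowns.

from fractions import Fraction

def rank(M):
    """Rank = number of pivots found during row reduction."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, B):
    aug = [row + [b] for row, b in zip(A, B)]   # augmented matrix [A:B]
    pa, pab, unknowns = rank(A), rank(aug), len(A[0])
    if pab != pa:
        return "no solution"
    return "unique solution" if pa == unknowns else "infinite solutions"

assert classify([[1, 1], [1, 1]], [2, 3]) == "no solution"
assert classify([[1, 0], [0, 1]], [1, 2]) == "unique solution"
assert classify([[1, 1], [2, 2]], [2, 4]) == "infinite solutions"
```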

There are two states of the Linear equation system:

**Consistent State:** A system of equations having one or more solutions is called a consistent system of equations.

**Inconsistent State:** A system of equations having no solution is called an inconsistent system of equations.

**Linear dependence and Linear independence of vector:**

**Linear Dependence:** A set of vectors X_{1}, X_{2}, …, X_{r} is said to be linearly dependent if there exist r scalars k_{1}, k_{2}, …, k_{r}, not all zero, such that k_{1}X_{1} + k_{2}X_{2} + … + k_{r}X_{r} = 0.

**Linear Independence:** A set of vectors X_{1}, X_{2}, …, X_{r} is said to be linearly independent if the only scalars k_{1}, k_{2}, …, k_{r} satisfying k_{1}X_{1} + k_{2}X_{2} + … + k_{r}X_{r} = 0 are k_{1} = k_{2} = … = k_{r} = 0.

**How to determine linear dependence and independence?**

Let X_{1}, X_{2}, …, X_{r} be the given vectors. Construct a matrix with the given vectors as its rows.

- If the rank of the matrix of the given vectors is less than the number of vectors, then the vectors are linearly dependent.
- If the rank of the matrix of the given vectors is equal to the number of vectors, then the vectors are linearly independent.
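A common shortcut for r vectors in R^{r}: stack them as rows of a square matrix; a non-zero determinant means rank r, i.e. the vectors are linearly independent. A small sketch for three vectors in R^{3}:

```python
# Independence test for three vectors in R^3 via the 3x3 determinant.

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# v3 = v1 + v2, so these vectors are linearly dependent (det = 0)
assert det3([[1, 0, 0], [0, 1, 0], [1, 1, 0]]) == 0
# the standard basis vectors are linearly independent (det != 0)
assert det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```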

### Eigen Value and Eigen Vector

An eigen vector of a matrix A is a non-zero vector X such that when X is multiplied by A, the direction of the resultant vector AX remains the same as that of X.

Mathematically, above statement can be represented as:

AX = λX

where A is any arbitrary square matrix, λ is an eigen value, and X is an eigen vector corresponding to that eigen value.

Here, we can see that AX is parallel to X. So, X is an eigen vector.

__Method to find eigen vectors and eigen values of any square matrix A__

We know that,

AX = λX

=> AX − λX = 0

=> (A − λI)X = 0 …..(1)

The above condition will be true only if (A − λI) is singular. That means,

|A − λI| = 0 …..(2)

(2) is known as the characteristic equation of the matrix.

The roots of the characteristic equation are the eigen values of the matrix A.

Now, to find the eigen vectors, we simply put each eigen value into (1) and solve it by Gaussian elimination, that is, convert (A − λI)X = 0 to row echelon form and solve the resulting linear system of equations.
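For a 2×2 matrix the characteristic equation is just λ² − (tr A)λ + |A| = 0, so the eigen values come straight from the quadratic formula. A minimal sketch, assuming real eigen values (non-negative discriminant):

```python
# Eigen values of a 2x2 matrix from its characteristic equation.

import math

def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det      # discriminant of λ² - tr·λ + det = 0
    s = math.sqrt(disc)           # real eigen values assumed (disc >= 0)
    return (tr + s) / 2, (tr - s) / 2

# for a diagonal matrix the eigen values are the diagonal entries
assert eigenvalues_2x2([[2, 0], [0, 3]]) == (3.0, 2.0)
```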

**Some important properties of eigen values**

- Eigen values of real symmetric and Hermitian matrices are real.
- Eigen values of real skew-symmetric and skew-Hermitian matrices are either purely imaginary or zero.
- Eigen values of unitary and orthogonal matrices are of unit modulus, |λ| = 1.
- If λ_{1}, λ_{2}, …, λ_{n} are the eigen values of A, then kλ_{1}, kλ_{2}, …, kλ_{n} are the eigen values of kA.
- If λ_{1}, λ_{2}, …, λ_{n} are the eigen values of A, then 1/λ_{1}, 1/λ_{2}, …, 1/λ_{n} are the eigen values of A^{-1}.
- If λ_{1}, λ_{2}, …, λ_{n} are the eigen values of A, then λ_{1}^{k}, λ_{2}^{k}, …, λ_{n}^{k} are the eigen values of A^{k}.
- Eigen values of A = eigen values of A^{T} (transpose).
- Sum of eigen values = trace of A (sum of the diagonal elements of A).
- Product of eigen values = |A|.
- Maximum number of distinct eigen values of A = size of A.
- If A and B are two matrices of the same order, then the eigen values of AB = eigen values of BA.
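The trace and determinant properties above are quick sanity checks in exams. A numeric check for a 2×2 matrix, computing the eigen values from the characteristic equation λ² − (tr A)λ + |A| = 0:

```python
# Check: sum of eigen values = trace, product of eigen values = |A|.

import math

A = [[4, 1], [2, 3]]
tr = A[0][0] + A[1][1]                        # trace of A = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # |A| = 10
s = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + s) / 2, (tr - s) / 2       # eigen values: 5 and 2

assert lam1 + lam2 == tr     # sum of eigen values = trace of A
assert lam1 * lam2 == det    # product of eigen values = |A|
```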

### Probability

**Probability** refers to the extent of occurrence of events. When an event occurs, like throwing a ball or picking a card from a deck, there must be some probability associated with that event.

**Basic Terminologies:**

- **Random Event:** If an experiment, repeated several times under similar conditions, does not produce the same outcome every time, but the outcome of a trial is one of several possible outcomes, then the experiment is called a random (or probabilistic) event.
- **Elementary Event:** Each single outcome of a random event is known as an elementary event.
- **Sample Space:** The set of all possible outcomes of a random event. For example, when a coin is tossed, the possible outcomes are head and tail.
- **Event:** A subset of the sample space associated with a random event.
- **Occurrence of an Event:** An event associated with a random event is said to occur if any one of the elementary events belonging to it is the outcome.
- **Sure Event:** An event that always occurs whenever the random event is performed.
- **Impossible Event:** An event that never occurs whenever the random event is performed.
- **Compound Event:** An event that is the disjoint union of two or more elementary events.
- **Mutually Exclusive Events:** Two or more events such that the occurrence of any one prevents the occurrence of all the others; no two of them can occur simultaneously.
- **Exhaustive Events:** Two or more events whose union is the sample space.

**Probability of an Event –** If there are total **p** possible outcomes associated with a random experiment and **q** of them are favourable outcomes to the event A, then the probability of event A is denoted by P(A) and is given by

P(A) = q/p

The probability of non-occurrence of event A is P(A′) = 1 − P(A).

**Note –**

- If the value of P(A) = 1, then event A is called sure event .
- If the value of P(A) = 0, then event A is called impossible event.
- Also, P(A) + P(A’) = 1

**Theorems:**

**General –** Let A, B, C be events associated with a random experiment. Then:

- P(A∪B) = P(A) + P(B) − P(A∩B)
- P(A∪B) = P(A) + P(B), if A and B are mutually exclusive
- P(A∪B∪C) = P(A) + P(B) + P(C) − P(A∩B) − P(B∩C) − P(C∩A) + P(A∩B∩C)
- P(A∩B′) = P(A) − P(A∩B)
- P(A′∩B) = P(B) − P(A∩B)

**Extension of Multiplication Theorem –** Let A_{1}, A_{2}, …, A_{n} be n events associated with a random experiment. Then P(A_{1}∩A_{2}∩A_{3}∩ … ∩A_{n}) = P(A_{1})P(A_{2}/A_{1})P(A_{3}/A_{1}∩A_{2}) … P(A_{n}/A_{1}∩A_{2}∩A_{3}∩ … ∩A_{n-1}).

**Total Law of Probability –** Let S be the sample space associated with a random experiment and E_{1}, E_{2}, …, E_{n} be n mutually exclusive and exhaustive events associated with the random experiment . If A is any event which occurs with E_{1} or E_{2} or … or E_{n}, then

P(A) = P(E_{1})P(A/E_{1}) + P(E_{2})P(A/E_{2}) + ... + P(E_{n})P(A/E_{n})
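A small worked example of the total law (the two-urn numbers here are made up for illustration): an urn is picked uniformly at random, then P(red) = Σ P(E_i)·P(red | E_i).

```python
# Total law of probability with exact fractions:
# urn 1 holds 3 red of 5 balls, urn 2 holds 1 red of 5 balls.

from fractions import Fraction as F

p_urn = [F(1, 2), F(1, 2)]             # P(E1), P(E2): which urn is picked
p_red_given_urn = [F(3, 5), F(1, 5)]   # P(red | E1), P(red | E2)

p_red = sum(pe * pr for pe, pr in zip(p_urn, p_red_given_urn))
assert p_red == F(2, 5)   # 1/2·3/5 + 1/2·1/5 = 2/5
```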

Conditional probability P(A | B) indicates the probability of event ‘A’ happening given that event B happened.

**Product Rule:**

Derived from the above definition of conditional probability by multiplying both sides by P(B):

P(A∩B) = P(B) · P(A|B)

**Random Variables**

A random variable is a function which maps the sample space to the set of real numbers. The purpose is to get an idea about the result of a particular situation where we are given the probabilities of different outcomes.

**Discrete Probability Distribution –** If the probabilities are defined on a discrete random variable, one which can only take a discrete set of values, then the distribution is said to be a discrete probability distribution.

**Continuous Probability Distribution –** If the probabilities are defined on a continuous random variable, one which can take any value between two numbers, then the distribution is said to be a continuous probability distribution.

**Cumulative Distribution Function –**

Similar to the probability density function, the **cumulative distribution function** F_{X} of a real-valued random variable X, evaluated at x, is the probability that X will take a value less than or equal to x: F_{X}(x) = P(X ≤ x).

For a discrete random variable, F_{X}(x) = Σ_{t ≤ x} P(X = t).

For a continuous random variable with density f_{X}, F_{X}(x) = ∫_{-∞}^{x} f_{X}(t) dt.

**Uniform Probability Distribution –**

The uniform distribution, also known as the **rectangular distribution**, is a type of continuous probability distribution.

It has a continuous random variable restricted to a finite interval, and its probability function has a constant density over this interval.

The uniform probability density function on [a, b] is defined as:

f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise.

**Expectation:** The mean of the distribution, represented as E[X].

**Variance:** Var(X) = E[X^{2}] − (E[X])^{2}.

For the uniform distribution on [a, b], E[X] = (a + b)/2 and Var(X) = (b − a)^{2}/12.
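The closed-form mean and variance of the uniform distribution on [a, b] can be kept exact with fractions, which is handy for checking hand calculations:

```python
# Mean and variance of X ~ U(a, b) from the closed forms.

from fractions import Fraction as F

def uniform_mean_var(a, b):
    """E[X] = (a+b)/2, Var(X) = (b-a)^2 / 12 for X ~ U(a, b)."""
    return F(a + b, 2), F((b - a) ** 2, 12)

mean, var = uniform_mean_var(0, 6)
assert mean == 3 and var == 3   # (0+6)/2 = 3, 36/12 = 3
```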

**Exponential Distribution**

For a positive real number λ, the probability density function of an exponentially distributed random variable X is given by:

f(x) = λe^{−λx} for x ≥ 0, and f(x) = 0 for x < 0.

**Binomial Distribution:** **Mean** = np, where n is the number of trials and p is the probability of success. **Variance** = np(1 − p).
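The binomial mean np and variance np(1 − p) can be cross-checked by summing over the full pmf with `math.comb`:

```python
# Verify binomial mean and variance by direct summation over the pmf.

from math import comb

def binomial_mean_var(n, p):
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    var = sum(k * k * pk for k, pk in enumerate(pmf)) - mean**2
    return mean, var

mean, var = binomial_mean_var(10, 0.5)
assert abs(mean - 10 * 0.5) < 1e-9         # mean = np
assert abs(var - 10 * 0.5 * 0.5) < 1e-9    # variance = np(1-p)
```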

### Calculus:

**Limits, Continuity and Differentiability**

**Existence of Limit –** The limit of a function f(x) at x = a exists only when its left hand limit and right hand limit exist and are equal, i.e.

lim_{x→a⁻} f(x) = lim_{x→a⁺} f(x) = lim_{x→a} f(x)

**Some Common Limits –**

- lim_{x→0} (sin x)/x = 1
- lim_{x→0} (e^{x} − 1)/x = 1
- lim_{x→0} (a^{x} − 1)/x = log_{e} a
- lim_{x→∞} (1 + 1/x)^{x} = e
- lim_{x→a} (x^{n} − a^{n})/(x − a) = n·a^{n−1}

**L’Hospital Rule –**

If the given limit lim_{x→a} f(x)/g(x) is of the form 0/0 or ∞/∞, i.e. both f(x) and g(x) tend to 0, or both tend to ±∞, as x → a, then the limit can be solved by **L’Hospital Rule**.

If the limit is of the form described above, then the L’Hospital Rule says that

lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x)

where f′(x) and g′(x) are obtained by differentiating f(x) and g(x).

If after differentiating, the form 0/0 or ∞/∞ still exists, then the rule can be applied repeatedly until the form changes.

**Continuity**

A function is said to be continuous over a range if its graph is a single unbroken curve.

Formally,

A real valued function f(x) is said to be continuous at a point x = a of its domain if

lim_{x→a} f(x) exists and is equal to f(a).

If a function f(x) is continuous at x = a, then

lim_{x→a⁻} f(x) = lim_{x→a⁺} f(x) = f(a)

Functions that are not continuous are said to be discontinuous.

**Differentiability**

The derivative of a real valued function f(x) with respect to x is the function f′(x), defined as

f′(x) = lim_{h→0} [f(x + h) − f(x)] / h

A function is said to be **differentiable** if the derivative of the function exists at all points of its domain. For checking the differentiability of a function at the point x = a,

f′(a) = lim_{h→0} [f(a + h) − f(a)] / h

must exist.
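The limit definition above can be approximated numerically with a central difference (a sketch; the step size h is hand-picked):

```python
# Numerical approximation to the derivative via the central difference.

def derivative(f, x, h=1e-6):
    """Approximate f'(x) by the central difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x^2 has f'(3) = 6
assert abs(derivative(lambda t: t * t, 3.0) - 6.0) < 1e-6
```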

If a function is differentiable at a point, then it is also continuous at that point. **Note –** Continuity at a point does not imply that the function is also differentiable at that point. For example, f(x) = |x| is continuous at x = 0 but it is not differentiable at that point.

**Lagrange’s Mean Value Theorem**

Suppose f(x) is a function satisfying two conditions:

1) f(x) is continuous in the closed interval a ≤ x ≤ b

2) f(x) is differentiable in the open interval a < x < b

Then according to Lagrange’s Theorem, there exists **at least one** point ‘c’ in the open interval (a, b) such that:

f′(c) = (f(b) − f(a)) / (b − a)

**Rolle’s Theorem**

Suppose f(x) is a function satisfying three conditions:

1) f(x) is continuous in the closed interval a ≤ x ≤ b

2) f(x) is differentiable in the open interval a < x < b

3) f(a) = f(b)

Then according to Rolle’s Theorem, there exists **at least one** point ‘c’ in the open interval (a, b) such that:

f ‘ (c) = 0
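A numeric illustration of both theorems for f(x) = x², where f′(c) = 2c can be solved for c by hand:

```python
# For f(x) = x^2, the mean value theorem point c satisfies
# f'(c) = 2c = (f(b) - f(a)) / (b - a) = a + b, so c = (a + b) / 2.

def mvt_point_quadratic(a, b):
    """For f(x) = x^2, solve f'(c) = (f(b) - f(a)) / (b - a) for c."""
    slope = (b * b - a * a) / (b - a)   # mean slope, equals a + b
    return slope / 2                    # since f'(c) = 2c

# Lagrange: on [1, 3] the mean slope is 4, and f'(2) = 4, so c = 2
assert mvt_point_quadratic(1.0, 3.0) == 2.0
# Rolle: on [-2, 2] we have f(-2) = f(2), and f'(0) = 0, so c = 0
assert mvt_point_quadratic(-2.0, 2.0) == 0.0
```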

**Definition:** Let f(x) be a function. Then the family of all its antiderivatives is called the indefinite integral of f(x), denoted by ∫f(x)dx.

The symbol ∫f(x)dx is read as the indefinite integral of f(x) with respect to x.

Thus ∫f(x)dx = φ(x) + C, where φ′(x) = f(x) and C is an arbitrary constant.

Thus, the process of finding the indefinite integral of a function is called integration of the function.

**Fundamental Integration Formulas –**

- ∫x^{n}dx = (x^{n+1}/(n+1)) + C, for n ≠ −1
- ∫(1/x)dx = log_{e}|x| + C
- ∫e^{x}dx = e^{x} + C
- ∫a^{x}dx = (a^{x}/log_{e}a) + C
- ∫sin(x)dx = −cos(x) + C
- ∫cos(x)dx = sin(x) + C
- ∫sec^{2}(x)dx = tan(x) + C
- ∫cosec^{2}(x)dx = −cot(x) + C
- ∫sec(x)tan(x)dx = sec(x) + C
- ∫cosec(x)cot(x)dx = −cosec(x) + C
- ∫cot(x)dx = log|sin(x)| + C
- ∫tan(x)dx = log|sec(x)| + C
- ∫sec(x)dx = log|sec(x) + tan(x)| + C
- ∫cosec(x)dx = log|cosec(x) − cot(x)| + C

**Definite Integrals:**

Definite integrals are the extension of indefinite integrals: a definite integral has limits [a, b] and gives the area under a curve bounded between those limits.

∫_{a}^{b} f(x)dx denotes the area under the curve f(x) between a and b, where a is the lower limit and b is the upper limit.

**Note:** If *f* is a continuous function defined on the closed interval [a, b] and F is an antiderivative of f, then

∫_{a}^{b} f(x)dx = F(b) − F(a)

Here, the function *f* needs to be well defined and continuous in [a, b].
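The fundamental theorem above can be cross-checked numerically: a composite trapezoidal rule applied to f(x) = x² on [0, 1] should approach F(1) − F(0) = 1/3.

```python
# Numeric cross-check of the definite integral against F(b) - F(a).

def trapezoid(f, a, b, n=10000):
    """Composite trapezoidal rule with n sub-intervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

approx = trapezoid(lambda x: x * x, 0.0, 1.0)
assert abs(approx - 1 / 3) < 1e-6   # F(x) = x^3/3, so F(1) - F(0) = 1/3
```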

