Finding the probability of a state at a given time in a Markov chain | Set 2
Given a Markov chain G, we have to find the probability of reaching state F at time t = T if we start from state S at time t = 0.
A Markov chain is a random process consisting of various states and the probabilities of moving from one state to another. We can represent it using a directed graph where the nodes represent the states and the edges represent the probability of going from one node to another. It takes unit time to move from one node to another. The sum of the associated probabilities of the outgoing edges is one for every node.
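The representation described above can be sketched as a matrix whose entry M[j][i] holds the probability of the edge i -> j; the values below are purely illustrative, not the chain from the figure:

```python
# A tiny 3-state Markov chain stored as a matrix, where M[j][i] is the
# probability of moving from state i to state j in one unit of time.
# (Illustrative values only -- not the chain shown in the image.)
M = [
    [0.0, 0.5, 0.25],
    [1.0, 0.0, 0.75],
    [0.0, 0.5, 0.0],
]

# For every node, the probabilities of its outgoing edges (one column
# of the matrix) must sum to one.
for i in range(3):
    col_sum = sum(M[j][i] for j in range(3))
    assert abs(col_sum - 1.0) < 1e-9
```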
Consider the given Markov chain (G) shown in the image below:
Input: S = 1, F = 2, T = 1
Output: 0.23
We start at state 1 at t = 0, so there is a probability of 0.23 that we reach state 2 at t = 1.

Input: S = 4, F = 2, T = 100
Output: 0.284992
In the previous article, a dynamic programming approach is discussed with a time complexity of O(N^2 * T), where N is the number of states.
Matrix exponentiation approach: We can make an adjacency matrix for the Markov chain to represent the probabilities of transitions between the states. For example, the adjacency matrix for the graph given above is:
We can observe that the probability distribution at time t is given by P(t) = M * P(t – 1), where the initial probability distribution P(0) is a vector that is zero everywhere except for the Sth element, which is one. Unrolling this recurrence gives P(t) = M^t * P(0). For example, if we take S to be 3, then P(t) is given by M^t applied to the unit vector [0, 0, 1, 0, …]^T.
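The recurrence P(t) = M * P(t – 1) can be applied directly, T times, which is essentially the dynamic programming approach; a minimal sketch (the function name `prob_at_time` is illustrative):

```python
def prob_at_time(M, n, S, F, T):
    """Apply P(t) = M * P(t - 1) a total of T times, starting from
    P(0), which puts all probability mass on the start state S.
    States are 1-indexed as in the article; M[j][i] is the probability
    of the transition i -> j.  Runs in O(N^2 * T)."""
    P = [0.0] * n
    P[S - 1] = 1.0  # P(0): probability 1 of being at state S
    for _ in range(T):
        # One step of the recurrence: new P[j] = sum over i of M[j][i] * P[i]
        P = [sum(M[j][i] * P[i] for i in range(n)) for j in range(n)]
    return P[F - 1]
```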
If we use the fast matrix exponentiation (exponentiation by squaring) technique, then the time complexity of this approach comes out to be O(N^3 * log T). This approach performs better than the dynamic programming approach when the value of T is considerably larger than the number of states N.
Below is the implementation of the above approach:
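Since the original code listing and the adjacency matrix from the figure are not reproduced here, the following is a minimal Python sketch of the approach; the names `mat_mul`, `mat_pow`, and `find_probability` are illustrative:

```python
def mat_mul(A, B):
    """Multiply two n x n matrices in O(N^3)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, e):
    """Compute M^e by repeated squaring, using O(log e) multiplications."""
    n = len(M)
    # Start from the identity matrix.
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    while e > 0:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def find_probability(M, n, S, F, T):
    """Probability of being at state F at time T, starting from S at
    time 0.  Since P(T) = M^T * P(0) and P(0) is the unit vector for
    state S, the answer is simply entry (F, S) of M^T."""
    MT = mat_pow(M, T)
    return MT[F - 1][S - 1]
```

With the adjacency matrix of the chain from the figure passed in as `M`, calling `find_probability(M, 6, 4, 2, 100)` would reproduce the example; the overall cost is O(N^3 * log T).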
The probability of reaching 2 at time 100 after starting from 4 is 0.284991
Time Complexity: O(N^3 * log T)
Space Complexity: O(N^2)