Given a Markov chain G, we have to find the probability of reaching state F at time t = T, starting from state S at time t = 0.
A Markov chain is a random process consisting of a set of states and the probabilities of moving from one state to another. We can represent it as a directed graph in which the nodes are the states and each edge carries the probability of moving from one node to the other. Each move takes one unit of time, and for every node the probabilities on its outgoing edges sum to one.
Consider the Markov chain G shown in the image below:
Input: S = 1, F = 2, T = 1
Output: 0.23
We start at state 1 at t = 0, so there is a probability of 0.23 that we reach state 2 at t = 1.

Input: S = 4, F = 2, T = 100
Output: 0.284992
We can solve this problem with dynamic programming combined with depth-first search (DFS), taking the state and the time as the two DP variables. Observe that the probability of moving from state A to state B at time t equals the probability of being at A at time t – 1 multiplied by the probability on the edge from A to B. Therefore, the probability of being at B at time t is the sum of this quantity over all states A that have an edge into B.
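In symbols, writing P(X, t) for the probability of being at state X at time t and p(A, B) for the probability on the edge from A to B, the recurrence above is:

P(B, t) = \sum_{A \to B} P(A, t-1) \cdot p(A, B), \qquad P(S, 0) = 1,\quad P(X, 0) = 0 \text{ for } X \neq S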
Below is the implementation of the above approach:
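The original listing is not shown here, so the following is a Python sketch of the approach under stated assumptions: the function name `reach_probability` and the edge-list input format `(u, v, p)` are choices made for illustration, and the small two-state chain in the usage example is hypothetical (the actual graph for the sample outputs is defined by the image above).

```python
def reach_probability(edges, n, S, F, T):
    """Probability of being at state F at time T, starting from state S at t = 0.

    edges: list of (u, v, p) tuples, where p is the probability of
           moving from state u to state v in one unit of time.
    n:     number of states, labelled 1..n.
    """
    # prob[v] holds the probability of being at state v at the current time step
    prob = [0.0] * (n + 1)
    prob[S] = 1.0  # at t = 0 we are at S with certainty

    for _ in range(T):
        nxt = [0.0] * (n + 1)
        for u, v, p in edges:
            # probability mass flowing from u to v in this step
            nxt[v] += prob[u] * p
        prob = nxt

    return prob[F]


# Hypothetical two-state chain: 1 -> 2 with probability 0.23,
# 1 -> 1 with 0.77, and 2 splits evenly between 1 and 2.
edges = [(1, 2, 0.23), (1, 1, 0.77), (2, 1, 0.5), (2, 2, 0.5)]
print(reach_probability(edges, 2, 1, 2, 1))  # prints 0.23
```

Each of the T steps touches every edge once, so this runs in O(E * T) time; since E can be as large as N², that matches the O(N² * T) bound stated below, and only O(N) extra space is needed if the per-time-step arrays are reused as shown.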
The probability of reaching 2 at time 100 after starting from 4 is 0.284992
Time complexity: O(N² * T)
Space complexity: O(N * T)