
Algorithms Sample Questions | Recurrences | Set 2

Last Updated : 28 May, 2019
  • Question 1: What is the complexity of T(n)?
    T(n) = T(n-1) +  \frac{1}{n(n-1)}
    
    1. Θ( 1/n )
    2. Θ( 1/n² )
    3. Θ( 1 )
    4. Θ( ln( n ) )
    Answer: 3 Explanation: Using the substitution technique to solve the given recurrence, the closed form (non-recursive form) of T(n) can be inductively guessed as follows:

    T(n) = T(1) +  \sum_{k=2}^{n} \frac{1}{k(k-1)}

    Before going further and getting involved in complicated fractions, it is better to simplify the equation at hand. Partial fraction decomposition is a common method for expressing a rational fraction, where both numerator and denominator are polynomials, as a sum of one or more fractions with simpler denominators. Using this method gives:

     \frac{1}{k(k-1)} = \frac{A}{k} + \frac{B}{k-1}

    Solving this decomposition yields A = -1 and B = +1, respectively, so the non-recursive form of T(n) can be written as:

     T(n) = T(1) +  \sum_{k=2}^{n} (\frac{1}{k-1} - \frac{1}{k})

    Expanding this compact representation of T(n) will result in:

     T(n)=T(1)+((\frac{1}{1}-\frac{1}{2})+(\frac{1}{2}-\frac{1}{3})+...+(\frac{1}{n-2}-\frac{1}{n-1})+(\frac{1}{n-1}-\frac{1}{n}))

    Each fraction, except the first and the last, appears twice with opposite signs; the series telescopes, and all of the fractions vanish except the first and the last:

    T(n) = T(1) + \frac{1}{1} - \frac{1}{n} = T(1) + 1 - \frac{1}{n}

    From the last equation just derived, the asymptotic complexity of T(n) is:

     T(n) \in \Theta (1 - \frac{1}{n}) \in \Theta (1)
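
    The telescoping result can be checked numerically with a short Python sketch; the base value T(1) = 1 is an assumption chosen purely for illustration:

```python
def T(n, t1=1.0):
    # Unroll T(n) = T(n-1) + 1/(n(n-1)) iteratively down to the base case T(1) = t1.
    total = t1
    for k in range(2, n + 1):
        total += 1.0 / (k * (k - 1))
    return total

# Compare against the closed form T(1) + 1 - 1/n derived above.
for n in (2, 10, 1000):
    assert abs(T(n) - (1.0 + 1.0 - 1.0 / n)) < 1e-9
```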

  • Question 2: Count the total number of strings of length N over the four characters {a, b, c, d} that contain an even number of “a”s. [C(N) abbreviates the function Count(N) used for this purpose.]
    1. C(N) = 5.0 * 3^(N-1) – 1.0 * 5^(N-1)
    2. C(N) = 1.0 * 4^N + 6
    3. C(N) = 0.5 * 2^N + 0.5 * 4^N
    4. C(N) = 1.0 * 2^N + 2.0 * 3^N
    Example: 
    
    For N = 1, there are 3 such strings, so C(1) = 3:
    
    • b, c, d
    For N = 2, there are 10 such strings, so C(2) = 10:
    • aa, bb, cc, dd, bc, bd, cb, cd, db, dc
    Answer: 3 Explanation: There are two possible scenarios for the first character of a string satisfying the given constraint:
    1. The first character is one of the characters “b”, “c”, or “d”, i.e., anything except “a”. In this case, the aim is to count the strings of the remaining length N-1 that contain an even number of “a”s:
      • C(N) = 3 * C(N-1)
    2. The first character is “a”, so one “a” has already occurred; therefore, the number of strings of length N-1 with an odd number of “a”s must be counted. This count is not C(N-1), but it can still be expressed using C(N-1):
      • C(N) = 4^(N-1) – C(N-1), where:
        • 4^(N-1): total number of strings of length N-1 over the four characters
        • C(N-1): number of strings of length N-1 with an even number of “a”s.
    As two distinct scenarios mentioned above cannot happen at the same time, the total number of strings is the sum of the cases:

     C(N) = 3 * C(N-1) + (4^{N-1} - C(N-1))

     = 2*C(N-1) + 4^{N-1}

    Two initial conditions, such as those below, are needed to find a particular solution:
    1. For n = 1, there are 3 distinct strings: b, c, d.
    2. Or, for n = 2, 10 possible strings are: aa, bb, cc, dd, bc, bd, cb, cd, db, dc
    The homogeneous part contributes a 2^N term and the particular part a 4^N term, so the general solution can be formulated as:

     C(N) = C_{1} * 2^{N} + C_{2} * 4^{N}, C(1) = 3, C(2) = 10

    Substituting the initial conditions gives C_1 = C_2 = 0.5, so the total number of strings satisfying the constraint is:

    C(N) = 0.5  * 2^{N} + 0.5 * 4^{N}
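
    A brute-force check in Python confirms the formula for small N; the counting function below is written only for this verification:

```python
from itertools import product

def brute_count(n):
    # Enumerate all 4^n strings over {a, b, c, d} and keep
    # those containing an even number of 'a's.
    return sum(1 for s in product("abcd", repeat=n) if s.count("a") % 2 == 0)

# Matches C(N) = 0.5 * 2^N + 0.5 * 4^N, including C(1) = 3 and C(2) = 10.
for n in range(1, 8):
    assert brute_count(n) == (2**n + 4**n) // 2
```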

  • Question 3: Which one of the following recurrences does not have a solution of polynomial form?
    1.  T(n) = 2 * T(n-2) + 1
    2.  T(n) = T(n-1) + n^{2}
    3.  T(n) = T([ \frac{8}{9} n ] ) + 9 * n + 1
    4.  T(n) = 100 * T(\frac{n}{99}) + n
    Answer: 1 Explanation: The solutions of all the equations above are needed to see which option is not polynomial. All of the solutions turn out to be of polynomial form, except the one in the first option. Here are the analyses of the options:
    1. Option (1) has an exponential solution: to find the homogeneous solution of the equation in the first option, the roots of its characteristic equation are required:

       T(n) = 2 * T(n-2) + 1  \Rightarrow r^{2} - 2 = 0   \Rightarrow r =  \pm  \sqrt{2}

      Therefore, the solution is of exponential form:

       T(n)= c_{1} *( \sqrt{2})^{n}+ c_{2}*(-\sqrt{2})^{n}

       = c_{1} *2^{n/2}+c_{2} *(-1)^{n} * (2)^{n/2}

    2. Option (2): The equation of the second option can easily be solved by the substitution technique:

       T(n) = T(n-1) + n^{2} = T(n-2) + n^{2} + (n-1)^{2}

      The closed form of T(n) which can be inductively guessed is:

       T(n) = T(1) + \sum_{k=2}^{n} k^{2}

      = T(1) + \frac{n(n+1)(2n+1)}{6} - 1

      This says that it is a polynomial function of third degree (cubic polynomial):

       T(n) \in  \Theta (T(1)) + \Theta ( n^{3} ) \in \Theta (1) + \Theta (n^{3}) \in \Theta (n^{3})

    3. Option (3): According to case 3 of the master theorem (f(n) = 9n + 1 dominates n^{\log_{9/8} 1} = n^{0}), the complexity of the equation in the third option is  \Theta(n) .
    4. Option (4): The fourth option falls into case 1 of the master theorem, since n = O(n^{\log_{99}100 - \epsilon}); hence its complexity is  \Theta(n^{\log_{99}100}) \approx \Theta(n^{1.002}) , still a polynomial.
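
    A quick Python sketch illustrates why option (1) is exponential; the base cases T(0) = T(1) = 1 are assumptions chosen only for this demonstration:

```python
def T(n):
    # Iterate T(n) = 2*T(n-2) + 1 with assumed base cases T(0) = T(1) = 1.
    t = {0: 1, 1: 1}
    for k in range(2, n + 1):
        t[k] = 2 * t[k - 2] + 1
    return t[n]

# With these base cases, the even-index values satisfy T(n) = 2^(n/2 + 1) - 1,
# i.e. growth of order 2^(n/2), which no polynomial can match.
for n in (10, 20, 40):
    assert T(n) == 2 ** (n // 2 + 1) - 1
```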
  • Question 4: Which one gives the best estimate of the asymptotic complexity of F(n)?
     F(n) =\begin{cases}{F(\frac{n}{2}) + n^{3}} & {n  \leq 100}\\{F(100) + n^{2}} & {n>100}\end{cases} 
    
    1. O( n³ )
    2. Ω( n² )
    3. O( n² )
    4. Θ( n² )
    Answer: 4 Explanation: There are some tricky facts to consider before making any decision in such problems:
    1. [Find as tight a boundary as possible] The best asymptotic estimate is obtained when the complexity is expressed with Θ-notation. This often requires an average-case analysis complicated enough that programmers prefer other notations such as O(); however, it is still good practice to find the tightest boundary possible.
    2. [Notations talk about infinity] The asymptotic notations of mathematics delineate the behavior of complexity functions at infinity, that is, as “n” tends toward very large values. Infinity is an abstract concept that cannot be reached in practice; for calculation purposes, the value of “n” worth considering depends solely on the problem. In this problem there seems to be no limit on the input size “n”.
    3. [Beware of the test-makers’ traps] To avoid the potential traps set by test-makers, a sufficiently large value should be assigned to n; in this problem, small values like 100 or 1000 should not be chosen. Since the n³ term applies only while n ≤ 100, whereas n² applies for n > 100, values of n greater than 1000 make n² exceed F(100). Even if F(100) were recalculated at each call, for n > 1000 the n² term always dominates F(100), which is of order n³ only for small values such as n = 100 and is therefore a constant. The value of F(100) can simply be ignored in the calculations for n > 1000.
    4. [In real-world applications, it is sometimes efficient to use some AUXILIARY MEMORY SPACE to gain speed] In this problem, it would be a great idea to use Θ(1) of auxiliary memory to save the result of F(100) once for future use, avoiding the recalculation of F(100) at each call for n > 100. The cost of obtaining F(100) then becomes O(1), just reading a stored value, and the complexity of F(n) is Θ(n²) for n > 100.
    The complexity of F(n) is:

     F(n) \in \Theta (n^{2})
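
    The caching idea from point 4 can be sketched in Python; the base value F(1) = 1 is an assumption, since the recurrence does not specify one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    if n <= 1:
        return 1                     # assumed base value, unspecified in the recurrence
    if n <= 100:
        return F(n // 2) + n**3      # recursive branch, only reached for n <= 100
    return F(100) + n**2             # F(100) is computed once, then read from the cache

# For n > 100, F(n) - n^2 is the constant F(100), so F(n) ∈ Θ(n^2).
for n in (10**3, 10**6):
    assert F(n) - n**2 == F(100)
```

    Without the cache, each call for n > 100 would redo the O(log n)-deep descent to F(1); with it, every later call is a single arithmetic step plus a constant-time lookup.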

  • Question 5: Which one specifies appropriate ranges for k and α (alpha), where the following asymptotic expression holds?

     (ln(n))^{k} \in O(n^{ \alpha })

    1. α ≥ 0 & k ≤ α
    2. α ≥ 0.1 & k ≥ 0
    3. α > 1 & k ∈ R
    4. α > 0 & k ∈ R
    Answer: 4 Explanation: An effective treatment of this problem comes from a very reliable resource, an MIT asymptotic cheat sheet, and an important part of the description below is based on derivations that may be found there. The goal is to find the range in which the expression ln^k(n) ∈ O(n^α) holds. The worst condition under which it might fail is when the left side is made to grow as large as possible while the right side is kept as small as possible; in other words, assigning a large value to the exponent “k” while specifying a very small value for the exponent α. Small values, even less than 1, can be assigned to α. For now, however, α is taken to be some fixed positive constant |E|, not a negative value; nor is α set to a function E(n) of “n”, as that would be a whole new problem, not the one described in this question. Let α = |E| (an arbitrarily small positive constant) and k > 0; the aim is to see whether the expression ln^k(n) ∈ O(n^{|E|}) is true or not:

     \lim_{n \rightarrow  \infty } \frac{ln(n)^{k}}{n^{| E |}} =   (\lim_{n \rightarrow  \infty } \frac{ln(n)}{n^{\frac{| E |}{k}}})^{k} = (\frac{ \infty }{ \infty })^{k}

    Applying l’Hôpital’s rule gives:

    (\lim_{n \rightarrow  \infty } \frac{ln(n)}{n^{\frac{| E |}{k}}})^{k} = (\lim_{n \rightarrow  \infty } \frac{\frac{1}{n}}{\frac{|E|}{k} * n^{\frac{|E|}{k}-1}})^{k}

     = (\lim_{n \rightarrow  \infty } \frac{\frac{1}{n}}{\frac{|E|}{k} * n^{\frac{| E |}{k}}* \frac{1}{n}})^{k} = (\lim_{n \rightarrow  \infty } \frac{1}{\frac{|E|}{k} * n^{\frac{| E |}{k}} })^{k} = 0

    What has been derived so far says that there is no positive exponent “k” for which a power of the logarithm can surpass the variable “n” raised to even a very small positive exponent. The same certainly holds when the logarithm takes a negative exponent and becomes even smaller. In other words:

    (ln(n))^{-|k|}  \leq  (ln(n))^{k} \in O(n^{|E|})

    What if the exponent α takes negative values ( α = -|E| )? A negative exponent flips the fraction, and flipping the fractions on both sides reverses the inequality sign, which yields:

    \frac{1}{(ln(n))^{k}}  \geq  \frac{1}{n^{|E|}}

    The O-notation no longer holds. Therefore, the range in which the mentioned relation is preserved is k ∈ R and α > 0; in mathematical language:

     (ln(n))^{k} \in O(n^{\alpha} ) for k ∈ R & α>0

    Polynomial expressions cannot contain variables with fractional exponents, so this result is even more inclusive, as it admits such fractional values of α. It also shows how easily polynomial expressions outgrow logarithm functions.
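
    The limit can also be illustrated numerically in Python; the values k = 10 and α = 0.1 are arbitrary choices, and the ratio only starts shrinking beyond its peak near n = e^(k/α):

```python
import math

def ratio(n, k=10, alpha=0.1):
    # ln(n)^k / n^alpha, the quantity whose limit is shown to be 0 above.
    return math.log(n) ** k / n ** alpha

# With k = 10 and alpha = 0.1 the ratio peaks around n = e^(k/alpha) = e^100
# and then decays toward 0, as the l'Hopital argument predicts.
samples = [ratio(10.0**p) for p in (50, 150, 300)]
assert samples[0] > samples[1] > samples[2]
```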

    Source:

    1. MIT Asymptotic cheat sheet
    2. A compilation of Iran university exams (with a bit of summarization, modification, and also translation)

