Logarithmic vs Double Logarithmic Time Complexity
Time complexity is the computational complexity that describes the amount of time it takes to run an algorithm.
Now, we know there are usually multiple ways to solve any particular problem. But knowing how to solve it in the most efficient way is what matters in real-life applications. To judge the efficiency of any algorithm, the first step is to know its time and space complexities.
Logarithmic Time Complexity:
1. What is Logarithmic Time Complexity or O(log N)?
Now, if you've already got your hands a little dirty in competitive programming, or programming in general, then you have probably come across the term O(log n), perhaps while learning Binary Search. Keep reading if you haven't.
To understand O(log n), let's take the most classic example of all, the dictionary problem, and try to find the word "program" in it. If you opened the dictionary and looked for it on every page from 1 through n, that would be O(N) time, and we know that's not an efficient way to do it.
So instead, we open the book roughly to a center page and check whether our word, which starts with the letter P, falls before or after the words on the currently selected page. If "program" is supposed to come after it, we find the center page between the last page and our current page, and so on, until we reach the single page that contains our desired word.
So at every step we divided our problem in half until we found the result. This is what we mean by O(log N): the time goes up linearly while N goes up exponentially. For example, if it takes 5 seconds to process 100 elements, it might take only 6 seconds for 1,000 elements, 7 seconds for 10,000, and so on.
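The page-halving strategy described above is exactly binary search. Here is a minimal Python sketch, where a sorted list of words stands in for the dictionary's pages:

```python
# Binary search over a sorted list of words: halve the search
# range at every step, giving O(log N) comparisons.
def binary_search(words, target):
    lo, hi = 0, len(words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # open the "book" at the middle
        if words[mid] == target:
            return mid             # found the word on this page
        elif words[mid] < target:
            lo = mid + 1           # target comes after this page
        else:
            hi = mid - 1           # target comes before this page
    return -1                      # word is not in the dictionary

words = ["apple", "banana", "cherry", "program", "zebra"]
print(binary_search(words, "program"))  # → 3
```

Note that binary search only works because the list, like a dictionary, is already sorted; on unsorted data we would be back to the O(N) page-by-page scan.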
2. What is Double Logarithmic Time Complexity or O(log (log N))?
We see O(log(log N)) time complexity when implementing Van Emde Boas Trees (vEB trees) in place of the more conventional Binary Search Trees (BSTs).
For example, if we take N < 10^6, the double logarithmic algorithm beats the logarithmic one by approximately a factor of 5: log2(10^6) is about 20 steps, while log2(log2(10^6)) is only about 4.3. On the other hand, its implementation is generally more difficult.
Now, in the real world, a factor of 5 is very significant, and a speed-up by that factor is huge! And the gap only widens as the data grows: in many real use cases the data size exceeds 10^9 or even 10^15, where the ratio of log N to log log N rises to roughly a factor of 9. So on large inputs, an O(log(log N)) algorithm can achieve great results over an O(log N) algorithm.
The speed-up, however, may be hampered by the higher constant factors involved in implementing vEB trees, and you may have to analyze the actual runtime constants to get a proper idea.
In any case, we can't say that one particular complexity is strictly better than the other, since many factors play a role. The most appropriate one should be chosen based on the requirements of the problem to be solved.