We are given a tree (the method can be extended to a DAG) and many queries of the form LCA(u, v), i.e., find the lowest common ancestor of nodes 'u' and 'v'.
We can answer these queries in O(N + Q log N) time using RMQ: O(N) time for pre-processing and O(log N) time per query, where
N = number of nodes and
Q = number of queries to be answered.
Can we do better than this? Can we do it in (almost) linear time? Yes.
This article presents an offline algorithm that answers all the queries in approximately O(N + Q) time. This is not exactly linear, as an Inverse Ackermann function appears in the time-complexity analysis. For more details on the Inverse Ackermann function, see this. In summary, the Inverse Ackermann function remains less than 4 for any input size that could be written down in the physical universe, so we treat the bound as almost linear.
Suppose we want to process these queries: LCA(5, 4), LCA(1, 3), LCA(2, 3).
After pre-processing, we perform an LCA walk starting from the root of the tree (here, node '1'). Before the walk begins, we colour all the nodes WHITE. Throughout the walk, we use three disjoint-set-union (DSU) operations: makeSet(), findSet() and unionSet().
These operations use union by rank and path compression to improve the running time. During the LCA walk, the queries get processed and output (not necessarily in input order). By the end of the walk, every node has been coloured BLACK.
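As a hedged sketch, the three DSU operations might look as follows (the `DSU` struct name and exact signatures are assumptions; the article keeps the same arrays inside `struct subset`):

```cpp
#include <utility>
#include <vector>
using namespace std;

struct DSU {
    vector<int> parent, rnk, ancestor;  // ancestor is used later by the LCA walk
    DSU(int n) : parent(n), rnk(n, 0), ancestor(n, 0) {}

    // makeSet(): node u starts as a singleton set
    void makeSet(int u) { parent[u] = u; rnk[u] = 0; }

    // findSet(): find the set representative, compressing the path as we go
    int findSet(int u) {
        if (parent[u] != u)
            parent[u] = findSet(parent[u]);
        return parent[u];
    }

    // unionSet(): attach the lower-rank root under the higher-rank root
    void unionSet(int u, int v) {
        int ru = findSet(u), rv = findSet(v);
        if (ru == rv) return;
        if (rnk[ru] < rnk[rv]) swap(ru, rv);
        parent[rv] = ru;
        if (rnk[ru] == rnk[rv]) rnk[ru]++;
    }
};
```

Path compression flattens each queried path onto the root, and union by rank keeps trees shallow; together they give the near-constant amortized cost per operation mentioned below.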
Note: the queries may not be processed in their original order. The process can easily be modified to report the answers in input order, for example by storing each answer against its query index and printing them at the end.
The pictures below depict each step; the red arrow shows the direction of travel of our recursive function LCA().
As the pictures show, the queries are processed in the order LCA(5,4), LCA(2,3), LCA(1,3), which differs from the input order LCA(5,4), LCA(1,3), LCA(2,3).
Below is the C++ implementation.
Output:
LCA(5 4) -> 2
LCA(2 3) -> 1
LCA(1 3) -> 1
Time Complexity: almost linear, i.e., barely slower than linear: roughly O(N + Q) time in total, with O(N) time for pre-processing and effectively O(1) amortized time for answering each query.
Auxiliary Space: We use several arrays: parent, rank and ancestor for the disjoint-set-union operations, and child, sibling and color for the offline algorithm itself, each of size equal to the number of nodes. Hence, the auxiliary space is O(N).
For convenience, all these arrays are grouped in a structure, struct subset.
Reference: CLRS, Section 21-3, p. 584, 2nd/3rd edition.
This article is contributed by Rachit Belwariar.