Cristian’s Algorithm is a clock synchronization algorithm used by client processes to synchronize their time with a time server. The algorithm works well on low-latency networks, where the Round Trip Time is short compared to the required accuracy, but it is not well suited to redundancy-prone distributed systems/applications. Here Round Trip Time refers to the time duration between the start of a Request and the end of the corresponding Response.
Below is an illustration of the working of Cristian’s algorithm:
1) The process on the client machine sends a request for fetching the clock time (time at the server) to the Clock Server at time T0.
2) The Clock Server listens to the request made by the client process and returns the response in the form of the clock server time T_SERVER.
3) The client process fetches the response from the Clock Server at time T1 and calculates the synchronized client clock time using the formula given below:

T_CLIENT = T_SERVER + (T1 - T0)/2

where T_CLIENT refers to the synchronized clock time,
T_SERVER refers to the clock time returned by the server,
T0 refers to the time at which the request was sent by the client process,
T1 refers to the time at which the response was received by the client process.
Working/Reliability of the above formula:
(T1 - T0) refers to the combined time taken by the network and the server to transfer the request to the server, process it, and return the response back to the client process, assuming that the network latencies of the request and the response are approximately equal.
The time at the client side differs from the actual time by at most (T1 - T0)/2 seconds. From the above statement we can conclude that the synchronization error is at most (T1 - T0)/2 seconds.
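To make the error bound concrete, here is a small worked example in Python (the timestamps are made-up values chosen for illustration):

```python
# Illustrative (made-up) timestamps, in seconds on the client clock:
# the request was sent at T0 and the response arrived at T1.
t0 = 10.000   # time at which the request was sent
t1 = 10.010   # time at which the response was received

rtt = t1 - t0            # round trip time, about 0.010 s
max_error = rtt / 2      # synchronization error is at most about 0.005 s

print("max synchronization error: about", round(max_error, 3), "seconds")
```

So with a 10 ms round trip, the synchronized clock is guaranteed to be within about 5 ms of the server's clock.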
The Python code below illustrates the working of Cristian’s algorithm:
The code below is used to initiate a prototype of a clock server on the local machine:
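A minimal sketch of such a clock server, written against Python's standard socket library (the port number 8000 and the plain-string wire format are illustrative assumptions, not fixed by the algorithm):

```python
import socket
import datetime

def initiate_clock_server(port=8000):
    """Prototype clock server: replies to each connection with the
    current server clock time as a plain string (an assumed format)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    print("Socket successfully created")
    s.bind(('', port))
    s.listen(5)
    print("Socket is listening...")
    while True:
        conn, addr = s.accept()
        # send the current server clock time to the client
        conn.send(str(datetime.datetime.now()).encode())
        conn.close()

if __name__ == '__main__':
    initiate_clock_server()
```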
Output:

Socket successfully created
Socket is listening...
The code below is used to initiate a prototype of a client process on the local machine:
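A minimal sketch of the client process, assuming a clock server on localhost port 8000 that replies with `str(datetime.datetime.now())` (both the address and the wire format are illustrative assumptions):

```python
import socket
import datetime

def synchronize_time(host='127.0.0.1', port=8000):
    """Apply Cristian's formula against a prototype clock server."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    t0 = datetime.datetime.now()              # T0: request sent
    # parse the server's reply (assumed str(datetime.now()) format)
    server_time = datetime.datetime.strptime(
        s.recv(1024).decode(), '%Y-%m-%d %H:%M:%S.%f')
    t1 = datetime.datetime.now()              # T1: response received
    s.close()

    latency = (t1 - t0).total_seconds()       # round trip time T1 - T0
    # Cristian's formula: T_CLIENT = T_SERVER + (T1 - T0)/2
    client_time = server_time + datetime.timedelta(seconds=latency / 2)

    print("Time returned by server:", server_time)
    print("Process Delay latency:", latency, "seconds")
    print("Actual clock time at client side:", t1)
    print("Synchronized process client time:", client_time)
    print("Synchronization error :",
          abs((t1 - client_time).total_seconds()), "seconds")
    return client_time

if __name__ == '__main__':
    synchronize_time()
```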
Output:

Time returned by server: 2018-11-07 17:56:43.302379
Process Delay latency: 0.0005150819997652434 seconds
Actual clock time at client side: 2018-11-07 17:56:43.302756
Synchronized process client time: 2018-11-07 17:56:43.302637
Synchronization error : 0.000119 seconds
Improvement in Clock Synchronization:
Using iterative testing over the network, we can determine a minimum transfer time, using which we can formulate an improved synchronized clock time (with less synchronization error).
Here, by defining a minimum transfer time T_min, we can say with high confidence that the server time T_SERVER will always be generated after T0 + T_min and that T_SERVER will always be generated before T1 - T_min, where T_min is the minimum transfer time, i.e. the minimum value of the request and response latencies observed during several iterative tests. Here the synchronization error can be formulated as follows:

error E ∈ [-((T1 - T0)/2 - T_min), +((T1 - T0)/2 - T_min)]
Similarly, if the request and response latencies differ by a considerable amount of time, we may substitute T_min by T_min1 and T_min2, where T_min1 is the minimum observed request time and T_min2 refers to the minimum observed response time over the network.
The synchronized clock time in this case can be calculated as:

T_CLIENT = T_SERVER + (T1 - T0)/2 + (T_min2 - T_min1)/2
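A small sketch of this improved estimate (the function name and the sample latency values are illustrative, not from the article):

```python
def improved_client_time(t_server, t0, t1, t_min1, t_min2):
    """Cristian's estimate refined with separate minimum request
    (t_min1) and response (t_min2) latencies, all in seconds."""
    # T_CLIENT = T_SERVER + (T1 - T0)/2 + (T_min2 - T_min1)/2
    return t_server + (t1 - t0) / 2 + (t_min2 - t_min1) / 2

# Example with made-up times: the response path is slower than the
# request path, so the estimate shifts slightly forward.
t = improved_client_time(t_server=100.000, t0=99.990, t1=100.010,
                         t_min1=0.002, t_min2=0.004)
print(round(t, 3))  # 100.011 = 100.000 + 0.020/2 + 0.002/2
```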
So, by just introducing the response and request times as separate latencies, we can improve the synchronization of clock time and hence decrease the overall synchronization error. The number of iterative tests to be run depends on the overall clock drift observed.