Cristian’s Algorithm is a clock synchronization algorithm used by client processes to synchronize their time with a time server. It works well on low-latency networks, where the Round Trip Time is short compared to the required accuracy, while redundancy-prone distributed systems/applications do not go hand in hand with this algorithm. Here, Round Trip Time refers to the time duration between the start of a request and the end of the corresponding response.
The working of Cristian’s algorithm is illustrated by the steps below:
1) The process on the client machine sends a request for fetching the clock time (the time at the server) to the Clock Server at time T0.
2) The Clock Server listens to the request made by the client process and returns the response in the form of its clock server time T_SERVER.
3) The client process fetches the response from the Clock Server at time T1 and calculates the synchronized client clock time using the formula given below.

T_CLIENT = T_SERVER + (T1 - T0)/2

where T_CLIENT refers to the synchronized clock time,
T_SERVER refers to the clock time returned by the server,
T0 refers to the time at which the request was sent by the client process,
T1 refers to the time at which the response was received by the client process.
Working/Reliability of the above formula:
(T1 - T0) refers to the combined time taken by the network and the server to transfer the request to the server, process the request, and return the response back to the client process, assuming that the network latency of the request and of the response are approximately equal.

The time at the client side differs from the actual time by at most (T1 - T0)/2 seconds. Using the above statement, we can draw the conclusion that the error in synchronization can be at most (T1 - T0)/2 seconds.

Error ∈ [-(T1 - T0)/2, (T1 - T0)/2]
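As a concrete illustration, the formula and its error bound can be evaluated directly on a set of timestamps (the timestamps below are made up for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for one synchronization round
t0 = datetime(2018, 11, 7, 17, 56, 43, 302000)         # T0: request sent
t1 = datetime(2018, 11, 7, 17, 56, 43, 303000)         # T1: response received
t_server = datetime(2018, 11, 7, 17, 56, 43, 302379)   # T_SERVER: time reported by server

rtt = t1 - t0                    # Round Trip Time: 1000 microseconds here
t_client = t_server + rtt / 2    # T_CLIENT = T_SERVER + (T1 - T0)/2
max_error = rtt / 2              # error is at most (T1 - T0)/2

print("Synchronized client time:", t_client)   # 2018-11-07 17:56:43.302879
print("Max synchronization error:", max_error.total_seconds(), "seconds")  # 0.0005
```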
The Python code below illustrates the working of Cristian’s algorithm:
The code below initiates a prototype of a clock server on the local machine:
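The original listing is not preserved here, so the following is a minimal sketch of such a clock-server prototype, assuming a TCP socket on an arbitrarily chosen port 8000:

```python
import datetime
import socket

def start_clock_server(port=8000):
    # Create a TCP socket and bind it to the given port on all interfaces
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', port))
    print("Socket successfully created")

    s.listen(5)
    print("Socket is listening...")

    while True:
        # Accept a client connection and reply with the current server clock time
        connection, address = s.accept()
        print("Server connected to", address)
        connection.send(str(datetime.datetime.now()).encode())
        connection.close()

# start_clock_server()  # run this on the server machine
```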
Output:

Socket successfully created
Socket is listening...
The code below initiates a prototype of a client process on the local machine:
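Again, the original listing is not preserved, so here is a matching client-process sketch. It records T0 before issuing the request, reads T_SERVER from the response, records T1 on receipt, and applies Cristian's formula (the port and localhost address mirror the hypothetical server above):

```python
import datetime
import socket

def synchronize_time(port=8000):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    t0 = datetime.datetime.now()      # T0: request sent
    s.connect(('127.0.0.1', port))
    server_time = datetime.datetime.strptime(
        s.recv(1024).decode(), '%Y-%m-%d %H:%M:%S.%f')  # T_SERVER from response
    t1 = datetime.datetime.now()      # T1: response received
    s.close()

    # Cristian's formula: T_CLIENT = T_SERVER + (T1 - T0)/2
    client_time = server_time + (t1 - t0) / 2

    print("Time returned by server:", server_time)
    print("Process Delay latency:", (t1 - t0).total_seconds(), "seconds")
    print("Actual clock time at client side:", datetime.datetime.now())
    print("Synchronized process client time:", client_time)
    return client_time

# synchronize_time()  # run this on the client while the server is listening
```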
Output:

Time returned by server: 2018-11-07 17:56:43.302379
Process Delay latency: 0.0005150819997652434 seconds
Actual clock time at client side: 2018-11-07 17:56:43.302756
Synchronized process client time: 2018-11-07 17:56:43.302637
Synchronization error : 0.000119 seconds
Improving Clock Synchronization:
Using iterative testing over the network, we can determine a minimum transfer time T_min, with which we can formulate an improved synchronized clock time (with a smaller synchronization error).

Here, by defining a minimum transfer time, we can say with high confidence that the server time T_SERVER will always be generated after T0 + T_min and will always be generated before T1 - T_min, where T_min is the minimum transfer time, i.e., the minimum value of the request and response latencies observed during several iterative tests. Here the synchronization error can be formulated as follows:

Error ∈ [-((T1 - T0)/2 - T_min), ((T1 - T0)/2 - T_min)]

Similarly, if the request and response latencies differ by a considerable amount of time, we may substitute T_min by T_min1 and T_min2, where T_min1 is the minimum observed request time and T_min2 refers to the minimum observed response time over the network.

The synchronized clock time in this case can be calculated as:

T_CLIENT = T_SERVER + (T1 - T0)/2 + (T_min2 - T_min1)/2
So, by just introducing the response and request times as separate latencies, we can improve the synchronization of the clock time and hence decrease the overall synchronization error. The number of iterative tests to run depends on the overall clock drift observed.
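The improved calculation can be sketched as follows; the minimum latencies T_min1 and T_min2 and the timestamps are all made up for illustration, and in practice would come from several iterative tests over the network:

```python
from datetime import datetime, timedelta

# Hypothetical minimum latencies observed over several iterative tests
t_min1 = timedelta(microseconds=200)   # minimum observed request latency
t_min2 = timedelta(microseconds=400)   # minimum observed response latency

# Hypothetical timestamps from one synchronization round
t0 = datetime(2018, 11, 7, 17, 56, 43, 302000)         # T0: request sent
t1 = datetime(2018, 11, 7, 17, 56, 43, 303000)         # T1: response received
t_server = datetime(2018, 11, 7, 17, 56, 43, 302379)   # T_SERVER from response

# T_CLIENT = T_SERVER + (T1 - T0)/2 + (T_min2 - T_min1)/2
t_client = t_server + (t1 - t0) / 2 + (t_min2 - t_min1) / 2

# The error bound shrinks from (T1 - T0)/2 to (T1 - T0)/2 - (T_min1 + T_min2)/2
max_error = (t1 - t0) / 2 - (t_min1 + t_min2) / 2

print("Improved synchronized client time:", t_client)  # 2018-11-07 17:56:43.302979
print("Max synchronization error:", max_error.total_seconds(), "seconds")  # 0.0002
```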