If you are a software engineer or developer, give it a thought: what do you do when a module in a piece of software has to be changed?
The answer is simple: you change the module.
But decades ago, when modularity was not yet at the forefront of software design, things were quite different.
There was a time when software came as what we call monoliths: single, tightly coupled applications, often written in C or other procedural languages, which had to be fully removed from the servers and reinstalled whenever an update was released. This is when services suffered real downtime. These applications were typically very fast, but the downtime during every update was a big problem.
The first remedy came in the form of load balancers.
Instead of using one or two servers, the number of servers was increased. This helped in two ways:
First, traffic for a service is not constant. When traffic was high, the load was balanced across all the servers; during low-traffic periods, a single server could stay up and serve requests. This reduced wasted resources, provided redundancy, and handled heavy traffic at peak hours.
Second, and more importantly, whenever the software needed an update, instead of taking the whole application down everywhere, it was removed from only half the servers at a time. Suppose the application runs on four servers: during a low-traffic period, two servers would be brought down while the other two stayed up. The application was updated on the two offline servers, they rejoined the pool, and the process was repeated for the remaining two. The user therefore never experienced downtime.
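The rolling-update procedure described above can be sketched in a few lines of code. This is a minimal illustration, not a real load balancer; the class and server names (`RollingPool`, `s1`...`s4`) are hypothetical, and "updating" a server is modeled as simply bumping a version string while it is drained from the traffic pool.

```python
class RollingPool:
    """Toy round-robin pool whose servers can be drained for updates."""

    def __init__(self, servers):
        self.versions = {s: "v1" for s in servers}  # server -> app version
        self.active = list(servers)                 # servers taking traffic
        self._i = 0

    def route(self):
        """Pick the next server to handle a request, round-robin."""
        if not self.active:
            raise RuntimeError("no servers available: total downtime")
        server = self.active[self._i % len(self.active)]
        self._i += 1
        return server

    def rolling_update(self, new_version):
        """Update half the servers at a time; the other half keeps serving."""
        servers = list(self.versions)
        half = len(servers) // 2
        for batch in (servers[:half], servers[half:]):
            # Drain: stop routing traffic to this batch only.
            self.active = [s for s in self.versions if s not in batch]
            for s in batch:
                # The "reinstall" happens while the server is offline.
                self.versions[s] = new_version
        # All servers are back in the pool after the last batch.
        self.active = list(self.versions)


pool = RollingPool(["s1", "s2", "s3", "s4"])
pool.route()                 # traffic keeps flowing the entire time
pool.rolling_update("v2")    # at no point were zero servers active
```

The key invariant is that `self.active` is never empty during the update, which is exactly why users of the four-server setup above never saw downtime.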
But this still did not fully solve the problem: the whole application was being replaced on every update, which was far from ideal.
Then, as modularity took hold in programming practice, Sun Microsystems (later acquired by Oracle) introduced J2EE, the Java 2 Platform, Enterprise Edition. It helped by offering a choice of services and databases, a simplified architecture, and integration with existing information systems. This gave Oracle a dominant position in the field, and the reaction against that dominance helped give rise to the idea of microservices.
What we see in applications today is microservices at their best.
Instead of replacing the whole application, only the module that needs updating is changed, while the rest of the application stays up and running. This is possible because a microservice architecture structures an application as a collection of services that are loosely coupled, organized around business capabilities, and highly maintainable.
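The "update one module while the rest keeps running" property can be illustrated with a small sketch. All names here (`ServiceRegistry`, the `catalog` and `billing` services, the tax rates) are hypothetical; each service is modeled as an independently deployable handler, and redeploying one service leaves every other service untouched.

```python
class ServiceRegistry:
    """Toy registry mapping each business capability to its own service."""

    def __init__(self):
        self._services = {}  # name -> (version, handler)

    def deploy(self, name, version, handler):
        """Deploy or update a single service; the others keep running."""
        self._services[name] = (version, handler)

    def call(self, name, *args):
        _version, handler = self._services[name]
        return handler(*args)

    def version(self, name):
        return self._services[name][0]


registry = ServiceRegistry()
registry.deploy("catalog", "v1", lambda item: f"catalog entry for {item}")
registry.deploy("billing", "v1", lambda total: round(total * 1.18, 2))

# Only the billing service is redeployed (say, a tax-rate change);
# the catalog service is never taken down or touched.
registry.deploy("billing", "v2", lambda total: round(total * 1.20, 2))

registry.call("billing", 100.0)   # served by the updated service
registry.call("catalog", "book")  # unaffected by the billing update
```

This is the loose coupling the article describes: because each capability lives behind its own service boundary, an update is scoped to one service rather than to the whole application.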
And so we arrive at the present form of applications: highly modularized, maintainable, and easily updatable, thanks to a long history of lessons learned from monolithic applications.