Software technology is in the midst of a major computational shift towards distributed object computing (DOC). Distributed computing is poised for a second client-server revolution: a transition from the first-generation client-server era to the next. In this new client-server model, servers are plentiful instead of scarce (because every client can also be a server) and proximity no longer matters. This greatly expanded client-server model has been made possible by recent exponential network growth and by progress in network-aware, multithreaded desktop operating systems.
In the first-generation client-server era, which is still very much in progress, SQL databases, transaction processing (TP) monitors, and groupware have begun to displace file servers as the dominant client-server application models. In the new client-server era, DOC is expected to dominate the other client-server application models.
Distributed object computing promises the most flexible client-server systems, because it utilizes reusable software components that can roam anywhere on the network, run on different platforms, communicate with legacy applications by means of object wrappers, and manage themselves and the resources they control. Objects can break monolithic applications into more manageable components that coexist on this expanded software bus.
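The object-wrapper idea mentioned above can be sketched in a few lines: an old procedural routine is hidden behind an object so that it can be plugged into an object-oriented system unchanged. This is a minimal illustration, not any standard's wrapper mechanism; the legacy routine and class names here are hypothetical.

```python
def legacy_fmt_record(name, balance_cents):
    """Stand-in for a legacy procedural routine we cannot modify."""
    return "%s|%08d" % (name.upper(), balance_cents)

class AccountWrapper:
    """Object wrapper exposing the legacy routine through a clean interface."""
    def __init__(self, name, balance_cents):
        self.name = name
        self.balance_cents = balance_cents

    def to_record(self):
        # Delegate to the legacy code; callers never see its old interface.
        return legacy_fmt_record(self.name, self.balance_cents)

print(AccountWrapper("alice", 1500).to_record())  # → ALICE|00001500
```

Because callers interact only with `AccountWrapper`, the legacy routine could later be replaced by a remote object without changing client code.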
Distributed objects are reusable software components that can be distributed across a network and accessed from anywhere on it. These objects can be assembled into distributed applications. DOC thus introduces a higher level of abstraction into distributed applications.
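The core mechanism described above, invoking an ordinary object's methods across the network through a proxy, can be sketched with Python's standard-library XML-RPC modules. This is only an illustrative stand-in for a full object bus such as CORBA or DCOM; the `Counter` object is hypothetical.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class Counter:
    """An ordinary object whose methods we expose across the network."""
    def __init__(self):
        self.value = 0

    def increment(self, amount):
        self.value += amount
        return self.value

# Server side: publish the object's interface on a network endpoint
# (port 0 lets the OS pick a free port).
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_instance(Counter())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a proxy forwards method calls over the wire, so the
# caller uses the remote object as if it were local.
remote = ServerProxy("http://localhost:%d" % port)
print(remote.increment(5))  # → 5, executed on the server
print(remote.increment(3))  # → 8, state lives with the remote object
```

Note that the state (`value`) lives with the server-side object; the client holds only a proxy, which is exactly the abstraction an object bus generalizes across languages and platforms.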
Distributed object computing is expected to be a key part of tomorrow's information systems.
Distributed object technology has been tied to standards from an early stage. Since 1989, the Object Management Group (OMG), with over 500 member companies, has been specifying the architecture for an open software bus on which object components written by different vendors can interoperate across networks and operating systems. The OMG's object bus is well on its way to becoming a universal piece of client-server middleware.
Currently, several DOC standards compete, including the OMG's CORBA, OpenDoc, and Microsoft's ActiveX/DCOM. Although DOC technology offers unprecedented computing power, few organizations have been able to harness it as yet.
The main reasons commonly cited for the slow adoption of DOC include closed legacy architectures, incompatible protocols, inadequate network bandwidth, and security issues.