What is Autonomic Computing?

  • Difficulty Level : Basic
  • Last Updated : 02 Sep, 2020

Autonomic computing is a visionary style of computing initiated by IBM. It is designed to make adaptive decisions guided by high-level policies, and it continually improves itself through optimization and adaptation. 


It was proposed by Jiming Liu in 2001. It uses artificial systems that solve complex problems by imitating natural self-regulating systems. On October 15, 2001, Paul Horn, the head of IBM Research, addressed an annual meeting. He presented a solution to growing complexity and argued that the answer lay in building computer systems that could regulate themselves in the way our nervous system regulates and protects our bodies. An example of such nature-inspired computation is ant colony optimization. 
Ant colony optimization is a population-based metaheuristic for finding approximate solutions to complex optimization problems. 
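As a concrete illustration of that population-based idea, here is a minimal ant colony optimization sketch in Python (the distance matrix and parameter choices are hypothetical, not from any particular library): a colony of ants repeatedly builds tours over a small distance matrix, pheromone evaporates each iteration, and the edges of shorter tours are reinforced.

```python
import random

random.seed(0)  # deterministic run for illustration

# Hypothetical distance matrix for a 4-city tour problem.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)
pheromone = [[1.0] * N for _ in range(N)]

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    """One ant builds a tour, preferring short, pheromone-rich edges."""
    tour, unvisited = [0], list(range(1, N))
    while unvisited:
        cur = tour[-1]
        weights = [pheromone[cur][j] / DIST[cur][j] for j in unvisited]
        nxt = random.choices(unvisited, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

best = None
for _ in range(50):                              # iterations
    tours = [build_tour() for _ in range(10)]    # a colony of 10 ants
    for row in pheromone:                        # evaporation
        for j in range(N):
            row[j] *= 0.9
    for t in tours:                              # shorter tours deposit more pheromone
        deposit = 1.0 / tour_length(t)
        for i in range(N):
            a, b = t[i], t[(i + 1) % N]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    cand = min(tours, key=tour_length)
    if best is None or tour_length(cand) < tour_length(best):
        best = cand

print(best, tour_length(best))
```

No individual ant sees the whole problem; the approximate solution emerges from the colony's shared pheromone state, which is the kind of decentralized self-regulation autonomic computing takes as its model.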

Need Of Autonomic Computing

As the demand for computers grows, computer-related problems grow with it, and they are becoming more and more complex. The complexity has risen so far that demand for skilled workers to manage these systems has spiked. This has fostered the need for autonomic computers that can carry out computing operations without manual intervention. 

Areas Of Autonomic Computing

There are four areas of Autonomic Computing as defined by IBM. These are as follows:

  1. Self-Configuration: The system must be able to configure itself automatically according to the changes in its environment.
  2. Self-Healing: IBM mentions that an autonomic system must be able to repair itself after errors and to route functions away from trouble spots whenever they are encountered. 
  3. Self-Optimization: According to IBM, an autonomic system must perform in an optimized manner and ensure that it follows efficient algorithms for all computing operations. 
  4. Self-Protection: IBM states that an autonomic system must be able to detect, identify, and protect against security and system attacks so that the system's security and integrity remain intact.
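
The Self-Healing area above can be sketched as a small supervisor loop (a hypothetical illustration; the class and method names are assumptions, not an IBM API): the system detects failed components and repairs them without operator intervention.

```python
# Hypothetical self-healing sketch: a supervisor watches managed
# components and restarts any that fail a health check.
class Component:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health_check(self):
        return self.healthy

    def restart(self):
        self.healthy = True

def heal(components):
    """One pass of a self-healing loop: detect failures and repair them."""
    repaired = []
    for c in components:
        if not c.health_check():
            c.restart()              # recover without manual intervention
            repaired.append(c.name)
    return repaired

pool = [Component("db"), Component("cache"), Component("api")]
pool[1].healthy = False              # simulate a crash
print(heal(pool))                    # -> ['cache']
```

A real system would also route requests away from a component while it is being repaired, as the Self-Healing definition above describes.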


Characteristics Of Autonomic Computing

  1. The autonomic system knows itself. It knows its components, specifications, capacity, and real-time status, and it knows which resources it owns, borrows, or shares.
  2. It can reconfigure itself and rerun its setup automatically as and when required.
  3. It can optimize itself by fine-tuning its workflows.
  4. It can heal itself, that is, recover from failures.
  5. It can protect itself by detecting and identifying attacks made on it.
  6. It is open. It must not be a proprietary solution and must implement open standards.
  7. It can hide. It conceals its complexity from users, which allows resources to be optimized transparently.
  8. According to IBM, an autonomic system must be able to anticipate the demand on its resources while keeping this information transparent to users.

Autonomic Computing (AC) Architecture 

Autonomic computing is needed to overcome the growing complexity of computing systems, a problem that threatens to prevent the further growth of those systems. Several predictions suggest the number of devices growing by around 38% per annum, with a corresponding increase in complexity. Autonomic computing is needed in distributed computing in particular, because managing complex computer networks is a limiting factor in the future development of distributed computing systems. 
Mobile computing has added further complexity to employee management systems, as employees need to access their company's data even when they are away from the office. All of these sources of complexity create a need for autonomic computing, which is preferable to manual administration that is error-prone and time-consuming. An autonomic system deploys high-level policies to make decisions. It is based on an architecture called MAPE, which stands for monitor, analyze, plan, and execute. The architecture revolves around the idea of reducing management costs. According to various vendors, the AC architecture comprises attributes that enable self-management by involving control loops. 

  • Control loops: A resource provider supplies control loops, which are embedded in the runtime environment. Each loop is configured through a manageability interface that is provided for every resource, e.g. a hard drive.
  • Managed Elements: The managed element is a component of the controlled system. It can be hardware as well as a software resource. Sensors and effectors are used to control the managed element.
  • Sensors: These gather information about the state, and changes in the state, of elements of the autonomic system.
  • Effectors: These are commands or application programming interfaces (API) that are used to change the states of an element.
  • Autonomic Manager: This component ensures that the control loops are implemented. It divides each loop into four parts: monitor, analyze, plan, and execute.
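
The monitor-analyze-plan-execute cycle run by the autonomic manager can be sketched as follows (a minimal illustration with assumed names and a toy scale-out policy, not IBM's actual interfaces): a sensor exposes the managed element's state, and an effector changes it.

```python
# Hypothetical MAPE control-loop sketch over one managed element.
class ManagedElement:
    def __init__(self):
        self.cpu_load = 0.95      # state visible through the sensor
        self.replicas = 1         # state changeable through the effector

def monitor(elem):
    """Monitor: read the element's state via its sensor."""
    return {"cpu_load": elem.cpu_load, "replicas": elem.replicas}

def analyze(reading):
    """Analyze: compare the reading against a high-level policy."""
    return reading["cpu_load"] > 0.8

def plan(reading):
    """Plan: decide on a corrective action (here: add a replica)."""
    return {"replicas": reading["replicas"] + 1}

def execute(elem, action):
    """Execute: apply the action via the effector."""
    elem.replicas = action["replicas"]
    elem.cpu_load /= 2            # toy assumption: load halves per scale-out

elem = ManagedElement()
for _ in range(3):                # three turns of the control loop
    reading = monitor(elem)
    if analyze(reading):
        execute(elem, plan(reading))

print(elem.replicas, elem.cpu_load)
```

After the first turn the load drops below the policy threshold, so the later turns take no action; the loop intervenes only while the policy is violated, which is the management-cost reduction the architecture aims for.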

Autonomic computing must involve the following three properties: 

  1. Automatic: It must be able to execute its operations without human intervention.
  2. Adaptive: Autonomic computers must be able to make changes according to their environment and other unforeseen conditions such as security attacks and system breakdowns.
  3. Aware: It must also be aware of its internal states and processes, which is what allows the previous two properties to be realized.


Advantages Of Autonomic Computing

  1. It is open, implementing open standards rather than proprietary ones.
  2. It is an evolutionary technology that adapts itself to change.
  3. It is optimized, giving better efficiency and performance and thereby reducing execution time.
  4. It is very secure and can counter system and security attacks automatically.
  5. It has backup mechanisms that allow recovery from system failures and crashes.
  6. It reduces the total cost of ownership, as it is less prone to failure and can maintain itself.
  7. It can set itself up, reducing the time taken by manual setup.


Disadvantages Of Autonomic Computing

  1. There will always be a possibility of the system crashing or malfunctioning.
  2. It could increase unemployment, since fewer people would be needed once it is implemented.
  3. Affordability would be an issue, because such systems would be expensive.
  4. It would need highly skilled people to develop and manage such systems, increasing the cost to the companies that employ them.
  5. It depends on internet speed; its performance decreases as internet speed decreases.
  6. It would not be usable in rural areas that lack a stable internet connection.