Autonomic computing

From Wikipedia, the free encyclopedia

Autonomic computing (AC) refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding their intrinsic complexity from operators and users. The concept was initiated by IBM in 2001 with the aim of developing computer systems capable of self-management, overcoming the rapidly growing complexity of computing systems management, and reducing the barrier that this complexity poses to further growth.[1]

Description

An AC system is designed to make adaptive decisions using high-level policies: it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (AC) interacting with each other. An autonomic component can be modeled in terms of two main control schemes (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness. This architecture is sometimes referred to as Monitor-Analyze-Plan-Execute (MAPE).
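For illustration only, the following Python sketch wires a single monitor-analyze-plan-execute cycle around a hypothetical managed resource; the ManagedResource class, the utilisation thresholds, and the scaling actions are illustrative assumptions rather than part of any particular autonomic framework.

  import random
  import time

  class ManagedResource:
      """Hypothetical managed element exposing a sensor and an effector."""
      def __init__(self):
          self.capacity = 4                      # e.g. number of worker threads

      def read_load(self):                       # sensor (self-monitoring)
          return random.uniform(0.0, 1.0)        # stand-in for a real measurement

      def set_capacity(self, n):                 # effector (self-adjustment)
          self.capacity = n

  class AutonomicManager:
      """Single MAPE cycle around one resource, guided by a high-level policy."""
      def __init__(self, resource, policy):
          self.resource = resource
          self.policy = policy                   # knowledge: target utilisation band
          self.history = []                      # knowledge: recent observations

      def monitor(self):
          load = self.resource.read_load()
          self.history.append(load)
          return load

      def analyze(self, load):
          if load > self.policy["high"]:
              return "scale_up"
          if load < self.policy["low"]:
              return "scale_down"
          return None

      def plan(self, symptom):
          if symptom == "scale_up":
              return self.resource.capacity + 1
          if symptom == "scale_down":
              return max(1, self.resource.capacity - 1)
          return self.resource.capacity

      def execute(self, capacity):
          self.resource.set_capacity(capacity)

      def step(self):                            # one pass of the MAPE loop
          self.execute(self.plan(self.analyze(self.monitor())))

  manager = AutonomicManager(ManagedResource(), policy={"low": 0.3, "high": 0.8})
  for _ in range(5):
      manager.step()
      time.sleep(0.1)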

Driven by this vision, a variety of architectural frameworks based on "self-regulating" autonomic components has recently been proposed. A very similar trend has characterized significant research in the area of multi-agent systems. However, most of these approaches are typically conceived with centralized or cluster-based server architectures in mind and mostly address the need to reduce management costs rather than the need to enable complex software systems or provide innovative services. Some autonomic systems involve mobile agents interacting via loosely coupled communication mechanisms.[2]

Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours to solve difficult computational problems. For example, ant colony optimization could be studied in this paradigm.[3]

Problem of growing complexity

Forecasts suggest that the number of computing devices in use will grow at 38% per year[4] and that the average complexity of each device is increasing.[4] Currently, this volume and complexity are managed by highly skilled humans, but the demand for skilled IT personnel already outstrips supply, with labour costs exceeding equipment costs by a ratio of up to 18:1.[5] Computing systems have brought great benefits of speed and automation, but there is now an overwhelming economic need to automate their maintenance.

In a 2003 IEEE Computer article, Kephart and Chess[1] warn that the dream of interconnectivity of computing systems and devices could become the "nightmare of pervasive computing" in which architects are unable to anticipate, design and maintain the complexity of interactions. They state the essence of autonomic computing is system self-management, freeing administrators from low-level task management while delivering better system behavior.

A general problem of modern distributed computing systems is that their complexity, and in particular the complexity of their management, is becoming a significant limiting factor in their further development. Large companies and institutions are employing large-scale computer networks for communication and computation. The distributed applications running on these computer networks are diverse and deal with many tasks, ranging from internal control processes to presenting web content to customer support.

Additionally, mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in their office. They do so by using laptops, personal digital assistants, or mobile phones with diverse forms of wireless technologies to access their companies' data.

This creates an enormous complexity in the overall computer network, which is hard for human operators to control manually. Manual control is time-consuming, expensive, and error-prone, and the manual effort needed to control a growing networked computer system tends to increase very quickly.

An estimated 80% of such problems in infrastructure occur at the client-specific application and database layer,[citation needed] while most 'autonomic' service providers[who?] guarantee only up to the basic plumbing layer (power, hardware, operating system, network and basic database parameters).

Characteristics of autonomic systems

A possible solution is to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims to provide the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body,[6] which controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention.

In a self-managing autonomic system, the human operator takes on a new role: instead of controlling the system directly, they define general policies and rules that guide the self-management process. For this process, IBM defined the following four types of property, referred to as self-star (also called self-*, self-x, or auto-*) properties:[7]

  1. Self-configuration: Automatic configuration of components;
  2. Self-healing: Automatic discovery and correction of faults;[8]
  3. Self-optimization: Automatic monitoring and control of resources to ensure optimal functioning with respect to the defined requirements;
  4. Self-protection: Proactive identification and protection from arbitrary attacks.

Others, such as Poslad[7] and Nami and Bertels,[9] have expanded the set of self-star properties as follows:

  1. Self-regulation: A system that operates to maintain some parameter, e.g., quality of service (QoS), within a preset range without external control;
  2. Self-learning: Systems use machine learning techniques, such as unsupervised learning, which do not require external control;
  3. Self-awareness (also called self-inspection and self-decision): The system must know itself: the extent of its own resources and the resources it links to. It must be aware of its internal components and external links in order to control and manage them;
  4. Self-organization: System structure driven by physics-type models without explicit pressure or involvement from outside the system;
  5. Self-creation (also called Self-assembly, Self-replication): System driven by ecological and social type models without explicit pressure or involvement from outside the system. A system’s members are self-motivated and self-driven, generating complexity and order in a creative response to a continuously changing strategic demand;
  6. Self-management (also called self-governance): A system that manages itself without external intervention. What is being managed can vary depending on the system and application. Self-management also refers to a set of self-star processes, such as autonomic computing, rather than a single self-star process;
  7. Self-description (also called self-explanation or self-representation): A system explains itself. It is capable of being understood (by humans) without further explanation.

IBM has set forth eight conditions that define an autonomic system:[10]

The system must

  1. know itself in terms of what resources it has access to, what its capabilities and limitations are and how and why it is connected to other systems;
  2. be able to automatically configure and reconfigure itself depending on the changing computing environment;
  3. be able to optimize its performance to ensure the most efficient computing process;
  4. be able to work around encountered problems by either repairing itself or routing functions away from the trouble;
  5. detect, identify and protect itself against various types of attacks to maintain overall system security and integrity;
  6. adapt to its environment as it changes, interacting with neighboring systems and establishing communication protocols;
  7. rely on open standards and cannot exist in a proprietary environment;
  8. anticipate the demand on its resources while staying transparent to users.

Even though the purpose and thus the behaviour of autonomic systems vary from system to system, every autonomic system should be able to exhibit a minimum set of properties to achieve its purpose:

  1. Automatic: This essentially means being able to control its own internal functions and operations. As such, an autonomic system must be self-contained and able to start up and operate without any manual intervention or external help. The knowledge required to bootstrap the system (know-how) must be inherent to the system.
  2. Adaptive: An autonomic system must be able to change its operation (i.e., its configuration, state and functions). This will allow the system to cope with temporal and spatial changes in its operational context either long term (environment customisation/optimisation) or short term (exceptional conditions such as malicious attacks, faults, etc.).
  3. Aware: An autonomic system must be able to monitor (sense) its operational context as well as its internal state in order to be able to assess if its current operation serves its purpose. Awareness will control adaptation of its operational behaviour in response to context or state changes.

Evolutionary levels

IBM defined five evolutionary levels, or the autonomic deployment model, for the deployment of autonomic systems:

  • Level 1 is the basic level that presents the current situation where systems are essentially managed manually.
  • Levels 2–4 introduce increasingly automated management functions;
  • Level 5 represents the ultimate goal of autonomic, self-managing systems.[11]

Design patterns

The design complexity of autonomic systems can be reduced by utilizing design patterns such as the model–view–controller (MVC) pattern, which improves separation of concerns by encapsulating functional concerns.[12]
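A minimal Python sketch of this idea, assuming a hypothetical queue-based resource: the model holds the functional state, the view exposes a read-only snapshot for the management logic, and the controller encapsulates the adaptation policy, keeping the self-management concern separate from the functional one. Class names and the scaling rule are illustrative and not the scheme from the cited work.

  class ResourceModel:
      """Model: functional state of the managed resource."""
      def __init__(self):
          self.queue_length = 0
          self.workers = 2

  class MonitoringView:
      """View: read-only presentation of the model for the management logic."""
      def __init__(self, model):
          self.model = model

      def snapshot(self):
          return {"queue_length": self.model.queue_length,
                  "workers": self.model.workers}

  class SelfManagementController:
      """Controller: holds the adaptation policy, separate from functional code."""
      def __init__(self, model, view, max_queue_per_worker=10):
          self.model = model
          self.view = view
          self.max_queue_per_worker = max_queue_per_worker

      def adapt(self):
          state = self.view.snapshot()
          if state["queue_length"] > self.max_queue_per_worker * state["workers"]:
              self.model.workers += 1             # adaptation decided only here

  model = ResourceModel()
  controller = SelfManagementController(model, MonitoringView(model))
  model.queue_length = 35
  controller.adapt()                              # adds a worker: 35 > 10 * 2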

Control loops

A basic concept applied in autonomic systems is the closed control loop, a well-known concept from process control theory. Essentially, a closed control loop in a self-managing system monitors some resource (a software or hardware component) and autonomously tries to keep its parameters within a desired range.

According to IBM, hundreds or even thousands of these control loops are expected to work in a large-scale self-managing computer system.
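The sketch below shows one such loop in its most reduced form (Python, with placeholder sensor and effector functions and an illustrative utilisation band): each iteration reads a metric, compares it with the desired range, and acts only when the value drifts outside it.

  import random
  import time

  DESIRED_RANGE = (60.0, 75.0)        # illustrative target utilisation band (percent)
  workers = 4

  def read_utilisation():
      """Placeholder sensor; a real loop would query the monitored component."""
      return random.uniform(40.0, 95.0)

  def set_workers(n):
      """Placeholder effector; a real loop would reconfigure the resource."""
      return max(1, n)

  for _ in range(10):                             # one pass of the closed loop
      utilisation = read_utilisation()            # monitor the resource
      low, high = DESIRED_RANGE
      if utilisation > high:                      # above the desired range
          workers = set_workers(workers + 1)      # act to bring it back down
      elif utilisation < low:                     # below the desired range
          workers = set_workers(workers - 1)
      time.sleep(0.1)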

Conceptual model

(Figure: conceptual model of an autonomic system)

A fundamental building block of an autonomic system is the sensing capability (Sensors Si), which enables the system to observe its external operational context. Inherent to an autonomic system is the knowledge of the Purpose (intention) and the Know-how to operate itself (e.g., bootstrapping, configuration knowledge, interpretation of sensory data, etc.) without external intervention. The actual operation of the autonomic system is dictated by the Logic, which is responsible for making the right decisions to serve its Purpose and is influenced by the observation of the operational context (based on the sensor input).

This model highlights the fact that the operation of an autonomic system is purpose-driven. This includes its mission (e.g., the service it is supposed to offer), the policies (e.g., that define the basic behaviour), and the "survival instinct". If seen as a control system, this would be encoded as a feedback error function or, in a heuristically assisted system, as an algorithm combined with a set of heuristics bounding its operational space.
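Read as a control system, the purpose can be reduced to a setpoint and the bounds of the operational space to simple heuristics. A minimal sketch, assuming a proportional correction and purely illustrative numbers (the 200 ms latency target, the gain, and the capacity bounds are not from the source):

  def feedback_error(setpoint, measured):
      """Feedback error: distance between the purpose (setpoint) and observed behaviour."""
      return setpoint - measured

  def next_capacity(current, setpoint, measured, gain=0.05,
                    lower_bound=1, upper_bound=64):
      """Proportional correction, clamped by heuristics bounding the operational space."""
      correction = gain * feedback_error(setpoint, measured)
      proposed = current - correction       # latency above target -> add capacity
      return int(min(upper_bound, max(lower_bound, proposed)))

  # Purpose expressed as "keep response time near 200 ms" (hypothetical target).
  capacity = next_capacity(current=8, setpoint=200.0, measured=350.0)
  # error = -150 ms, correction = -7.5, so capacity grows from 8 to 15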

References

  1. ^ Kephart, J.O.; Chess, D.M. (2003), "The vision of autonomic computing", Computer, 36: 41–52, CiteSeerX 10.1.1.70.613, doi:10.1109/MC.2003.1160055
  2. ^ Padovitz, Amir; Arkady Zaslavsky; Seng W. Loke (2003). Awareness and Agility for Autonomic Distributed Systems: Platform-Independent Publish-Subscribe Event-Based Communication for Mobile Agents. Proceedings of the 14th International Workshop on Database and Expert Systems Applications (DEXA'03). pp. 669–673. doi:10.1109/DEXA.2003.1232098. ISBN 978-0-7695-1993-7. S2CID 15846232.
  3. ^ Jin, Xiaolong; Liu, Jiming (2004), "From Individual Based Modeling to Autonomy Oriented Computation", Agents and Computational Autonomy, Lecture Notes in Computer Science, 2969, p. 151, doi:10.1007/978-3-540-25928-2_13, ISBN 978-3-540-22477-8
  4. ^ Horn. "Autonomic Computing: IBM's Perspective on the State of Information Technology" (PDF). Archived from the original (PDF) on September 16, 2011.
  5. ^ ‘Trends in technology’, survey, Berkeley University of California, USA, March 2002
  6. ^ http://whatis.techtarget.com/definition/autonomic-computing
  7. ^ Poslad, Stefan (2009). Autonomous systems and Artificial Life, In: Ubiquitous Computing Smart Devices, Smart Environments and Smart Interaction. Wiley. pp. 317–341. ISBN 978-0-470-03560-3. Archived from the original on 2014-12-10. Retrieved 2015-03-17.
  8. ^ S-Cube Network. "Self-Healing System".
  9. ^ Nami, M.R.; Bertels, K. (2007). A survey of autonomic computing systems. 3rd International Conference on Autonomic and Autonomous Systems. pp. 26–30.
  10. ^ "What is Autonomic Computing? Webopedia Definition".
  11. ^ "IBM Unveils New Autonomic Computing Deployment Model". 2002-10-21.
  12. ^ Curry, Edward; Grace, Paul (2008), "Flexible Self-Management Using the Model–View–Controller Pattern", IEEE Software, 25 (3): 84, doi:10.1109/MS.2008.60, S2CID 583784
