Next: Conclusions and future work Up: New Approach to Automated Previous: Determining Dependencies with Neural

Architecture for Automated Model Generation

  The architecture explained in this section covers all steps of phase 3 of the modeling process, including data collection, the evaluation in neural networks, and a well-defined interface to the management tools of phase 4. It also contains means to simplify the installation of the agents in the managed environment (corresponding to phase 2); however, the focus of this section lies on the operational aspects.

Apart from some low-level probes, the whole architecture is based on agents and agent systems; its principles, however, are not restricted to any particular implementation of a management agent system. An agent-based architecture was chosen because most agent systems provide the following features in an easy-to-use way (see also [ppc99]):

However, it is not assumed that agent systems have to be hosted on all machines. As explained in the following, the architecture contains means to cleanly embrace other sources of activity measurement via proprietary or standardized management protocols. Along with the architecture, supplementary information about the prototypical implementation developed in the project is presented. The agent platform chosen is the Mobile Agent Systems Architecture (MASA, [ghr99]), developed at our research group for general management purposes. The platform and the agents are written in Java, making them largely independent of the underlying system. The inter-agent communication is based on the Common Object Request Broker Architecture 2.0 (CORBA, [corba20]).

Our agent architecture is structured into three layers (as depicted by figure [*]):

The lowest layer hosts all means of data collection. It contains implementations to measure data via standardized or proprietary management interfaces. It further provides a homogeneous interface for objects' activities to the next layer, thus making their heterogeneous nature fully transparent to the rest of the architecture.
The middle layer filters and pre-processes the activity data. It also channels the data flows so that, through load balancing and caching mechanisms, the impact on the performance of the managed systems is kept at an acceptable level.
The top layer hosts the components for model generation, including the neural networks.
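The division of labor between the three layers can be illustrated with a small sketch. All class and method names here are our own illustrations, not interfaces of the MASA prototype; the sketch only mirrors the layering described above.

```python
# Illustrative sketch of the three layers (all names are assumptions, not MASA APIs).

class CollectionLayer:
    """Layer 1: hides heterogeneous probes behind one homogeneous interface."""
    def __init__(self, probes):
        self.probes = probes

    def activities(self):
        # Every probe yields (object_id, timestamp, value) tuples, no matter
        # whether it reads SNMP variables, the proc filesystem, or anything else.
        for probe in self.probes:
            yield from probe.measure()

class MediationLayer:
    """Layer 2: filters and caches data to limit the load on managed systems."""
    def __init__(self, source, keep):
        self.source = source
        self.keep = keep          # filter predicate applied during pre-processing
        self._cache = None

    def activities(self):
        if self._cache is None:   # naive cache: measure only once
            self._cache = [a for a in self.source.activities() if self.keep(a)]
        return self._cache

class ModelingLayer:
    """Layer 3: hands the pre-processed data to the model-generation components."""
    def __init__(self, source):
        self.source = source

    def training_data(self):
        return list(self.source.activities())
```

In the actual architecture each layer is populated by several cooperating agents rather than single objects; the point of the sketch is only that each layer talks exclusively to the one below it through a uniform interface.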

Figure: Deployment of probes and agents

The basic types for the means of collection in layer 1 are:

management agents (in the sense used in management architectures like OSI or Internet management) already in place or to be installed,
proprietary measurement tools (delivered with applications, etc.),
special probes, developed and deployed for the purpose of gathering information for this modeling,
or agents of the agent architecture directly capable of measuring through interfaces of the agent system.

As representatives of the first type, our prototype supports access to SNMP agents, currently used to meter the CPU activity of hosts and the amount of network traffic on IP interfaces. We further implemented probes of the third type, metering the CPU utilization of applications by reading from the proc filesystem (as provided by Sun Solaris, Linux, and others). The means of collection should be installed close to the monitored objects to avoid unnecessary traffic. On the other hand, not all end systems are capable of hosting an agent system, or they may not be allowed to for security or other reasons. In these cases remote monitoring (as in our case of data access via SNMP) is the preferred choice.
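For the third type of probe, the following sketch shows how an application's cumulative CPU time might be derived from the proc filesystem. It is a Linux-only sketch: the field positions follow the Linux proc(5) stat layout, while the Solaris proc format is binary and differs. The function names are our own illustrations, not taken from the prototype.

```python
import os

def parse_stat_ticks(stat_line):
    """Extract utime + stime (in clock ticks) from a Linux /proc/<pid>/stat line."""
    # The process name may contain spaces and parentheses, so split off the
    # "(comm)" field at its *last* closing parenthesis before splitting fields.
    rest = stat_line.rsplit(")", 1)[1].split()
    utime, stime = int(rest[11]), int(rest[12])   # fields 14 and 15 of proc(5)
    return utime + stime

def process_cpu_seconds(pid):
    """Cumulative user + system CPU time of a process, in seconds (Linux only)."""
    with open(f"/proc/{pid}/stat") as f:
        ticks = parse_stat_ticks(f.read())
    return ticks / os.sysconf("SC_CLK_TCK")
```

A probe of this kind would sample `process_cpu_seconds` periodically and report the differences between consecutive samples as the application's activity.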

It makes no difference whether data is gathered (in step 3-i) to calculate a collective domain activity or for single objects directly represented in the generated model. Figure [*] shows the same flow of information for both cases: on the left-hand side a domain activity is calculated, while on the right-hand side the information goes directly to mediator agents.

The homogeneous interfaces to the upper layers are provided by so-called collector agents. If their agent system provides appropriate interfaces, they collect data directly from their host system; otherwise they send queries to externally implemented probes.
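The two collector variants can be sketched as follows; the class names and the `query` signature are hypothetical, chosen only to show that both variants present the same interface upward.

```python
# Sketch of the two collector variants (illustrative names, not MASA interfaces).

class Collector:
    """Homogeneous interface offered to the upper layers."""
    def query(self, object_id):
        raise NotImplementedError

class DirectCollector(Collector):
    """Measures through interfaces the agent system provides on its host."""
    def __init__(self, host_api):
        self.host_api = host_api

    def query(self, object_id):
        return self.host_api.activity(object_id)

class ProbeCollector(Collector):
    """Forwards queries to an externally implemented probe (e.g. an SNMP one)."""
    def __init__(self, probe):
        self.probe = probe

    def query(self, object_id):
        return self.probe.fetch(object_id)
```

From the perspective of the layers above, both variants are indistinguishable, which is what makes the heterogeneous data sources fully transparent.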

This interface is divided into two parts: one is mainly used by the configuration agents to initialize and configure the agents, while the second part serves data queries at run-time. The same interface is also used and provided by the mediator agents in the second layer. This allows them to be cascaded in larger scenarios or left out in small ones. These agents may further implement automated load balancing by traveling to hosts with unused resources or to places with higher available communication bandwidth. There are two ways in which these agents may collaborate: either they use the configuration part of another agent's interface to suggest the delegation of a task, such as applying filters to data, or they use the query interface, e.g., by rejecting queries to probes for which they were previously responsible. In the latter case the agent may specify another agent that is responsible from now on. For now, our prototypical implementation of the mediator agents concentrates on caching and simple delegation tasks; we do not yet make active use of mobility aspects and complex collaboration algorithms. Further tasks assigned to mediator agents are:
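The collaboration via the query interface can be sketched as follows: a mediator that has handed a probe over rejects queries for it and names its successor, and clients simply follow such redirects. All names here (`Redirect`, `MediatorAgent`, `robust_query`) are our own illustrations of the mechanism, not prototype APIs.

```python
# Sketch of mediator collaboration via the query interface (names are assumptions).

class Redirect(Exception):
    """Raised when a query is rejected; carries the agent now responsible."""
    def __init__(self, successor):
        self.successor = successor

class MediatorAgent:
    def __init__(self, sources):
        self.sources = dict(sources)   # probe id -> collector or lower mediator
        self.handovers = {}            # probe id -> successor mediator
        self.cache = {}

    # Configuration part: hand responsibility for a probe over to another agent.
    def delegate(self, probe_id, successor):
        self.handovers[probe_id] = successor
        self.sources.pop(probe_id, None)

    # Query part: answer from the cache or the source, or reject with a redirect.
    def query(self, probe_id):
        if probe_id in self.handovers:
            raise Redirect(self.handovers[probe_id])
        if probe_id not in self.cache:
            self.cache[probe_id] = self.sources[probe_id].query(probe_id)
        return self.cache[probe_id]

def robust_query(agent, probe_id):
    """Client-side helper that follows redirects to the responsible agent."""
    while True:
        try:
            return agent.query(probe_id)
        except Redirect as r:
            agent = r.successor
```

Because collectors, mediators, and domain agents all offer the same query interface, a redirect may point to any of them, which is what makes cascading transparent to the clients.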

Depending on the available resources, these tasks may either be combined in one agent for all data (or for single data streams, respectively) or be distributed over multiple cascaded agents.

The domain agents basically behave just like mediator agents. However, they implement special processing functions to combine various streams of input data into one single stream representing the collective activity of the domain (or distributed application). Their query interface also allows the underlying data of single objects to be retrieved, which reduces the communication bandwidth in cases where detailed models are constructed within the domain but the collective domain activity is needed for other models, too.

The last layer contains the process of model creation, distributed over two kinds of agents. The modeler agents organize the modeling: they query the other agents for pre-processed data, initiate the modeling process, and finally implement the dependency model interface used by the management tools. In our prototype they also contain an applet-based user interface that allows the modeling to be supervised and controlled. Modeler agents may collaborate by sharing already evaluated parts of the models. Thus, an enterprise-wide modeler agent might eventually only calculate the inter-domain dependencies and query the underlying structures from local modeler agents. The second type of agent on this layer are the neural agents, which implement the neural networks. It is possible to install a pool of these agents and use them from the modelers as required. However, to reduce overhead it is recommended to place neural agents close to the modelers or even on the same agent system.
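The interplay between a modeler and the neural agent pool can be sketched as follows. The class names and the `derive_dependencies` placeholder are our own assumptions; the actual dependency derivation with neural networks is described in the previous section.

```python
# Sketch of a modeler using a pool of neural agents (illustrative names only).

class NeuralAgent:
    def derive_dependencies(self, activity_data):
        # Stands in for training a neural network on the activity data and
        # reading the dependencies off its weights; here it returns no edges.
        return {name: [] for name in activity_data}

class NeuralAgentPool:
    """A pool of neural agents that modelers borrow from as required."""
    def __init__(self, agents):
        self._idle = list(agents)

    def acquire(self):
        return self._idle.pop()

    def release(self, agent):
        self._idle.append(agent)

class ModelerAgent:
    def __init__(self, data_sources, pool):
        self.data_sources = data_sources   # object name -> mediator/domain agent
        self.pool = pool

    def build_model(self):
        data = {name: src.query(name) for name, src in self.data_sources.items()}
        net = self.pool.acquire()
        try:
            return net.derive_dependencies(data)   # name -> list of dependencies
        finally:
            self.pool.release(net)
```

Placing the pooled neural agents on the same agent system as the modeler keeps the `activity_data` transfer local, which is the overhead argument made above.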

Figure [*] shows the agents' deployment in an IT environment with two domains. The goal is to construct a dependency model (depicted by the gray box) for the administrator of domain A, who is interested in the details of his own domain as well as in the connections to the central server and to the second domain B.

The data flows begin at the probes or collector agents, respectively. As an example, one agent in domain B (like the others depicted by a white square without an inner symbol, but marked with an asterisk `*') queries an SNMP management agent on the router and additionally collects data from an external probe on its own host. The agent on the other system in the domain directly accesses its host via management interfaces provided by the agent system. Both agents' data is then forwarded to the domain agent, which calculates the resulting domain activity by joining the time intervals and summing up the values in case of overlaps. On its interface towards the mediator it behaves just like any collector agent; therefore, the whole domain appears as just one object in the model. The mediator agent carries out any pre-processing of the data that has not already taken place and forwards it to the modeler agent, which generates the complete resulting models with the help of a neural agent.
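The interval arithmetic performed by the domain agent can be sketched as a simple sweep over the interval endpoints. The function name `domain_activity` and the `(start, end, value)` representation of an activity interval are our own assumptions about the data format, not the prototype's.

```python
def domain_activity(streams):
    """Combine per-object activity streams into one collective domain stream.

    Each stream is a list of (start, end, value) intervals.  The result is a
    list of non-overlapping (start, end, value) intervals covering the union
    of all inputs, where the values of overlapping intervals are summed.
    """
    # Turn every interval into two endpoint events: +value at start, -value at end.
    events = []
    for stream in streams:
        for start, end, value in stream:
            events.append((start, value))
            events.append((end, -value))
    events.sort()

    result = []
    level = 0.0    # current sum of all active intervals
    prev = None    # time of the previous endpoint
    for t, delta in events:
        if prev is not None and t > prev and level != 0:
            result.append((prev, t, level))
        level += delta
        prev = t
    return result
```

For example, an interval of activity 1.0 from 0 to 10 overlapping an interval of activity 2.0 from 5 to 15 yields the three segments (0, 5, 1.0), (5, 10, 3.0), and (10, 15, 2.0): within the overlap the values are summed, outside it the original values pass through.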

Copyright Munich Network Management Team