The Simple Times (tm) is an openly-available publication devoted to the promotion of the Simple Network Management Protocol. In each issue, The Simple Times presents technical articles and featured columns, along with a standards summary and a list of Internet resources. In addition, some issues contain summaries of recent publications and upcoming events.
The Simple Times is openly-available. You are free to copy, distribute, or cite its contents; however, any use must credit both the contributor and The Simple Times. (Note that any trademarks appearing herein are the property of their respective owners.) Further, this publication is distributed on an "as is" basis, without warranty. Neither the publisher nor any contributor shall have any liability to any person or entity with respect to any liability, loss, or damage caused or alleged to be caused, directly or indirectly, by the information contained in The Simple Times.
The Simple Times is available as an online journal in HTML, PDF and PostScript. New issues are announced via an electronic mailing list. For information on subscriptions, see the end of this issue.
In this article, we look into ways of making bulk transfers of MIB data between SNMP agents and managers more efficient. We consider a bulk transfer to be the transfer of several hundreds of kilobytes of MIB data in a single logical transaction. For bulk transfers, our objectives are:
This article is structured as follows. First we discuss what we consider the three main problems with bulk transfers: latency, network overhead and table retrieval. Next we discuss three different approaches to solve these problems. The first approach aims to be a small evolutionary change to the current SNMPv3 framework, requiring minimal changes to existing SNMP manager and agent implementations. As such, this approach is envisaged to be useful in the short term. The second approach uses a mixture of SNMP and other protocols. The third approach discusses alternative protocols and encodings, abandoning the SNMP protocol and associated BER encoding altogether. This approach will therefore take longer to design, implement and deploy, and is envisaged to be useful in the longer term. This approach also serves as food for thought, and is intended to solicit discussion on future Internet management frameworks and protocols.
With the get-next operator, the retrieval of large tables with many rows requires at least one get-next operation per table row. If a table row does not fit into a single message (due to message size constraints), even more operations per row are needed.
RFC 1187 describes an algorithm that speeds up the retrieval of an entire table by using multiple threads in parallel where each thread retrieves only a portion of the table. To make this work, one needs a manager which supports multiple threads and which has knowledge about the distribution of instance identifiers in the table. Note that the algorithm does not reduce the total number of request/response PDU exchanges. Instead, it is more efficient in terms of latency because several threads gather data simultaneously. The price for achieving reduced latency with multiple threads is bursty SNMP traffic, which can cause overload problems on the agent side.
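The sketch below illustrates the partitioning idea behind the RFC 1187 algorithm under stated assumptions: the manager somehow knows (or estimates) the spread of instance identifiers, splits it into ranges, and walks each range in its own thread. The fetchRange() helper is hypothetical; a real implementation would issue bounded get-next requests inside it. The total number of PDU exchanges is unchanged, only their overlap in time.

// Hypothetical sketch of RFC 1187-style parallel retrieval: the table's
// instance-identifier space is split into ranges and each range is walked
// by a separate thread. fetchRange() is a placeholder, not part of any
// real SNMP library.
#include <algorithm>
#include <thread>
#include <vector>

// Placeholder: a real implementation would issue bounded get-next requests
// whose instance identifiers fall into [firstIndex, lastIndex).
void fetchRange(int firstIndex, int lastIndex)
{
    (void) firstIndex;
    (void) lastIndex;
}

void parallelTableRetrieval(int tableSize, int numThreads)
{
    std::vector<std::thread> workers;
    int chunk = (tableSize + numThreads - 1) / numThreads;
    for (int t = 0; t < numThreads; ++t) {
        int first = t * chunk;
        int last  = std::min(tableSize, first + chunk);
        workers.emplace_back(fetchRange, first, last);  // ranges walked in parallel
    }
    for (auto &w : workers)
        w.join();   // total PDU count is unchanged; only latency improves
}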
If the algorithm described in RFC 1187 is not used, then each get-next operation must be completely finished before the next one can start. Things get worse when packets are dropped within the network, since retransmission timers have to expire and retransmissions must succeed before the retrieval process can continue.
The situation improves with the introduction of the get-bulk operator. However, the response to a single get-bulk operation still has to fit into a single UDP packet. In theory, UDP can handle packets of nearly 64 KBytes. In practice, the maximum packet size will be much smaller. For the hundreds of kilobytes of MIB data we are considering here, even the use of get-bulk results in a large overall delay.
Each request/response exchange (be it get-next or get-bulk) involves at least a network round-trip delay, possibly time-out and retransmission delays, and probably also other protocol stack overhead delays (e.g., marshalling and unmarshalling of data, context switching). In summary, the overall latency of a bulk transfer is high because of the large number of PDU exchanges involved and their synchronous nature.
BER encoding is well known to be fairly inefficient in terms of network overhead. Mitra [1] and Neufeld and Vuong [2] describe this issue in detail. At the time BER was chosen for SNMP, network overhead was not considered to be a main issue; the reason BER was selected was because it was readily available and simple to implement. Since alternative encoding rules exist nowadays, it is feasible to reduce network overhead by selecting another set of encoding rules. These new rules, however, should not increase latency too much due to additional encoding/decoding times.
If we look at the OIDs of the objects involved in a bulk transfer, we observe a high degree of redundancy. For the objects in a table, we see multiple occurrences of identical portions of OIDs: all OID prefixes up to the column number are identical, as are the instance identifier postfixes of all entries of a single table row. Because of this, redundant information is transferred, resulting in a higher network overhead than strictly needed.
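The following standalone illustration (plain C++, not tied to any SNMP library; the ifDescr instance OIDs are only an example) measures how many leading sub-identifiers each OID in a table column shares with its predecessor. A delta encoding that transmits only the non-shared suffix would remove most of this redundancy.

// Illustration only: count the sub-identifiers each OID shares with the
// previous one in the same column.
#include <cstdio>
#include <vector>

int sharedPrefix(const std::vector<int> &a, const std::vector<int> &b)
{
    size_t i = 0;
    while (i < a.size() && i < b.size() && a[i] == b[i]) ++i;
    return static_cast<int>(i);
}

int main()
{
    std::vector<std::vector<int>> oids = {
        {1,3,6,1,2,1,2,2,1,2,1},   // ifDescr.1
        {1,3,6,1,2,1,2,2,1,2,2},   // ifDescr.2
        {1,3,6,1,2,1,2,2,1,2,3},   // ifDescr.3
    };
    for (size_t i = 1; i < oids.size(); ++i)
        std::printf("OID %zu repeats %d of %zu sub-identifiers\n",
                    i + 1, sharedPrefix(oids[i - 1], oids[i]), oids[i].size());
    return 0;
}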
The get-bulk operator also adds to the network overhead, since the manager, which does not know the size of the table to be retrieved, has to guess a value for the max-repetitions parameter. Using small values for max-repetitions may result in too many PDU exchanges. Using large values, however, may result in an ``overshoot'' effect: the agent returns data that does not belong to the table the manager is interested in. This data will be sent over the network back to the manager, just to be discarded.
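As a concrete sketch of the overshoot effect, the fragment below walks the ifTable with SNMP++ using a large max-repetitions value and simply discards whatever falls outside the table prefix. It is a sketch only: the Snmp::get_bulk() signature, Pdu::get_vb_count() and Vb::get_printable_oid() are assumed from common SNMP++ releases, and the ifTable OID is used purely as an example.

// Detecting get-bulk overshoot with SNMP++ (hedged sketch; adjust the
// assumed calls to your SNMP++ release).
#include "snmp_pp.h"
#include <cstring>
#include <iostream>

void walk_if_table(Snmp &snmp, CTarget &target)
{
    const char *prefix = "1.3.6.1.2.1.2.2.";     // ifTable prefix (example)
    Vb vb(Oid("1.3.6.1.2.1.2.2"));               // start just before the table
    Pdu pdu;
    pdu += vb;

    // One large max-repetitions value: anything returned past the end of
    // the ifTable is "overshoot" and is discarded by the manager.
    if (snmp.get_bulk(pdu, target, 0 /*non-repeaters*/, 50 /*max-repetitions*/)
        == SNMP_CLASS_SUCCESS) {
        for (int i = 0; i < pdu.get_vb_count(); ++i) {
            pdu.get_vb(vb, i);
            if (std::strncmp(vb.get_printable_oid(), prefix,
                             std::strlen(prefix)) != 0) {
                std::cout << "overshoot: " << (pdu.get_vb_count() - i)
                          << " varbinds wasted\n";
                break;   // the rest of the PDU lies outside the table
            }
            std::cout << vb.get_printable_oid() << " = "
                      << vb.get_printable_value() << "\n";
        }
    }
}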
Retrieving table objects is more complex than retrieving other objects. This is due to the fact that the SNMP frameworks have no notion of tables, but only of conceptual tables. The difference between these two concepts is important for tables that have rows in which some columnar objects do not exist; in other words, for tables that allow their row entries to have ``holes'' in them. Consider the case where a manager wants to retrieve a table by performing repeated get-next operations. In most cases the manager uses a single get-next operation for each row of the table. The get-next PDU contains a list of OIDs, one for each column of the row; the value of these OIDs is usually taken from the response of the previous get-next operation. If there is a hole in the table, the get-next operation returns the elements of the next row, except for the column in which there is a hole. For this column the get-next operation returns the next available object in the MIB tree, which is the columnar object for the next table row that does have a value in that column (we will not discuss what happens if none of the remaining table rows has a value in that column). As a consequence, the manager is faced with a set of columnar objects that do not belong to the same row anymore; it has the cumbersome task of finding out which objects belong to which rows and where the holes are. In short, reconstructing the actual table, including determining where the holes in the table are and where the table ends, is a time-consuming task.
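To make the bookkeeping concrete, the following self-contained sketch (plain C++, string OIDs invented for illustration) keys each returned varbind by its instance suffix and groups values into rows; a column whose returned instance differs from the requested row reveals a hole.

// Row-reconstruction sketch: group returned (OID, value) pairs by instance
// identifier and mark missing columns as holes. Illustration only.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Split "<columnPrefix>.<instance>" given the known column prefix.
std::string instanceOf(const std::string &oid, const std::string &column)
{
    return oid.substr(column.size() + 1);   // skip the prefix and the dot
}

int main()
{
    // Columns of a hypothetical table and one get-next response "row".
    std::vector<std::string> columns = {"1.3.1.2", "1.3.1.3", "1.3.1.4"};
    std::vector<std::pair<std::string, std::string>> response = {
        {"1.3.1.2.7", "a"},   // column 2, instance 7
        {"1.3.1.3.9", "b"},   // column 3 has a hole at 7: next instance is 9
        {"1.3.1.4.7", "c"},   // column 4, instance 7
    };

    std::map<std::string, std::map<std::string, std::string>> rows;
    for (size_t i = 0; i < response.size(); ++i) {
        std::string inst = instanceOf(response[i].first, columns[i]);
        rows[inst][columns[i]] = response[i].second;
    }
    for (const auto &r : rows) {
        std::cout << "row " << r.first << ":";
        for (const auto &c : columns)
            std::cout << " " << (r.second.count(c) ? r.second.at(c) : "<hole>");
        std::cout << "\n";
    }
    return 0;
}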
Another problem is that the manager has no guarantee that it will retrieve a table in a consistent state. This is particularly true for large tables, because the retrieval of such tables involves a large number of PDU exchanges, which take a considerable amount of time. If in the meantime some table elements are changed by the agent, the manager ends up with an inconsistent view of the table.
Finally, there is a problem which we call ``get-bulk overshoot.'' When get-bulk is used to retrieve a table, object values may be returned that do not belong to the table of interest. If, for example, a max-repetitions value of 50 is used, and the table contains only 10 additional elements, 40 elements will be returned that are not really needed. In this case, the agent processed information, retrieved object values from the instrumentation and used resources, just to have the manager discard the information. This can add up to quite an amount of wasted resources.
Now that we have outlined some problems with bulk MIB data transfers, we will discuss three approaches to solving them.
Running SNMP over TCP instead of UDP allows large amounts of MIB data to be transferred in a single request/response exchange, even if the response to a get-bulk request does not fit into a single UDP packet. As a result, overall latency will decrease and table consistency will improve. A downside is that large buffers are required on both the manager and the agent to store the large SNMP messages. This can be a serious problem for agents in embedded environments.
Several issues should be investigated:
In 1994, the University of Twente temporarily had a prototype of SNMPv2p running over TCP. Recently, Schönwälder and Deri modified the Linux CMU SNMP library and the UCD-SNMP software to transport SNMP traffic over TCP. These experiments suggest that extending an existing SNMP implementation to support TCP should be relatively straightforward.
There are two different types of encoding schemes: schemes that use a definite form for the length field and schemes that use an indefinite form. Definite-form schemes require the whole of the message to be in a buffer, because they insert the length of the message in front of the message. Indefinite-form schemes do not put a length field in front of a message. Instead, they mark the end of an encoded ASN.1 element by a special byte. Hence, an indefinite-form scheme does not require the complete message to be buffered and it can encode on the fly.
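A byte-level illustration of the two length forms, following X.690 (the OCTET STRING value "abc" is an arbitrary example): the definite form must state the total length up front, whereas the indefinite form, allowed only for constructed encodings, closes with two zero octets and can therefore be produced on the fly.

// Definite form: [tag] [length] [contents...]
#include <cstdint>
#include <vector>

std::vector<uint8_t> octetStringDefinite = {
    0x04, 0x03, 'a', 'b', 'c'                 // OCTET STRING "abc", length 3
};

// Indefinite form: [tag|constructed] [0x80] [encoded segments...] [0x00 0x00]
std::vector<uint8_t> octetStringIndefinite = {
    0x24, 0x80,                               // constructed OCTET STRING, indefinite length
    0x04, 0x03, 'a', 'b', 'c',                // one primitive segment
    0x00, 0x00                                // end-of-contents marker
};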
We first note that all versions of SNMP mandate the use of the definite form of BER. Replacing BER by a different encoding scheme therefore requires a new protocol version and is thus a major change.
The ISO has defined several alternatives to BER. PER encoding (Packed Encoding Rules) has approximately 30% shorter encodings, at the expense of a small increase in encoding time. PER allows the use of the indefinite form, so no large encoding buffer is needed. Lightweight Encoding Rules (LER) decrease overall latency by ensuring quick encoding and decoding. However, network overhead is adversely affected, because the encodings can be much longer than those generated by BER. Distinguished Encoding Rules (DER) use the definite form only. They slightly improve encoding time over BER while having a minimal impact on network overhead compared to BER. Finally, Canonical Encoding Rules (CER) use the indefinite form like PER, but are less demanding in terms of encoding time.
We initially thought we should move from a definite-form encoding scheme to an indefinite-form encoding scheme in order to avoid large buffers. We later realized that the SNMP version 3 (SNMPv3) message header can include an authentication digest, which is computed over the whole PDU. As a result, we must buffer the entire PDU before transmitting it anyway if authentication is used. Therefore, switching to alternate encoding rules does not really prove advantageous over BER in the general case.
SNMPv3 allows encryption envelopes to be added to SNMP messages. This feature is not only useful for its intended purpose, which is encryption, but it can also be exploited to achieve data compression. By adding an encryption algorithm that in fact compresses the message, the size of the messages transmitted over the wire decreases. Defining compression as an encryption algorithm makes it possible to add compression to SNMPv3 without making any changes to the protocol. However, since there is no noAuthPriv security level in SNMPv3, one has to use authentication in order to take advantage of compression.
Using compression relieves us of the need to abandon BER and replace it with a new, more efficient encoding scheme. It leaves the installed base of implemented and debugged BER encoding and decoding software in place. Any standard compression algorithm can be used, e.g. DEFLATE (RFC 1951), for which stable, debugged implementations are readily available.
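As a minimal sketch of the payload transformation (assuming zlib's compress() and compressBound() are available; this does not show how compression would be registered as an SNMPv3 ``privacy'' algorithm), an already BER-encoded message body can be deflated before it is placed on the wire:

// Compress a BER-encoded SNMP payload with DEFLATE via zlib.
#include <zlib.h>
#include <cstdio>
#include <vector>

std::vector<unsigned char> deflatePayload(const std::vector<unsigned char> &ber)
{
    uLongf outLen = compressBound(ber.size());
    std::vector<unsigned char> out(outLen);
    if (compress(out.data(), &outLen, ber.data(), ber.size()) != Z_OK)
        return {};                  // fall back to the uncompressed message
    out.resize(outLen);
    return out;
}

int main()
{
    std::vector<unsigned char> ber(1000, 0x04);   // stand-in for a BER-encoded PDU
    std::vector<unsigned char> wire = deflatePayload(ber);
    std::printf("%zu bytes on the wire instead of %zu\n", wire.size(), ber.size());
    return 0;
}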
For retrieving an entire MIB subtree, neither get-next nor get-bulk is efficient. Note that retrieving an entire table, an entire table column or a part of a table column are all special cases of retrieving a MIB subtree. We define the get-subtree operation to retrieve all objects below a particular node in the MIB tree. By allowing the operation not only to retrieve a single subtree, but also to retrieve multiple subtrees with a varbind list, the operation becomes even more powerful. It can then be used to retrieve selected columns of a complete table or selected columns within a range of rows of a column.
Examples of the usage of this operation include retrieving the entire interface table (ifTable), retrieving the operational status of all interfaces in the ifTable, retrieving both the operational and the administrative statuses of all interfaces in the ifTable, and retrieving the state and remote address of all TCP connections to a particular local address/port combination. For each of these examples, the information is requested in a single protocol operation.
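Purely hypothetically, such a request could be built in SNMP++ style as shown below: each varbind names a column subtree (ifAdminStatus and ifOperStatus here), and a get_subtree() member function, which does not exist in any released library, would stream back every object below those OIDs.

// Hypothetical get-subtree request for two ifTable columns. Only the Pdu,
// Vb and Oid usage follows real SNMP++; the operation itself is a proposal.
#include "snmp_pp.h"

void fetch_if_status(Snmp &snmp, CTarget &target)
{
    Pdu pdu;
    Vb adminStatus(Oid("1.3.6.1.2.1.2.2.1.7"));   // ifAdminStatus column (subtree)
    Vb operStatus(Oid("1.3.6.1.2.1.2.2.1.8"));    // ifOperStatus column (subtree)
    pdu += adminStatus;
    pdu += operStatus;

    // Proposed operation: each varbind names a subtree; the agent would
    // stream back every object below those OIDs, row by row, over TCP.
    // snmp.get_subtree(pdu, target);             // hypothetical call
    (void) snmp;
    (void) target;
}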
The amount of data returned for a single get-subtree operation can be quite large; this has two implications. First, the get-subtree operation will be most useful when used over TCP. The strict message size limitations of the UDP transport would immediately break the advantages of this new operation. Second, even when using TCP as a transport, it will generally not be feasible for agents to have memory buffers to store huge response messages. Further, since it might take some time for the agent to collect the MIB data, other requests may have to wait some time before a single-threaded agent will process them. Therefore, a mechanism is needed that allows the agent to return multiple related response messages for a single get-subtree request. The TCP transport will take care of any required retransmissions and it will keep the responses in order. The TCP transport will also provide a window that allows multiple responses to be in transit concurrently.
In summary, the main advantages of the get-subtree protocol operation are:
- There is no get-bulk overshoot anymore.
- The absence of get-bulk overshoot also means that no network overhead is generated for objects that are not of interest anyway.
- Far fewer PDU exchanges are needed than with repeated get-next or get-bulk operations.
- There is no need to guess max-repetitions values.
- The get-subtree operation can be an extension to both SNMPv1 and SNMPv3. No new message format is needed, only a new PDU type. This qualifies get-subtree as a relatively small evolutionary step.
The get-subtree operation collects and returns each of the subtrees specified in its varbind list simultaneously, that is, row by row for a table. This ensures an efficient retrieval of table rows from the instrumentation and it minimizes the risk of getting inconsistencies within a single row.
The problem with holes in tables discussed previously still exists. The reconstruction of the conceptual table remains the task of the manager. Only the retrieval and transport over the network is greatly simplified by this new protocol operation.
The hybrid solution proposed by Stewart is based on two MIB modules. The first (CISCO-BULK-FILE-MIB) specifies how an SNMP agent stores a user-defined set of MIB data into a local file. The second MIB module (CISCO-FTP-CLIENT-MIB) can be used to upload local files to an FTP server using the FTP protocol. An SNMP agent implementing both MIB modules can be instructed to save a specified (large) amount of local MIB data into a file and upload that file to a particular FTP server. We will now briefly describe these MIB modules.
The CISCO-BULK-FILE-MIB defines three tables. The cbfDefineFileTable defines the name of the file, how it is stored and what encoding format will be used. One or more entries in the cbfDefineObjectTable are associated with a row in the cbfDefineFileTable. The entries specify what local MIB objects should be put in the file upon creation. A complete MIB table can be specified in a single entry in the cbfDefineObjectTable. A manager initiates the creation of the actual file by doing a set operation on the cbfDefineFileNow object. This results in a new entry in the cbfStatusFileTable which keeps track of the progress of the file creation.
The storage type of a bulk file can be permanent, volatile or ephemeral, where the latter indicates that data exists only in small amounts until it is read. This storage type, when used in combination with the CISCO-FTP-CLIENT-MIB, removes the need for a buffer large enough to hold the complete file.
There are three options for the format of the data files: BER encoded, binary and human-readable ASCII. The BER encoded format is identical to an SNMP varbind list. The binary format consists of tags and data fields. There is a tag to set a standard OID prefix, a tag for a single object, and some tags to encode tables. Tables are encoded with little OID redundancy: for each entire row only the common instance portion of all the OIDs in that row is encoded. The binary format uses a proprietary encoding scheme for the ASN.1 primitive types INTEGER, OCTET STRING and OBJECT IDENTIFIER. The ASCII format is a mechanical translation of the binary format; translation rules for tags and values to ASCII are given in the MIB specification.
The CISCO-FTP-CLIENT-MIB has a single table. An entry in the cfcRequestTable specifies a local file that is to be uploaded to a specified FTP server, either in binary or in ASCII mode, using a specified user name and password. A manager can initiate a file transfer from the agent to an FTP server by setting the cfcRequestEntryStatus to active. The progress and result of the transfer can be monitored by reading the cfcRequestOperationState and cfcRequestResult objects. The manager can abort an ongoing transfer by setting the cfcRequestStop object.
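A hedged sketch of the triggering step: the manager writes active(1) to the cfcRequestEntryStatus instance of the prepared row. The instance OID is deliberately passed in as a parameter rather than spelled out (it depends on the table index chosen), and Vb::set_value(), the SnmpInt32 wrapper and Snmp::set() are assumed from common SNMP++ releases.

// Start the FTP upload by setting cfcRequestEntryStatus to active(1).
#include "snmp_pp.h"
#include <iostream>

void start_upload(Snmp &snmp, CTarget &target, const Oid &statusInstance)
{
    Vb vb(statusInstance);            // cfcRequestEntryStatus.<index> of the row
    vb.set_value(SnmpInt32(1));       // active(1) starts the transfer (assumed API)
    Pdu pdu;
    pdu += vb;

    int status = snmp.set(pdu, target);
    if (status != SNMP_CLASS_SUCCESS)
        std::cout << snmp.error_msg(status) << "\n";
}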
In summary, this solution to the bulk transfer problem requires agents to implement two MIBs and the manager to configure entries in several MIB tables to initiate and control bulk transfers. This means that bulk transfers are treated totally differently from normal accesses to MIB data. For this reason, security needs to be considered separately for these transfers. For example, there is no mechanism in place which authenticates or encrypts management data while in transit over the network. The hybrid solution described here also requires an FTP server on the manager side. This means that management data retrieved via a bulk transfer is processed very differently from management data retrieved via SNMP, since it becomes available in a file on an FTP server.
There are some downsides to using HTTP for management as well; we will name three. First there is the feature richness of HTTP. HTTP has numerous options and features that are valid and useful for its intended purpose, which is to be used as a document transfer protocol in the World-Wide Web. However, for the transfer of management data, many of those features will not be useful or usable. Conforming implementations of the protocol must include all of these features. As a result, the HTTP implementations in network devices will be needlessly big and complex.
Second, since the development and standardization of HTTP will remain focused on its original purpose, future versions of the protocol might have characteristics that are unwanted for a management protocol. Also, for the same reason, it will probably be difficult to get new features that are desirable for the use as a management protocol into HTTP.
Finally, the security mechanisms proposed and used in conjunction with HTTP do not directly map to the security mechanisms defined in SNMPv3. This means that either some mappings need to be defined or that there will be different security mechanisms (authentication, privacy, access control) for accessing the same MIB data via SNMP or HTTP.
We first looked at solutions within the SNMP framework. We believe that within the boundaries of the current SNMPv3 framework and with relatively little effort and small changes, the problems can be solved to a large extent. Latency can be significantly decreased by using TCP as a transport and by introducing a new get-subtree protocol operation. Network overhead can be decreased by compressing the payload of an SNMP message. Table retrieval can be improved by applying the new get-subtree operation to conceptual tables.
Second, there are possible hybrid solutions. We presented a solution proposed by Stewart. A downside to this solution might be that it treats bulk transfers as a separate, special issue, and still requires all of the normal SNMP framework and protocol stack to be in place. Furthermore, a whole new set of security problems will be the result of such an approach. Other hybrid solutions are probably also possible, but are not discussed in this article.
The third approach is to replace SNMP with another protocol. By using a protocol that runs over TCP, bulk transfer latency can remain low. By using compression on the encoded management information, network overhead can be kept low. If a mainstream technology is used for representing management information, e.g. XML, building management applications will no longer require skills specific to network management.
The first solution aims to be a small evolutionary step with respect to the current SNMPv3 management framework. It is relatively easy to implement and keeps the current implementations largely intact. This protects investments in current SNMP technology. The second solution is probably also fairly easy to implement, but has some architectural and security-related downsides that make it in our view less attractive than the first one. The third solution is not covered in as much detail as the first two. It will take quite some work to further define that solution. Because it breaks so radically with the current SNMP framework it will be more difficult to get it implemented and deployed. As such, it is intended to serve as food for thought for the long-term future of Internet management.
Application development using C++ has entered the mainstream, and with it a rich set of reusable class libraries has become readily available. What is missing is a standard set of C++ classes for network management. An object-oriented approach to SNMP network programming provides many benefits, including ease of use, safety, portability and extensibility. SNMP++ offers power and flexibility that would otherwise be difficult to implement and manage.
The major components of SNMP++ are the Object Identifier (Oid) class, the Variable Binding (Vb) class, the Protocol Data Unit (Pdu) class, the Snmp class, and a variety of classes that make working with ASN.1 and SMI types easy and object oriented.
The classes manage various SNMP structures and resources automatically when objects are instantiated and destroyed. This frees the application programmer from having to worry about de-allocating structures and resources, and thus provides better protection from memory corruption and leaks. SNMP++ objects may be instantiated statically or dynamically. Statically instantiated objects are destroyed when they go out of scope. Dynamic allocation requires use of the C++ constructs new and delete. Internal to SNMP++ are various SMI structures which are protected and hidden from the public interface. All SMI structures are managed internally; the programmer does not need to define or manage SMI structures or values. For the most part, usage of `C' pointers in SNMP++ is non-existent. By hiding and managing all SMI structures and values, the SNMP++ classes are easy and safe to use. The programmer cannot corrupt what is hidden and protected from scope.
An SNMP++ application communicates with an agent through a session model. That is, an instance of the Snmp class maintains logical connections to specified agents. An application may have multiple Snmp instances, each instance communicating with the same or different agent(s). This is a powerful feature that allows a network management application to have different sessions for each management component. For example, an application may have one Snmp object to provide graphing statistics, another Snmp object to monitor traps, and a third Snmp object to allow SNMP MIB browsing. SNMP++ automatically handles multiple concurrent requests from different Snmp instances. Alternatively, a single Snmp instance may be used for everything.
The majority of SNMP++ is portable C++ code. Only the implementation of the Snmp class is different for each target operating system. If your program contains SNMP++ code, this code will port without any changes. Currently, SNMP++ implementations are available for Microsoft Windows NT, Windows 95 and 98, HP UNIX, and Sun Solaris.
SNMP++ supports automatic time-out and retries. This frees the programmer from having to implement time-out or retry code. Retransmission policy is defined in the SnmpTarget class. This allows each managed target to have its own time-out and retry policy.
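For example, two targets can carry different policies, as in the sketch below. SnmpTarget::set_retry() and set_timeout() are assumed from common SNMP++ releases, where the timeout is usually expressed in 10-millisecond ticks; check the release you use.

// Per-target retransmission policy (hedged sketch).
#include "snmp_pp.h"

void configure_targets()
{
    CTarget slow_agent((IpAddress) "10.4.8.5");
    slow_agent.set_retry(5);        // five retransmissions before giving up
    slow_agent.set_timeout(500);    // ~5 seconds per attempt (500 x 10 ms)

    CTarget lan_agent((IpAddress) "10.4.8.6");
    lan_agent.set_retry(1);         // fast local agent: fail quickly instead
    lan_agent.set_timeout(50);      // ~0.5 second per attempt
}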
SNMP++ supports a blocking and an asynchronous model. The blocked mode for MS-Windows allows multiple blocked requests on separate Snmp class instances. SNMP++ also supports a non-blocking asynchronous mode for requests. Time-outs and retries are supported in both blocked and asynchronous modes.
SNMP++ has been designed to support both SNMP version 1 (SNMPv1) and version 2c (SNMPv2c). All operations within the API are designed to be multi-lingual and are not SNMP version specific. Through utilization of the SnmpTarget class, SNMP version specific operations are abstracted. SNMP++ supports all six SNMP operations (Get, GetNext, GetBulk, Set, Inform and Trap) through corresponding Snmp member functions. Each of these six functions utilizes similar parameter lists and operates in a blocked or non-blocked (asynchronous) manner. SNMP++ is designed to allow trap reception and sending on multiple transports, including IP and IPX. In addition, SNMP++ allows trap reception and sending using non-standard trap IP ports and IPX socket numbers.
SNMP++ is implemented using C++ and thus allows a programmer to overload or redefine behavior which does not suit their needs. For example, if an application has special Oid object needs, a subclass of the Oid class may be created, inheriting all the attributes and behavior of the Oid base class while allowing new behavior and attributes to be added to the derived class.
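A minimal sketch of such a subclass follows; only the Oid(const char*) constructor and the get_printable() accessor from SNMP++ are assumed, everything else is plain C++.

// Extend Oid with a human-readable name while keeping all base behavior.
#include "snmp_pp.h"
#include <iostream>
#include <string>

class MibVariableOid : public Oid {
 public:
    MibVariableOid(const char *name, const char *dotted)
        : Oid(dotted), name_(name) {}

    void print() const {
        std::cout << name_ << " (" << get_printable() << ")\n";
    }

 private:
    std::string name_;
};

// Usage: behaves like any other Oid, e.g. when building a Vb.
//   MibVariableOid sysDescr("sysDescr.0", "1.3.6.1.2.1.1.1.0");
//   sysDescr.print();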
The following example retrieves the sysDescr.0 object from the specified agent. It shows all the code needed to create an SNMP++ session, get the system description, and print it out. Retries and time-outs are managed automatically.
#include "snmp_pp.h" #define SYSDESCR "1.3.6.1.2.1.1.1.0" // OID for sysDescr.0 void get_system_descriptor() { int status; CTarget ctarget((IpAddress) "10.4.8.5"); // SNMP++ community target Vb vb(SYSDESCR); // SNMP++ VB Object Pdu pdu; // SNMP++ PDU // Construct a SNMP++ SNMP session object. Check the // creation status and print an error message on failure. Snmp snmp(status); if (status != SNMP_CLASS_SUCCESS) { cout << snmp.error_msg(status); return; } // Add the varbind to the pdu object and invoke an SNMP get // operation. Print the result or an error message. pdu += vb; if ((status = snmp.get(pdu, ctarget)) != SNMP_CLASS_SUCCESS) cout << snmp.error_msg(status); else { pdu.get_vb(vb,0); cout << "System Description = "<< vb.get_printable_value(); } }; // Thats all!
The actual SNMP++ calls are made up of ten lines of code. A CTarget object is created using the IP address of the agent. A variable binding (Vb) object is then created using the object identifier of the MIB object to retrieve. The Vb object is then attached to a Pdu object. An Snmp object is used to invoke a get operation. Once retrieved, the response message is printed out. All error handling code is included.
All source code for SNMP++ is freely available to any developer. This includes all source code and make files for building the libraries on MS-Windows, HP UNIX or Sun Solaris. Since the code is ANSI C++ compliant, it can also be ported to other platforms easily. Developers are free to use SNMP++ in their products without any royalties.
To add SNMPv3 support, the SNMP++ classes Snmp and SnmpMessage needed modifications.
The Snmp class was modified as follows. If the user requests the class to send a Pdu with SNMPv3, the Pdu is first stored for later reference and the engineID of the host specified in the target object is determined. If the engineID is unknown, the zero length engineID is used. Then the request is treated like any other SNMPv1/SNMPv2c request, i.e. it is passed to the SnmpMessage class, which dispatches the message to the appropriate Message Processing Model. The returned serialized message is sent over the network and the response is passed to the SnmpMessage class for deserialization. In case the received Pdu is a Report-PDU, it is checked whether it contains the unknownEngineIDs or the notInTimeWindows counter. If this is true, the whole process is repeated, i.e. the engineID is determined, the original message is serialized and sent again. Additional tests prevent an infinite loop. For asynchronous requests, this test is implemented by a new callback function that is called instead of the function specified by the user.
SNMP++ dispatches messages automatically between the network and the application. The dispatcher checks the version of incoming or outgoing messages and either calls the new functions of the v3MP or the standard functions to parse or build SNMPv1/SNMPv2c messages. The ASN.1/BER functions are called in the SNMP++ class SnmpMessage. The methods of this class were extended to check the version and to call the correct message processing model. The methods of the SnmpMessage class are called with, and return, a Pdu, the version and the community. However, the v3MP of this implementation needs and returns additional values (engineID, securityName, securityModel, securityLevel, contextEngineID and contextName). As a Pdu object does not contain any version-specific values and since the interface of the SnmpMessage class should not be modified, the community string was chosen to hold those parameters. The user calls a function that writes all values except the engineID, separated by backslashes, into the community string. The engineID is added by the Snmp class. This form of encoding has to be changed to a length-based encoding, as engineIDs can contain arbitrary characters.
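The following hypothetical helpers illustrate the work-around described above; the real SNMP++v3 code differs in detail, and a length-prefixed encoding would be needed once fields can contain arbitrary bytes.

// Pack and unpack the SNMPv3 parameters carried in the community string,
// separated by backslashes. Illustration only.
#include <sstream>
#include <string>
#include <vector>

std::string packV3Params(const std::string &securityName,
                         const std::string &securityModel,
                         const std::string &securityLevel,
                         const std::string &contextEngineID,
                         const std::string &contextName)
{
    return securityName + "\\" + securityModel + "\\" + securityLevel +
           "\\" + contextEngineID + "\\" + contextName;
}

std::vector<std::string> unpackV3Params(const std::string &community)
{
    std::vector<std::string> fields;
    std::stringstream in(community);
    std::string field;
    while (std::getline(in, field, '\\'))
        fields.push_back(field);
    return fields;    // breaks if a field itself contains a backslash
}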
The main part of the v3MP module was implemented as described in the RFCs. The two SNMP ASIs to prepare an outgoing message (prepareOutgoingMessage and prepareResponseMessage) are implemented in one function that only gets the values for engineID, securityModel, securityName, securityLevel, contextEngineID, contextName and PDU and returns the serialized message. Similarly, the function to parse an incoming message gets the serialized message and returns all the values the first function gets as input. All other parameters are not needed in SNMP++ or can be determined during processing: transportDomain and transportAddress are not needed as the engineID is passed to the v3MP, messageProcessingModel is assumed to be v3MP, expectResponse and pduVersion can be determined from the Pdu, and sendPduHandle is not necessary as messages are dispatched to the application using the requestID of the Pdu. The v3MP does not return a stateReference, as this reference would have to be passed through the SnmpMessage and Snmp classes to the message queue class and would imply the change of several interfaces. So all stateReferences are cached inside the v3MP.
For engineID discovery, the following procedure is used: the v3MP is called to build a message with a zero length engineID. The v3MP sets the securityLevel to noAuthNoPriv and deletes the variable bindings from the Pdu. Then the standard behavior for a request message is used. When the answer is processed, the engineID is automatically added to the list of known engineIDs. As this answer contains a Report-PDU with the unknownEngineIDs counter, the Snmp class will start the serialization process again.
SNMP++ uses the requestID of the Pdu to match incoming responses to outstanding requests. If SNMPv3 is used, a response may not contain the requestID of the sent message (this happens if the agent cannot decrypt the scopedPDU). For this reason the stateReference of each request contains the requestID, and if a report message contains a wrong requestID, it is set to the saved value. For other message types the requestID is not changed, as those messages have to contain the correct requestID.
The security modules use the MD5 and DES routines of RSAEuro, the SHA routines of Uri Blumenthal and the IDEA routines of Tatu Ylonen.
The USM module contains two user tables, one with the user names and passwords and one with the localized keys for each used engineID. If SNMP++v3 is used in a manager, the user can add entries to the first table. Entries in the second table are automatically created if the USM is called to process or build an encrypted or authenticated message. If the user changes an entry in the first table, all appropriate entries in the other table are deleted. Both tables are deleted at program exit. As the calculation of localized keys may take several seconds and since an agent should not store passwords, the first table is not used in an agent. Users can be added at initialization time with passwords, in this case localized keys are computed with the local snmpEngineID, or through the usmUserTable of the agent. Several functions were added to the USM module to assist the user if he wants to change the keys in the usmUserTable in an agent.
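For readers unfamiliar with why localized keys are kept per engineID, the sketch below follows the password-to-key and key-localization procedure of RFC 2574 (appendix A): the password is stretched into a 16-byte master key Ku, and Kul = MD5(Ku || snmpEngineID || Ku) is derived per engine. The md5() function here is only a stand-in for a real digest implementation (e.g. the RSAEuro routines used by SNMP++v3).

// USM key derivation sketch (MD5 variant).
#include <cstdint>
#include <string>
#include <vector>

// Placeholder digest: a real implementation returns the 16-byte MD5 hash.
std::vector<uint8_t> md5(const std::vector<uint8_t> &data)
{
    (void) data;
    return std::vector<uint8_t>(16, 0);   // stub only, for illustration
}

std::vector<uint8_t> passwordToKey(const std::string &password)
{
    // Repeat the password until 1,048,576 bytes have been fed to the digest.
    std::vector<uint8_t> stream;
    stream.reserve(1048576);
    while (stream.size() < 1048576)
        stream.push_back(password[stream.size() % password.size()]);
    return md5(stream);                   // master key Ku
}

std::vector<uint8_t> localizeKey(const std::vector<uint8_t> &ku,
                                 const std::vector<uint8_t> &engineID)
{
    // Kul = MD5(Ku || snmpEngineID || Ku): one localized key per engine.
    std::vector<uint8_t> buf(ku);
    buf.insert(buf.end(), engineID.begin(), engineID.end());
    buf.insert(buf.end(), ku.begin(), ku.end());
    return md5(buf);
}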
This implementation was tested against the agents from UCD and MG-Soft. With both agents engineID discovery, time synchronization and exchange of noAuthNoPriv, authNoPriv (MD5 and SHA) and authPriv (MD5/DES and SHA/DES) messages worked. An agent written with AGENT++v3 and SNMP++v3 was used to test the cloning of users and the key change algorithm.
Future versions of SNMP++v3 could improve the handling of the SNMPv3-specific parameters. The chosen solution, which encodes those parameters into the community string, works, but it contradicts the concept of SNMP++. Following this concept, a new target class would have to be defined which contains the securityName, securityModel and securityLevel, and which is possibly responsible for storing the engineIDs for each address. The context information would be stored in the Pdu or passed directly to the methods get, get_next, etc. of the Snmp class. (This is already done with the parameters nonRepeaters and maxRepetitions for a get-bulk operation.) The community-based solution has the advantage that it is simple to implement, but it is bad design to misuse the community string that way. The target-class solution fits into the concept of SNMP++, but the implementation is more complex and introduces incompatible changes in the SNMP++ API.
The functions of the USM that assist the user to change a key for one agent could be extended to do the complete key change for several agents. To improve the performance of the USM, the table that contains the localized keys could be saved at program exit and restored at initialization time.
Advent Network Management showed interoperability using their Java JDK 1.1-based SNMPv3 MIB Browser. Bay Networks showed both command generator and responder applications. Bay's multilingual agent for their BayStack 200 hub showed different levels of authentication and privacy based on the SNMP Research stack, working with command generators (managers) using other code bases. Bay's Optivity manager applications worked with other vendors' code bases on network hardware in the booth. BMC Software demonstrated interoperation with authentication, encryption and remote configuration features developed in C for their PATROL SNMP Toolkit and Patrol product suite. Cisco Systems demonstrated interoperability between their implementation and other code bases, using the SNMP Research-based C-language command responder capability running on their Cisco 2500 platform.
HP demonstrated OpenView Network Node Manager interoperation of secure SNMPv3 authentication, privacy and remote administration with command responders. The SNMPv3 manager is implemented with a hook that allows the SNMP Research management stack to translate SNMPv1/v2c requests into SNMPv3 before sending the request out on the wire. IBM Networking product division demonstrated authentication and privacy interoperation using an OS/390 Unix agent written in C, running in Dallas, which was remotely configurable, and the Nways Workgroup Manager for NT, written in Java. Liebert Corporation interoperated using a monitoring and control agent for their UPStation GX based on the SNMP Research stack.
SNMP Research's SNMPv3 product line demonstrated authentication and encryption as well as remote configuration, interoperating with other code bases as well as with their own code base in other vendors' products and prototypes. Tivoli demonstrated interoperation of authentication and privacy using their Java-based SNMPv3 Browser. Omar Cherkaoui and Ylian Saint-Hilaire from the University of Quebec in Montreal demonstrated interoperation using their Java-based reference implementation, including an SNMPv3 proxy, to be licensed for non-commercial use.
Not confining themselves to the Hot Spot, the participating vendors also demonstrated interoperation with Epilogue Technology's SNMPv3 code on the show floor and with SNMPv1 devices elsewhere in the show.
Of course, large carriers such as MCI are very interested in SNMP Version 3. Widespread support of ``confirmed Traps'' via the Inform PDU, 64-bit counters (for use on high-speed interfaces or in situations where frequent polling is not feasible), and the use of GetBulk for large table data retrieval can make an immediate difference in managing large-scale carrier-grade networks. Although these features exist in SNMPv2, the multiple versions of SNMPv2 that have led to a lack of consistent acceptance have kept these capabilities out of many systems and networks. The possibility of a secure Set mechanism to securely replace the use of (scripted) Telnet, particularly for customer service delivery, will take longer to implement than the other features, but will allow carriers such as MCI to improve upon service activation times. When SNMPv3 is widely supported, getting it into the network and management systems may be a little difficult but should pose no significant barrier. Carriers and users are used to rolling version migrations where multiple versions of software co-exist for some time. The forthcoming Coexistence and Transition RFC should help guide the way to smooth transitions between SNMPv1 (and v2) to SNMPv3.
The users and vendors look forward to future technology showcases on SNMPv3, the continued IETF Working Group efforts to finalize the standard documents (along with the various proposed enhancements being discussed), and further announcements of SNMPv3-capable products.
Participants brought command responder and command generator applications, including products, prototypes and works in progress. Advent Net, IBM, Tivoli and University of Quebec showed Java implementations. As you might expect from a technology showcase and a demonstration of work in progress, several companies took advantage of the Hot Spot to identify and fix a bug or two in their code, increasing the event's overall interoperability as the show continued.
Many of the booth's visitors expressed both surprise and pleasure at seeing 10 companies with SNMPv3 security implementations and, further, interoperating code. Hot Spot participants noted excitement by some visitors and a wait-and-see attitude by others; but many attendees with a skeptical attitude indicated they now believe SNMPv3 deserves a serious look. John Seligson of Bay Networks recalled, ``Many visitors asked what happened to SNMPv2. Once I explained that SNMPv3 incorporated the standardized aspects of SNMPv2 (i.e., SMIv2, new protocol operations, etc.) adding an intuitive user-based security and administrative framework they went away satisfied.''
We were pleased and encouraged to see visitors representing a broad range of companies and organizations, notably the telecom industry and universities. We heard that users understand that community-based security is not sufficient. Kevin Dwinnell of Liebert said, ``It is critical for customers to protect control over their network devices'' and applications. Some attendees expressed the need to use SNMPv3 security for the public components of their networks, even when they use SNMPv1 or SNMPv2c for the private components. Many were gratified to see SNMPv3's simplified administration. John Seligson said, ``Many visitors ... asked whether [the technology] would be appearing in products soon. ... I talked with several managers of very large networks who said that they would like to deploy as soon as possible.'' According to Cisco's Ram Kavasseri, ``Current Cisco customers were very interested in the planned release date for SNMPv3 functionality on Cisco routers, and the availability of SNMPv3-capable management platforms ... and applications.'' Kavasseri added, ``Response from booth visitors was extremely favorable. Major questions involved deploying of passwords across networks, and difficulty in debugging packets with the privacy mechanism enabled.''
While many visitors were well-informed about the protocol and its progress, any number of visitors used the Hot Spot to gather basic information. ``This was the way Interop used to be a few years ago,'' said Muriel Appelbaum of BMC Software, ``when a show attendee could just walk in and ask for a demo or ask a detailed question and get as much technical information as they wanted.''
Staffing the Hot Spot was productive and enjoyable because all the participants were helpful and cooperative. Bert Wijnen IETF Operations and Management Area Director, noted he is ``very encouraged with the number of interoperating implementations and with the positive spirit [shown in this] unified presentation ... of a single technology. [This shows SNMP is] back on track and moving forward.''
Please contact the respective organizations and vendors above for detailed information on their product plans and availability and take a look at the SNMPv3 web page at http://www.ibr.cs.tu-bs.de/projects/snmpv3/.
At the October 1998 Networld+Interop in Atlanta, SNMPv3 with Security and Administration was again the focus of a Hot Spot. Hot Spots at NetWorld+Interop focus on educating attendees about the latest in interoperable, standards-based technologies. At this event, key vendors demonstrated their implementations of the recently published SNMPv3. The demonstrations highlighted key features of the third version of the Internet-Standard Management Framework which now includes commercial-grade security and a robust administrative framework with remote configuration.
The October SNMPv3 Hot Spot in Atlanta was very similar to the highly successful SNMPv3 Hot Spot at Networld+Interop in May of 1998 in Las Vegas. Both demonstrated multiple interoperable implementations of SNMPv3, increased attendee knowledge of the capabilities, built enthusiasm for the new technology, and showed the strong vendor support for SNMPv3.
There were also a number of differences. Most notably, while the Hot Spot at Networld+Interop in May of 1998 was primarily a technology demonstration, the Hot Spot in Atlanta in October was primarily a products demonstration by the following participating companies:
The IETF standards process classifies documents as a Technical Specification (TS) or an Applicability Specification (AS). A TS is ``any description of a protocol, service, procedure, convention, or format.'' An AS describes ``how, and under what circumstances, one or more TSs may be applied to support a particular Internet capability.'' An AS specifies a requirement level to each TS to which it refers. The levels are:
Now, let's review what is defined by each version of SNMP. The SNMPv1 management framework includes documents that define the SNMPv1 management protocol (RFC 1157), the structure of management information (SMIv1) (RFC 1155, RFC 1212, and RFC 1215), and an initial set of managed objects (RFC 1213) and events (RFC 1215). The SMIv1 specifies the base data types for managed objects. It also defines a language for defining managed objects, events, refinements to the base data types, and OID values. Finally, SMIv1 contains several administrative assignments of OID values.
There are two additional frameworks for SNMP, which are SNMPv2 and SNMPv3. Both of these frameworks define similar, but ``incompatible on the wire'' versions of the SNMP protocol. An improved version of the SMI, called SMIv2, is defined in the SNMPv2 framework that is also used by the SNMPv3 framework. Neither the SNMPv2 nor the SNMPv3 frameworks replace the initial set of objects and events defined in the SNMPv1 framework. However, each version defines additional objects used to manage the SNMP protocol. The SNMPv3 framework contains a large number of objects that can be used to remotely configure the administrative aspects of SNMP entities, which include those supporting SNMPv1, SNMPv2c, and SNMPv3.
Independently of the frameworks, the initial set of objects and events defined in RFC 1213 and RFC 1215 was split into separate documents.
So, to answer the question ``What will happen to SNMPv1?,'' we need to break the question into three parts, which are:
Updated MIB modules, published as separate documents, replace the following groups of RFC 1213:
- the system and snmp groups
- the interfaces group
- the ip group (except for the IP routing table)
- the tcp group
- the udp group

The at group of RFC 1213 is not replaced, since it is deprecated. The egp group of RFC 1213 is not replaced, since it is obsolete because EGP is Historic.
The latest update of the SMI specifications, which should be published soon, clarifies the differences between ASN.1 and the MIB module language. However, these SMI specifications still require the reader to have a copy of the 1988 version of ASN.1 handy for reference.
Say, you have in the C language a struct definition like the following:
struct myStruct {
    int a;
    int b[10];
    int c;
} myTab[20];

To turn this into SNMP MIB definitions, you would need two tables:
myFirstTable OBJECT-TYPE
    SYNTAX SEQUENCE OF MyFirstEntry
    ...

myFirstEntry OBJECT-TYPE
    SYNTAX MyFirstEntry
    ...
    INDEX { i1 }
    ...

MyFirstEntry ::= SEQUENCE {
    i1 Integer32,
    a  Integer32,
    c  Integer32
}

<< definitions for objects i1, a, and c >>

mySecondTable OBJECT-TYPE
    SYNTAX SEQUENCE OF MySecondEntry
    ...

mySecondEntry OBJECT-TYPE
    SYNTAX MySecondEntry
    ...
    INDEX { i1, i2 }    -- i1 is from the first, i2
                        -- is from the second table
    ...

MySecondEntry ::= SEQUENCE {
    i2 Integer32,
    b  Integer32
}

<< definitions for objects i2 and b >>
The Network Management Research Group (NMRG) will work on solutions for network management problems that are not yet considered well understood enough for engineering work within the IETF. The initial focus will be on higher-layer management services that interface with the current Internet management framework. This includes communication services between management systems, which may belong to different management domains, as well as customer-oriented management services.
We can expect to hear more about these research groups and the work they are doing in the future. This issue of The Simple Times already includes an article about bulk transfers of MIB data. It is the result of an ad-hoc meeting which led to the formation of the NMRG.
The OPS Web server (http://www.ops.ietf.org/) provides guidelines for authors of IETF MIB modules. It also has a Web page that allows tracking the progression of OPS-related Internet-Drafts through the IESG.
Finally, there are two new public OPS mailing lists. The ops-area@ops.ietf.org mailing list is intended for general discussions relevant to the OPS area. The mibs@ops.ietf.org mailing list is for discussions related to MIB development. To subscribe, send a message to the corresponding -request address with subscribe in the body.
The Simple Times also solicits terse announcements of products and services, publications, and events. These contributions are reviewed only to the extent required to ensure commonly-accepted publication norms.
Submissions are accepted only via electronic mail, and must be formatted in HTML version 1.0. Each submission must include the author's full name, title, affiliation, postal and electronic mail addresses, telephone, and fax numbers. Note that by initiating this process, the submitting party agrees to place the contribution into the public domain.