November 15, 1996

ATM Access Is Ready, Willing And Able To Fly On Your Backbone

By Joel Conover

ATM has long been eyed as a promising but elusive panacea for a variety of networking complaints, and several key changes have broadened the technology's immediate therapeutic appeal. For one thing, it is enjoying newfound interoperability and multivendor support-necessary building blocks for any fledgling network technology. Now that vendors have ironed out most of the kinks in the current generation of ATM access devices, users feel more confident about purchasing large-scale ATM networks.

Second, the arrival of standards-including User-to-Network Interface (UNI) 3.1, the Interim Inter-Switch Signaling Protocol (IISP) and the Private Network-to-Network Interface (PNNI)-has simplified the configuration and setup of ATM devices, to the point where most network administrators can readily understand ATM switching technology.

A third significant change is that ATM now supports legacy networks, thanks to LAN Emulation (LANE) and Multiprotocol Over ATM (MPOA), which let users run LANs on an ATM backbone. This brings us to ATM's final ace in the hole-it satisfies users' need for speed.

With bandwidth at a premium, there's only so much segmenting that can be done before a wholesale changeover is needed on the network backbone.

Still, implementing a connection-oriented technology such as ATM introduces new factors in building and maintaining your LAN: Each device on the network needs to establish at least one virtual circuit (VC) to communicate with some point on the other end; LANE 1.0 uses at least four VCs for every LAN Emulation Client (LEC)-control direct, control distribute, multicast send and multicast forward-plus one or more data-direct VCs, over which actual user data is transmitted; and different edge devices can have anywhere from a single LEC to a LEC for every port.

When an ATM device is connected to the network, the LEC software sends out a query and establishes a call to another device at the other end of the wire. The time needed to set up this end-to-end VC is defined as the call setup time. Call setup times vary by vendor, but a typical setup takes approximately 100 milliseconds (ms)-about 10 call setups per second.

Call setup time isn't likely to have an impact on your network until something breaks-even the busiest Monday mornings shouldn't cause a noticeable delay, for the simple reason that most of the control VCs on the ATM fabric persist once they're established. But should a network disaster occur that forces a switch to reset, call setup time becomes crucial.

Consider the example of a switch servicing 3,000 network ports that is suddenly forced to reboot after three weeks of operation. Assuming the worst case, where each port is serviced by an individual LEC, the switch suddenly will be hit with about 15,000 requests to reestablish VCs-3,000 LECs times five VCs apiece. Not only must it deal with this onslaught of network service requests, it must also respond quickly. At current call setup speeds, it could take as long as 25 minutes for your network to stabilize-and that's assuming every request is handled on the first try.
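For readers who like to see the math, here is the back-of-envelope arithmetic behind that 25-minute figure, written as a small sketch. The assumptions are ours, for illustration only: one LEC per port, five VCs per LEC and setups handled one at a time at the 100-ms rate cited above.

```python
# Back-of-envelope recovery-time estimate after a switch reset.
# Illustrative assumptions only: one LEC per port, five VCs per LEC
# (four control VCs plus one data-direct), 100 ms per call setup,
# and setups handled serially with no retries.

PORTS = 3000
VCS_PER_LEC = 5         # 4 control VCs + 1 data-direct VC (LANE 1.0)
SETUP_SECONDS = 0.100   # ~100 ms per setup, about 10 setups per second

total_vcs = PORTS * VCS_PER_LEC       # 15,000 VC requests
recovery = total_vcs * SETUP_SECONDS  # 1,500 seconds
print(f"{total_vcs} setups take about {recovery / 60:.0f} minutes")
# Prints: 15000 setups take about 25 minutes
```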

Meanwhile, your helpdesk has exploded with frantic telephone calls, and your work force is stuck in neutral as your ATM network reestablishes itself. If the phones don't work because you're on the bleeding edge of technology and have integrated voice, video and data into a single ATM backbone, call setup time then becomes very important indeed.

Almost as important in choosing the right edge devices is determining the number of VCs each edge device will use. Although a LEC generally can support about 250 stations, a well-designed switch will keep this number more realistic-at five to 10 clients per LEC, for example. Grouped that way, the number of VCs required to get the network in our example up and running drops from 15,000 to a more manageable 3,000.

Equally important is calculating the number of VCs your core ATM switches can support. The deeper you go into your switched network, the more VCs your ATM devices need to support. Using a mesh of switches can alleviate VC headaches, but it raises the cost of your switched network. In general, support for more than 1,000 VCs in an edge switch is overkill.

Backbone switches can sustain anywhere from 2,000 to 64,000 VCs, but vendors recommend a minimum of 8,000 on core switches. A one-armed router might need as many as 20,000 VCs because of the large amount of traffic being forced through a single pipe. Cisco Systems offered two handy equations to help you estimate how many VCs your switching fabric will need to support, as well as the number an individual interface can handle (see the chart above, "Estimating Your Needs," and Network Computing Online at www.NetworkComputing.com for more on the equations used).
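Since the chart with Cisco's equations isn't reproduced here, the sketch below is our own hypothetical stand-in that captures the same kind of estimate: count LECs, then multiply by the VCs each one consumes. The function and the five-VCs-per-LEC figure are illustrative assumptions, not Cisco's formulas.

```python
# Hypothetical LANE VC estimator -- an illustration of the arithmetic,
# not Cisco's published equations.

def estimate_vcs(ports: int, ports_per_lec: int, data_direct: int = 1) -> int:
    """Each LEC holds four control VCs (LANE 1.0) plus one or more
    data-direct VCs that carry the actual user traffic."""
    lecs = -(-ports // ports_per_lec)   # ceiling division
    return lecs * (4 + data_direct)

print(estimate_vcs(3000, ports_per_lec=1))   # one LEC per port  -> 15000 VCs
print(estimate_vcs(3000, ports_per_lec=5))   # five ports per LEC -> 3000 VCs
```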

What ATM Offers Over Ethernet

The ATM standard promises quality of service (QoS) connections from end to end. Although vendors tout this remarkable feature, few have delivered on their promises. Recently, there has been much talk about similar protocols over Ethernet, which may have you wondering why you should bother to invest in ATM.

The Resource Reservation Protocol (RSVP) and Realtime Transport Protocol (RTP) let applications request that bandwidth be set aside for their traffic. As traffic starts to flow on a frame-based network, data is queued in a first-in, first-out (FIFO) buffer. Since Ethernet frames can range from 64 bytes to about 1.5 KB, there is no guarantee that the rate of delivery will be consistent. If two bandwidth-reserved frames arrive at an Ethernet switch at the same time, the one entering the buffer first will be transmitted first. Because the whole frame must be transmitted, the second stream has to be buffered behind it.

Contrast this with a cell-based switch. With 53-byte ATM cells, multiple priority queues can be set up, allowing different classes of traffic to be balanced against one another while each maintains a consistent rate of flow. Because the queues can be serviced independently, individual cells can be transmitted alternately, allowing the ATM switch to maintain true QoS to both destinations.

With Ethernet, the entire frame must be sent before the second queue can be serviced. Thus, even though bandwidth has been reserved, true quality of service is lost, because frame sizes vary from packet to packet.
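A toy model makes the difference concrete. The sketch below is our own illustration-it ignores overhead such as Ethernet preambles and interframe gaps-and clocks two equal-priority streams whose 1,500-byte messages arrive at the same instant, measuring time in byte-times on the output link.

```python
# Toy model: frame-at-a-time FIFO vs. per-queue cell interleaving.
# Streams A and B each send one 1,500-byte message arriving together.
# Time unit: one byte-time on the output link.

FRAME = 1500               # Ethernet payload, bytes
CELL = 53                  # ATM cell, bytes (48 payload + 5 header)
cells = -(-FRAME // CELL)  # 29 cells per message (ceiling division)

# Ethernet FIFO: A's whole frame goes first, B waits behind it.
eth_b_first_byte = FRAME          # B is stalled for a full frame time
eth_finish = (FRAME, FRAME * 2)   # A done at 1500, B at 3000

# ATM: the two queues are serviced alternately, one cell at a time.
atm_b_first_byte = CELL           # B's first cell follows A's first cell
atm_finish = ((2 * cells - 1) * CELL, 2 * cells * CELL)  # 3021, 3074

print("Ethernet: B's first byte at", eth_b_first_byte, "finish:", eth_finish)
print("ATM:      B's first byte at", atm_b_first_byte, "finish:", atm_finish)
```

Neither stream finishes sooner overall, but under ATM both flow at a steady, predictable rate from the first cell onward-which is exactly what a constant-rate connection needs.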

Quality of service over ATM is broken into several classes. Rather than attempting to grab a fixed amount of bandwidth, an ATM device can negotiate for the best available rate; as network dynamics change, it can renegotiate its connection for more or less bandwidth as necessary. These two factors-service classes and renegotiation-give ATM a clear edge over Ethernet QoS protocols.

Another major benefit of ATM networks over switched Ethernet is the ability to scale out to the edge. However, with desktop ATM deployment costing two to four times the price of an equivalent Ethernet network, the added complexity and expense of an ATM network may well outweigh its scalability advantage for some users.

As vendors have pointed out-and we'd have to agree-the middle road seems most promising: ATM out to the edge and switched Ethernet to the desktop. This approach offers the best of both worlds, with scalable bandwidth out to the edge of the network and the simplicity of Ethernet to the user. Core ATM networks with switched Ethernet or shared Fast Ethernet offer plenty of room to grow while keeping the learning and cost curves to a minimum.

ATM is a scalable network-you can add edge switches and switch-to-switch links until you run out of money or switch backplane bandwidth. However, with every switch you add, chances are you've integrated another subnet into your ATM network. Subnets in Ethernet translate to Virtual LANs (VLANs) in ATM-domains of restricted broadcast and multicast traffic. As your ATM network grows, the router that serves as the connection between VLANs also needs to grow.

Many shops will opt for a one-armed router with one or more ATM ports to handle traffic among VLANs. The snag with this design is that as intranet traffic and network size grow, the backbone router that once functioned mostly as a gateway to the backbone suddenly is inundated with inter-VLAN traffic.

Just as suddenly, your $50,000 ATM router isn't doing quite what you expected it to do. Bandwidth is only one problem: The number of VCs an individual router port can handle is a function of the hardware, and a big ATM network may push the router's connection table to its limits. ATM and LANE also add overhead to the routing process, and depending on what kind of processor your router has under the hood, you may be faced with less-than-optimal throughput even though the router has bandwidth to spare.

None of these problems can be solved by your existing routers. Two possible solutions are MPOA and Layer 3 routing. Both perform essentially the same function: They move the routing functionality off the central router and out to the switches.

Layer 3 routing requires switches that can make routing decisions on the fly. Well-placed VLANs can help move traffic to the right switch, where it can be routed to another VLAN and sent on to its destination.

MPOA, for its part, moves route processing to an MPOA server, while packet forwarding is handled by the edge devices. Layer 3 routing is available today in products such as the FORE Systems PowerHub 7000, and it is not ATM-centric; even a frame-based network can take advantage of it. MPOA is still in the works, though it is already functioning in gear from vendors such as Newbridge Networks. MPOA may prove more popular with customers investing in ATM technology today, as many ATM switches and edge devices are likely to support it in the future.

ATM promises to be scalable, but building an ATM network still requires considerable long-term planning. Options exist for putting 155-Mbps OC-3 links-as well as higher-speed 622-Mbps OC-12 links-between switches. If your current backbone is running out of steam, a single OC-12 link may seem appealing, but multiple OC-3 connections will give you roughly the same speed plus fault tolerance. Should one of your fiber connections be severed, a quad OC-3 implementation leaves you with spare links to keep your network running.
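The line-rate arithmetic, using our own rough numbers (nominal SONET rates, ignoring how traffic is actually balanced across the links):

```python
# One OC-12 vs. four OC-3s, using nominal SONET line rates in Mbps.
OC3, OC12 = 155, 622
quad = 4 * OC3
print("quad OC-3 aggregate:", quad, "Mbps")        # 620, within 2 of OC-12
print("after one fiber cut:", quad - OC3, "Mbps")  # 465 Mbps still running
# A lone OC-12 that gets cut leaves you with nothing until it's repaired.
```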

Cruising the Fast LANE

Can today's LANE 1.0 standard meet your ATM network needs? The standard allows for a single LANE configuration server per emulated LAN-if this server goes down, so does your ATM network. If waiting for MPOA or LANE 2.0 isn't an option, you might consider proprietary redundant solutions from Bay Networks, Cisco Systems and 3Com Corp. to keep your network up and running. Be aware that choosing this option could put you in the single-vendor boat-an undesirable position for many shops.

PNNI, the dynamic routing protocol that allows switches to reroute traffic in the event of a switch failure, likewise remains unsupported by most vendors. Fortunately, PNNI has been ratified for some time, so vendors should announce PNNI support in the first half of 1997. The protocol could save network managers many configuration headaches, because it allows ATM switches to negotiate their own routes, eliminating the need to define static IISP routes between each pair of switches.

Chances are your existing network armory also includes several troubleshooting devices-network sniffers, cable analyzers and other diagnostic tools. Remember that many of these tools will need to be replaced or updated if you plan to keep on top of your new ATM backbone. ATM sniffers are available, but we've generally found them difficult to use, as they're geared more toward equipment manufacturers than users. Besides, how useful is a standalone sniffer in a switched environment? You'd do better to look at the switch's onboard traffic-analysis features. Cable testing is somewhat easier, since most backbone implementations will run over optical fiber, and testers for this medium have been around for some time.

Finally, though the folks at Bay Networks insist that there is no added "cell tax" in ATM backbones, we beg to differ. ATM has hidden costs above and beyond its price tag. Managing the ATM network requires setting up numerous switch-to-switch connections and configuring some sort of LANE service to get your non-native ATM applications communicating.

ATM-attached servers generally show much higher CPU utilization, mainly because of the overhead LAN emulation incurs-and because Ethernet driver developers have had several years to refine and hone their drivers for maximum performance. Still, the lure of ATM is great, and maturing standards, including PNNI, LANE 2.0 and MPOA, give you good reason to believe that now is the time for ATM.

ATM to the desktop, however, has met a different fate: The complexity of LANE at the desktop has led many client sites to steer clear of the technology. At the Network Computing Real-World Distributed Labs at the University of Wisconsin at Madison, we set up two mock networks of 12 workstations each to compare ATM25 and switched Ethernet desktop technologies.

We took the project one step further by having vendors spec a full-scale production network similar to our simulation.

We asked Bay Networks, FORE Systems and 3Com to provide us with quotes for a production-sized ATM25 network and an equivalent switched Ethernet network. We specified the following restrictions in the problem design:

Must support 250 clients in at least 10 workgroups.

Must have a high-speed backbone.

Must have high-speed uplinks between the workgroup edge devices and the backbone.

Must support 10 backbone-attached servers.

With the help of Bay Networks, we attempted to build an end-to-end ATM network. We used the production lab as a test bed to build a scaled-down version of the network, consisting of two workgroups of six machines each. The "Network Shakedown" diagram on page 76 shows the end-to-end ATM network we tested. We benchmarked the networks using Novell's Perform3 and tested some of the QoS applications available under ATM.

The simulation tests favor Ethernet or Fast Ethernet as the dominant desktop technology. The results, including equipment required and individual costs, are outlined in the "End-to-End Network Solutions" chart below.

Our first test network consisted of 12 ATM25 clients with adapters provided by Adaptec, FORE and Madge Networks. We connected each workgroup to a Bay Networks MultiMedia Switch-a 25-Mbps ATM switch with a 155-Mbps OC-3 uplink. Each workgroup switch was connected via the 155-Mbps uplink to our backbone switch, a Bay Networks Centillion 100 running LANE services. We then attached our Novell NetWare 4.1 server to the Centillion using a 155-Mbps ForeRunner ATM adapter.

The second test network consisted of two workgroups of six switched 10-Mbps Ethernet clients connected to Bay Networks 28200 Ethernet switches. The switches were connected via Fast Ethernet to the Centillion 100, as was our Fast Ethernet-attached NetWare server.

Benchmarking the ATM network proved more challenging than we anticipated. Our first benchmarks used no packet bursting, and we measured 93.7 Mbps of aggregate throughput on the ATM network. Hoping to get more than 9 Mbps per ATM25 client, we removed six of the clients and re-ran the test.

The new configuration did not meet our expectations; we still achieved only 9.2 Mbps per client, or 55.6 Mbps of total throughput. Slightly puzzled by the lack of speed on our 25-Mbps ATM network, we decided to tweak the NET.CFG files to enable packet-burst support. This time our 12-machine test gave us 62 Mbps of aggregate throughput, but almost all of the throughput came from one switch-the other was totally shut out of the test.

Some quick investigation by our Bay Networks engineer helped us discover the weakness in our test bed-and, indeed, a weakness in ATM technology in general. One of the switches contained a rate-pacing board designed to buffer excess cells as they move from the 155-Mbps network to the 25-Mbps segment; the switch that was shut out didn't have one. Herein lies the problem: Going from high-speed connections to lower speeds requires large buffers and a good buffering scheme.

When we reconfigured the workstations on the unbuffered switch to not burst packets, the results were more in line with what we expected. The aggregate network throughput was 107 Mbps, with 71 percent server utilization. We also found that 4-KB datagrams went through the switch much faster than any of the larger message sizes we tried earlier. By moving all 12 workstations to the buffered switch, we saw as much as 124.3 Mbps of throughput, with 78 percent server utilization-about 10 Mbps per client, and approaching the limits of the OC-3 link between the server and the Centillion backbone switch.

To investigate QoS applications over ATM, we hooked up with First Virtual Corp. First Virtual makes a multimedia data server and ATM25 NICs, which, with the company's drivers, allow for constant bit rate (CBR) QoS connections. By playing streaming data from the video server over the OC-3 connection between the buffered switch and the NetWare server, we were able to generate approximately 90 Mbps of CBR traffic. Theoretically, that should have left us about 45 Mbps of bandwidth for our Perform3 benchmarks, since LAN Emulation uses unspecified bit rate (UBR) traffic-whatever bandwidth is left over after everything else has been accounted for.

Our benchmarks gave us 98.2 Mbps of traffic before the CBR video was started, and 53.5 Mbps while it was running. Clearly, some form of QoS was at work here: 44.7 Mbps of throughput had disappeared once the CBR video started.
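The numbers line up with a rough accounting of the OC-3's capacity. The 135-Mbps figure below is a common rule of thumb for an OC-3's usable payload once SONET framing and the five-byte cell header are subtracted from the 155-Mbps line rate-an assumption on our part, not a vendor specification.

```python
# Sanity check on the QoS test, in Mbps. The 135-Mbps usable-payload
# figure for OC-3 is a rule-of-thumb assumption, not a measured value.

usable_oc3 = 135.0
cbr_video = 90.0
print("expected UBR leftover:", usable_oc3 - cbr_video)  # ~45 Mbps

before, during = 98.2, 53.5   # measured Perform3 throughput
print("measured drop:", round(before - during, 1))       # 44.7 Mbps
```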

After reconfiguring our lab for the Ethernet environment, we ran the same set of benchmarks. This time, no rate-pacing difficulties seemed present. The Bay Networks 28200 switches handled the test with ease, and we achieved a maximum throughput of 94.2 Mbps to our NetWare server, which ran at a cool 13 percent utilization. The switched network was able to give each client almost 8 Mbps of dedicated bandwidth.

Several facts became clear from the results of these tests:

ATM adapters generate much higher CPU utilization in network file servers. We have seen this time and again-LANE takes extra processing power.

Our 25-Mbps ATM clients did only marginally better than our switched Ethernet clients, which indicates that ATM's modest performance edge doesn't justify its extra price at the desktop.

ATM clearly offers QoS features and applications that are only in the planning stages for Ethernet. Watch for QoS application programming interfaces (APIs) and applications to start shipping for ATM environments within the next six months.

Switch architecture plays a major role in ATM25 performance-a poorly designed switch will slow network throughput, especially when large amounts of bursty traffic are present.

The Vendor Challenge

The three vendors responded to our ATM network challenge with similar solutions. The degree of scalability varied slightly, with Bay Networks leaving the most room for expansion. The Bay solution consisted of 13 edge switches and a Centillion 100 at the core of the network; the entire 260-node solution worked out to $1,038 per node. The FORE Systems solution used 14 edge switches with an ASX-200BX at the heart of the network, and cost $1,024 per node. 3Com had the most expensive solution, consisting of 10 edge devices and a CELLPlex 7000 as the backbone switch; it worked out to $1,046 per node. These nearly identical prices can be attributed to the youth and added design complexity of 25-Mbps technology-factors that probably will remain in play for several years.

Constructing a similar network topology based on Ethernet and Fast Ethernet, we found prices to be both more reasonable and more varied. 3Com was the price leader with a 260-node solution that cost only $325 per node. Although all the products submitted offered some sort of management, the FORE solution was the only one to offer nine integrated Remote Monitoring (RMON) groups on every port, at a price of $451 per node. As icing on the cake, the FORE backbone solution also featured Layer 3 switching capability. The Bay solution ran $567 per node, and its two-switch design left the possibility of bottlenecks in the backbone, especially in client/server implementations.

ATM in the Marketplace

How will ATM shake out in the marketplace? Here are the vendors' perspectives.

FORE Systems

Scalability is one of ATM's great strengths, and FORE is driving the technology hard with the promise of 2.5-Gbps switch-to-switch links before the end of 1997. Not only does this speed blow even Gigabit Ethernet out of the water, it also promises to be available at about the same time. Like other vendors, FORE is realizing that customers want a total systems solution, and it is growing in that capacity with its recent acquisitions of Alantec and Applied Network Technology (ANT).

ATM on the backbone will continue to be FORE's core competency as it attempts to address the enterprise and midtier markets. If you follow FORE's preachings, 622-Mbps OC-12 backbones with 155-Mbps OC-3 to the closet are the only way to go.

FORE's product line, including the new PowerHub series, is geared toward providing ATM at the heart of the network while leaving legacy media in place. FORE has seen dramatic growth in the desktop ATM arena. Still, it believes desktop ATM will be reserved primarily for specialized applications that need the higher speeds or QoS capabilities.

Though routers will diminish in importance as switched networks grow, routing in general will remain key to corporate intranet and WAN applications. To address the growing volume of internal intranet traffic, FORE is building up its PowerHub line of high-end switches. These edge devices move traffic off the router and the backbone network by switching locally between networks. FORE claims the PowerHub series can even route traffic between different switches, further reducing the routing load on central routers.

Distributed routing in an ATM environment also eliminates the one-armed router. An OC-3 interface for a high-end router such as the Cisco 7000 series is likely to cost at least $25,000-a hefty price for a single router port. Depending on your network configuration and intranet traffic, a one-armed router can quickly become the biggest bottleneck in your network. Better to save those expensive router ports for WAN connectivity.

The numbers inevitably tell the truth: More than 60 percent of FORE's sales come from ATM switches, and overall sales grew by 90 percent over the past year. Support for standards like PNNI will be shipping soon, further enhancing ATM's allure, and a new degree of scalability and ease of use will follow. QoS remains problematic for now; edge switches can do some data analysis and bandwidth reservation, but QoS won't be a reality until applications start taking advantage of the power of ATM.

Bay Networks

Bay Networks has been hot on ATM for several years, but only now is its product line maturing into a viable ATM solution, thanks to some help from its Centillion acquisition. The big words in Bay's ATM vocabulary are QoS and resiliency. Like the other players in the ATM game, Bay sees ATM primarily as a WAN and backbone solution, with user-level access remaining 100 percent frame-based. Bay was particularly enthusiastic about ATM's failover capabilities. Unlike traditional spanning tree networks-which can take anywhere from 20 seconds to two minutes to converge after the loss of a switch-ATM networks generally fail over in less than 10 seconds. Your helpdesk is much less likely to be inundated with user complaints in the latter scenario.

Bay was the only vendor to really push QoS, no doubt largely because of its avid support of Integrated PNNI (I-PNNI) as a standardized network routing protocol (see "Confused About I-PNNI?" on page 84). Bay subscribes to the philosophy that building an ATM network starts with getting ATM to every building and every closet. Desktops should be no more than "one wire away" if users want to take advantage of future QoS applications. Even the most complex frame environments can't equal the kind of traffic management available on today's ATM switches, and running a second wire to the ATM switch provides more bandwidth plus a failover path if the primary link fails.

Bay believes ATM will outpace the growth of other backbone technologies three- to four-fold in the coming years, and that it will account for about 40 percent of all backbone infrastructure purchases in 1997. This growth can be attributed to the fact that customers are much more comfortable with ATM as a whole, Bay asserts. ATM interface hardware is available for a much larger variety of machines, and chances of successful interoperability are high. At the same time, frame-based technologies, with which many users are already familiar, have been completely integrated into the ATM picture.

Cisco Systems

Enterprise-router manufacturer Cisco Systems is not kicking back and waiting for ATM to catch up. Indeed, the company's recent acquisition of StrataCom has positioned Cisco at the head of the pack for WAN-based ATM solutions, ready to compete with WAN-access providers such as Cascade Communications Corp. and Newbridge. Cisco's main focus in the next year will be to capture the Internet as it migrates from frame-based routers to higher-speed switched ATM networks. According to Cisco, wide area ATM networking will become a reality in the coming year, thanks to recent advances in the Circuit Emulation and Inverse Multiplexing standards committees of the ATM Forum.

By providing inverse multiplexing through a standard method, Cisco hopes to solve the WAN-bandwidth problem. For many companies, a single T1 link doesn't provide enough capacity for their traffic, while a T3 line is prohibitively expensive and sometimes simply unavailable. By infusing the carrier market with switches capable of aggregating multiple T1s into an ATM fabric, Cisco hopes to offer these customers an alternative WAN solution.
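As a hypothetical illustration of why inverse multiplexing is attractive-the target rate and helper function below are our own, not Cisco's sizing method:

```python
import math

# How many T1s does it take to hit a target rate between T1 and T3?
# Illustrative arithmetic only; payload rates in Mbps.
T1, T3 = 1.544, 44.736

def t1s_needed(target_mbps: float) -> int:
    """Smallest number of inverse-multiplexed T1s meeting the target."""
    return math.ceil(target_mbps / T1)

n = t1s_needed(6.0)
print(n, "T1s deliver", round(n * T1, 2), "Mbps")  # 4 T1s deliver 6.18 Mbps
# Four T1s bridge the gap for a site that has outgrown one T1 (1.5 Mbps)
# but can't justify -- or can't get -- a 45-Mbps T3.
```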

Cisco has seen steady growth on the LAN front since the introduction of the Catalyst 5000 ATM board late last year. Cisco expects the ATM market to grow steadily with the industry as a whole, with ATM sales nearly doubling by next year. Cisco attributes the growth in the ATM sector to people's increasing awareness of what they need to build an ATM network. In addition, the performance of ATM infrastructure is up, and people are no longer infatuated with-or frightened by-the idea of a cell-based technology. The Cisco LightStream 1010 and Catalyst 5000 make ATM networking easier with an early implementation of PNNI, the dynamic routing protocol that promises to make ATM configuration not only more robust but also virtually effortless.

Cisco sees the ATM LAN as the next logical step in WAN connectivity. Using existing Cisco technology, such as the LightStream 1010, users can take advantage of available bit rate (ABR) service to cap the data rate sent over the WAN, permitting greater control over how an ATM WAN connection is utilized. If you use ATM on your backbone, you can leverage its strength to control your bandwidth. And with Cisco positioning itself to be the No. 1 provider of ATM technology to the telephone company market next year, you can almost bet that these features will be supported in your area before the year 2000.

3Com

3Com is riding the ATM wave as the next switched technology of choice, whether with ATM core networks or multifunction hubs with ATM support at the network center. At the edge, the decision is clearly Ethernet-preferably Fast Ethernet. ATM to the desktop is nixed for now, as the first real applications won't start hitting the market until the second half of 1997. But you can bet 3Com will roll out a line of ATM products in 1997 as it gears up to support next-generation LAN technologies such as ATM WAN services and desktop multimedia. Both will require bandwidth an order of magnitude greater than current backbones can support.

3Com is pushing for frame-based technologies at the ends of the network, including Fast Ethernet to the network server. Its logic is simple: Servers are starting to ship with Fast Ethernet on board, and 3Com goes so far as to recommend Fast Ethernet over ATM when reliability is critical. Although 3Com insists that ATM is the backbone solution of the future, it adds that switched networking in general is the wave of the future. The company estimates that ATM will hit its stride by next year, with about 20 percent of 3Com's upgrading customers choosing ATM. 3Com already has a solid ATM customer base: It claims to have installed more than 300 ATM backbones, with about 10 percent of all edge uplinks being ATM ports.

3Com was the only vendor to bring up the new Cells in Frames (CIF) protocol, which lets ATM-aware applications run over frame-based technologies such as Ethernet. Rather than coding to ATM hardware directly, developers write to a media-independent adapter driver and use a single API to handle QoS applications on the desktop. The result is that software developers can design QoS applications without sacrificing the majority of their users-Ethernet users. Although 3Com gives the impression of being somewhat skeptical about ATM, its product line suggests that the opposite is true: Its CELLPlex, LANPlex and ONcore networking products present a strong single-vendor LANE solution with the added feature of redundant LANE servers.

Newbridge Networks

Newbridge often is considered the odd guy out when it comes to ATM networking, primarily because it chose MPOA as its core LAN emulation technology while everyone else was climbing aboard the LANE bandwagon. Newbridge may yet have the last laugh: At the end of August, the ATM Forum took a decisive step toward making MPOA a reality, and vendors can finally start implementing software that will make their existing switches MPOA-compatible.

According to Newbridge, the core problem in building any network is that customers ask for technology that will improve their current network without replacing the backbone router. Most solutions increase available bandwidth but end up bringing the backbone router to its knees. To get around this, switched routing moves the route-forwarding function out to the edge of the network while centralizing the route-processing engine in a single route server (or MPOA server, as the naming convention now has it). This takes the load off the overworked forwarding engine of the traditional router model while also decreasing forwarding time, because packets are switched by the same high-speed switching engines that handle standard network traffic.

Newbridge insists that a scalable network must start with MPOA. In addition to enabling the distributed router model, the centralized route engine allows for some distinctive network management features. The route server registers hosts throughout the environment, giving the MPOA server a much greater degree of knowledge than a typical router has. For example, VLANs can be defined by protocol type as well as by port grouping, or they can change dynamically with the location of a MAC address. This ability also adds administrative functionality, such as the ability to lock a user to a designated set of ports, helping keep network intruders out.

The downside to choosing the MPOA protocol today is that you're stuck with Newbridge's proprietary implementation. Newbridge plans to adhere to the stripped-down standard the ATM Forum should ratify early next year, but it will likely be several months before other vendors release code that supports MPOA on their products. We can only hope that MPOA doesn't suffer the same interoperability trials that plagued early LANE implementations.

Joel Conover can be reached at jconover@nwc.com.

Copyright © 1996 CMP Media Inc.