Techopedia explains Multilayer Switch. Traditionally, switches are network devices that forward data packets based on Layer 2 information, such as media access control (MAC) addresses. A multilayer switch, by contrast, can also make forwarding decisions based on information from higher layers, such as Layer 3 IP addresses.
A great tool for network efficiency, usually implemented in corporate settings
A multilayer switch provides QoS, policy enforcement, service levels, and security. A well-designed network not only controls traffic, but also limits the size of failure domains. A failure domain is the area of a network that is impacted when a critical device or network service experiences problems. The function of the device that initially fails determines the impact of a failure domain. For example, a malfunctioning switch on a network segment normally affects only the hosts on that segment. However, if the router that connects this segment to others fails, the impact is much greater.
The use of redundant links and reliable enterprise-class equipment minimizes the chance of disruption in a network. Smaller failure domains reduce the impact of a failure on company productivity. They also simplify the troubleshooting process, thereby shortening the downtime for all users.
Because a failure at the core layer of a network can have a potentially large impact, the network designer often concentrates efforts on preventing failures there. These efforts can greatly increase the cost of implementing the network. In the hierarchical design model, it is easiest, and usually least expensive, to control the size of a failure domain in the distribution layer.
In the distribution layer, network errors can be contained to a smaller area, thus affecting fewer users. When using Layer 3 devices at the distribution layer, every router functions as a gateway for a limited number of access layer users. Routers, or multilayer switches, are usually deployed in pairs, with access layer switches evenly divided between them.
This configuration is referred to as a building, or departmental, switch block. Each switch block acts independently of the others. As a result, the failure of a single device does not cause the network to go down. Even the failure of an entire switch block does not affect a significant number of end users.
To support an enterprise network, the network designer must develop a strategy to enable the network to be available and to scale effectively and easily. A basic network design strategy includes recommendations for redundancy, link aggregation, and expanded access, discussed below. For many organizations, the availability of the network is essential to supporting business needs. Redundancy is an important part of network design for preventing disruption of network services by minimizing the possibility of a single point of failure.
One method of implementing redundancy is by installing duplicate equipment and providing failover services for critical devices. Another method of implementing redundancy is redundant paths. Redundant paths offer alternate physical paths for data to traverse the network. Redundant paths in a switched network support high availability. However, due to the operation of switches, redundant paths in a switched Ethernet network may cause logical Layer 2 loops.
STP eliminates Layer 2 loops when redundant links are used between switches. It does this by providing a mechanism for disabling redundant paths in a switched network until the path is necessary, such as when failures occur. STP is an open standard protocol, used in a switched environment to create a loop-free logical topology. In hierarchical network design, some links between access and distribution switches may need to process a greater amount of traffic than other links.
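The STP process begins with electing a single root bridge: every switch compares bridge IDs, which consist of a configurable priority and the switch's MAC address, and the lowest ID wins; redundant paths away from the root are then blocked. The election step can be sketched as follows (the switch names, priorities, and MAC addresses are illustrative):

```python
# Sketch of STP root bridge election. A bridge ID is the pair
# (priority, MAC address); tuples compare element by element, so a
# lower priority always wins, with the MAC address as tiebreaker.
switches = {
    "SW1": (32768, "00:1a:2b:3c:4d:01"),
    "SW2": (4096,  "00:1a:2b:3c:4d:02"),   # lowered priority -> becomes root
    "SW3": (32768, "00:1a:2b:3c:4d:00"),
}

def elect_root(bridge_ids):
    """Return the switch whose (priority, MAC) bridge ID is lowest."""
    return min(bridge_ids, key=lambda name: bridge_ids[name])

print(elect_root(switches))  # SW2: priority 4096 beats any 32768 bridge
```

In a real network, administrators lower the priority on the intended root (typically a core or distribution switch) rather than letting MAC addresses decide.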
As traffic from multiple links converges onto a single, outgoing link, it is possible for that link to become a bottleneck. Link aggregation allows an administrator to increase the amount of bandwidth between devices by creating one logical link made up of several physical links. EtherChannel is a form of link aggregation used in switched networks. EtherChannel uses the existing switch ports; therefore, additional costs to upgrade the link to a faster and more expensive connection are not necessary. The EtherChannel is seen as one logical link using an EtherChannel interface.
Most configuration tasks are done on the EtherChannel interface, instead of on each individual port, ensuring configuration consistency throughout the links. Finally, the EtherChannel configuration takes advantage of load balancing between links that are part of the same EtherChannel, and depending on the hardware platform, one or more load-balancing methods can be implemented. The network must be designed to be able to expand network access to individuals and devices, as needed. An increasingly important aspect of extending access layer connectivity is through wireless connectivity.
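A typical load-balancing method hashes frame addresses to select a member link, so every frame of a given flow uses the same physical port and frame ordering is preserved. A simplified sketch, loosely modeled on a source-destination MAC hashing scheme (the link names and the 3-bit hash are illustrative):

```python
def choose_member_link(src_mac: str, dst_mac: str, links: list) -> str:
    """Pick a physical member link by XOR-ing the low bits of both MAC
    addresses -- similar in spirit to src-dst-mac load balancing."""
    low = lambda mac: int(mac.replace(":", ""), 16) & 0b111  # low 3 bits
    return links[(low(src_mac) ^ low(dst_mac)) % len(links)]

bundle = ["Gi0/1", "Gi0/2", "Gi0/3", "Gi0/4"]
# The same address pair always hashes to the same link, preserving
# per-flow frame ordering while spreading different flows across links.
print(choose_member_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", bundle))
```

Note that a single flow never exceeds the speed of one physical link; the aggregate bandwidth is only realized across many flows.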
Providing wireless connectivity offers many advantages, such as increased flexibility, reduced costs, and the ability to grow and adapt to changing network and business requirements. Additionally, a wireless router or a wireless access point (AP) is required for users to connect, as shown in the figure. There are many considerations when implementing a wireless network, such as the types of wireless devices to use, wireless coverage requirements, interference considerations, and security considerations.
Enterprise networks and ISPs often use more advanced protocols, such as link-state protocols, because of their hierarchical design and ability to scale for large networks. Link-state routing protocols such as Open Shortest Path First (OSPF), as shown in the following figure, work well for larger hierarchical networks where fast convergence is important. When routers initiate an adjacency with neighbors, an exchange of link-state updates begins.
Routers reach a FULL state of adjacency when they have synchronized views of their link-state databases. With OSPF, link-state updates are sent when network changes occur. OSPF is a popular link-state routing protocol that can be fine-tuned in many ways. OSPF networks are built around a backbone area, area 0; as the network is expanded, other, non-backbone areas can be created. All non-backbone areas must directly connect to area 0.
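Once the link-state databases are synchronized, each router independently runs Dijkstra's shortest path first algorithm over that database to compute its best routes. A compact sketch over a toy three-router topology (the router names and link costs are illustrative):

```python
import heapq

def spf(lsdb, root):
    """Dijkstra's SPF calculation. lsdb maps router -> {neighbor: cost};
    returns the lowest total cost from root to every reachable router."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neigh, link_cost in lsdb[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neigh, float("inf")):
                dist[neigh] = new_cost
                heapq.heappush(heap, (new_cost, neigh))
    return dist

lsdb = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R3": 1},
    "R3": {"R1": 1, "R2": 1},
}
# R1 reaches R2 via R3 at cost 2, beating the direct cost-10 link.
print(spf(lsdb, "R1"))
```

Because every router runs the same computation over the same synchronized database, all routers converge on consistent, loop-free paths.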
Enhanced Interior Gateway Routing Protocol (EIGRP) is another scalable option. For example, EIGRP uses multiple tables to manage the routing process, as shown in the following figure. EIGRP contains many features that are not found in other routing protocols.
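One such feature is the DUAL feasibility condition, which EIGRP applies to its topology table to identify loop-free backup routes: a neighbor qualifies as a feasible successor for a destination only if its reported distance is strictly less than the current feasible distance. A sketch of that check (the neighbor names and metric values are illustrative):

```python
def feasible_successors(feasible_distance, neighbors):
    """Return neighbors whose reported distance (RD) is strictly less
    than the feasible distance (FD) -- DUAL's loop-freedom guarantee."""
    return [name for name, rd in neighbors.items() if rd < feasible_distance]

# Topology-table entries for one destination: neighbor -> reported distance.
neighbors = {"R2": 20, "R3": 90, "R4": 120}
fd = 100  # best known total metric through the current successor
print(feasible_successors(fd, neighbors))  # R4's RD of 120 fails the check
```

Routes that pass this check can be installed instantly when the primary path fails, without recomputation, which is a key reason for EIGRP's fast convergence.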
It is an excellent choice for large, multi-protocol networks that employ primarily Cisco devices. When designing a network, it is important to select the proper hardware to meet current network requirements, as well as allow for network growth. Within an enterprise network, both switches and routers play a critical role in network communication.
When selecting switches, network administrators must determine the switch form factor: fixed configuration, modular configuration, and stackable or non-stackable. The thickness of the switch, expressed in rack units, is also important for switches that are mounted in a rack. For example, many fixed configuration switches are one rack unit (1U) high. In addition to these considerations, the following list highlights other common business considerations when selecting switch equipment. The port density of a switch refers to the number of ports available on a single switch.
The figure shows the port density of three different switches. Fixed configuration switches typically support up to 48 ports on a single device. They have options for up to four additional ports for small form-factor pluggable (SFP) devices. High port densities allow for better use of limited space and power. If there are two switches that each contain 24 ports, they would be able to support up to 46 devices, because at least one port per switch is used to connect each switch to the rest of the network.
In addition, two power outlets are required. Alternatively, a single 48-port switch can support 47 devices, with only one port used to connect the switch to the rest of the network, and only one power outlet needed to accommodate the single switch. Modular switches can support very high port densities through the addition of multiple switch port line cards.
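The arithmetic behind these device counts is simple: each standalone switch gives up at least one port to its uplink, so fewer, denser switches leave more ports for end devices. A quick sketch:

```python
def usable_ports(ports_per_switch, switch_count, uplinks_per_switch=1):
    """Ports left for end devices after reserving uplink ports on
    each switch in a group of standalone switches."""
    return switch_count * (ports_per_switch - uplinks_per_switch)

print(usable_ports(24, 2))   # two 24-port switches -> 46 device ports
print(usable_ports(48, 1))   # one 48-port switch  -> 47 device ports
```

The gap widens further if each switch needs redundant uplinks or EtherChannel bundles, since every additional uplink port is subtracted from the device count.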
For example, some Catalyst switches can support in excess of 1,000 switch ports. Large enterprise networks that support many thousands of network devices require high-density, modular switches to make the best use of space and power. Without using a high-density modular switch, the network would need many fixed configuration switches to accommodate the number of devices that need network access.
This approach can consume many power outlets and a lot of closet space. The network designer must also consider the issue of uplink bottlenecks: a series of fixed configuration switches may consume many additional ports for bandwidth aggregation between switches in order to achieve target performance. With a single modular switch, bandwidth aggregation is less of an issue, because the backplane of the chassis can provide the necessary bandwidth to accommodate the devices connected to the switch port line cards.
Forwarding rates define the processing capabilities of a switch by rating how much data the switch can process per second.
Switch product lines are classified by forwarding rates, as shown in the figure. Entry-level switches have lower forwarding rates than enterprise-level switches. Forwarding rates are important to consider when selecting a switch. If the switch forwarding rate is too low, it cannot accommodate full wire-speed communication across all of its switch ports. Wire speed is the data rate that each Ethernet port on the switch is capable of attaining.
Fortunately, access layer switches typically do not need to operate at full wire speed, because they are physically limited by their uplinks to the distribution layer. This means that less expensive, lower performing switches can be used at the access layer, and more expensive, higher performing switches can be used at the distribution and core layers, where the forwarding rate has a greater impact on network performance.
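The wire-speed requirement is simply the sum of the port speeds: a 48-port Gigabit switch needs a 48 Gbps forwarding rate to run every port at full wire speed simultaneously, and a lower rate means the switch is oversubscribed. A sketch of that check (the 32 Gbps figure below is an illustrative example, not a specific product rating):

```python
def required_rate_gbps(port_count, port_speed_gbps):
    """Aggregate forwarding rate needed to serve every port at wire speed."""
    return port_count * port_speed_gbps

def is_wire_speed(forwarding_rate_gbps, port_count, port_speed_gbps):
    """True if the switch fabric can carry all ports at full speed."""
    return forwarding_rate_gbps >= required_rate_gbps(port_count, port_speed_gbps)

# A 48-port Gigabit access switch with a 32 Gbps forwarding rate is
# oversubscribed (48 Gbps would be needed) -- often acceptable at the
# access layer, where uplinks are the real limit.
print(is_wire_speed(32, 48, 1))   # False
print(is_wire_speed(48, 48, 1))   # True
```

This is why oversubscription ratios matter more at the distribution and core layers, where many access uplinks converge.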
Power over Ethernet (PoE) allows the switch to deliver power to a device over the existing Ethernet cabling. This feature can be used by IP phones and some wireless access points. Figure 1 highlights the PoE ports on each device. PoE allows more flexibility when installing wireless access points and IP phones, because they can be installed anywhere there is an Ethernet cable.
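When sizing a PoE switch, both the per-port wattage and the switch's total power budget matter: IEEE 802.3af delivers up to 15.4 W per port and 802.3at (PoE+) up to 30 W. A sketch of a budget check (the device wattages and the 370 W budget below are illustrative values, not a specific product's rating):

```python
# Per-port maximums defined by the IEEE standards.
POE_PORT_MAX_W = {"802.3af": 15.4, "802.3at": 30.0}

def within_budget(devices, budget_w, standard="802.3at"):
    """devices: list of (name, watts) pairs. True if the total draw fits
    the switch's power budget and no single device exceeds the per-port
    maximum for the given PoE standard."""
    total = sum(watts for _, watts in devices)
    per_port_ok = all(watts <= POE_PORT_MAX_W[standard] for _, watts in devices)
    return total <= budget_w and per_port_ok

# 24 IP phones plus 8 access points against an illustrative 370 W budget.
attached = [("ip-phone", 6.5)] * 24 + [("access-point", 15.4)] * 8
print(within_budget(attached, 370.0))
```

A deployment can fail either check independently: many low-power phones can exhaust the total budget, while a single high-draw device can exceed the per-port maximum even on a lightly loaded switch.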
A network administrator should confirm that PoE features are actually required, because switches that support PoE are expensive. PoE pass-through allows a network administrator to power PoE devices connected to the switch, as well as the switch itself, by drawing power from certain upstream switches. Figure 2 shows a Cisco Catalyst switch with this capability. Multilayer switches are typically deployed in the core and distribution layers of an organization's switched network.