Traditional 3-Tier Data Center Networks

About 20 years ago, improved network availability drove an increase in the number of applications, which in turn created a need for more servers. Hence, a large number of access ports had to be provisioned in data centers. Cloud computing did not exist at the time, and private data centers were the norm. Each enterprise had to invest in its own infrastructure, and multi-tenancy was largely limited to service provider networks, since the VLAN address space allowed only 4094 VLANs within a data center.

With so many servers in the data center, access switches needed more ports to connect to them; these switches formed the access layer. Access switches, in turn, were aggregated into a distribution/aggregation layer, typically located at the end of each data center row. The distribution switches were typically modular chassis with a large number of downstream ports to aggregate many access switches.

The next design consideration was to connect all these distribution switches to a high-bandwidth, low-latency transport core with a high level of redundancy. The core layer's only function was data transport with minimal routing overhead.

Switching functionality was limited to the access and distribution layers, with the distribution layer acting as the inter-VLAN gateway. The core layer did not participate in switching and focused only on routing. The rest of the network typically connected to the core or to a dedicated distribution switch pair. Multiple uplinks were provisioned for redundancy at every layer, and redundant device pairs with identical configurations were deployed in HA for fast failover, which was achieved using cross-links between the pair. Cisco's Virtual Switching System (VSS) was one popular example of such a configuration.

Problems with the Traditional 3-Tier Data Centers

The 3-tier data centers were mainly designed for north-south traffic flows at a time when server virtualization was in a nascent phase. As virtualization became more commonplace, applications were distributed across multiple physical servers spread throughout the data center and, in some cases, across multiple data centers. This exposed the hop-count problem in the three-tier architecture. A typical traffic flow looked like this: Server1 -> Access1 -> Dist1 -> Core (can be more than one hop) -> Dist2 -> Access2 -> Server2, a minimum of four hops between the source and destination access switches.
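To make that hop count concrete, below is a minimal Python sketch; the device names and the single collapsed core node are assumptions for illustration. It models the 3-tier topology as a graph and counts switch-to-switch hops with a breadth-first search:

from collections import deque

# Hypothetical 3-tier topology: two access switches in different rows,
# each homed to its own distribution switch, with the core collapsed
# into a single node for simplicity.
links = {
    "access1": ["dist1"],
    "access2": ["dist2"],
    "dist1":   ["access1", "core"],
    "dist2":   ["access2", "core"],
    "core":    ["dist1", "dist2"],
}

def hop_count(src, dst):
    """Breadth-first search returning the number of links traversed."""
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for peer in links[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, hops + 1))
    return None

print(hop_count("access1", "access2"))  # 4 hops: Access1 -> Dist1 -> Core -> Dist2 -> Access2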

The second, and one of the most important, factors was Spanning Tree. Managing so many STP domains proved problematic, and blocked ports meant that not all available uplinks were actually used. Provisioning new applications required agility, and network teams could not keep pace with the dynamic nature of applications while working within rigid Layer 2 designs.

Lastly, VM mobility was becoming increasingly necessary for DR and HA scenarios involving critical workloads. This was difficult to implement in the 3-tier model, as the L2 and L3 design of the destination data center meant that the VMs had to be readdressed. This broke applications and required reconfiguration at both the user and administrator levels.

In addition, the development of switching silicon (SoC) capabilities by the network OEMs meant that near-line-rate routing lookups were possible. There was no longer any real need to restrict routing to the core and distribution layers; it could be extended to the access layer, alleviating the issues with Spanning Tree.

Introduction to the CLOS Non-Blocking Fabric

The above problems with 3-tier data centers led OEMs to devise a simpler architecture based on the following principles:

  • 3-stage fabric – ingress, crossbar, egress
  • Equal-cost multipathing (ECMP)
  • The fabric contains only switches
  • All switches are multilayer switches with both Layer 2 and Layer 3 capabilities
  • The subscription ratio can be improved by adding spine switches (see the sketch after this list)
  • The number of access ports can be increased by adding leaf switches

A leaf-spine network is an implementation of the CLOS architecture.
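To illustrate the last two principles, here is a minimal Python sketch; the port counts and speeds (48 x 25G server ports per leaf and one 100G uplink per spine) are assumptions for illustration. It shows how adding spine switches improves the oversubscription ratio on each leaf and how adding leaf switches grows the total access port count:

def oversubscription(server_ports, server_speed_gbps, num_spines, uplink_speed_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on a single leaf.
    Each leaf has one uplink per spine, so adding spines adds uplink capacity."""
    downlink_gbps = server_ports * server_speed_gbps
    uplink_gbps = num_spines * uplink_speed_gbps
    return downlink_gbps / uplink_gbps

# Assumed leaf profile: 48 x 25G server ports, 100G uplinks (one per spine).
for spines in (2, 4, 6, 12):
    print(f"{spines} spines -> {oversubscription(48, 25, spines, 100):.1f}:1")

# Access ports scale linearly with the number of leaves (48 ports per leaf assumed).
for leaves in (4, 16, 32):
    print(f"{leaves} leaves -> {leaves * 48} access ports")

With this assumed leaf profile, twelve spines bring the ratio down to 1:1, the non-blocking case mentioned in the advantages below.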

Understanding the Switch Roles – The ‘Leaf’ and the ‘Spine’

In the CLOS fabric, there are new switch roles to understand. 'Leaf' is another name for the access switch; both the ingress and egress switches in the CLOS fabric are leaf switches.

In addition, there is the 'spine' role, which provides the crossbar function in a 3-stage CLOS fabric. Because all leaf switches connect to this layer, it forms the spine of the fabric, hence the name.

Each leaf must connect to every spine in the network and use ECMP to utilize all available paths for forwarding. The spine switches do not need to connect to each other; they are used only for connectivity between the leaf switches.
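As a rough illustration of how ECMP spreads traffic over all leaf-to-spine uplinks, the Python sketch below hashes a flow's 5-tuple and picks one spine per flow. The hash function and field selection are simplified assumptions; real switches use vendor-specific hardware hashing:

import hashlib

spines = ["spine1", "spine2", "spine3", "spine4"]  # one uplink per spine (assumed fabric)

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    """Pick an uplink by hashing the flow 5-tuple (simplified model).
    Packets of the same flow always hash to the same spine, preserving
    per-flow packet order, while different flows spread across all spines."""
    key = f"{src_ip},{dst_ip},{proto},{src_port},{dst_port}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(spines)
    return spines[index]

print(ecmp_next_hop("10.0.1.10", "10.0.2.20", "tcp", 49152, 443))
print(ecmp_next_hop("10.0.1.11", "10.0.2.20", "tcp", 49153, 443))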

While it is possible to connect external networks to spine switches, it is recommended to have dedicated switches called ‘border-leaf’ switches that form the border of the fabric and connect to external networks.

Advantages of Using a Leaf-Spine Architecture

The advantages of using a leaf-spine architecture are as follows; the sketch after the list illustrates the path-count and hop-count points:

  • Non-blocking – no Spanning Tree blocked ports
  • All available paths to the destination access switch are utilized
  • The number of available paths equals the number of spine switches in the network
  • A subscription ratio of 1:1 can be achieved in most cases
  • Source to destination is always two hops across the fabric
  • Deterministic east-west traffic paths
  • No Spanning Tree needed – no listening/learning delays before ports forward during reconvergence
  • A fully Layer 3 data center is possible
Rahi can help enterprises identify and deploy the latest leaf-spine solutions available in the market from a multitude of vendors. Rahi has extensive experience deploying highly scalable data center networks across the globe, with professional services and managed services teams for Day 1 configuration and Day 2 support.


