Scenario:
We have two sites, with two Dell Compellent SC8000 controllers per site. For front-end connectivity, each controller has 2 x 10 GbE dual-port I/O cards and 1 x 1 GbE dual-port I/O card. At each site we have a Dell M1000e blade chassis with 4 x M8024-k 10 GbE blade switches and 2 x M6348 blade switches. For performance reasons, we are going to keep most of the iSCSI and network traffic contained inside the chassis; there will be uplinks to top-of-rack switches for iSCSI expansion to physical servers, uplinks to the core switches, and uplinks to edge switches for iSCSI replication across to the other site.
Fig. 1: Connections to Compellent Controllers
Our blades will be Dell PowerEdge M620s. They have dual 10 GbE NICs on the motherboard (Fabric A), Mezzanine 2 (Fabric B) is a quad-port 1 GbE card, and Mezzanine 1 (Fabric C) is a dual-port 10 GbE NIC card.
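As a quick reference, the fabric-to-NIC mapping described above can be captured in a few lines of Python. This is purely an illustrative summary of the layout for documentation purposes (the A/B/C slot naming follows the standard M1000e fabric convention), not input to any Dell or VMware tool.

FABRICS = {
    "A": {"switches": ["A1 (M8024-k)", "A2 (M8024-k)"],
          "blade_nic": "2 x 10 GbE LOM", "purpose": "iSCSI"},
    "B": {"switches": ["B1 (M6348)", "B2 (M6348)"],
          "blade_nic": "Quad-port 1 GbE mezzanine", "purpose": "Management / vMotion"},
    "C": {"switches": ["C1 (M8024-k)", "C2 (M8024-k)"],
          "blade_nic": "Dual-port 10 GbE mezzanine", "purpose": "VM networks"},
}

if __name__ == "__main__":
    # Print a one-line summary per fabric.
    for fabric, detail in FABRICS.items():
        print(f"Fabric {fabric}: {detail['purpose']:<22} "
              f"{detail['blade_nic']:<26} -> {', '.join(detail['switches'])}")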
Designs:
1. Blade Chassis Fabric A - 10 GbE used for iSCSI Traffic
This design uses the Fabric A blade switches (A1 and A2) as two independent iSCSI fabrics, connected with SFP+ 10 GbE copper cabling. Because each Compellent controller has 4 x 10 GbE front-end ports, for a total of 8 across the two controllers, this necessitates purchasing the additional 4-port SFP+ expansion module for each of the M8024-ks used for iSCSI. All the internal ports would be access ports for either the default VLAN or a chosen iSCSI VLAN.
Fig. 2: M1000e Chassis and iSCSI on Fabrics A1 and A2
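To make the cabling explicit, here is a minimal sketch of how the eight controller front-end ports could be split evenly across the two independent iSCSI fabrics, so that each controller keeps two paths into A1 and two into A2 (and each M8024-k ends up with four controller connections, hence the expansion modules). The controller and port labels are hypothetical, not Compellent's actual port naming.

CONTROLLERS = ["SC8000-A", "SC8000-B"]
PORTS_PER_CONTROLLER = 4            # 2 x dual-port 10 GbE I/O cards per controller
FABRIC_SWITCHES = ["A1 (M8024-k)", "A2 (M8024-k)"]

def cabling_plan():
    """Alternate each controller's front-end ports between the two fabrics."""
    plan = []
    for controller in CONTROLLERS:
        for port in range(1, PORTS_PER_CONTROLLER + 1):
            switch = FABRIC_SWITCHES[(port - 1) % 2]   # two ports per fabric per controller
            plan.append((controller, f"FE-port-{port}", switch))
    return plan

if __name__ == "__main__":
    for controller, port, switch in cabling_plan():
        print(f"{controller} {port} -> {switch} (access port, iSCSI VLAN)")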
2. Blade Chassis Fabric B - 1 GbE used for Management and vMotion
It could be argued that adding a quad-port 1 GbE NIC to a blade server that already has 2 x dual-port 10 GbE NICs is unnecessary, but in this scenario we have the M6348 blade switches and the quad-port 1 GbE NICs available, and one very useful feature of the M6348 is its external 1 GbE ports, which we can use for management links. All the internal ports would be trunk ports.
Fig. 3: M1000e Chassis and Management/vMotion on Fabrics B1 and B2
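As a sketch of how the quad-port 1 GbE mezzanine might be carved up on each ESXi host, the layout below puts the Management and vMotion VMkernel port groups on one vSwitch, each tagged with its own VLAN because the internal M6348 ports are trunks. The VLAN IDs, vmnic numbering, and failover order are assumptions for illustration only.

VSWITCH_FABRIC_B = {
    "uplinks": ["vmnic4", "vmnic5", "vmnic6", "vmnic7"],   # the quad-port 1 GbE mezzanine
    "portgroups": {
        "Management": {"vlan": 10, "active": ["vmnic4", "vmnic6"],
                       "standby": ["vmnic5", "vmnic7"]},
        "vMotion":    {"vlan": 20, "active": ["vmnic5", "vmnic7"],
                       "standby": ["vmnic4", "vmnic6"]},
    },
}

if __name__ == "__main__":
    for pg, cfg in VSWITCH_FABRIC_B["portgroups"].items():
        print(f"{pg:<12} VLAN {cfg['vlan']:<4} "
              f"active={cfg['active']} standby={cfg['standby']}")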
3. Blade Chassis Fabric C - 10 GbE used for VM Networks
Here we use the M8024-ks for virtual machine network traffic (at a later date we could also move the ESXi host Management and vMotion VMkernels here if we require more throughput than the 1 GbE NICs on Fabric B will give). It is important here to have the correct connectors for the uplinks to the core switch. All internal ports would be trunk ports.
Fig. 4: M1000e Chassis and VM Networks on Fabrics C1 and C2
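Because the VM traffic has to leave the chassis over the trunked uplinks to the core, a simple consistency check is to confirm that every VM port group VLAN is in the allowed list on those uplinks; the sketch below shows the idea with made-up VLAN numbers.

# VM port groups and the VLANs allowed on the C1/C2 uplink trunks (illustrative values).
VM_PORTGROUP_VLANS = {"Prod": 100, "Dev": 110, "DMZ": 120}
CORE_UPLINK_ALLOWED_VLANS = {100, 110, 120}

def missing_vlans():
    """Return any VM port-group VLANs not allowed on the core uplink trunks."""
    return {name: vlan for name, vlan in VM_PORTGROUP_VLANS.items()
            if vlan not in CORE_UPLINK_ALLOWED_VLANS}

if __name__ == "__main__":
    gaps = missing_vlans()
    print("Uplink VLAN check:", "OK" if not gaps else f"missing {gaps}")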
4. Compellent iSCSI Replication Connections to Edge Switches
Finally, we need to integrate the Compellent controllers with the edge switches for the iSCSI replication traffic. The image below shows connections to either a single edge switch or stacked edge switches.
Fig. 5: Compellent iSCSI Replication Connections to Edge Switches
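Once the replication links are cabled, a quick way to verify the path end to end is to attempt a TCP connection from one site to the other site's replication front-end ports on the standard iSCSI port, TCP 3260. A small sketch follows; the IP addresses are placeholders for whatever the remote Compellent front-end ports actually use.

import socket

REMOTE_REPLICATION_TARGETS = ["10.2.50.11", "10.2.50.12"]   # placeholder remote-site IPs
ISCSI_TCP_PORT = 3260                                       # standard iSCSI port

def reachable(host, port=ISCSI_TCP_PORT, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for ip in REMOTE_REPLICATION_TARGETS:
        state = "reachable" if reachable(ip) else "NOT reachable"
        print(f"{ip}:{ISCSI_TCP_PORT} {state}")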
Final Word
The title says 'A Design…' for a reason: there can be many different designs that will fit different implementation scenarios. The most important things any design must do are integrate successfully with the existing infrastructure and satisfy potential needs for future expansion. As always, comments are most welcome!