Dell M1000e


Dell's blade server products are built around the M1000e enclosure, which can hold server blades, an embedded EqualLogic iSCSI storage area network, and I/O modules including Ethernet, Fibre Channel and InfiniBand switches.

Enclosure

The M1000e fits in a 19-inch rack and is 10 rack units high, 17.6" wide and 29.7" deep. The empty blade enclosure weighs 44.5 kg while a fully loaded system can weigh up to 178.8 kg.
Servers are inserted at the front, while the power supplies, fans and I/O modules are inserted at the back together with the management modules and the KVM switch. The blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to access the main console of each installed server.
In June 2013, Dell introduced the PowerEdge VRTX, a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy (e.g. M520, M620), are not interchangeable between the VRTX and the M1000e: the blades differ in firmware and mezzanine connectors.
In 2018, Dell introduced the PowerEdge MX7000, an MX enclosure model and the next generation of Dell enclosures.
The M1000e enclosure has a front side and a back side, and all communication between the inserted blades and modules goes via the midplane. The midplane has the same function as a backplane but has connectors on both sides: the front side is dedicated to server blades and the back to I/O modules.

Midplane

The midplane is completely passive. The server-blades are inserted in the front side of the enclosure while all other components can be reached via the back.
The original midplane 1.0 offers: Fabric A - 1 Gb Ethernet; Fabrics B & C - 1, 10 or 40 Gb Ethernet, 4 or 8 Gb Fibre Channel, and InfiniBand DDR, QDR or FDR10. The enhanced midplane 1.1 offers: Fabric A - 1 or 10 Gb Ethernet; Fabrics B & C - 1, 10 or 40 Gb Ethernet, 4, 8 or 16 Gb Fibre Channel, and InfiniBand DDR, QDR, FDR10 or FDR.
The original M1000e enclosures came with midplane version 1.0 but that midplane did not support the 10GBASE-KR standard on fabric A. To have 10 Gb Ethernet on fabric A or 16 Gb Fibre Channel or InfiniBand FDR on fabrics B&C, midplane 1.1 is required.
Current versions of the enclosure come with midplane 1.1, and the midplane can be upgraded. The factory-installed version can be read from the markings on the back of the enclosure, just above the I/O modules: an "arrow down" above the six I/O slots indicates that midplane 1.0 was installed in the factory, while three or four horizontal bars indicate midplane 1.1. Because the midplane can be upgraded, the outside markings are not decisive: the actually installed midplane version is visible via the CMC management interface.

Front: blade servers

Each M1000e enclosure can hold up to 32 quarter-height blades, 16 half-height blades, 8 full-height blades, or combinations of these. The half-height slots are numbered 1-16, where slots 1-8 form the upper row and slots 9-16 sit directly beneath them. A full-height blade occupies both slot n and slot n+8.
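The slot arithmetic above can be sketched with a small helper function (a hypothetical illustration; the names are not part of any Dell tooling):

```python
def slots_for_blade(slot: int, form_factor: str) -> list[int]:
    """Return the half-height slot numbers occupied by a blade.

    Half-height slots are numbered 1-16: slots 1-8 form the upper row
    and slots 9-16 sit directly beneath them. A full-height blade in
    slot n also occupies slot n + 8.
    """
    if form_factor == "half":
        if not 1 <= slot <= 16:
            raise ValueError("half-height slots are numbered 1-16")
        return [slot]
    if form_factor == "full":
        if not 1 <= slot <= 8:
            raise ValueError("full-height blades are addressed by their upper slot (1-8)")
        return [slot, slot + 8]
    raise ValueError(f"unknown form factor: {form_factor!r}")

print(slots_for_blade(3, "full"))  # [3, 11]
```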
Integrated at the bottom of the front-side is a connection-option for 2 x USB, meant for a mouse and keyboard, as well as a standard VGA monitor connection. Next to this is a power-button with power-indication.
Next to this is a small LCD screen with navigation buttons, which provides system information without the need to access the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display, one can pull it forward and tilt it for optimal viewing and access to the navigation buttons. For quick status checks, an indicator light sits alongside the LCD display and is always visible: a blue LED indicates normal operation, while an orange LED indicates a problem of some kind.
This LCD display can also be used for the initial configuration wizard in a newly delivered system, allowing the operator to configure the CMC IP address.

Back: power, management and I/O

All other parts and modules are placed at the rear of the M1000e. The rear side is divided into three sections. At the top one inserts the three management modules: one or two CMC modules and an optional iKVM module. At the bottom of the enclosure are six bays for power supply units; a standard M1000e operates with three PSUs.
The area in between offers 3 × 3 bays for cooling fans and up to six I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are numbered A1, B1 and C1, while the right-hand side holds A2, B2 and C2. The fabric A I/O modules connect to the on-board I/O controllers, which in most cases will be a dual 1 Gb or 10 Gb Ethernet NIC. When the blade has a dual-port on-board 1 Gb NIC, the first NIC connects to the I/O module in fabric A1 and the second NIC connects to fabric A2. I/O modules in fabrics B1/B2 connect to Mezzanine card B (or 2) in the server, and fabric C to Mezzanine card C (or 3).
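The adapter-to-fabric mapping described above can be sketched as a small lookup (a hypothetical illustration; the names are not part of any Dell tooling):

```python
# Each blade-side adapter maps to one fabric; port 1 of an adapter goes
# to the left-hand module (x1), port 2 to the right-hand module (x2).
FABRIC_OF_ADAPTER = {
    "LOM": "A",          # on-board dual 1 Gb or 10 Gb Ethernet NIC
    "Mezzanine B": "B",  # also called Mezzanine 2
    "Mezzanine C": "C",  # also called Mezzanine 3
}

def io_module_for_port(adapter: str, port: int) -> str:
    """Return the I/O module bay (e.g. 'A1') a given adapter port reaches."""
    if port not in (1, 2):
        raise ValueError("each adapter has two ports, 1 and 2")
    return f"{FABRIC_OF_ADAPTER[adapter]}{port}"

print(io_module_for_port("LOM", 1))          # A1
print(io_module_for_port("Mezzanine B", 2))  # B2
```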
All modules can be inserted or removed while the enclosure is running.

Available server-blades

An M1000e holds up to 32 quarter-height blades, 16 half-height blades, 8 full-height blades, or a mix of them. Quarter-height blades require a full-size sleeve for installation. The list below covers the currently available Generation 11 blades and the latest Generation 12 models; there are also older blades such as the M605, M805 and M905 series.

PowerEdge M420

Released in 2012, the PE M420 is a "quarter-size" blade: where most servers are half-height, allowing 16 blades per M1000e enclosure, up to 32 of the new M420 blade servers can be installed in a single chassis. Implementing the M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the "automatic IP address assignment" for the iDRAC management card in a blade, but as it is now possible to run 32 blades per chassis, the management IP assignment for the iDRAC may need to change. Supporting the M420 requires CMC firmware 4.1 or later and a full-size "sleeve" that holds up to four M420 blades.
The M420 also has consequences for the "normal" I/O NIC assignment: most blades have two LOMs, one connecting to the switch in the A1 fabric and the other to the A2 fabric, and the same applies to Mezzanine cards B and C. All available I/O modules have 16 internal ports: one for each half-height blade. As an M420 has two 10 Gb LOM NICs, a fully loaded chassis would require 2 × 32 internal switch ports for LOM and the same for Mezzanine. An M420 server only supports a single Mezzanine card, whereas all half-height and full-height systems support two Mezzanine cards.
To support all on-board NICs one would need to deploy a 32-slot Ethernet switch such as the MXL or the Force10 I/O Aggregator. For the Mezzanine card it is different: the connections from the Mezzanine cards of the M420s in a sleeve are "load-balanced" between the B and C fabrics of the M1000e. The Mezzanine card of the M420 in sleeve slot A connects to fabric C, the one in slot B connects to fabric B, and this pattern is repeated for slots C and D of the sleeve.
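The port arithmetic and the sleeve-slot to fabric mapping above can be sketched as follows (a hypothetical illustration; the names are not part of any Dell tooling):

```python
# A fully loaded M420 chassis needs twice the internal LOM switch ports
# of a half-height configuration: 32 blades x 2 LOM NICs per blade.
BLADES_PER_CHASSIS = 32
LOM_NICS_PER_M420 = 2
print(BLADES_PER_CHASSIS * LOM_NICS_PER_M420)  # 64 -> a 32-port switch in A1 and another in A2

# Within a four-blade sleeve, the single Mezzanine card of each M420 is
# alternated ("load-balanced") between fabrics C and B:
MEZZANINE_FABRIC_OF_SLEEVE_SLOT = {"A": "C", "B": "B", "C": "C", "D": "B"}
print(MEZZANINE_FABRIC_OF_SLEEVE_SLOT["A"])  # C
```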

PowerEdge M520

A half-height server with up to two 8-core Intel Xeon E5-2400 CPUs, using the Intel C600 chipset and offering up to 384 GB of RAM via 12 DIMM slots. Two on-blade disks are installable for local storage, and there is a choice of Intel or Broadcom LOM plus two Mezzanine slots for I/O. The M520 can also be used in the PowerEdge VRTX system.

PowerEdge M600

A half-height server with a quad-core Intel Xeon and 8 DIMM slots for up to 64 GB of RAM.

PowerEdge M610

A half-height server with a quad-core or six-core Intel Xeon 5500 or 5600 series CPU and the Intel 5520 chipset. Twelve DIMM slots allow up to 192 GB of DDR3 RAM. It holds a maximum of two on-blade hot-pluggable 2.5-inch hard disks or SSDs and offers a choice of built-in NICs for Ethernet or a converged network adapter, Fibre Channel or InfiniBand. The server also includes a Matrox G200 video card.

PowerEdge M610x

A full-height blade server with the same capabilities as the half-height M610, but with an expansion module containing x16 PCI Express 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards.

PowerEdge M620

A half-height server with up to two 12-core Intel Xeon E5-2600 or E5-2600 v2 CPUs, using the Intel C600 chipset and offering up to 768 GB of RAM via 24 DIMM slots. Two on-blade disks are installable for local storage, with a range of RAID controller options. The blade has two external and one internal USB port and two SD card slots.
The blades can come pre-installed with Windows Server 2008 R2 SP1, Windows Server 2012 R2, SUSE Linux Enterprise or RHEL. They can also be ordered with Citrix XenServer, VMware vSphere ESXi, or Hyper-V, which comes with Windows Server 2008 R2.
According to the vendor, all Generation 12 servers are optimized to run as virtualisation platforms. Out-of-band management is done via iDRAC 7 through the CMC.

PowerEdge M630

A half-height server with up to two 22-core Intel Xeon E5-2600 v3/v4 CPUs, using the Intel C610 chipset and offering up to 768 GB of RAM via 24 DIMM slots, or 640 GB via 20 DIMM slots when using 145 W CPUs. Two on-blade disks are installable for local storage, and there is a choice of Intel or Broadcom LOM plus two Mezzanine slots for I/O. The M630 can also be used in the PowerEdge VRTX system. Amulet HotKey offers a modified M630 server that can be fitted with a GPU or a Teradici PCoIP Mezzanine module.

PowerEdge M640

A half-height server with up to two 28-core Intel Xeon Scalable CPUs, supported in both the M1000e and PowerEdge VRTX chassis. The server offers up to 16 DDR4 RDIMM memory slots for up to 1024 GB of RAM and two drive bays supporting SAS/SATA or NVMe drives. The server uses iDRAC 9.