OpenSFF Enclosure Specification

3. Enclosure Interfaces

OpenSFF enclosures rely on standardized mechanical, thermal, and electrical interfaces to ensure consistent compatibility between compute nodes, management modules, and shared infrastructure. This section defines the connector types and signal pathways required in both Core and Enterprise enclosures.

3.1 Node Interface Connectors

Each enclosure provides one or both of the following standardized connectors for interfacing with Compute Nodes:

  • The Core connector is based on the SFF-TA-1002 4C+ standard and is the primary interface for all Core Compute Nodes. It delivers high-speed signaling, power, and base-level I/O in a compact, unified form factor.
  • The Enterprise connector is based on the SFF-TA-1002 4C standard and is used only in Enterprise Enclosures, where it exists alongside the Core connector. It supplements the Core connector by providing additional I/O capabilities:
    • Two additional Ethernet ports
  • One additional USB 3.0 Type-C port
    • Additional room for future expansion
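The relationship between the two connector types can be sketched as a small data model. This is purely illustrative; the type and field names are hypothetical and not part of the specification, and only the supplementary I/O that the Enterprise connector adds is enumerated, since the Core connector's base I/O set is defined in the Compute Node Specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConnectorSpec:
    """Illustrative model of an OpenSFF node interface connector.

    Field names are hypothetical; 'extra_*' fields count only the
    supplementary I/O provided beyond the Core connector.
    """
    name: str
    base_standard: str                # SFF-TA-1002 variant the connector is based on
    extra_ethernet_ports: int = 0
    extra_usb3_type_c_ports: int = 0
    reserved_expansion: bool = False  # room set aside for future expansion

# Core: the primary interface for all Core Compute Nodes.
CORE = ConnectorSpec("Core", "SFF-TA-1002 4C+")

# Enterprise: used only in Enterprise Enclosures, alongside the Core connector.
ENTERPRISE = ConnectorSpec(
    "Enterprise", "SFF-TA-1002 4C",
    extra_ethernet_ports=2,
    extra_usb3_type_c_ports=1,
    reserved_expansion=True,
)
```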

All connectors must conform to the mechanical and electrical definitions in the OpenSFF Compute Node Specification (Section 4), including:

  • Signal definitions for USB, DisplayPort, Ethernet
  • Power delivery requirements
  • Connector layout and other mechanical tolerances

While the connector shares the same mechanical form factor as standards like OCP NIC 3.0, the OpenSFF specification defines a different set of pin assignments.

Note: OpenSFF does not own or redefine external standards such as USB, DisplayPort, or Ethernet. It references the official versions (e.g., USB 3.0, DisplayPort 1.4) as used by compatible nodes and enclosures.

3.2 Management Module Interface

Enterprise Enclosures MUST include a dedicated slot for a management module, which provides chassis-level functionality such as KVM redirection, out-of-band monitoring, and power management. This slot uses a single 4C+ connector for signaling and power delivery.

Note: While the 4C+ connector is referred to as the Core connector when used in compute node slots, it serves a different role in the management module slot and uses a distinct pinout. To avoid confusion, the specification refers to it as the Management connector.

The management module slot is designed to support a range of module implementations. Two standard module types are defined:

  • A pass-through management module, which routes internal signals (USB, DisplayPort) directly to external ports. It also includes a standard RJ45 Ethernet jack, providing a basic wired connection to the management network.
  • A full-featured management module, which in addition to the external ports specified above, also includes:
    • A low-power CPU running a Linux-based operating system
    • A management server stack offering services like chassis diagnostics and IP-KVM
    • Local and remote configuration tools for power, network, and firmware orchestration

Additional management module designs MAY be implemented, provided they comply with the mechanical, electrical, and signaling definitions in the OpenSFF Management Module Specification.
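The two standard module types above can be summarized as capability sets, where the full-featured module is a strict superset of the pass-through module. This is an illustrative sketch; the capability names are hypothetical, not normative identifiers from the specification.

```python
# Capabilities of the pass-through management module: internal USB and
# DisplayPort routed directly to external ports, plus a basic RJ45 jack
# on the management network. (Names are hypothetical.)
PASS_THROUGH = frozenset({
    "usb_pass_through",
    "displayport_pass_through",
    "rj45_management_port",
})

# The full-featured module includes all pass-through external ports and
# adds an embedded management controller and its services.
FULL_FEATURED = PASS_THROUGH | frozenset({
    "low_power_linux_cpu",        # low-power CPU running a Linux-based OS
    "chassis_diagnostics",        # management server stack
    "ip_kvm",
    "power_network_firmware_orchestration",
})
```

Because `FULL_FEATURED` is built from `PASS_THROUGH`, `PASS_THROUGH < FULL_FEATURED` holds, mirroring the "in addition to the external ports specified above" language in the spec.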

3.3 Enclosure-Level I/O and Indicators

OpenSFF enclosures MAY expose enclosure-level I/O ports, buttons, and indicators to enhance usability, diagnostics, and system integration. These interfaces are separate from the compute node’s rear I/O shield and MAY appear on any side of the enclosure.

Enclosure-level I/O SHALL be designed based on whether the enclosure supports a single compute node or multiple compute nodes.

3.3.1 Single-node Enclosures

Single-node OpenSFF enclosures directly utilize or expose I/O signals from the installed compute node. Core nodes, in particular, provide a standard set of interfaces through the Core connector, including USB, DisplayPort, Ethernet, power/reset controls, and LED status signals (see Section 2.1 of the OpenSFF Compute Node Specification).

The enclosure MUST make all essential I/O signals electrically available, either by routing them to external ports on the enclosure or consuming them internally (e.g., integrated display or embedded USB peripherals). These essential signals include:

  • Power and reset control
  • Power status indicator
  • At least one USB interface
  • At least one Ethernet interface
  • One video output

Additional signals defined in the Core connector MAY be omitted or left unconnected if not required by the intended use case. However, any such omissions MUST NOT interfere with the node’s ability to operate normally in a compatible system.
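A conformance check for the essential-signal rule above can be sketched as follows. The signal identifiers are hypothetical placeholders; a signal counts as "available" whether it is routed to an external port or consumed internally.

```python
# Essential Core-connector signals that a single-node enclosure MUST make
# electrically available (identifiers are illustrative, not normative).
ESSENTIAL_SIGNALS = frozenset({
    "power_reset_control",
    "power_status_indicator",
    "usb",        # at least one USB interface
    "ethernet",   # at least one Ethernet interface
    "video_out",  # one video output
})

def missing_essential_signals(available: set[str]) -> list[str]:
    """Return essential signals that are neither exposed externally
    nor consumed internally by the enclosure."""
    return sorted(ESSENTIAL_SIGNALS - available)

# An enclosure with an integrated display may consume video_out internally;
# it is still "available" and therefore not reported as missing.
```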

3.3.2 Multi-node Enclosures

Multi-node OpenSFF enclosures are designed to host multiple nodes within a shared chassis. To support consistent usability across a wide range of implementations, from unmanaged clusters to fully remote-managed systems, these enclosures MUST provide a baseline set of physical interfaces.

All multi-node OpenSFF enclosures MUST:

  • Provide at least one method of powering nodes on or off, implemented as:
    • A single chassis-wide power button, or
    • Individual power buttons per node slot
  • Implement at least one LED to indicate power state. This MAY be:
    • A global chassis power LED, or
    • One LED per node slot
  • Include external network connectivity for all compute nodes. This MAY be implemented in one of two ways:
    • If the enclosure includes an internal Ethernet switch, one or more external uplink ports (e.g., RJ45, SFP+) to bridge the internal switch fabric to the outside network
    • If the enclosure does not include an internal switch (e.g., a Core Enclosure), each node's Ethernet signals MUST be exposed as a dedicated external port
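The baseline requirements above can be expressed as a simple compliance check. This is a sketch under stated assumptions: the function name, parameters, and option strings are hypothetical, chosen only to mirror the three MUST clauses.

```python
def multi_node_baseline_compliant(
    power_control: str,            # "chassis_button" or "per_slot_buttons"
    power_led: str,                # "global_led" or "per_slot_leds"
    has_internal_switch: bool,
    uplink_ports: int,             # external uplinks (e.g., RJ45, SFP+) if switched
    ethernet_ports_per_node: int,  # dedicated external ports if unswitched
) -> bool:
    """Illustrative check of the multi-node enclosure baseline (not normative)."""
    # MUST: at least one method of powering nodes on or off.
    if power_control not in ("chassis_button", "per_slot_buttons"):
        return False
    # MUST: at least one LED indicating power state.
    if power_led not in ("global_led", "per_slot_leds"):
        return False
    # MUST: external network connectivity for all compute nodes.
    if has_internal_switch:
        return uplink_ports >= 1        # uplinks bridge the internal fabric
    return ethernet_ports_per_node >= 1  # one dedicated port per node
```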