
InfiniBand

InfiniBand is a high-speed serial computer bus, intended for both internal and external connections. It is the result of merging two competing designs: Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next Generation I/O, developed by Intel, Microsoft, and Sun Microsystems. For a short time before the group settled on a new name, InfiniBand was called System I/O. Intel has since left the group to continue work on PCI Express, casting some doubt on the future of InfiniBand.

InfiniBand uses a bidirectional serial bus for low cost and low latency. Nevertheless it is very fast, carrying 10 Gbit/s in each direction. InfiniBand uses a switched fabric topology, so several devices can share the network at the same time (as opposed to a bus topology). Data is transmitted in packets that, taken together, form a message. A message can be a direct memory access read or write operation, a channel send or receive, a transaction-based operation (that can be reversed), or a multicast transmission.
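
These message classes are exposed to software through the verbs programming interface that grew out of the specification. As a rough illustration, the following minimal sketch in C against Linux's libibverbs shows how one class, an RDMA write, is posted to an already-connected queue pair. The function name, buffer, and remote address/key parameters are placeholders for values that a real program would set up and exchange out of band.

    /* Sketch: posting one RDMA-write message with Linux's libibverbs.
     * Queue pair creation, memory registration, and the out-of-band
     * exchange of remote_addr/rkey are omitted; parameter values are
     * placeholders. */
    #include <string.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* Post a single RDMA write to an already-connected queue pair. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                               void *local_buf, uint32_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uint64_t)(uintptr_t)local_buf,
            .length = len,
            .lkey   = mr->lkey,            /* from ibv_reg_mr() */
        };
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&wr, 0, sizeof(wr));
        wr.opcode     = IBV_WR_RDMA_WRITE; /* direct write to remote memory */
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.send_flags = IBV_SEND_SIGNALED; /* ask for a completion entry */
        wr.wr.rdma.remote_addr = remote_addr; /* learned out of band */
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr); /* 0 on success */
    }

Changing the opcode to IBV_WR_RDMA_READ or IBV_WR_SEND selects the remote-read or channel-send message class instead; multicast messages use unreliable-datagram queue pairs attached to a multicast group.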

InfiniBand uses a channel model similar to that of most mainframes: all transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service.
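
As a concrete illustration of the host side, the following minimal sketch (assuming a Linux host with libibverbs installed) lists the HCAs visible to the local system:

    /* List the host channel adapters libibverbs can see.
     * Compile with: cc list_hcas.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }
        printf("%d InfiniBand device(s) found\n", num);
        for (int i = 0; i < num; i++)
            printf("  %s\n", ibv_get_device_name(devs[i]));
        ibv_free_device_list(devs);
        return 0;
    }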

The primary aim of InfiniBand appears to be to connect CPUs and their high-speed devices into clusters for "back-office" applications. In this role it will replace PCI, Fibre Channel, and machine-interconnect networks such as Ethernet, with all of the CPUs and peripherals connected into a single larger InfiniBand fabric. Beyond greater speed, this has a number of advantages, not the least of which is that normally "hidden" devices like PCI cards can be used by any device on the fabric. In theory this should make the construction of clusters much easier, and potentially less expensive, because more devices can be shared.
