Every good idea in networking ultimately seems to get borged into the Ethernet protocol. Even so, there's still a place in the market for its key rival in the data center, InfiniBand, which has consistently offered more bandwidth, lower latency, and often lower power consumption and cost per port than Ethernet.
But can InfiniBand keep outrunning the tank that is Ethernet? The members of the InfiniBand Trade Association, which manages the InfiniBand specification, think so.
InfiniBand, born in 1999 from the fusion of the Future I/O spec backed by Compaq, IBM, and Hewlett-Packard and the rival Next Generation I/O spec from Intel, Microsoft, and Sun Microsystems, represents one of those rare moments when key players came together to create a new technology and then kept moving it forward. Sure, InfiniBand was relegated to a role in high-performance computing clusters, lashing nodes together, rather than flowering into a universal fabric for server, storage, and peripheral connectivity. Roadmaps don't always pan out.
But since the first 10 Gb/sec InfiniBand products hit the market in 2001, it's InfiniBand, more than Ethernet, that has kept pace with the exploding core counts in servers and the gigantic storage arrays that feed them, both of which demand massive amounts of I/O bandwidth in the switches that link them. That is why InfiniBand has persisted despite the Ethernet onslaught, as Ethernet jumped to Gigabit and then 10 Gigabit speeds while InfiniBand evolved to 40 Gb/sec.