Fabric, the next-generation Facebook data center network (facebook.com)
93 points by jamesgpearce on Nov 14, 2014 | 11 comments


This general spine-leaf construction, and the subsequent super-spine construction, come up frequently in recent networking conference talks. Using ECMP on top of OSPF/BGP is a very well established way to build super-switches that scale to very large fabrics.
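A minimal sketch of the ECMP idea mentioned above (illustrative only; real switches do this hashing in ASIC hardware, and the exact hash inputs vary by vendor): each switch hashes a flow's 5-tuple to pick one of several equal-cost next hops, so a given flow stays on one path while different flows spread across the fabric.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Pick one of several equal-cost next hops by hashing the 5-tuple.

    Every packet of a flow hashes identically, so the flow sticks to one
    path (no reordering); distinct flows spread across all paths.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

# Hypothetical spine names, just for illustration.
spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
path = ecmp_next_hop("10.0.0.1", "10.0.1.9", 40000, 443, 6, spines)
```

The routing protocol (OSPF or BGP) only has to advertise multiple equal-cost routes; the per-packet spreading falls out of the hash.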

I'd be really interested in the specifics that they don't describe very well regarding cable layouts and automated configuration of pods.

Also, for anybody stuck in the old paradigm of super-expensive inflexible switches from the traditional network vendors, be sure to check out the commodity stuff that was mentioned previously in this HN thread:

https://news.ycombinator.com/item?id=8400953


These topologies (both the logical and physical arrangement) were quite popular for non-Ethernet networks in the '90s (in both supercomputers and phone switches). It's a reasonable way to get a large fabric with reasonable performance that is still composed of relatively small switching elements, with reasonable cable routing between them.


Indeed. The focus on cable lengths is interesting. Have they moved from "TOR" to "MOR"? For automated configuration, the industry has been moving this way for a while, IMO. I'm more curious about the mention of host-based "rehashing" for mitigating large single-tuple flows. If I had to guess ... hashing on DSCP/VXLAN/etc. bits and alternating (probabilistically?) on egress packets?
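A hypothetical sketch of that guess (every name here is illustrative; this is not Facebook's actual mechanism): the sender controls some "entropy" field that feeds the switches' ECMP hash, and re-randomizes it when it wants a long-lived elephant flow moved to a different path.

```python
import hashlib
import random

def path_for(entropy, n_paths):
    """Switch-side view: hash the entropy field to select a path."""
    h = int.from_bytes(
        hashlib.sha256(entropy.to_bytes(4, "big")).digest()[:8], "big")
    return h % n_paths

# Host-side view: a flow keeps its entropy value (so its packets stay
# ordered on one path) until the host decides to rehash, then draws a
# new value, which probabilistically lands the flow on a new path.
entropy = 0x1234
old_path = path_for(entropy, 16)
entropy = random.getrandbits(20)   # rehash on the sender
new_path = path_for(entropy, 16)   # likely (not guaranteed) different
```

With 16 paths, each rehash has a 1/16 chance of landing on the same path again, hence "probabilistically".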

For others, check out the work of Clos: http://en.wikipedia.org/wiki/Clos_network
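For reference, Clos's classic result for a 3-stage network with n inputs per ingress switch and m middle-stage switches can be checked mechanically (a sketch of the textbook conditions, not anything specific to Facebook's fabric):

```python
def clos_nonblocking(n, m):
    """Clos (1953) conditions for a 3-stage network with n inputs per
    ingress switch and m middle-stage switches:
      - rearrangeably nonblocking if m >= n
        (any permutation routable, possibly after rerouting flows)
      - strict-sense nonblocking  if m >= 2n - 1
        (any new connection routable without touching existing ones)
    """
    return {
        "rearrangeable": m >= n,
        "strict_sense": m >= 2 * n - 1,
    }
```

Modern data center fabrics typically build the rearrangeable variant (m = n, i.e. a "folded Clos" / fat tree) and rely on ECMP to spread load rather than guaranteeing strict-sense nonblocking.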


> What’s different is the much smaller size of our new unit – each pod has only 48 server racks

48 racks seems pretty darn large by itself, and that is the smallest unit they deal with. At only 20 servers in a rack, that's 960 servers in their smallest unit. And they make it seem like there are hundreds of these pods in a single datacenter...

A single pod is bigger than the vast majority of the top 500 super computers...


The previous smallest unit was a "cluster" - imagine, for the sake of example, that it is the same number of racks as 3 pods. Some time ago, clusters were somewhat arbitrarily limited in size by a few things - human understanding was definitely one, plus management software and visualization, network layout and port density issues, and so forth. However, each cluster had a bunch of overhead associated with it that outweighed the benefits, including the primary one - failure domain. If we only needed one more pod's worth of servers, we would have to add a cluster with 3 pods' worth of racks.

I don't know the actual strategy (I work in a nearby team, but my focus is mostly on load balancing and CDN infrastructure), but one could imagine that in the future it may be more normal to augment existing clusters/failure domains (say, add one pod) rather than building whole new ones.


[deleted]


Using the word "fabric" for a collection of interconnected machines isn't something new:

https://en.wikipedia.org/wiki/Fabric_computing


I don't see any name collision. Data centre technology has nothing to do with iOS frameworks. And to be honest Facebook is applying the name in a way that makes the most sense.


1. It does drastically decrease searchability, even if they are in separate areas.

2. It's a direct name clash with the Python community's Capistrano alternative Fabric[1], which seems pretty relevant.

3. Does it really matter who uses the name 'best'?

[1]: http://www.fabfile.org/

(Note: can't see the original comment so I'm not sure if I'm duplicating anything stated there)


"Fabric" is not a product name, so there can't be any naming clash.

The HN title does not match the article title; clearly people aren't bothering to read the article.


Oops. Nevermind then.


Their schematic of the datacenter is reminiscent of the schematic of a big server from 15 years ago - server racks instead of CPU boards. "The datacenter is the computer."



