NFS is the future: it has more bandwidth than FC, its market is growing faster, and it is cheaper, easier, more flexible, cloud ready, and improving faster than FC.
photo by Paul Oka
Fibre Channel
Legacy : FC is becoming legacy, like COBOL and the mainframe. It isn't going away, but it's not what you want to invest in.
Throughput : FC throughput has fallen behind NFS and the gap is only widening.
Price : FC is much more expensive than NFS.
Cloud : How can you set up FC connections dynamically in cloud infrastructure? Generally you can't set up FC in the cloud, but you easily can with NFS. NFS is perfect for the cloud.
Consolidation : With FC you are hardwired into the storage. What if you want to move two idle databases from their private machines to a consolidation machine? With NFS it's just shutdown, umount, mount, startup (sketched below). With FC there is a lot of work involved, sometimes actual physical recabling.
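To show how light the NFS side of that move is, here is a minimal sketch in Python that just wraps the manual steps; the export name, mount point, and mount options are hypothetical, and the database shutdown/startup commands are site specific so they're left as comments.

```python
# Minimal sketch of moving a database's storage between hosts over NFS.
# Assumes a Linux host run as root; the export and mount point are hypothetical.
import subprocess

EXPORT = "filer:/export/db01"   # hypothetical NFS export holding the datafiles
MOUNTPOINT = "/u02"             # hypothetical mount point

def run(cmd):
    """Run a command, echoing it first, and fail loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# On the old host: shut the database down (site specific), then drop the mount.
run(["umount", MOUNTPOINT])

# On the consolidation host: attach the same export and start the database up.
run(["mount", "-t", "nfs", "-o", "rw,hard,rsize=65536,wsize=65536",
     EXPORT, MOUNTPOINT])
```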
I thought the FC vs NFS debate was dead back when Kevin Closson jokingly posted “real men only use FC” almost a decade ago.
FC’s decline has been written about in the press ever since Kevin’s blog post. A recent example is “Top 7 Reasons FC is doomed.”
In my benchmarking of FC and NFS I found that throughput and latency are on par given similar-speed NICs and HBAs and a properly set up network fabric.
Simple issues like router hops on the NFS path can kill performance; eliminate them and NFS works great.
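One quick sanity check, as a rough sketch: count the router hops between the database host and the filer. The hostname below is hypothetical and this assumes the traceroute utility is installed.

```python
# Count router hops to the NFS filer; ideally the filer is one hop away
# (same L2 segment, no routers in between).
import subprocess

FILER = "nfs-filer.example.com"  # hypothetical filer hostname

out = subprocess.run(["traceroute", "-n", FILER],
                     capture_output=True, text=True, check=True).stdout
hops = len(out.strip().splitlines()) - 1   # first line of output is a header
print("hops to %s: %d" % (FILER, hops))
```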
Latency
NFS has a longer code path than FC, and with it comes some extra latency, but usually not that much. In my tests one could push 8K over 10GbE in about 200us with NFS, whereas over FC you can get it around 50us. That’s 4x slower on NFS, but that’s without any disk I/O. If the disk I/O is 6.00 ms, then adding 0.15 ms of transfer time is lost in the wash. On top of that, FC is often not that well tuned, so what could be done in 50us ends up taking 100-150us, which is almost the same as NFS.
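Here is the arithmetic worked out, using the illustrative numbers above (not a benchmark):

```python
# End-to-end latency comparison using the article's illustrative numbers.
disk_io_ms  = 6.00   # typical disk read service time
fc_wire_ms  = 0.05   # ~50us for an 8K transfer over FC
nfs_wire_ms = 0.20   # ~200us for an 8K transfer over NFS on 10GbE

fc_total  = disk_io_ms + fc_wire_ms     # 6.05 ms
nfs_total = disk_io_ms + nfs_wire_ms    # 6.20 ms
print("FC  total: %.2f ms" % fc_total)
print("NFS total: %.2f ms" % nfs_total)
print("NFS slower end to end by %.1f%%" % ((nfs_total / fc_total - 1) * 100))  # ~2.5%
```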
I’ve heard that efforts are being made to shorten the NFS code path, but I don’t have details.
Throughput
NFS is awesome for throughput. It’s easy to configure and on things like VMware it is easy to bond multiple NICs. You can even change the config dynamically while the VMs are running.
NFS already runs over 100GbE NICs and is shooting for 200GbE next year.
FC, on the other hand, has only just gotten to 32G, which doesn’t look like it will start to be deployed until next year, and even then it will be expensive.
Performance Analysis
How do you debug FC performance? With NFS, since it is based on TCP, it’s pretty easy to leverage TCP traces. For example, if you are having performance issues on NFS and can’t figure out why, one cool thing to do is take a tcpdump on the receiver side as well as the sender side and compare the timings. The problem is either the sender, the network, or the receiver, and once you know which, the analysis can be dialed in.
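As a rough sketch of that sender/receiver comparison (not a finished tool), the snippet below matches TCP segments by sequence number across the two captures and reports the transit time. The file names and the use of scapy are assumptions, and any clock skew between the two hosts has to be accounted for separately.

```python
# Compare timestamps of the same TCP segments captured on the sender and the
# receiver to see where the time goes (sender, network, or receiver).
from scapy.all import rdpcap, IP, TCP

NFS_PORT = 2049  # standard NFS TCP port

def first_seen(pcap_file, port=NFS_PORT):
    """Map each (src, dst, seq) to the first timestamp it appears in the capture."""
    times = {}
    for pkt in rdpcap(pcap_file):
        if IP in pkt and TCP in pkt and port in (pkt[TCP].sport, pkt[TCP].dport):
            key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].seq)
            times.setdefault(key, float(pkt.time))
    return times

sender   = first_seen("sender.pcap")    # tcpdump taken on the database host
receiver = first_seen("receiver.pcap")  # tcpdump taken on the NFS server

# For every segment seen on both sides, the difference is network transit time
# (plus any clock offset between the two hosts).
deltas = sorted(receiver[k] - sender[k] for k in sender if k in receiver)
if deltas:
    print("segments matched : %d" % len(deltas))
    print("median transit   : %.3f ms" % (deltas[len(deltas) // 2] * 1000))
```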
Misinformation
A lot of the belief that FC is better, and a lot of the concerns about NFS, come from misunderstandings and outdated information. For example, it’s true that NFS used to have performance issues in the 1990s, but that’s no longer the case: the technology has advanced a long way and is advancing faster and faster. As another example of a misunderstanding, people might think, “Won’t NFS saturate my network?” No, because what happens on one network connection shouldn’t affect other connections. At the port level this is typically not a concern, since the switch segregates the traffic and traffic limits are port specific. At the device level it is a concern only if a node in the topology becomes exhausted.
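For a sense of scale, here is a back-of-the-envelope calculation (illustrative numbers, protocol overhead ignored) of how much 8K traffic it takes to fill a single 10GbE port:

```python
# How many 8K transfers per second would it take to saturate one 10GbE port?
link_gbits = 10
block_bytes = 8 * 1024

link_bytes_per_sec = link_gbits * 1e9 / 8              # 1.25 GB/s
iops_to_saturate = link_bytes_per_sec / block_bytes
print("~%d 8K transfers/sec to fill a %dGbE port" % (iops_to_saturate, link_gbits))
# Roughly 150,000 8K IOPS per port -- and since switches segregate traffic per
# port, one busy NFS mount doesn't steal bandwidth from connections on other ports.
```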
Summary
NFS supports higher throughput, with easier configuration and at lower cost than FC. NFS technology is improving faster than FC, and NFS is cloud friendly while FC isn’t. It only makes sense that NFS is growing faster.