Networking-Forums.com

Professional Discussions => Routing and Switching => Topic started by: Dieselboy on April 22, 2016, 07:31:19 AM

Title: Jumbo Frames for storage <->
Post by: Dieselboy on April 22, 2016, 07:31:19 AM
When I originally set our network up in 2013, I enabled jumbo frames on the Nexus core and the Fabric Interconnects where the storage and UCS chassis sit, allowing 9000-byte packets for switching between the SANs and the virtualisation system.
The storage guy and server guy didn't enable jumbo frames at their end, so although the underlying switching network supports it, it's not in use. We don't need jumbo frames at the moment. IOPS on our NetApp SANs only go up to 4k when we're running the nightly backups; otherwise they're in the hundreds on both / either HA controller during the day. I'm graphing CPU on the NetApps and I occasionally see spikes up to 30%, and maybe once a day into the 40% range.
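For reference, the switch-side config was along these lines (a sketch from memory, so treat it as illustrative; exact syntax varies by Nexus model, and on the UCS Fabric Interconnects it's the MTU on the QoS system class in UCSM rather than CLI like this):

! Nexus 5K-style: jumbo MTU applied fabric-wide via a network-qos policy
! 9216 leaves headroom above the 9000-byte payload
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
! verify with: show queuing interface ethernet 1/1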

I have a 10GB switching network with Fabric Interconnects. If I do enable jumbo frames, could I expect to see less CPU use, since less time is spent reading / writing header info? Without jumbo frames, moving 9000 bytes takes six standard frames, so roughly 6x the header info.

I would only see a performance increase if the current throughput is being limited by time spent writing headers etc., right? I.e. if the disks can't write data any faster, then I'm not going to see a write-speed improvement with jumbo frames. The switches are cut-through since they are 10GB, so the network throughput wouldn't improve with jumbo frames either. Is my thinking correct?
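Back-of-envelope on the overhead, assuming plain TCP/IPv4 with no options (the 38 bytes is Ethernet header + FCS + preamble + inter-frame gap):

1500 MTU: 1460 bytes payload / 1538 bytes on the wire = ~94.9% efficient
9000 MTU: 8960 bytes payload / 9038 bytes on the wire = ~99.1% efficient

So only ~4% more usable throughput, but roughly 6x fewer frames per byte moved, and fewer frames is where any host CPU saving would come from. Although NIC offloads (TSO/LRO) already claw back a lot of that without jumbos.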

Is anyone using jumbo frames? Are you using them on 10GB links? I would have preferred to have them enabled at my place from day 0; that way there's no risk of enabling them later on and breaking everything :)

The reason I was thinking about this is that I've just moved a bunch of network share drives off legacy storage and onto the SAN, and I'm seeing single-file write speeds at the OS level of around 100MB/s over an iSCSI connection. Monitoring the storage NIC of that VM, I occasionally see 1Gbit/s and just over, mainly because it couldn't pull the data from the legacy storage any quicker. It just got me thinking whether it was possible to optimise the SAN. Not that it needs it, though.
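(Sanity check: 100MB/s of payload is roughly 0.8Gbit/s before protocol overhead, so the ~1Gbit/s on the NIC graph lines up with the legacy storage's gig link being the bottleneck rather than the SAN.)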
Title: Re: Jumbo Frames for storage <->
Post by: deanwebb on April 22, 2016, 07:40:29 AM
"jumbo frames"

I just love saying jumbo frames. We use them in traffic within the data center, but not outside. After that, I can't say much except that whenever something breaks with them, I get to say that there isn't a firewall in the path... so...

:notthefirewall:
Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on April 22, 2016, 08:14:58 AM
I just found this evening that jumbo frames at layer 3 are called jumbograms. I don't want jumbograms. Strippergrams, maybe... but not jumbograms.
:problem?:

Title: Re: Jumbo Frames for storage <->
Post by: deanwebb on April 22, 2016, 08:16:39 AM
What's the header information on a strippergram?

(http://demidectalk.com/style_emoticons/default/no.jpg)

OK, sorry about that... I hate when I have to warn myself for an off-color post... :mrgreen:
Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on April 22, 2016, 08:22:13 AM
haha I was thinking that as I was reading :)

PS Is your datacentre using 10GB links?
I'm thinking that jumbo frames with cut-through switching is going to be a different experience than jumbo frames with store-and-forward. I'd like to discuss that :)
Store-and-forward would use up more buffers. This is the problem with 3750 switches as SAN switches, for example: by default, i.e. if you don't tweak the buffers, you will overload the per-port buffers in some situations and cause big issues. I've still got half a draft email from 2012 where I started writing to a very senior colleague about what I'd found re: buffer configuration and how it could help with an ongoing issue. But the email got huge and I didn't want to pee him off, so it never got sent :) It didn't matter anyway, as the SAN vendor eventually provided the server team with similar info.
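For the record, the tuning I had drafted was along these lines (from memory, so the exact values are illustrative; the idea is to hand most of the per-port buffer to the queue carrying storage traffic and let it borrow heavily from the common pool):

mls qos
! queue-set 1: weight the four egress queues' buffer split towards queue 4
mls qos queue-set output 1 buffers 10 10 10 70
! raise queue 4's drop/maximum thresholds so it can borrow from the common pool
mls qos queue-set output 1 threshold 4 3100 3100 100 3200
! apply to the SAN-facing port (interface name is just an example)
interface GigabitEthernet1/0/1
 queue-set 1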
Title: Re: Jumbo Frames for storage <->
Post by: deanwebb on April 22, 2016, 08:43:02 AM
10GB links for the boxes that need them. Everyone else can live with a gig link.
Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on April 22, 2016, 10:21:22 AM
Before I joined my company, the developers were pushing for 10GB to the desktop

:zomgwtfbbq: :lol:
Title: Re: Jumbo Frames for storage <->
Post by: icecream-guy on April 22, 2016, 11:04:16 AM
Quote from: deanwebb on April 22, 2016, 08:43:02 AM
10GB links for the boxes that need them. Everyone else can live with a gig link.

with all the voice, video, streaming, and other application bandwidth hogs, 10GB to the desktop will be the norm soon
Title: Re: Jumbo Frames for storage <->
Post by: NetworkGroover on April 22, 2016, 11:49:20 AM
Quote from: ristau5741 on April 22, 2016, 11:04:16 AM
Quote from: deanwebb on April 22, 2016, 08:43:02 AM
10GB links for the boxes that need them. Everyone else can live with a gig link.

with all the voice, video, streaming, and other application bandwidth hogs, 10GB to the desktop will be the norm soon

Gotta get your HD multi-angle pr0n on.
Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on April 22, 2016, 01:47:38 PM
When it's affordable then definitely. But when we're barely pushing 1GB links, 10GB is not worth the cost for us :)
Title: Re: Jumbo Frames for storage <->
Post by: icecream-guy on April 22, 2016, 04:02:28 PM
Quote from: Dieselboy on April 22, 2016, 01:47:38 PM
When it's affordable then definitely. But when we're barely pushing 1GB links, 10GB is not worth the cost for us :)
shall I repeat....



Gotta get your HD multi-angle pr0n on.
Title: Re: Jumbo Frames for storage <->
Post by: deanwebb on April 22, 2016, 04:23:16 PM
Quote from: ristau5741 on April 22, 2016, 04:02:28 PM
Quote from: Dieselboy on April 22, 2016, 01:47:38 PM
When it's affordable then definitely. But when we're barely pushing 1GB links, 10GB is not worth the cost for us :)
shall I repeat....



Gotta get your HD multi-angle pr0n on.

(http://demidectalk.com/style_emoticons/default/no.jpg)

Lol, I might have to make that a permanent addition to the vast array of smilies...
Title: Re: Jumbo Frames for storage <->
Post by: NetworkGroover on April 22, 2016, 05:00:51 PM
Lol... I love this forum.... even if all I do half the time is annoy the crap out of Dean....
Title: Re: Jumbo Frames for storage <->
Post by: deanwebb on April 22, 2016, 05:23:27 PM
Quote from: AspiringNetworker on April 22, 2016, 05:00:51 PM
Lol... I love this forum.... even if all I do half the time is annoy the crap out of Dean....

I don't get annoyed here. I laugh. When I want to get annoyed, I go involve myself in the endless struggle between historians and gamers at the Europa Universalis IV forums... lots more people there that have no clue how to "roll with it."

Side note: networkers, in general, are not a highly-strung crowd. I don't have to explain jokes to them like I have to with the programmers...
Title: Re: Jumbo Frames for storage <->
Post by: mlan on April 22, 2016, 06:38:34 PM
I recently deployed a 10GbE iSCSI fabric on Nexus 3500s and enabled jumbo frames to make all the compute/storage teams and vendors happy. Here is some reading material, along with some benchmarks in the two older articles:

https://www.reddit.com/r/networking/comments/3nvvrw/what_advantage_does_enabling_jumbo_frames_provide/
https://vstorage.wordpress.com/2013/12/09/jumbo-frames-performance-with-iscsi/
http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/
Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on April 22, 2016, 08:41:08 PM
Quote from: mlan on April 22, 2016, 06:38:34 PM
I recently deployed a 10GbE iSCSI fabric on Nexus 3500s and enabled jumbo frames to make all the compute/storage teams and vendors happy. Here is some reading material, along with some benchmarks in the two older articles:

https://www.reddit.com/r/networking/comments/3nvvrw/what_advantage_does_enabling_jumbo_frames_provide/
https://vstorage.wordpress.com/2013/12/09/jumbo-frames-performance-with-iscsi/
http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/

(http://www.drunkman.me.uk/homer3.JPG)

Thanks!
Title: Re: Jumbo Frames for storage <->
Post by: wintermute000 on April 22, 2016, 10:19:30 PM
my TLDR version

Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on April 23, 2016, 01:28:38 AM
Cheers mate.

For what it's worth, jumbo frames are apparently supposed to be layer 2 only. They're called something else when they get to layer 3: a jumbogram (strictly, an IPv6 packet with a payload over 65,535 bytes, per RFC 2675).
https://supportforums.cisco.com/discussion/12293501/what-jumbogram

Do carriers really block PMTUD? Why would a carrier block anything at all between a source internet host and a destination internet host? Unless the packet was destined for the carrier's own equipment, then I could understand that.
Title: Re: Jumbo Frames for storage <->
Post by: mlan on April 25, 2016, 05:23:09 PM
@wintermute - Good summary. I only use jumbo frames in an un-routed, isolated L2 environment, where the only approved application is iSCSI. Anything else would be asking for trouble, and I have already had many headaches just supporting iSCSI. I much prefer supporting FCoE for storage.

I wouldn't be surprised to hear of issues with PMTUD. To make it work properly, you need the DF flag in the IP header set correctly, and you also need all the ICMP "fragmentation needed" responses to make it back to the sender. Plenty of room for error in that equation in a complex environment.
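A quick way to test a path from IOS, for what it's worth (the address is just an example; on IOS, "size" is the full IP datagram size):

! should only succeed end-to-end if every hop passes 1500-byte packets unfragmented
ping 198.51.100.1 size 1500 df-bit
! and for a jumbo path:
ping 198.51.100.1 size 9000 df-bit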
Title: Re: Jumbo Frames for storage <->
Post by: Dieselboy on July 06, 2016, 06:58:41 AM
Implemented jumbo frames today. I think it's my first time ever implementing jumbos; not sure, but I can't remember ever doing it.

On our Red Hat virtualisation environment we were finding that some VMs were taking ages to live-migrate to another host. In fact, they were exceeding the default timeout of 6 minutes and so were failing. So we created a new VLAN, enabled jumbos on it and set it as the live-migration VLAN. VMs take seconds to migrate now. Although this was not the only change: we had found that the migration was not fully using the 10GB network bandwidth. Red Hat throttles live migration to 32M! So we turned all that rubbish off at the same time.
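Host side it was just the usual MTU bump on the migration interface, roughly like this (interface name and peer address are examples; 8972 = 9000 minus the 20-byte IP and 8-byte ICMP headers):

# /etc/sysconfig/network-scripts/ifcfg-eth1.100  (migration VLAN interface)
MTU=9000

# verify end to end with DF set; the packet on the wire is exactly 9000 bytes
ping -M do -s 8972 10.10.10.2

The throttle itself is a vdsm setting if I remember right (migration_max_bandwidth in vdsm.conf).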