Networking-Forums.com

General Category => Forum Lobby => Topic started by: dlots on April 06, 2017, 11:57:56 AM

Title: Gaahh server people!!
Post by: dlots on April 06, 2017, 11:57:56 AM
I can't believe I have to defend why a 1.6-2ms round trip delay for 2 servers 30 miles away from one another is acceptable.
Title: Re: Gaahh server people!!
Post by: icecream-guy on April 06, 2017, 12:15:58 PM
Quote from: dlots on April 06, 2017, 11:57:56 AM
I can't believe I have to defend why a 1.6-2ms round trip delay for 2 servers 30 miles away from one another is acceptable.

I agree, way too slow.  Typical latency is 18 ms for every 1000 miles, so at 30 miles you should be at 0.54 ms,
unless I'm bad at math or there is a slight additional overhead cost.

What kind of user is complaining about a 2 ms delay, unless this is for a financial trading firm?
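The arithmetic behind that rule of thumb, as a quick sketch (the 18 ms per 1000 miles figure is just the rule of thumb from this post, not a measured value):

```python
# Rule of thumb from this thread: ~18 ms of round-trip latency per 1000 miles.
MS_PER_1000_MILES = 18.0

def rtt_estimate_ms(miles):
    """Back-of-the-envelope round-trip time in ms for a given distance."""
    return MS_PER_1000_MILES * miles / 1000.0

print(rtt_estimate_ms(30))  # 0.54 ms by the rule of thumb
```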
Title: Re: Gaahh server people!!
Post by: dlots on April 06, 2017, 12:26:29 PM
Database people.  They want to access the database in one DC from servers in another DC, and they are posting data between them; each post waits for the one before it to get an ACK, and they have 100k posts, so it takes ~160-200 seconds, which is too slow for them.  The BW is fine, it only took ~31 seconds to move a 1 GB file from one DC to the other.  It just happens that when you're moving 480-byte packets with an ACK between each one, it can REALLY slow you down... Fix your protocol!!
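The 160-200 second figure falls straight out of the stop-and-wait behaviour described above (a sketch using the numbers from this post):

```python
# Stop-and-wait timing: one 480-byte post per round trip, 100k posts total.
POSTS = 100_000

def serialized_time_s(rtt_ms):
    """Total transfer time when every post waits for the previous ACK."""
    return POSTS * rtt_ms / 1000.0

for rtt in (1.6, 2.0):
    # At 1.6 ms RTT -> 160 s; at 2.0 ms RTT -> 200 s. Latency, not
    # bandwidth, dominates when the transfers are fully serialized.
    print(f"RTT {rtt} ms -> {serialized_time_s(rtt):.0f} s")
```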
Title: Re: Gaahh server people!!
Post by: icecream-guy on April 06, 2017, 12:39:20 PM
Quote from: dlots on April 06, 2017, 12:26:29 PM
Database people.  They want to access the database in one DC from servers in another DC, and they are posting data between them; each post waits for the one before it to get an ACK, and they have 100k posts, so it takes ~160-200 seconds, which is too slow for them.  The BW is fine, it only took ~31 seconds to move a 1 GB file from one DC to the other.  It just happens that when you're moving 480-byte packets with an ACK between each one, it can REALLY slow you down... Fix your protocol!!

They should learn how to synchronize their databases, replicate data, and all that.  All the data should be in both data centers for redundancy or something like that.

Title: Re: Gaahh server people!!
Post by: dlots on April 06, 2017, 01:22:32 PM
Yeah, I suggested that (didn't know the terminology though)
Title: Re: Gaahh server people!!
Post by: deanwebb on April 06, 2017, 01:38:52 PM
Someone must have pulled out a calculator and determined that light can cover 30 miles in 0.16ms.

:oracle:

What a numpty.

Send them this article, which is actually a jolly nice read: https://hpbn.co/primer-on-latency-and-bandwidth/
Title: Re: Gaahh server people!!
Post by: dlots on April 06, 2017, 02:26:57 PM
Actually I pulled out that info to say "Hey, 20% of the delay is the fastest thing in the universe"... I should have known... I immediately got back a reply of "let's work on the other 80%"... Personally I think it's pretty durn cool that the fastest thing in the universe is even a factor in our delay.  The 20% also assumes it's a straight line and the ISP doesn't take it back to some distribution point and then send it out from there.
Title: Re: Gaahh server people!!
Post by: dlots on April 06, 2017, 02:39:22 PM
The write-up on BW and latency is quite good, but I am afraid to give it to them, or they will try to make me look for bufferbloat or any of the other issues listed in there.  They want the magic cloud to change, not to make their stuff work correctly.
Title: Re: Gaahh server people!!
Post by: SofaKing on April 06, 2017, 03:34:32 PM
Database guys are idiots!  They may be able to shave 0.6 ms off this if they spend a lot of money, but even then I would find it hard to get anything better than 0.2 ms.
Title: Re: Gaahh server people!!
Post by: deanwebb on April 06, 2017, 03:45:03 PM
This is true, they only understand the physics that makes their stuff go faster.

Tell them they need a microwave system to bypass much of the remaining 80%.

http://us.aviatnetworks.com/products/

Who knows, maybe you get some awesome toys to play with as a result of this?
Title: Re: Gaahh server people!!
Post by: dlots on April 06, 2017, 04:10:33 PM
I actually had that written up as a "ha ha you aren't getting it" thing, but then I deleted it for fear they would insist on it being implemented.
Title: Re: Gaahh server people!!
Post by: deanwebb on April 06, 2017, 05:00:29 PM
I'd say go for it. If they buy it, you get to mess with the closest thing to a death ray any of us will get to touch.
Title: Re: Gaahh server people!!
Post by: wintermute000 on April 06, 2017, 05:20:44 PM
These BDP type discussions are simply the pits. You can't prove that you can't magically squeeze a few more ns of latency out of the network, so therefore it must be the network's fault.

The only way to win is for them to actually understand TCP/IP, which involves you sacrificing your time as a free tutor, assuming they have enough brain cells to understand. It's lose-lose all round for the network guy.


Just nail their behinds to the wall: if you can prove it's 480-byte payloads, the application is not fit for purpose in a WAN deployment. In fact, even at LAN latencies it's suboptimal. If you want to move large amounts of data fast, then you need to be able to scale out the TCP segment size and hence the window. The OS TCP stack may also be involved here, as most modern implementations should not ACK after every packet; SACK and cumulative ACKs should be in play.
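To put some numbers on the window-scaling point: the bandwidth-delay product tells you how many bytes need to be in flight to keep the pipe full. A sketch using figures from earlier in the thread (1 GB in ~31 s for the file copy, 2 ms RTT):

```python
# Bandwidth-delay product: bytes in flight needed to saturate the link.
# Throughput is inferred from the 1 GB / ~31 s file copy mentioned above.
BYTES_MOVED = 1_000_000_000
SECONDS = 31.0
RTT_S = 0.002  # 2 ms round trip

throughput = BYTES_MOVED / SECONDS  # ~32 MB/s
bdp = throughput * RTT_S            # bytes that must be unacknowledged in flight
print(f"BDP ~= {bdp / 1024:.0f} KiB")
# With one 480-byte segment in flight per RTT, the window is nearly empty:
print(f"window utilisation: {480 / bdp:.1%}")
```

So the stop-and-wait posts are using under 1% of the window that the link could carry, which is exactly why the file copy is fine but the 100k-post job crawls.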
Title: Re: Gaahh server people!!
Post by: deanwebb on April 06, 2017, 08:13:59 PM
Make a deal... if they run the whole thing via UDP, you'll turn on jumbo frames on the WAN circuits.

Actually, Google did that with HTTP/HTTPS traffic with their QUIC protocol. It's all the web, without all the ACKs.
Title: Re: Gaahh server people!!
Post by: Dieselboy on April 07, 2017, 12:00:39 AM
2 ms latency for 30 miles... looking at my home and office fibre internet, I'm getting around that for 30 km, so it seems good to me.

Do your DCs take links from a company that provides links? Are they shared with other customers? Do they give an indication on what the latency should be? Do they offer a service which will reduce the latency at huge cost? If yes, then it's not your problem - it's finance  :mrgreen:

I'm not sure if a Riverbed would help.
Sometimes with problems like this you have to go back and find out why they're trying to do things this way. I've often been given problems to resolve, and in the end I find out they're trying to do something a bit weird, and there's a better way to do things which won't have the problem.
Title: Re: Gaahh server people!!
Post by: dlots on April 07, 2017, 09:26:18 AM
Quote from: wintermute000 on April 06, 2017, 05:20:44 PM
Just nail their behinds to the wall: if you can prove it's 480-byte payloads, the application is not fit for purpose in a WAN deployment.

I have a packet cap; they are doing database posts: it sends one 480 B packet (a post, I assume), waits for the ACK, and repeats 100k times.

They wrote back that they want a per-hop analysis of the delay. I wrote back that this wasn't really possible, and that I would need $40K of gear (for GPS clocks) to do that, plus someone to install them.

I already explained to them how TCP works, and they want to know why it takes so long to get the ACK back.

Also, it's 2 ms round-trip time for 30 miles over 2 WAN links (ISP?) (so 60 miles, assuming it's not going back to some distribution hub somewhere along the line)... or say a ring somewhere.

Cool fact I learned: the SOL is ~30% slower in fiber, so that's 26% of the delay now.
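For anyone checking the math, a sketch of the light-in-fibre floor (assuming a typical silica refractive index of ~1.47 and a straight-line 30-mile path each way):

```python
# Hard floor on RTT set by the speed of light in fibre.
C_KM_S = 299_792       # speed of light in vacuum, km/s
N_FIBRE = 1.47         # assumed refractive index of silica fibre (~30% slowdown)
PATH_KM = 30 * 1.609 * 2  # 30 miles each way, round trip

fibre_rtt_ms = PATH_KM / (C_KM_S / N_FIBRE) * 1000
print(f"light-in-fibre RTT floor: {fibre_rtt_ms:.2f} ms")  # ~0.47 ms

for measured_ms in (1.6, 2.0):
    print(f"share of a {measured_ms} ms measured RTT: {fibre_rtt_ms / measured_ms:.0%}")
```

That comes out to roughly a quarter of the measured delay, in line with the 26% figure, and that's before any detours the ISP path takes.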
Title: Re: Gaahh server people!!
Post by: icecream-guy on April 07, 2017, 10:44:14 AM
SOL?

Are these fiber or traditional copper circuits?  Pure Ethernet over fiber is much faster than copper, or even copper to the CO, then fiber, then back to copper at the other CO.
Title: Re: Gaahh server people!!
Post by: dlots on April 07, 2017, 10:54:25 AM
No clue; our documentation sucks enough that I can't even tell you whether they are ISP links or not.  I had to figure out the path via traceroute.