Networking-Forums.com

Professional Discussions => Routing and Switching => Topic started by: LynK on August 18, 2017, 09:14:46 AM

Title: VXLAN integration into current L2 DC infrastructure
Post by: LynK on August 18, 2017, 09:14:46 AM
Hey guys,

Do any of you know any best practices for this? Sure, VXLAN is great and is the future, but how does one go about implementing it into existing infrastructure? Do you just take the core L2 DC switches and interconnect them into the leafs temporarily? Or am I overthinking things here? Let's say, for example, company X has no Nexus infrastructure, just 6509-Es.

They are slated to stand up a new data center with VXLAN, but first need to integrate it with the legacy DC. VLANs need to be exchanged/used at both DCs. Both L2 and L3 fiber options are available. How would you design this?
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: icecream-guy on August 18, 2017, 10:44:34 AM
Quote from: LynK on August 18, 2017, 09:14:46 AM
Hey guys,

Do any of you know any best practices for this? Sure, VXLAN is great and is the future, but how does one go about implementing it into existing infrastructure? Do you just take the core L2 DC switches and interconnect them into the leafs temporarily? Or am I overthinking things here? Let's say, for example, company X has no Nexus infrastructure, just 6509-Es.

They are slated to stand up a new data center with VXLAN, but first need to integrate it with the legacy DC. VLANs need to be exchanged/used at both DCs. Both L2 and L3 fiber options are available. How would you design this?

Underlay or overlay?

I should be nice and not tell you to google "VXLAN Design guide", so I won't.

I assume by your post that you are looking for practical experience with this technology.
I have no practical experience with this technology. But I'm looking over a design and deployment guide now.

Thanks
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: LynK on August 18, 2017, 11:04:54 AM
@ristau

You should know that I would have done my research. I've scoured many different design guides, but nothing talks about parallel integrations.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: deanwebb on August 18, 2017, 11:31:52 AM
I went ahead and tried some google-fu and got smacked down since all the articles mentioned Nexus stuff, which you said this guy has none of.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: icecream-guy on August 18, 2017, 01:09:37 PM
Quote from: LynK on August 18, 2017, 11:04:54 AM
@ristau

You should know that I would have done my research. I've scoured many different design guides, but nothing talks about parallel integrations.

Good thing I didn't say anything.

Doesn't seem to be a lot of requirements.


Physical Switch prerequisites

For successful VXLAN operation, these requirements must be met:

    - DHCP should be available on VXLAN transport VLANs. (Note: fixed IP also works.)
    - VXLAN port (UDP 8472) is opened on firewalls (if applicable).
    - Port 80 is opened from vShield Manager to the hosts and used to download the vib/agent.
    - VMware recommends 5-tuple hash distribution for Link Aggregation Control Protocol (LACP). (Note: 5-tuple hash distribution adds better load balancing, as the hash includes the inner-frame entropy represented in the UDP source port.)
    - MTU size requirement is 1600.
    - VMware recommends that you enable IGMP snooping on the L2 switches to which VXLAN-participating hosts are attached.
    - If IGMP snooping is enabled at L2, then an IGMP querier must be enabled on the router or L3 switch with connectivity to the multicast-enabled networks.
    - If VXLAN traffic is traversing routers, multicast routing must be enabled.

here
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2050697
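
The 1600 MTU line is just headroom for the encap: outer MAC (14) + outer IP (20) + UDP (8) + VXLAN header (8) is roughly 50 bytes on top of a standard 1500-byte inner frame, so 1550 is the bare minimum and 1600 gives some slack. Going by the guide, the transport-side prep on a 6509-E would look something like this (rough sketch only, VLAN and addresses are made up, and the jumbo-MTU and PIM commands should be checked against your IOS version):

    ! allow the larger VXLAN-encapsulated frames through the chassis
    system jumbomtu 1600
    !
    ! multicast routing plus PIM on the transport SVI also covers the
    ! IGMP querier / multicast routing requirements from the KB above
    ip multicast-routing
    !
    interface Vlan200
     description VXLAN transport VLAN
     ip address 10.0.200.1 255.255.255.0
     ip pim sparse-mode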
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: LynK on August 18, 2017, 03:47:56 PM
Quote from: ristau5741 on August 18, 2017, 01:09:37 PM
Good thing I didn't say anything.

Doesn't seem to be a lot of requirements.

[...]


I'm referring to upgrading one DC at a time where, for example, there are 6509-E cores. Would it be as simple as an L2 port channel to one of the leaf switches over the PTP fiber links?
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: NetworkGroover on August 18, 2017, 05:31:31 PM
Hrmm... not sure about integration in a brownfield... you need an encap and decap point; outside of those points it's just routed normally. So in a two-tier spine-leaf design you'd have a pair of edge leafs that take care of that for you, connected to the routers via trunks (since VXLAN uses VLANs tied to VNIs). The routers then route the traffic normally. In a DCI situation you'd have a pair of VXLAN-capable devices that do the encap/decap at each end.
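
If the edge leafs are NX-OS boxes, the VLAN-to-VNI mapping plus the trunk back toward the legacy core is roughly this (just a sketch, the VLAN/VNI/interface numbers are invented, check your platform's VXLAN guide):

    vlan 100
      vn-segment 10100
    !
    ! trunk toward the legacy 6509-E core carries the plain VLAN
    interface Ethernet1/10
      switchport mode trunk
      switchport trunk allowed vlan 100
    !
    ! VTEP function - encap/decap happens here
    interface nve1
      no shutdown
      source-interface loopback1
      host-reachability protocol bgp
      member vni 10100
        ingress-replication protocol bgp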
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: NetworkGroover on August 18, 2017, 05:35:02 PM
I don't know how Cisco says you should do things... but I'd rather have VXLAN-capable devices downstream that take care of encap/decap and just let the core route as normal.

But wait... L2 cores? If you have L2 cores, then why do you need VXLAN?

A sanitized diagram may help explain what you are trying to accomplish here.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: wintermute000 on August 19, 2017, 01:36:43 AM
Bridge the real L2 domain into a Layer 2-only VXLAN EVPN. Then move the L3 gateway by turning on anycast gateway and IRB and disabling the old Layer 3.
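
Roughly, in NX-OS terms (VNI/VRF/addresses all hypothetical, check the platform's VXLAN EVPN guide): step one is the L2-only VNI the legacy VLAN gets bridged into, step two is the anycast gateway SVI you turn up once you kill the old L3:

    ! step 1: L2-only VNI, no SVI in the fabric yet
    vlan 100
      vn-segment 10100
    evpn
      vni 10100 l2
        rd auto
        route-target import auto
        route-target export auto
    !
    ! step 2: move the gateway into the fabric - same SVI and MAC on every leaf
    fabric forwarding anycast-gateway-mac 0000.2222.3333
    interface Vlan100
      no shutdown
      vrf member TENANT-A
      ip address 10.1.100.1/24
      fabric forwarding mode anycast-gateway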
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: Dieselboy on August 19, 2017, 06:56:49 AM
Watching this 😊 I need to design a DR site, but haven't done my research yet.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: burnyd on August 20, 2017, 09:04:37 AM
Quote from: wintermute000 on August 19, 2017, 01:36:43 AM
Bridge the real L2 domain into a Layer 2-only VXLAN EVPN. Then move the L3 gateway by turning on anycast gateway and IRB and disabling the old Layer 3.

I can get behind that.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: burnyd on August 20, 2017, 09:05:42 AM
You can bridge your L2 domain into a new VXLAN-based fabric. I would recommend EVPN. If you are going to stick with the same physical infrastructure and everything is VMware, then NSX is a great solution.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: NetworkGroover on August 21, 2017, 04:43:42 PM
Quote from: burnyd on August 20, 2017, 09:05:42 AM
You can bridge your L2 domain into a new VXLAN-based fabric. I would recommend EVPN. If you are going to stick with the same physical infrastructure and everything is VMware, then NSX is a great solution.

Heh - and if you go the NSX route, you've got an all-round SME in burnyd ;)
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: LynK on September 20, 2017, 12:23:24 PM
Hey guys,

Digging this thread back up after a lot of research and thought processing. Before getting to the outcomes, let me clarify everything.

1) The core in the old DC is running L2 & L3.
2) Everything below the core is L2.
3) The new DC is going to be VXLAN EVPN.

Now, to connect the two DCs, we have a few options.

1) Stand up two VTEPs (in the old DC) that span across the DCI link to the new DC spines (or leafs). Move all L3 off the Cisco core and onto the VTEPs. Run vPC between the VTEPs and the core, and make the old DC core L2-only.

2) Keep the old DC running L2/L3. Use separate networks in each DC for now, and run L3 between the two (between transit leaf nodes; a rough sketch of that handoff is below). Once the new VXLAN infrastructure is ordered for the old DC, we migrate everything over and it is done.

Honestly, both are good options, but both would require some work. Option 1 requires migrating off the DCI link to the new DC spines, then back onto the old DC spines, then running a leaf-to-leaf link for DCI. Option 2 requires pretty much the same work, but does not require purchasing additional gear to run at the old DC (or migrating everything off the old DC core onto the VTEPs, which would then essentially run as the cores).
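
For option 2, the DCI piece would just be a routed handoff off the transit leafs toward the old DC core, something like this (NX-OS-style sketch, addresses and AS numbers are made up):

    interface Ethernet1/48
      description routed DCI link toward old DC core
      no switchport
      ip address 192.0.2.1/30
      no shutdown
    !
    router bgp 65001
      neighbor 192.0.2.2
        remote-as 65000
        address-family ipv4 unicast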

Thoughts?

Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: wintermute000 on September 23, 2017, 12:03:53 AM
2.) is by far the better option. L2 span is bad, no matter what whizbang technology you are employing.

With 1.) you do realise that in a VXLAN EVPN the anycast GW must be consistent across the entire fabric, i.e. you can't have an SVI that exists only on one switch - the closest analogue would be the pre-VXLAN routing solution of a handoff to a separate L3 router-on-a-stick. So you're merging L2 domains across sites; moreover, it's the same fabric (same overlay) = stretched failure domain. I would not suggest doing this unless it's a disciplined short-term migration measure (you know how duct-tape temporary fixes that work end up panning out).

Check out multi-site EVPN. This is the future. Each DC remains a separate VXLAN EVPN fabric, and you tie the two together with a third multi-site EVPN domain for the best of all worlds (separate overlays and failure domains, but with seamless L2/L3, and it's all still BGP-based, so happy days).

https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=95611&tclass=popup (https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=95611&tclass=popup)
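
On Nexus 9k the multi-site piece is basically a border-gateway role layered on top of the normal VTEP config. From memory of the config guide (site ID, loopbacks and VNI are hypothetical, double-check the syntax on your release):

    ! mark this VTEP as a border gateway for site 100
    evpn multisite border-gateway 100
    !
    interface nve1
      source-interface loopback1
      host-reachability protocol bgp
      multisite border-gateway interface loopback100
      member vni 10100
        multisite ingress-replication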
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: LynK on September 26, 2017, 10:26:15 AM
@Winter

1) Yes, I am completely aware the SVIs need to be consistent.

That document is amazing. I really need to start leaning on Cisco Live documentation more. I rarely use it...

I have a question, though, for those of you who have gone the DCI route already. When you are stretching your infrastructure, do you use one HA pair of firewalls stretched across both DCs (A/A or A/P, it doesn't matter) or separate clusters for each DC? I personally think separate clusters are the way to go... however, most of my experience is theoretical at best.

The document discusses all the options; I'm just curious which path you go with and why.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: LynK on September 28, 2017, 09:53:33 AM
Adding another comment to this thread. When looking into VXLAN limitations, we were considering how awesome it would be to run FCoE over VXLAN, but of course there are limitations.

According to this document: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/pf/configuration/guide/b-pf-configuration/Forwarding-Configurations.html (https://www.cisco.com/c/en/us/td/docs/switches/datacenter/pf/configuration/guide/b-pf-configuration/Forwarding-Configurations.html)

"FCoE
FCoE over the VXLAN fabric is not supported. However, FCoE and VXLAN can co-exist. FCoE and VXLAN services are provided on separate ports.

To enable FCoE, use separate links from the fabric to MDS and connect to the target device. Refer Cisco NX-OS FCoE Configuration Guide for Nexus 7000 Series and MDS 9000 and Cisco Nexus 5600 Series NX-OS Fibre Channel over Ethernet Configuration Guide for details."

Does this mean it can only be done as L2 on the leaf switches? That would suck.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: wintermute000 on September 29, 2017, 02:01:37 AM
Separate clusters; I'm a very big non-fan of stretched HA. Google Ivan Pepelnjak's rants on the topic.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: deanwebb on September 29, 2017, 06:36:57 AM
Quote from: wintermute000 on September 29, 2017, 02:01:37 AM
Separate clusters; I'm a very big non-fan of stretched HA. Google Ivan Pepelnjak's rants on the topic.

Stretched HA is an abomination.

:developers:
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: Dieselboy on November 03, 2017, 02:57:27 AM
So I have a question, if that's OK. I can see the use case in a multi-tenant environment providing hosting for customers, but what about a DR scenario, where one site would be active and the DR site would be standby? In a DR situation, the VMs would be spun up at the remote site and their network addresses would not need to change. I'm thinking this would be best for active/active DR.
Title: Re: VXLAN integration into current L2 DC infrastructure
Post by: wintermute000 on November 05, 2017, 07:16:03 PM
Think about routing (always think about routing). You have to get the traffic IN and OUT somewhere. Which site? Asymmetric or symmetric? Hairpin or not? Stateful devices? Failover? Symmetry/asymmetry in prod vs. failover? What about partial failures?

There are duct-tape solutions of varying levels of elegance (/32 host routes, LISP, etc.).

Also, even if you have diverse dark fibre, you are basically designing one interdependent stretched failure domain that will split-brain if the DCI is completely lost. Active/passive is preferable if you can afford the spare capacity.

Ivan Pepelnjak and Packet Pushers have ranted on this topic many a time.

Unfortunately, thanks to vMotion and legacy server stacks, we'll never get away from stretched L2; it lets every other IT department push their complexity down to the 'invisible' network plumbing stack. Remember this: doing a complex L2 stretch solution is effectively pushing your technical debt away from legacy must-be-L2-adjacent clustered apps and can't-change-IP apps and servers onto the common network. If everything was L3- and DNS-aware (brave new stateless micro-services container model, anyone? How do you think GOOG and FB do it?), then you could have a 'simple' L3 network.