
Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - LynK

#31
Routing and Switching / Re: ACI and ERSPAN
February 19, 2019, 02:07:15 PM
Dean,

We were facing the same issue. We are using CounterACT, and we are implementing VXLAN via DCNM right now. We were looking at simply using two CounterACT devices, but we would not be able to do exactly what you mentioned over the fabric.

The solution I came up with?

Flexible licensing and multiple virtual appliances, with SPAN sessions at each location to gather the traffic. We have not deployed VXLAN yet, but we are hoping it will work well.
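
Something like this per location, if it helps anyone picture it. This is only a rough sketch with hypothetical port numbers and session ID; the destination port faces the virtual appliance:

monitor session 10
  source interface Ethernet1/1 both
  destination interface Ethernet1/48
  no shut
!
interface Ethernet1/48
  description towards CounterACT virtual appliance
  switchport
  switchport monitor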
#32
Routing and Switching / Re: DCNM 11 question
December 17, 2018, 03:33:05 PM
@winter

Correct, but that is for border gateways, like you mentioned. I guess I will just try going back to the leaf profile, but I will plug in a router to test whether layer 3 works.
#33
Routing and Switching / DCNM 11 question
December 14, 2018, 01:55:45 PM
For those of you who have tested/deployed DCNM with your VXLAN fabric: do you know why, when you assign a leaf as a border leaf, it removes the SVI configuration?

In Building Data Centers with VXLAN BGP EVPN, David Jansen states that a single leaf can perform all three leaf functions ("service, border, and normal"). Yet in DCNM, when you configure a border leaf it removes all SVIs.

I wonder if this is a bug, or if I should try wiping my leafs and re-importing them fresh to see if there is a difference.
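
For reference, the kind of SVI config that gets stripped is the usual anycast-gateway SVI on a VXLAN EVPN leaf. A minimal sketch with made-up VLAN, VNI, VRF, and addressing:

feature interface-vlan
feature vn-segment-vlan-based
feature fabric forwarding
fabric forwarding anycast-gateway-mac 2020.0000.00aa
!
vlan 100
  vn-segment 10100
!
interface Vlan100
  no shutdown
  vrf member TENANT-A
  ip address 10.10.100.1/24
  fabric forwarding mode anycast-gateway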
#34
Routing and Switching / Re: Cisco ACI vPC Scenario
September 14, 2018, 08:16:25 AM
@winter

Thanks for the response. I understand VXLAN EVPN quite well, and we are considering it as an option, but we are also considering EVPN. Can you explain the "isolated local vPC pair" if you can? That is pretty generic.

If you have a vPC pair and both the peer link and the keepalive fail, this is what happens in NX-OS:

"
vPC Peer Link Failure Followed by a Peer Keepalive Link Failure
If a peer link failure occurs, the vPC secondary switch checks if the primary switch is alive. The secondary switch suspends its vPC member ports after it confirms that the primary switch is up.

vPC Keepalive Link Failure Followed by a Peer Link Failure
If the vPC keepalive link fails first and then a peer link fails, the vPC secondary switch assumes the primary switch role and keeps its vPC member ports up.

If the peer link and keepalive link fails, there could be a chance that both vPC switches are healthy and the failure occurs because of a connectivity issue between the switches. In this situation, both vPC switches claim the primary switch role and keep the vPC member ports up. This situation is known as a split-brain scenario. Because the peer link is no longer available, the two vPC switches cannot synchronize the unicast MAC address and the IGMP group and therefore they cannot maintain the complete unicast and multicast forwarding table. This situation is rare.
"

In other words: suspended ports, loops, or something else. But there is no documentation proving the same behavior applies in ACI.
#35
Forum Lobby / Re: Questions We Dread to Hear
September 13, 2018, 07:50:51 AM
"can you help me with my iphone?"
#36
Routing and Switching / Re: Cisco ACI vPC Scenario
September 13, 2018, 07:32:21 AM
@winter

100% green, first implementation. We were looking at NX-OS and ACI options for VXLAN (and NSX). We are using remote leafs because of the cost savings of not having spines in each site's data center. We believe that between our dark fiber and MPLS we should be safe. But in the event of both going offline, I fail to see how vPC will work, because the peer links are tied into a fabric that is now completely isolated. In NX-OS this would not be an issue because there are dedicated links between VTEPs, but in ACI there are no dedicated links between leafs for vPC.

@Otanx,

I think you mean a vPC pair stretched across DCs; that is not what I mean. I am talking about two leafs in a small datacenter B, which tie back to two spines in a larger DC A. The issue is that in ACI the vPC heartbeats run over the fabric, and there are no dedicated links between the two vPC peers. So if they lose access to the fabric, they will continue to run (obviously), but how will vPC work when the keepalives that go through the fabric can no longer get through? I'm guessing split-brain with orphaned ports that do not go anywhere. This is not an issue in NX-OS, because there the leafs have dedicated links between each other for vPC heartbeats.
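
For anyone comparing, this is the standard NX-OS setup with its own dedicated keepalive and peer-link, which is exactly what an ACI leaf pair does not have (hypothetical addresses and port numbers):

feature vpc
!
vpc domain 10
  peer-switch
  peer-gateway
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
!
interface port-channel1
  description dedicated vPC peer-link to the other leaf
  switchport mode trunk
  vpc peer-link
!
interface Ethernet1/47-48
  description back-to-back links to the vPC peer
  switchport mode trunk
  channel-group 1 mode active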
#37
Routing and Switching / Re: Cisco ACI vPC Scenario
September 12, 2018, 03:35:41 PM
Quote from: ristau5741 on September 12, 2018, 11:16:22 AM
I hope you didn't design that network.

white paper might help
https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-740861.html

In the paper you sent, which is the one I was referencing, it states:

"ACI does not require the use of a vPC peer-link between leaf nodes"

"When Remote Leaves are configured as a vPC pair, they will establish a vPC control plane session through over the upstream router"

We have dark fiber and MPLS, over which we are looking at designing remote leafs. I asked Cisco what happens if both the dark fiber and MPLS go down in a DR scenario. Would the vPC pair go split-brain, decide all ports are orphaned, and then disable all ports? They are looking into it.


#38
Routing and Switching / Cisco ACI vPC Scenario
September 12, 2018, 10:01:55 AM
I cannot find this problem documented, nor a solution. Say you have remote leafs in ACI running vPC, and the spines are in a different data center. If both leafs lose their connection to the spines and controllers in that data center, how does vPC operate? According to my understanding there is no direct peer connection between leafs in ACI mode; it uses the fabric for its keepalives. But what happens if that goes down?
#39
Routing and Switching / Re: Interesting DCI Problem
August 29, 2018, 01:06:54 PM
Splitting will not work, unfortunately, because we want to be able to fail over properly. Looks like our options are:

1) Move a link or two (or buy two) for L2 extension

2) VXLAN (rough sketch below)
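
For option 2, stretching just VLAN 666 over the existing L3 dark fiber can be as small as a VXLAN tunnel with static ingress replication between the two Nexuses. Rough sketch only, with made-up loopback addresses, VNI, and OSPF tag (the loopbacks ride the existing OSPF):

feature nv overlay
feature vn-segment-vlan-based
!
vlan 666
  vn-segment 10666
!
interface loopback0
  ip address 10.255.0.1/32
  ip router ospf 1 area 0
!
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10666
    ingress-replication protocol static
      peer-ip 10.255.0.2

The other Nexus mirrors this with the loopback and peer-ip swapped.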
#40
Routing and Switching / Re: Interesting DCI Problem
August 29, 2018, 08:57:17 AM
Okay guys,

So here is a Visio.

Things I have tried (spun up a GNS3 server and attempted this on 9000v's):

1) Subinterfaces with encapsulation dot1q 666 and no IP (just an IP on int vlan 666) - DOES NOT WORK.
2) Subinterfaces configured in a VRF, running OSPF on the 66.66.66.0/24 network on both sides, with private IPs on the DCI links (sketched below) - DOES NOT WORK, because each side "knows" about the /24 network, which causes routing to fail.
3) Removing the subinterface private IPs from step 2 and putting 66.66.66.0/24 IPs there instead. The problem is that I would only be able to use one interface, which removes resiliency - DOES NOT WORK.

I'm out of ideas... I think a tunnel might work (mentioned above). I might give that a try too.
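
In case the actual config helps anyone spot something, attempt 2 looked roughly like this on each side (addressing and OSPF tag are made up):

feature ospf
feature interface-vlan
!
vrf context DCI-TEST
!
router ospf DCI
  vrf DCI-TEST
!
interface Ethernet1/1
  no switchport
!
interface Ethernet1/1.666
  encapsulation dot1q 666
  vrf member DCI-TEST
  ip address 172.16.66.1/30
  ip router ospf DCI area 0
!
interface Vlan666
  no shutdown
  vrf member DCI-TEST
  ip address 66.66.66.1/24
  ip router ospf DCI area 0

Each side ends up with its own connected 66.66.66.0/24, so traffic for hosts on the far side never crosses the DCI.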
#41
Routing and Switching / Re: Interesting DCI Problem
August 23, 2018, 02:13:02 PM
The link between the two Nexuses is a point-to-point dark fiber link that we run as a layer 3 link with OSPF.

The reason I cannot just advertise the block at both sites without a link between them is that it would create a discontiguous network. What if I made an SVI for the VLAN and a subinterface with encapsulation for that VLAN, but no IP assigned on the subinterface? Would it broadcast over the subinterface to the other datacenter?


We are going with a VXLAN/NSX design in the future. The problem is that I am knee-deep in a firewall design right now.
#42
Routing and Switching / Interesting DCI Problem
August 22, 2018, 12:25:35 PM
Gentlemen,


I have a problem, and I am wondering if this would work.

The problem I have is that I have two data centers with layer 3 links between them. Each DC has its own ISP, but each data center uses a different public IP address space. We are on a waiting list to get our own public IP addresses... the pain... it hurts.

My question is about the interim: how can I make this work? What I want to do is advertise the same carrier-owned IP block at both sites, but prevent asymmetrical data flows. The only way I can think of doing this is putting the public IP address range on a VLAN and stretching it across the data centers, so our virtual firewalls have the IP block at both sites for failover. The problem is that we have layer 3 connections.

My question for you is this. Let's say my internet range is 66.66.66.0/24 on VLAN ID 666 at datacenter A. What would happen if I created a subinterface ethX/X.666 and did encapsulation dot1q 666 on the subinterface at both data centers, with no IP address (or with the IP of the network)? Would that VLAN then be stretched over the subinterface to the other data center? So then at datacenter B's vlan 999 I have an edge router in the 66.66.66.0/24 network. Would I be able to ping a 66.66.66.X host on the other side?

I am thinking this will work... but I have no way to verify.
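
To make it concrete, this is the config I am describing on each side (hypothetical port number and addressing):

feature interface-vlan
!
interface Ethernet1/1
  no switchport
!
interface Ethernet1/1.666
  encapsulation dot1q 666
  ! intentionally no ip address on the subinterface
!
interface Vlan666
  no shutdown
  ip address 66.66.66.1/24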
#43
@dean

Why is Forescout making it so difficult to get a demo VM for testing in our environment? I just want to get hands-on, and they are dragging their feet. Are they always like this?
#44
Ok. Maybe it will make sense if I clarify the design.



You are talking about running multiple VRFs directly to the firewall, with a subinterface or physical interface per VRF.


I am talking about having no VRFs on the firewalls, but a single VRF on the core side that goes to the "inside" interface of the firewall. This VRF is used as a transit VRF for shared internet access. The reason I am trying to design something like this is in case the firewall team refuses to run VRFs on their side, or refuses to run multiple interfaces for internet access.


See attached for a dumbed-down Visio.
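
A bare-bones sketch of the core side of what I mean, with a made-up VRF name and addressing (the tenant-VRF-to-transit-VRF leaking is the piece still being debated, so it is left out):

vrf context TRANSIT-INET
  ip route 0.0.0.0/0 10.99.99.2
!
interface Ethernet1/10
  description routed link to firewall inside interface (firewall runs no VRFs)
  no switchport
  vrf member TRANSIT-INET
  ip address 10.99.99.1/30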
#45
Yeah, running VRFs directly to the firewall is definitely the cleanest route (and probably the best). But I was just testing a scenario in my lab for simple internet access through a transit VRF on the core-to-firewall links (separate links, obviously).