If you have a layer 2 service between multiple sites, do you still need a firewall? I'm thinking of something like OTV, for example, on an N7K or ASR. Since it's a private line maybe you don't have to, or do you still need to pass it through a firewall as best practice?
This question is impossible to answer without proper context.
But in general you would enforce policy at the layer 3 boundary. If you need intra-VLAN policy, it's time to get with the SDN times and use host-based firewalls.
Forgive me, let me try to elaborate further. We want to connect another site across layer 2. Since it will use private lines and not go over the public Internet, I was thinking a firewall won't be needed. I'm not sure if that gives better context; sorry, I might not be able to explain it better. I'm trying to learn whether it's possible and, if it is, research the firewall side to make it happen. Still new to the idea.
We do use L2 firewalls for data centers, since they're not going to do any routing. "Private line" means that you're not exposed to external hackers, but you're still vulnerable to internal threats if you haven't taken any precautions.
Quote from: fsck on October 01, 2015, 08:04:50 PM
Forgive me, let me try to elaborate further. We want to connect another site across layer 2. Since it will use private lines and not go over the public Internet, I was thinking a firewall won't be needed. I'm not sure if that gives better context; sorry, I might not be able to explain it better. I'm trying to learn whether it's possible and, if it is, research the firewall side to make it happen. Still new to the idea.
The fact that you are extending L2 over the WAN is not really relevant. What is your organisation's security architecture? (There probably isn't one if you're asking this!) What are your security requirements? Do you need to filter intra-VLAN traffic? If so, why would you filter at one WAN point instead of within the entire VLAN (even locally)?
The easy answer is, 95% of companies would not bother, with the simple justification that it's internal.
If you want to secure internal traffic, the SOP is to enforce policy/security at the north-south boundary to the DCs, or for the really serious, between all security zones.
If you're stretching a VLAN over a WAN, then presumably it's in the same security zone.
If you don't have security zones, and you haven't thought that far, you probably don't need intra-VLAN firewalling (though you probably do need some kind of internal protection - the 'all internal is trusted' approach, although common, is not ideal - typically firewalls in front of the server farm suffice).
Finally, L2 extension is usually not a great idea in and of itself; you should have a compelling reason as well as a well-thought-out routing architecture to deal with the inevitable split-brain possibilities and traffic-hairpinning consequences (especially for flows through your service layer).
This site has a wealth of articles on this topic as well as specific L2 extension scenarios. The author is a networking legend. I recommend getting work to shell out for a subscription and reading through his data centre design case studies.
http://blog.ipspace.net/2013/09/layer-2-extension-otv-use-cases.html
More ammo
http://searchnetworking.techtarget.com/feature/Long-distance-vMotion-traffic-trombone-so-why-go-there
http://packetpushers.net/podcast/podcasts/show-27-layer-two-2-data-centre-interconnect/
http://datacenteroverlords.com/2013/12/01/death-to-vmotion/
https://www.gartner.com/doc/3028423/stretching-data-center-network-break (yes, even Gartner says it's bad)
Having said that ^above^:
How paranoid is the company? Do you trust your ISP not to sniff traffic on your private line?
If the answer to the second question is yes, don't worry about a firewall; if the answer is no, firewall and encrypt.
Then reread wintermute's post again.
Looks like I came to the right place for help. Wintermute, you have some great points and great links. I have the podcast running right now. This will help a great deal and shed some light.
I think a good idea might be to lab this, test it, and see how the results turn out. Not quite sure how to do this, but I will look for some gear to test with.
Security is a low priority, but our new boss is pushing for us to make it better. We need to be HIPAA, FERPA, and PHI compliant. We want to vMotion to another site, so I thought L2 extension would be a good idea. I hear a lot about OTV, but I'm starting to see it's more of a certification topic and not used as much in the real world as I thought.
To answer your question, ristau5741: I think the company wouldn't like it if somebody sniffed the traffic, so better safe than sorry. I will say yes, they are paranoid.
If they're paranoid, then route over the OTV and put everything inside an IPsec tunnel with maximum security (i.e. PKI, not pre-shared keys). Stick firewalls at your DC ingress/egress at both sites.
Just because a link is layer 2 doesn't mean you can't stick routers on it (or better, stick a switch on it, then split off VLANs to routers or leave them as straight L2 segments depending on their requirements).
vMotioning something across is a lot more than just the ESXi bit. What are you going to do about shared storage? Routing? Services (load balancers, firewalls, reverse proxies)? If the VM lives on a host at the other site but its shared storage, services, and routing are still at the primary site, what's the point?
As Wintermute said, we don't VMotion between sites. We'll do it cross-campus, but not across more than a few km. It just gets nasty.
Right now, we're dealing with speed issues between datacenter nodes... the 10G links go down to 5G with the firewall and IPS running on them, doing their bits.
We are in the same boat and looking to do something similar. I'm looking to get layer 2 connectivity between our data center and SWITCH in Las Vegas. We are only looking at a 1Gbps link, because the business owners won't dish out for a 10Gbps link. I will slowly persuade them as I show them the need for it.
We are a Hyper-V shop, so we were looking at Live Migration between sites. But right now it looks like we are just going to use RecoverPoint from EMC. We'll have an XtremIO array at the primary site and a VNX array at SWITCH. I'm still up in the air on the method of connectivity between data centers, but this thread has definitely shed light on the issues at hand.
Cheers,
Quote from: wintermute000 on October 02, 2015, 05:08:07 PM
If they're paranoid, then route over the OTV and put everything inside an IPsec tunnel with maximum security (i.e. PKI, not pre-shared keys). Stick firewalls at your DC ingress/egress at both sites.
Just because a link is layer 2 doesn't mean you can't stick routers on it (or better, stick a switch on it, then split off VLANs to routers or leave them as straight L2 segments depending on their requirements).
vMotioning something across is a lot more than just the ESXi bit. What are you going to do about shared storage? Routing? Services (load balancers, firewalls, reverse proxies)? If the VM lives on a host at the other site but its shared storage, services, and routing are still at the primary site, what's the point?
We were thinking of running clusters between data centers, but from the sounds of things here that could get messy. Or what about replication and dedup? We are trying to keep the sites in sync so that if something happens, at least one data center will still be working.
It seems like I'm making problems for myself, so I'll ask: how are others protecting their data centers and making sure data is off-site and people can still work?
Quote from: fsck on October 05, 2015, 11:56:41 AM
It seems like I'm making problems for myself, so I'll ask: how are others protecting their data centers and making sure data is off-site and people can still work?
Moving it to the cloud.
Though it's a bit harder when mainframes are involved; we just do site-to-site replication.
Funny: a customer wanted to replicate 2TB in 4 hours over a 1Gb circuit. We told him no (2TB over 1Gbps is about 4.4 hours even at line rate, before any overhead).
Could be worse; you could be like the poor guy over at https://www.reddit.com/r/networking/comments/3muslm/selecting_internet_option_for_a_small_business/
who has 2 people at a site who each work on multi-gig files on a remote server. He wants to hook the site up with a T1/E1; I even did the math for him.
E1 = ~2 Mb/s
1 GB = 8,000 Mb
So you need to download 8,000 Mb at 2 Mb/s, which is 4,000 seconds, or about 1 hour and 7 minutes, just to copy one file over before opening it, and that's assuming nothing else is eating your bandwidth.
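If anyone wants to play with the numbers themselves, here's the same math as a quick sketch (my own rough figures; I'm using a flat 2 Mb/s and ignoring protocol overhead, so real life is worse):
Code: [Select]
# Back-of-the-envelope transfer time for one file over an E1.
file_size_gb = 1          # one multi-gig file, in gigabytes (decimal)
link_mbps = 2.0           # ~E1 rate in megabits per second (really 2.048)

file_size_mb = file_size_gb * 8000      # gigabytes -> megabits
seconds = file_size_mb / link_mbps
print(f"{seconds:.0f} s, i.e. about {seconds / 60:.0f} minutes per file")
# -> 4000 s, roughly an hour and seven minutes, with the link otherwise idle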
Quote from: fsck on October 05, 2015, 11:56:41 AM
It seems like I'm making problems for myself, so I'll ask: how are others protecting their data centers and making sure data is off-site and people can still work?
It depends. First, we don't try to protect the entire data center. Specific applications are deemed mission-critical with SLAs, and then we design each of those applications to meet its requirement. Most applications these days are browser-based and backed by a database of some sort to store the data. So the front end is pretty easy: it's just a web server. I can spin up a bunch of those and throw them behind an A10 or F5 with health checks to determine whether a server is alive, then fail the front end over to another server when one fails. The same thing can be done for the database. However, the database servers all need to share the same storage.
This is where the real challenge is. I don't really get involved in this part, so I may have some details wrong, but if you are trying to do an active/active setup over long distances you would use something like XtreemFS, which supports replication between geographically separate nodes. However, the downside is that your performance takes a nasty hit. Otherwise you can replicate over short distances (less than 10 ms) with something like Gluster, which has good performance but is limited by the latency between nodes. 10 ms is roughly 2,000 km of fiber one way, assuming no latency in switching/routing. The AT&T latency map linked below shows how hard getting under 10 ms is.
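Rough math if you want to sanity-check that 10 ms figure yourself (my approximation: light in fiber covers roughly 200 km per millisecond, i.e. about 5 microseconds per km):
Code: [Select]
# Max fiber distance that fits in a latency budget, ignoring any
# switching/routing/serialization delay (so real distances are shorter).
FIBER_KM_PER_MS = 200  # ~200,000 km/s propagation speed in glass

def max_fiber_km(latency_ms, round_trip=False):
    one_way_ms = latency_ms / 2 if round_trip else latency_ms
    return one_way_ms * FIBER_KM_PER_MS

print(max_fiber_km(10))                   # 2000 km if the 10 ms is one-way
print(max_fiber_km(10, round_trip=True))  # only 1000 km if it's round-trip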
Going active/passive on the storage is much easier, but it can cause its own problems. The big one, as I understand it, is that even if the storage fails over automatically, the database servers are going to go apeshit (this is a technical database term) and may want to do consistency checks before they come back online, especially if you have writes that did not replicate when the active storage failed. This can take a significant amount of time and may require manual intervention.
The best part is when the application owner says they need 100% uptime and multi-datacenter redundancy no matter what the cost. Then all of a sudden 99% in one data center is good enough once you show them the bill for 100%. If you actually gather requirements, you will typically find that management is worried about a Katrina-type event that shuts down a city for days. They don't need sub-second failover; they just need a good off-site backup and COOP/DR plans that are in place and tested.
AT&T Latency - https://ipnetwork.bgtmo.ip.att.net/pws/network_delay.html
-Otanx
Well written.
Far too many outfits approach the problem by viewing L2 extension as some kind of magic hammer, and then twist the infrastructure into knots, when they should be working out what exactly they mean by DR and how to design the apps to suit. Hint: it's how all the big boys - Google, Amazon, Facebook et al. - do it: AT LAYER SEVEN.
Instead, they don't even want to work out how the eff their apps actually work, and just insist that the network always has to work and look exactly the same to the apps/storage in whatever location and under whatever failover condition... then they cry at the cost and complexity, and wonder in a couple of years why they're locked into a brittle active/passive stretched-VLAN design with clustered appliances split across both sites, so all traffic always has to hairpin back to the prod instance...
Disaster recovery? That's a matter of enough ammunition in your weapon and enough fuel in your generator. All them fancy computers and what-not gonna be wiped out by an EMP.
Unless you buy my EMP-proofing solution for big $$$ :problem?:
I want to lab this to see how it works and learn it. I think it will be good to see it in a lab and understand the problem. The piece I'm wondering about is how to mimic the 1Gbps link. Maybe I can simply use a LAN interface on a router, whether an ISR or ASR, and add some kind of overhead to make it behave like the real world. Not sure how to do that, but I will look around. Some people on the team think it will work and others are skeptical. I think a real-world demonstration-type lab would be best to show results.
If you are familiar with Linux, you can bridge two interfaces together and use tc to add some latency for the WAN link. I had to look this up for some testing we were going to do that then got shelved, so I never got to try it, but it looks easy enough.
http://bencane.com/2012/07/16/tc-adding-simulated-network-latency-to-your-linux-server/
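I haven't run it, but from that article the gist looks something like this (the interface names are placeholders for whatever two spare NICs the box has, and the 20 ms / 1Gbps numbers are just examples):
Code: [Select]
#!/usr/bin/env python3
# Untested sketch of a Linux "bump in the wire" WAN emulator: bridge two
# NICs so the box is transparent at L2, then use netem to add delay/loss
# and cap the rate to mimic a 1Gbps circuit. Run as root.
import subprocess

NIC_A = "eth1"  # placeholder: NIC facing site A
NIC_B = "eth2"  # placeholder: NIC facing site B

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

run("ip link add name br0 type bridge")
for nic in (NIC_A, NIC_B):
    run(f"ip link set dev {nic} master br0")
    run(f"ip link set dev {nic} up")
run("ip link set dev br0 up")

# Impair each direction: ~20 ms one-way delay, a touch of loss, 1Gbps cap
# (netem's rate option needs a reasonably recent kernel/iproute2).
for nic in (NIC_A, NIC_B):
    run(f"tc qdisc add dev {nic} root netem delay 20ms loss 0.1% rate 1gbit")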
-Otanx
Just build a WANem box: a normal PC with two NICs, and the appliance bridges them at whatever speed, latency, and loss you want.
Thank you both Otanx and wintermute000! These will be very helpful!
Quote: Just because a link is layer 2 doesn't mean you can't stick routers on it (or better, stick a switch on it, then split off VLANs to routers or leave them as straight L2 segments depending on their requirements).
@ wintermute000
I was wondering if you could elaborate on this statement, please. I want to learn why sticking a switch in is better. I'm thinking it's because it adds another layer of security, another layer in between. Maybe a way of explaining it is that you can nest the traffic inside that VLAN and then route only the traffic that is needed. Am I correct in that?
Quote from: fsck on January 14, 2016, 12:35:50 PM
Quote: Just because a link is layer 2 doesn't mean you can't stick routers on it (or better, stick a switch on it, then split off VLANs to routers or leave them as straight L2 segments depending on their requirements).
@ wintermute000
I was wondering if you could elaborate on this statement, please. I want to learn why sticking a switch in is better. I'm thinking it's because it adds another layer of security, another layer in between. Maybe a way of explaining it is that you can nest the traffic inside that VLAN and then route only the traffic that is needed. Am I correct in that?
a switch ain't no additional layer of security. more like a layer of insecurity.
Quote from: ristau5741 on January 14, 2016, 01:39:57 PM
Quote from: fsck on January 14, 2016, 12:35:50 PM
Quote: Just because a link is layer 2 doesn't mean you can't stick routers on it (or better, stick a switch on it, then split off VLANs to routers or leave them as straight L2 segments depending on their requirements).
@ wintermute000
I was wondering if you could elaborate on this statement, please. I want to learn why sticking a switch in is better. I'm thinking it's because it adds another layer of security, another layer in between. Maybe a way of explaining it is that you can nest the traffic inside that VLAN and then route only the traffic that is needed. Am I correct in that?
a switch ain't no additional layer of security. more like a layer of insecurity.
I should not have said security in that sense; more like security by segregating the network. I'm just trying to figure out why you would go down that path.
Network segregation is security if and only if there is something to prohibit or regulate traffic between segments. L2 security is useful if you don't want the security devices to be involved in the routing and switching decisions. That would be the case if one wanted high throughput on the line and running the security devices in L3 mode would slow them and the traffic down too much. L2 devices also, by their nature, do not reveal their presence to the traffic being inspected, so they make it more difficult for attackers to defeat security measures on the network.
I'm just referring to the flexibility of being able to run a trunk (or even Q-in-Q if you want to take it to the next level); then you can do different things per VLAN.
E.g. have one VLAN routed and another VLAN as pure layer 2.
I've had this discussion multiple times with a number of security people.
My last gig had a large 20-30+ site metro-E in each city. None of them used firewalls. There were no regulations in place that required encryption or security.
IMO it comes down to a business decision. What's the risk of not securing the traffic? If the business is fine with the risk, then fine.
A firewall at each site costs money, but so does a spreading worm. However, you can get smart with routing, e.g. only advertise to other sites the subnets containing services, not end users; only advertise the HQ/data center and put the firewall there.
Layer 2 or layer 3 WAN doesn't matter in this discussion, I think. It's a trade-off: cost versus security versus manageability versus throughput (small firewalls means small throughput).
A small, old firewall means REALLY small throughput. When the security guy shows up, you'd better hope he's got new gear, or you're going to have the slows.