An SRX is a “security device”, or as we conventionally call it, a firewall. Modern layer-3 firewalls route packets just like a router, but unlike a router, a firewall can organize packets into connections (flows) and apply ACLs to the entire flow. This capability is the fundamental building block of every “advanced” security feature a firewall offers: dynamic NAT (PAT/NPT), zone-based firewall (ZBFW), ACLs that match only inbound or only outbound connections, L7 filtering, etc. For connection (flow) tracking to work, all the packets in a connection must go through the same device, and the 5-tuple of every packet in the connection must have the expected values, which usually means:
- The packets from A to B and the packets from B to A must all go through the firewall at some point
- There must be no one-sided stateless NAT happening along the path
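The flow-tracking idea above can be sketched in a few lines of Python. This is purely illustrative (the names and data structures are my own, not SRX internals): the key point is that both directions of a connection must normalize to the same flow key.

```python
# Illustrative sketch of 5-tuple flow tracking, NOT actual firewall code.
from collections import namedtuple

FiveTuple = namedtuple("FiveTuple", "proto src_ip src_port dst_ip dst_port")

def flow_key(pkt: FiveTuple) -> tuple:
    """Normalize endpoint order so A->B and B->A map to the same flow."""
    a = (pkt.src_ip, pkt.src_port)
    b = (pkt.dst_ip, pkt.dst_port)
    return (pkt.proto,) + (a + b if a <= b else b + a)

flows = {}  # flow key -> packet count

def track(pkt: FiveTuple) -> str:
    key = flow_key(pkt)
    if key in flows:
        flows[key] += 1
        return "existing flow"
    flows[key] = 1
    return "new flow"
```

If the return traffic bypasses the firewall (asymmetric routing), or NAT rewrites one side of the 5-tuple without the firewall knowing, the reply packets no longer match any existing flow key, and the firewall drops them as out-of-state.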
This was never an issue when everyone was single-homed and every router had only one routing table. But not today. SRXs now have built-in support for virtual routers, which can easily create an asymmetric flow. Let’s look at this simplified topology:
The Cisco Aironet 1800i is a cute little device, just a bit smaller than my hand. It is lightweight, does not run very hot (so it is not a good replacement for the old 3502i model if you also have a cat around the house), and requires less power to operate. I recently got an 1800i for my room, so I’d like to write a little about this model, since it is so different from the old PowerPC-based ones.
VXLAN has been around for a while, so how do router vendors support it? Well, let’s use a dead simple topology to test them out.
Our setup today:
- All routers connected to the same dumb switch using IP range 169.254.0.0/24
- Multicast signaling on address 220.127.116.11, no PIM
- VXLAN UDP port 4789
- Network 10.0.0.0/24 on VNI 5000 (layer 3 termination / inter-VXLAN routing)
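For comparison, the same endpoint can be sketched on a plain Linux box with iproute2. The interface names and the multicast group 239.1.1.1 below are my own placeholders, not the lab's actual values:

```shell
# Create a VXLAN interface: VNI 5000, multicast flood-and-learn, UDP 4789
ip link add vxlan5000 type vxlan id 5000 \
    group 239.1.1.1 dev eth0 dstport 4789

# Layer-3 termination: put the 10.0.0.0/24 gateway address on the VXLAN interface
ip addr add 10.0.0.1/24 dev vxlan5000
ip link set vxlan5000 up
```

With multicast signaling, BUM traffic is simply flooded to the group and MAC addresses are learned from received frames, so no control plane (and no PIM, on a single flat segment) is required.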
So you have a handful of brand-new ESXi servers and want VMs to automagically move around based on host availability and resource usage; vCenter has you covered with DRS and HA, but obviously you need to put all the hosts into a cluster for these features to work. What you might not know is that there are three ways of creating a cluster, each differing in certain details, and you will regret it if you choose the wrong one. Trust me, I learned it the hard way.
Note: we are using ESXi 7.0 and vCenter 7.0 here.
When I was replacing all my buggy little MikroTik RouterOS boxes and VMs with some shiny new (and also buggy) Cisco ISR1000s and CSR1000vs a few years ago, there were several things I missed dearly that existed on the former but not on the latter. One of them was the “MAC Winbox” and “MAC Telnet” capability, with which you can plug your maintenance workstation into the router with an Ethernet cable, fire up Winbox, and configure the router over a pure layer-2 connection. It requires no valid IP configuration, so it works as long as you don’t shut down the port and there is no wild switch ACL in place. Newer routers have USB console ports, and I do carry a console cable in my EDC, but a router’s ability to be configured without a console cable is still a big advantage to me.
Imagine my face today when I learned that Cisco routers (IOS and IOS XE) do support a layer-2 protocol with remote console capability. And the protocol is not new: it dates from the 1980s, IOS has quietly supported it for years, and it has even been enabled by default for years. It is still supported as of IOS XE 17.2.
RouterOS has nothing to do with security, so this article will focus on usability rather than security. All configurations related to security will be marked as optional.
First of all, let’s review all the limitations we have on the OpenVPN client on RouterOS 6.x:
- Supported protocol: TCP (TLS mode) only, no UDP, no static key
- Supported ciphers:
  none, BF-CBC, AES-128-CBC, AES-192-CBC, AES-256-CBC
- Supported digest algorithms:
  none, MD5, SHA1
- Supported authentication methods: username, password and optional client certificate
- Does not support MPLS even if running in TAP mode
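Given these limits, the server side has to be configured to match what the RouterOS client can negotiate. A minimal sketch of a compatible OpenVPN server config follows; the port, file paths, and subnet are placeholders of mine, and username/password checking still needs an auth plugin or script of your choosing:

```
# server.conf -- trimmed to what a RouterOS 6.x client can negotiate
proto tcp-server              # RouterOS supports TCP (TLS mode) only
port 1194
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
topology subnet
server 10.8.0.0 255.255.255.0
cipher AES-256-CBC            # must be one of the ciphers listed above
auth SHA1                     # must be none, MD5, or SHA1
verify-client-cert optional   # client certificate is optional on RouterOS
username-as-common-name
# plus an authentication hook for username/password,
# e.g. auth-user-pass-verify or an auth plugin
```

Note in particular that `proto udp` and `ncp-ciphers`/AEAD-only setups will not work with this client.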
Two things happened in 2017:
Linux finally got native, working MPLS (L3VPN) support, and native VRF support. Three years later, thorough documentation of MPLS configuration on Linux is still largely missing. Recently, after digging through all kinds of code and documentation, I got a standard MPLS core network up and running in my lab. This article is a write-up of my lab setup.
Today I’m starting an English version of my blog, with the goal of translating some of my discoveries and configuration stanzas into English so that more people on the random Internet can find them.