Juniper Interface-Range: A Use Case

It’s holiday break! That means I have more discretionary time for topics such as… Juniper interface-ranges! Yay! Right?

Unenthusiastic Fry from Futurama

Ok, not the most exciting topic in the world, but hear me out — I think there’s a pretty good use case for Juniper interface-ranges, a Junos feature that really stands out from the other NOSes I’ve had exposure to. What is the use case? Simple: campus switch port configurations.

What is a Juniper interface range?

Let’s start by explaining what it is and is not, because if you’re coming from other vendors, the concept may be foreign and/or you may conflate it with features from others.

An interface-range is a logical construct in a Juniper configuration that groups multiple interfaces under a single, shared configuration. Here is an example:
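
(A sketch of such a range; the member ports and the VLAN name are placeholders.)

    interfaces {
        interface-range workstations201 {
            member ge-0/0/10;
            member ge-0/0/11;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
    }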

Here I have two interfaces that are members of the interface-range ‘workstations201’; they both are access ports and both belong in VLAN 201. Here’s the configuration if I were to separate out the interfaces into separate configurations:
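
(Same placeholder names, broken out per interface.)

    interfaces {
        ge-0/0/10 {
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
        ge-0/0/11 {
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
    }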

As you can see above, interface configurations are often repeated multiple times when you configure ports one by one, but interface-ranges simplify port configurations and reduce the number of lines of these repeated configurations (in a 48-port switch with every port the same, an interface-range could theoretically remove up to 384 lines). If you need to move a port to a different VLAN or a different configuration, simply delete the interface’s membership from one interface-range and place it in another.

What interface-ranges are not: CLI commands for making mass interface changes. For that, Juniper has the wildcard range command; for more info on that, see here.

Configurations Beyond the Individual Interface

So reducing the number of lines of configuration is great, but so what? If you come from the world of other vendors, you’re already used to the idea of keeping port configurations and their protocols all in the interface’s configuration, so configuring edge ports this way isn’t going to offer you much.

In the Junos world, though, interfaces don’t hold all of the configuration for a port. For example, if you want to configure a voice/auxiliary VLAN, that configuration in Junos (well, post-ELS change*) lives in the switch-options stanza. Sure, you could configure switch-options so that all access ports are set up for VoIP, like this:
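
(A sketch; the voice VLAN name and forwarding-class are placeholders.)

    switch-options {
        voip {
            interface access-ports {
                vlan voice;
                forwarding-class assured-forwarding;
            }
        }
    }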

But what if you have devices that will utilize the voice/auxiliary VLAN mechanism, but you want them in a separate VLAN from your usual voice/auxiliary VLAN? You would have to configure each port specifically for that, like this:
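
(Again a sketch, with placeholder ports and VLAN names; note the second port gets its own VLAN.)

    switch-options {
        voip {
            interface ge-0/0/10.0 {
                vlan voice;
                forwarding-class assured-forwarding;
            }
            interface ge-0/0/24.0 {
                vlan speakers;
                forwarding-class assured-forwarding;
            }
        }
    }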

If you need to make a change to an interface, and you’re doing this via the CLI, managing these changes can get daunting.

Let’s take another place where configuration isn’t located with the interface: spanning-tree. It’s not enough in Junos to just turn on spanning-tree on all ports; you need to specify which ports are edge (to block BPDUs) and which ones face a downstream switch (and should never become the root port). Often I have seen Junos configurations for campus gear look like this:
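
(Something like this:)

    protocols {
        rstp {
            bpdu-block-on-edge;
        }
    }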

But from my experience, all that does is turn on RSTP, and ‘bpdu-block-on-edge’ does nothing because no ports are designated as ‘edge’. In order to accomplish the above, you need to designate edge ports and ports for downstream switches (no-root-port):
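
(A sketch with placeholder ports; ge-0/0/47 here stands in for the downstream-switch port.)

    protocols {
        rstp {
            bpdu-block-on-edge;
            interface ge-0/0/10 {
                edge;
            }
            interface ge-0/0/11 {
                edge;
            }
            interface ge-0/0/47 {
                no-root-port;
            }
        }
    }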

Great, so now you’re having to manage port configurations in three stanzas, and you’re adding hundreds of lines of configuration to each switch. WTH, Juniper?

If only there was a way to simplify this…

Where Interface-Range Shines

Interface-range does simplify all of this and reduces the lines of configuration! Instead of blabbing on and on about it, let me just show you exactly how this is simplified:
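
(A sketch of the idea, assuming the access ports are already members of an interface-range named 'workstations201' and the downstream-switch ports are in one named 'downstream-switches'; both names are placeholders.)

    switch-options {
        voip {
            interface workstations201 {
                vlan voice;
                forwarding-class assured-forwarding;
            }
        }
    }
    protocols {
        rstp {
            bpdu-block-on-edge;
            interface workstations201 {
                edge;
            }
            interface downstream-switches {
                no-root-port;
            }
        }
    }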

Using interface-range, we can treat the interface-range as an interface object and apply configurations to it just like you would individual interfaces. For ports needing voice/aux VLANs, we can configure only the ports needing it; for spanning-tree, we can designate edge and no-root-port more simply.

Another great example: what if you needed to quickly power cycle all IP speakers? Sure, you could set the following in CLI:
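
(Something like this, with a placeholder port range:)

    wildcard range set poe interface ge-0/0/[0-47] disable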

But what if you’re working in a virtual-chassis, and your wildcard range command won’t work to get all the ports in one line? Using interface-range, you can get them all in one line like this:
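
(Assuming the speakers are all members of an interface-range, say 'ip-speakers':)

    set poe interface ip-speakers disable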

Then BAM! You’ve turned off PoE, and you can just roll back the config (or delete it, whatever floats your boat), and the IP speakers are rebooted.

Interface-range, for me, just seems like a more efficient way of managing port configurations (even if you automate). Here’s an example that brings it all together:
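
(A fuller sketch; all of the names, VLANs, and port numbers here are placeholders.)

    interfaces {
        /* standard workstation access ports */
        interface-range workstations201 {
            member-range ge-0/0/0 to ge-0/0/23;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
        /* IP speakers in their own VLAN */
        interface-range ip-speakers {
            member ge-0/0/24;
            member ge-0/0/25;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members speakers;
                    }
                }
            }
        }
        /* trunk down to another access switch */
        interface-range downstream-switches {
            member ge-0/0/47;
            unit 0 {
                family ethernet-switching {
                    interface-mode trunk;
                    vlan {
                        members all;
                    }
                }
            }
        }
    }
    switch-options {
        voip {
            interface workstations201 {
                vlan voice;
                forwarding-class assured-forwarding;
            }
        }
    }
    protocols {
        rstp {
            bpdu-block-on-edge;
            interface workstations201 {
                edge;
            }
            interface ip-speakers {
                edge;
            }
            interface downstream-switches {
                no-root-port;
            }
        }
    }

Need a new workstation port in VLAN 201? Add one member line to 'workstations201' and the VLAN, VoIP, and spanning-tree settings all come along for free.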

Why Not Use Junos Groups Instead of Interface-Range?

From my experience, and from what I can tell from working on campus switch configurations, groups don’t seem to offer the simplicity for port configurations that interface-range does, and, from all I can gather, I cannot configure VLANs for interfaces and interface modes with the group feature like I can with interface-range. When dealing with hundreds of campus switches and VC stacks, with all the port additions and changes, interface-range just seems like the best approach for campus switching.

That said, once you get into the distribution and core layers, I’m less certain about the applicability of interface-range; groups seem better suited there.

Edit (20191124.1945) – For a perspective from some people with more experience with apply-groups than myself, check out this post I made on the Juniper subreddit. There are even examples of how to approach this from an apply-groups model. That said, I still think interface-range is a better approach for campus switching simply for the operational benefits you get from using them.

Caveats/Miscellaneous

There are a few caveats to keep in mind with interface-range, so I’ll note them quickly:

  • You can’t stage interface-ranges. An interface-range needs a member in order to exist, so you can’t preconfigure them (although perhaps you could comment them out), and if your interface-range loses all its members, you’ll need to either delete it or comment it out. Edit (20191124.1945) – This isn’t entirely true; one comment on Reddit suggests creating fake interfaces to stage them, but I’m not entirely sure whether that has any side effects.
  • There are two ways to add members: member or member-range. I prefer to avoid member-range because if an interface in the range changes, you have to break up the range (IIRC). That said, if ports are static, you could use member-range or member (wildcard) statements to configure members (see the sketch after this list).
  • I have had the CLI bark, but still commit, when interfaces referenced in an interface-range were blank/missing from the configuration. I should verify this, but as of the time of writing, this is what I recall.
  • Network admins/engineers who come from other vendors can get confused by interface-ranges, especially if you mix interface-ranges and standalone interface configurations. They may think something is missing, and so forth; just make sure to brief them on it.
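
For reference, here’s roughly what the different member statements look like (ports and names are placeholders):

    interfaces {
        interface-range workstations201 {
            /* contiguous block of ports */
            member-range ge-0/0/0 to ge-0/0/23;
            /* plus a one-off port */
            member ge-0/0/30;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
    }

The member-range statement covers the contiguous block, while the individual member statement picks up the one-off port.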

Final Thoughts

Junos has a lot of ways of accomplishing what you want, so what I’ve demonstrated here is just one way of handling interface configurations; but from my experience, this seems like the more efficient way for campus switches.

Please let me know if there are any mistakes in here or if you have a different way of using interface-ranges. I’d love to be corrected or to see other uses!

Thanks for reading.

* Enhanced Layer 2 Software. Junos changed how switching configurations were done a few years back. Follow the link for more info.

Juniper EX3400: How to Recover from PoE Firmware Upgrade Failure

Did you know Juniper EX switches have PoE firmware updates that need to be applied?

Chelsea Lately - Great question. I had no idea.

Well, I didn’t until about a year ago when I did an upgrade and was checking on PoE power. Looking at the controller info from show poe controller, I noticed the following:

Juniper poe firmware available

Huh. Ok. Well, I’ve got an eight-unit stack here, and the Juniper EX software upgrade is usually pretty solid, so let’s upgrade it — and it goes off without a hitch.

Fast forward nine months, and I’m running into strange issues with PoE and Mercury door controllers, particularly model ‘MRE62E’. Basically, the Juniper switches won’t provide power to this model, but the older MRE52s had no problem. Checking the firmware version using show chassis firmware detail, I noticed that the switch had the older 1.x firmware and not the new 2.x.

PoE firmware 1.6.1.21.1


Alrighty then — let’s upgrade this stack. I upgrade the software to the latest JTAC recommended version (staying on 15.x), then upgrade the PoE firmware — no problem. The door controller is now getting power, and I see a MAC address. Everything is hunky-dory.

Now let’s upgrade this other stack.

No problems on EX software upgrade. Great. Now upgrade PoE firmware…

Ten minutes later, I get the following on the terminal:

Magic Thread Message

Of note, and the thing that made me panic, was that out of nine switches in the stack, only one came back online. Checking the firmware versions, I see the following:

Various PoE firmware versions, some missing, some 0.0.0.0.0, only one 2.x

Okay… F***. Well, let’s reboot the stack; perhaps a reboot is needed*. After reboot, I get the following:

PoE Device Fail on FPC 8. All but FPC 2 are missing. WTF.

Guy shaking head mouthing WTF

In the past, when a PoE firmware upgrade has failed on me (between when I first learned about these upgrades and now), I had no recourse but to RMA the switch. Well, in this case, I don’t have eight spare switches to fill in temporarily while I wait for an RMA! WTF am I going to do?!

Solving the PoE Firmware Upgrade Failure

If you’re in the same situation as I was in, take a deep breath — you’re not dead in the water.

There are two scenarios for a PoE firmware upgrade failure that I’ve encountered, and I have a solution for both:

  • PoE Firmware Failure #1 – After the firmware upgrade, you see a mix of firmware versions: some 0.0.0.0.0, some correct (2.1.1.19.3**), and some missing/blank (see the picture above showing mixed/missing versions)
  • PoE Firmware Failure #2 – Perhaps you did as I did and rebooted, and the PoE controller now shows an FPC with the message ‘DEVICE_FAILED’ (see above)

Solution for PoE Firmware Failure #1

If you encounter this failure, DON’T REBOOT THE STACK. You’ll make your life harder if you do.

Juniper TAC (finally) has a solution — and it requires remote/on-site hands. If you’re going on-site or working with someone remotely, get yourself a cup of coffee (or beverage of choice) and some podcasts lined up, because you’re going to be at this a while (~10 minutes for each switch/FPC).

From their site, the solution is the following (with my own notes):

  1. Power cycle the affected FPC (re-seat the power cord). Do not perform a soft reboot.
  2. After the FPC joins the VC or the standalone device reboots, execute one of the following commands in operational mode:

    OR

    JTAC Note: You need to change the fpc-slot number accordingly. Also, it is recommended that you push the PoE code one by one instead of adding all members in the virtual-chassis setup. (Emphasis mine)
  3. After the above command is executed, the FPC should automatically reboot. If not, reboot from the Command Line Interface.
    Note: Be patient and wait. No, seriously…wait. It takes a while. If you need to reboot, you’re rebooting the whole unit, AFAIK.
  4. After the FPC is online, check the PoE version with the show chassis firmware detail command. The PoE version should be the latest version (2.1.1.19.3) after the above steps are completed.
  5. If the version is correct, the PoE devices should work.
  6. Repeat the above steps to upgrade the PoE versions on other FPCs in the virtual-chassis setup.
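
For reference, the per-FPC upgrade command in step 2 is along these lines (the slot number is just an example; double-check the exact syntax against the JTAC article):

    request system firmware upgrade poe fpc-slot 3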

One thing to note: while it’s doing its upgrade, you can see the progress with show poe controller, but at some point it will hang at 95%, then disappear, then come back, and then the process will be complete — in other words…WAIT, unless you want to try out the solution for failure #2. 😆

Solution for PoE Firmware Failure #2

In this scenario, you rebooted the stack and something failed. The following is similar to solution #1, but the failed PoE controller basically has to be upgraded twice. The steps:

  1. Execute the following command to reload the firmware on the FPC:

    Note: You need to change the fpc-slot number accordingly.
    The PoE controller will disappear when you run show poe controller, then come back and start upgrading like this:
    PoE firmware upgrading
  2. After the firmware upgrade completes, the firmware will likely be incorrect (it always was for me). Power cycle the affected FPC (re-seat the power cord). Do not perform a soft reboot.
  3. After the FPC joins the VC or the standalone device reboots, execute one of the following commands in operational mode:

    JTAC Note: You need to change the fpc-slot number accordingly. Also, it is recommended that you push the PoE code one by one instead of adding all members in the virtual-chassis setup. (Emphasis mine)
  4. After the above command is executed, the FPC should automatically reboot. If not, reboot from the Command Line Interface.
    Note: Be patient and wait. No, seriously…wait. It takes a while. If you need to reboot, you’re rebooting the whole unit, AFAIK.
  5. After the FPC is online, check the PoE version with the show chassis firmware detail command. The PoE version should be the latest version (2.1.1.19.3) after the above steps are completed.
  6. If the version is correct, the PoE devices should look like this:
    Successful PoE firmware upgrade
  7. Repeat the above steps to upgrade the PoE versions on other FPCs in the virtual-chassis setup.

Just like with solution #1, one thing to note is that while it’s doing its upgrade you can see the progress with show poe controller, but at some point it will hang at 95%, then disappear, then come back, and then the process will be complete — in other words…WAIT! You don’t really want to re-apply this whole process, do you?

Final Thoughts

Here’s the kicker for me: I’ve had this work just fine on stacks and on single switches, and fail on stacks and on single switches — I can’t find the common denominator. Perhaps there’s a hardware build that has this problem more than others, but I can’t figure it out. The official documentation doesn’t hint at a best practice for this (other than maintenance hours), so I’m uncertain on the best approach.

Here are some ideas I have for changing my PoE firmware upgrade procedure (unsure if this will help):

  • Turning off PoE on all interfaces
  • Upgrading one at a time.
  • Trying an earlier version of the JTAC software, then going to the latest recommended. Example: I had no problems with 15.1X53-D59.4 or 15.1X53-D590, but the sample size for determining that is small (only two stacks attempted).

Time will tell.

Hope this helps! If it doesn’t, I’d love to hear about the different experiences others have had.

* I swear I saw a message that a reboot is required, but I can’t confirm this (I didn’t screencap it)

** There is a version 3.4.8.0.26, but that’s on the 18.x software version line, and it requires a whole different set of upgrade procedures. This is outside the scope of this post.