Channelizing Ports: The Case of the Missing 1 Gbps Interfaces on EX4650 Switches

A few weeks ago, while helping deploy a demo Juniper EX4650 aggregation/distribution-layer switch, we ran into a problem where 1 Gbps interfaces simply wouldn’t appear on the EX4650. The issue went something like this:

  • Plugged in 1 Gbps SFP modules
  • Ran show chassis hardware and verified the SFPs were installed
  • Ran show interface ge-0/0/4 to check out the interface, but received “error: device ge-0/0/4 not found”

Steve Brule Looking Confused

Huh? Come again?

Truth be told, we were using Cisco 1 Gbps SFPs on the switch, and since we didn’t have any Juniper SFPs on hand (we never have), we chalked the issue up to this being a newer switch and Juniper not supporting non-Juniper SFPs. So we ordered Juniper SFPs, waited for them to ship, and tried again — same result.

Steve Brule saying 'k'.

Fast forward through an evening of troubleshooting and waiting for TAC to get involved, and eventually a Juniper engineer gave us the solution: we needed to channelize our ports.

Channelizing Ports on Juniper EX4650s

First off, I recommend reading Juniper’s documentation on channelizing ports on EX4650s. It’s not required for what’s below, but it’s worth a read if you want more background.

Trivia time! The EX4650-48Y-8C is the same hardware as the QFX5120-48Y-8C, just with a different OS package!

EX4650 switches come with 48 SFP+ ports capable of speeds up to 25 Gbps but configured by default as 10 Gbps ports. They also come with 8 QSFP+ ports capable of up to 100 Gbps, which can instead operate at 40 Gbps, or be broken out individually into 4 channels of 25 Gbps (100 Gbps to 4x25 Gbps via breakout cables) or 4 channels of 10 Gbps (40 Gbps to 4x10 Gbps via breakout cables). Breaking them out is done with a cable like this (there are multiple options for breakout cables; this is just one):

Breakout cable - 1 40 Gbps QSFP+ module to 4 SFP+ 10 Gbps modules

If you’re not familiar with channelizing ports (I wasn’t until this), channelizing is the process of configuring interface ports to operate at different capacities. The most important thing to note about QFX5120 or EX4650 switches is that the QSFP+ uplink ports are the only ports that perform auto-channelization, a process in which the port automatically switches between 100 Gbps and 40 Gbps based on the module you plug in (I’m not sure whether it would do this with a 10 Gbps module). If you wish to use the 25/10 Gbps breakout cables, you’ll need to disable auto-channelization and manually configure the ports accordingly (that’s outside my scope here, but read the link above for more info).

Why this matters for my issue: on EX4650s, the 48 SFP+ ports do not perform auto-channelization! These ports come configured as 10 Gbps ports by default, and if you wish to use 1 or 25 Gbps modules, you have to configure the switch for those speeds manually. This is exactly why the 1 Gbps modules were not appearing: the ports were not configured to operate in 1G mode!

To configure the ports for the speeds we needed, here is the configuration we used:
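(A rough sketch of that stanza; the port numbers and speed values are illustrative, so adjust them for your own quads and verify the exact speed keywords on your Junos version.)

    chassis {
        fpc 0 {
            pic 0 {
                /* quad starting at port 0: ports 0-3 run at 1 Gbps */
                port 0 {
                    speed 1g;
                }
                /* quad starting at port 4: ports 4-7 run at 1 Gbps */
                port 4 {
                    speed 1g;
                }
                /* quad starting at port 8: ports 8-11 stay at 10 Gbps */
                port 8 {
                    speed 10g;
                }
            }
        }
    }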

Under the chassis > fpc 0 > pic 0 stanza, we configured speeds for the ports. Note that the port configurations above are broken up every four ports; this is because, for the 48 SFP+ ports, port speeds are configured in groups of four (quads), and each quad can be 1, 10, or 25 Gbps. Here’s a visual of the quads:

EX4650 Port Quads - every group of four ports are colored and labeled by the first port - Port 0, port 4, etc.
Each quad is colorized above (port/quad 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44)

Therefore, in order to configure 1 Gbps interfaces on an EX4650 switch (or a QFX5120 for that matter), you need to manually set the configuration speed in the chassis configuration.

Problem solved. And Bob’s your uncle.

Some additional items that I’ve discovered in this process:

  • Because the ports are configured in groups of four, all four ports in a quad will run at the same speed; you cannot configure port 1, for example, at a different speed. This makes me suspect that on the backplane side, each quad is really a 100 Gbps port of some kind, like the uplink ports on the right, but channelized in some way on the back end. Maybe. Not sure. This post makes me question that logic.
  • As with other EX Series switches, non-Juniper copper 1 Gbps modules do not work. Copper modules must be Juniper (or Juniper-coded) SFPs.
  • Non-Juniper 1 Gbps optic modules do work correctly.
  • The eight uplink ports on the right are configured individually, not as quads.
  • Unlike older Juniper equipment, a system reboot is not required for changing port speeds.

Juniper Interface-Range: A Use Case

It’s holiday break! That means I have more discretionary time for topics such as… Juniper interface-ranges! Yay! Right?

Unenthusiastic Fry from Futurama

Ok, not the most exciting topic in the world, but hear me out — I think there’s a pretty good use case for Juniper interface-ranges, a Junos feature that I think really stands out from the other NOSes I’ve had exposure to. What is the use case? Simple: campus switch port configurations.

What is a Juniper interface range?

Let’s start by explaining what it is and is not, because if you’re coming from other vendors, the concept may be foreign and/or you may conflate it with features from others.

An interface-range is a logical construct in a Juniper configuration that aggregates multiple ports and places them under one shared configuration. Here is an example:
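(A sketch; the member interfaces and VLAN name are placeholders.)

    interfaces {
        interface-range workstations201 {
            member ge-0/0/10;
            member ge-0/0/11;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
    }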

Here I have two interfaces that are members of the interface-range ‘workstations201’; they are both access ports and both belong to VLAN 201. Here’s the configuration if I were to separate the interfaces out into individual configurations:
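(The same sketch, expanded out per interface.)

    interfaces {
        ge-0/0/10 {
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
        ge-0/0/11 {
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
    }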

As you can see above, interface configurations are often repeated multiple times if you configure them one by one, but interface-ranges simplify port configuration and reduce the number of repeated lines (in a 48-port switch with every port the same, an interface-range could theoretically remove up to 384 lines). If you need to move a port to a different VLAN or a different configuration, simply delete the interface’s membership from one interface-range and place it in another.

What interface-ranges are not: CLI commands for making mass interface changes. For that, Juniper has the wildcard range command; more info on that here.

Configurations Beyond the Individual Interface

So reducing the number of lines of configuration is great, but so what? If you come from the world of other vendors, you’re already used to the idea of keeping port configurations and their protocols all in the interface’s configuration, so configuring edge ports this way isn’t going to offer you much.

In the Junos world, though, interfaces don’t hold all the configuration for a port. For example, if you want to configure a voice/auxiliary VLAN, that configuration in Junos (well, post-ELS*) lives in the switch-options stanza. Sure, you could configure switch-options so that all access ports are set up for VoIP, like this:
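(A sketch; the VLAN name and forwarding class are placeholders.)

    switch-options {
        voip {
            interface access-ports {
                vlan voice;
                forwarding-class assured-forwarding;
            }
        }
    }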

But what if you have devices that will utilize the voice/auxiliary VLAN feature, but you want them in a separate VLAN from your standard voice/auxiliary VLAN? You would have to configure each port for that specifically, like this:
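(Again, a sketch; the interface names and VLAN name are placeholders.)

    switch-options {
        voip {
            interface ge-0/0/10.0 {
                vlan voice-other;
                forwarding-class assured-forwarding;
            }
            interface ge-0/0/11.0 {
                vlan voice-other;
                forwarding-class assured-forwarding;
            }
        }
    }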

If you need to make a change for an interface, and you’re doing this via the CLI, managing these changes can get daunting.

Let’s take another example where the configuration doesn’t live under the interface: spanning tree. In Junos it’s not enough to just turn on spanning tree on all ports; you need to specify which ports are edge ports (so BPDUs can be blocked) and which ones face downstream switches (no-root-port). Often I have seen Junos configurations for campus gear look like this:
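(A sketch of the kind of minimal config I mean.)

    protocols {
        rstp {
            interface all;
            bpdu-block-on-edge;
        }
    }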

But from my experience, all that does is turn on RSTP, and ‘bpdu-block-on-edge’ does nothing because no ports are designated as ‘edge’. In order to accomplish the above, you need to designate the edge ports and the ports facing downstream switches (no-root-port):
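(A sketch; the interface names are placeholders.)

    protocols {
        rstp {
            interface ge-0/0/10 {
                edge;
            }
            interface ge-0/0/11 {
                edge;
            }
            /* downstream switch uplink */
            interface xe-0/1/0 {
                no-root-port;
            }
            bpdu-block-on-edge;
        }
    }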

Great, so now you’re having to manage port configurations in three stanzas, and you’re adding hundreds of lines of configuration to each switch. WTH, Juniper?

If only there was a way to simplify this…

Where Interface-Range Shines

Interface-range does simplify all of this and reduce the lines of configuration! Instead of blabbing on and on about it, let me just show you how it’s simplified:
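(A sketch with placeholder names; the point is that the range name can be referenced in the other stanzas just like an interface name.)

    interfaces {
        interface-range workstations201 {
            member ge-0/0/10;
            member ge-0/0/11;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
    }
    switch-options {
        voip {
            interface workstations201 {
                vlan voice;
                forwarding-class assured-forwarding;
            }
        }
    }
    protocols {
        rstp {
            interface workstations201 {
                edge;
            }
            bpdu-block-on-edge;
        }
    }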

Using interface-range, we can treat the range as an interface object and apply configurations to it just like we would to individual interfaces. For voice/aux VLANs, we configure only the ports that need them; for spanning tree, we can designate edge and no-root-port more simply.

Another great example: what if you needed to quickly power cycle all IP speakers? Sure, you could set the following in CLI:
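(Something like this; the port range is a placeholder.)

    wildcard range set poe interface ge-0/0/[10-20] disable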

But what if you’re working in a virtual-chassis, and your wildcard range command won’t work to get all the ports in one line? Using interface-range, you can get them all in one line like this:
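(Assuming an interface-range named ‘ip-speakers’ already exists with all the speaker ports as members; the name is a placeholder.)

    set poe interface ip-speakers disable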

Then BAM! You’ve turned off PoE, and you can just roll back the config (or delete it, whatever floats your boat), and the IP speakers reboot.

Interface-range, for me, just seems like a more efficient way of managing port configurations (even if you automate). Here’s an example that brings it all together:
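(Another sketch; the VLAN IDs, names, and members are all placeholders.)

    vlans {
        vlan201 {
            vlan-id 201;
        }
        voice {
            vlan-id 210;
        }
        speakers {
            vlan-id 220;
        }
    }
    interfaces {
        interface-range workstations201 {
            member "ge-0/0/[0-23]";
            member "ge-1/0/[0-23]";
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members vlan201;
                    }
                }
            }
        }
        interface-range ip-speakers {
            member ge-0/0/24;
            member ge-1/0/24;
            unit 0 {
                family ethernet-switching {
                    interface-mode access;
                    vlan {
                        members speakers;
                    }
                }
            }
        }
    }
    switch-options {
        voip {
            interface workstations201 {
                vlan voice;
                forwarding-class assured-forwarding;
            }
        }
    }
    protocols {
        rstp {
            interface workstations201 {
                edge;
            }
            interface ip-speakers {
                edge;
            }
            /* downstream switch uplink */
            interface xe-0/1/0 {
                no-root-port;
            }
            bpdu-block-on-edge;
        }
    }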

Why Not Use Junos Groups Instead of Interface-Range?

From my experience, and from what I can tell from working on campus switch configurations, groups don’t seem to offer the same simplicity for port configurations that interface-range does, and, from what I can gather, I cannot configure VLANs for interfaces and interface modes with the group feature like I can with interface-range. When dealing with hundreds of campus switches and VC stacks, with all the port additions and changes, interface-range just seems like the best approach for campus switching.

That said, once you get into the distribution and core layers, I’m less certain about the applicability of interface-range; groups seem better suited there.

Edit (20191124.1945) – For a perspective from some people with more experience with apply-groups than myself, check out this post I made on the Juniper subreddit. There’s even examples for how to approach this from an apply-groups model. That said, I still think interface-range is a better approach for campus switching simply for the operational benefits you get with using them.

Caveats/Miscellaneous

There are a few caveats to keep in mind with interface-range, so I’ll note them quickly:

  • You can’t stage interface-ranges. An interface-range needs a member in order to exist, so you can’t preconfigure one (although perhaps you could comment it out), and if your interface-range loses all its members, you’ll need to either delete it or comment it out. Edit (20191124.1945) – This isn’t entirely true; one comment on Reddit suggests creating fake interfaces to stage them, but I’m not entirely sure whether that has side effects.
  • There are two ways to add members: member or member-range. I prefer to avoid member-range because if an interface in the range changes, you have to break up the range (IIRC). That said, if ports are static, you could use member-range or member (wildcard) statements to configure members; see the quick sketch after this list.
  • I have had the CLI bark, but still commit, when interfaces were blank/missing but had configuration in the interface-range. I should verify this, but that’s what I recall as of this writing.
  • Network admins/engineers who come from other vendors can get confused by interface-ranges, especially if you mix interface-ranges and standalone interface configurations. They may think something is missing and so forth; just make sure to brief them on it.
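(A quick sketch showing the member forms side by side; the port numbers are placeholders.)

    interface-range workstations201 {
        member ge-0/0/10;
        member "ge-0/0/[12-15]";
        member-range ge-0/0/16 to ge-0/0/23;
    }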

Final Thoughts

Junos has a lot of ways of accomplishing what you want, so what I’ve demonstrated here is just one way of handling interface configurations; but from my experience, it seems like the more efficient way for campus switches.

Please let me know if there are any mistakes in here or if you have a different way of using interface-ranges. I’d love to be corrected or to see other uses!

Thanks for reading.

* Enhanced Layer 2 Switching. Junos changed how configurations were done a few years back. Follow the link for more info.

Juniper EX3400: How to Recover from PoE Firmware Upgrade Failure

Updated 20200117. See below.
Updated 20200308. I might have a path for upgrade success. Maybe.

Did you know Juniper EX switches have PoE firmware updates that need to be applied?

Chelsea Lately - Great question. I had no idea.

Well, I didn’t until about a year ago, when I did an upgrade and was checking on PoE power. Looking at the controller info from show poe controller, I noticed the following:

Juniper poe firmware available

Huh. Ok. Well, I’ve got an eight-unit stack here, and Juniper EX software upgrades are usually pretty solid, so let’s upgrade it — and it goes off without a hitch.

Fast forward nine months, and I’m running into strange issues with PoE and Mercury door controllers, particularly model ‘MRE62E’. Basically, the Juniper switches won’t provide power to this model, though the older MRE52s had no problem. Checking the firmware version using show chassis firmware detail, I noticed that the switch had the older 1.x firmware and not the new 2.x.

PoE firmware 1.6.1.21.1

 

Alrighty then — let’s upgrade this stack. I upgrade the software to the latest JTAC-recommended version (staying in 15.x), then upgrade the PoE firmware — no problem. The door controller is now getting power, and I see a MAC address. Everything is hunky dory.

Now let’s upgrade this other stack.

No problems on EX software upgrade. Great. Now upgrade PoE firmware…

Ten minutes later, I get the following on the terminal:

Magic Thread Message

Of note, and the thing that made me panic: out of the nine switches in the stack, only one came back online. Checking the firmware versions, I see the following:

Various PoE firmware versions, some missing, some 0.0.0.0.0, only one 2.x

Okay… F***. Well, let’s reboot the stack; perhaps a reboot is needed*. After reboot, I get the following:

PoE Device Fail on FPC 8. All but FPC 2 are missing. WTF.

Guy shaking head mouthing WTF

In the past, when a PoE firmware upgrade went sideways (between when I first learned about them and now), I had no recourse but to RMA the switch. Well, in this case I don’t have eight spare switches to fill in temporarily while I wait for an RMA! WTF am I going to do?!

Solving the PoE Firmware Upgrade Failure

If you’re in the same situation as I was in, take a deep breath — you’re not dead in the water.

There are three scenarios for a PoE firmware upgrade failure that I’ve encountered, and I have a solution for each:

  • PoE Firmware Failure #1 – After the firmware upgrade, you see a mixed result of firmware versions: some 0.0.0.0.0, some correct (2.1.1.19.3**), and some missing/blank (see the picture above showing mixed/missing versions)
  • PoE Firmware Failure #2 – Perhaps you did as I did and rebooted, and the PoE controller output shows a member with the message DEVICE_FAILED (see above)
  • PoE Firmware Failure #3 – The solution for #2 doesn’t work and nothing you do gets the PoE controller to upgrade. You may also have the process hang during the download, or, if the controller is still at DEVICE_FAILED and you try to upgrade, you get the message Upgrade in progress, even after a reboot.

For all of these solutions, here’s one tip about the PoE upgrade procedure until Juniper fixes the process for upgrading all members at once:

  • Upgrade one FPC at a time.

Solution for PoE Firmware Failure #1

If you encounter this failure, DON’T REBOOT THE STACK. You’ll make your life harder if you do.

Next, Juniper TAC (finally) has a solution — and it requires remote or on-site hands. If you’re going on-site or working with someone remotely, get yourself a cup of coffee (or beverage of choice) and some podcasts lined up, because you’re going to be at this a while (~10 minutes for each switch/FPC).

From their site, the solution is the following (with my own notes):

  1. Power cycle the affected FPC (re-seat the power cord). Do not perform a soft reboot.
  2. After the FPC joins the VC or the standalone device reboots, execute one of the following commands in operational mode:
    request system firmware upgrade poe fpc-slot <slot>

    or
    Note: This is the method I used
    request system firmware upgrade poe fpc-slot 1 file /usr/libdata/poe_latest.s19
    JTAC Note: You need to change the fpc-slot number accordingly. Also, it is recommended that you push the PoE code one by one instead of adding all members in the virtual-chassis setup. (Emphasis mine)
  3. After the above command is executed, the FPC should automatically reboot. If not, reboot from the Command Line Interface.
    Note: Be patient and wait. No, seriously…wait. It takes a while. If you need to reboot, you’re rebooting the whole unit AFAIK:
    request system reboot
  4. After the FPC is online, check the PoE version with the show chassis firmware detail command. The PoE version should be the latest version (2.1.1.19.3) after the above steps are completed.
  5. If the version is correct, the PoE devices should work.
  6. Repeat the above steps to upgrade the PoE versions on other FPCs in the virtual-chassis setup.

One thing to note: while it’s doing its upgrade, you can see the progress with show poe controller, but at some point it will hang at 95%, then disappear, then come back, and then the process will be complete — in other words…WAIT, unless you want to try out the solution for failure #2. 😆

Solution for PoE Firmware Failure #2

In this scenario, you rebooted the stack and something failed. The following is similar to solution #1, but the failed PoE controller basically requires you to upgrade it twice. The steps:

  1. Execute the following command to reload the firmware on the FPC:
    request system firmware upgrade poe fpc-slot 1 file /usr/libdata/poe_latest.s19
    Note: You need to change the fpc-slot number accordingly.
  2. The PoE controller will disappear when you run show poe controller, then come back and start upgrading like this:
    PoE firmware upgrading
  3. After the firmware upgrade completes, the firmware version will likely still be incorrect (it always was for me). Power cycle the affected FPC (re-seat the power cord). Do not perform a soft reboot.
  4. After the FPC joins the VC or the standalone device reboots, execute one of the following commands in operational mode:
    request system firmware upgrade poe fpc-slot 1 file /usr/libdata/poe_latest.s19
    JTAC Note: You need to change the fpc-slot number accordingly. Also, it is recommended that you push the PoE code one by one instead of adding all members in the virtual-chassis setup. (Emphasis mine)
  5. After the above command is executed, the FPC should automatically reboot. If not, reboot from the Command Line Interface.
    Note: Be patient and wait. No, seriously…wait. It takes a while. If you need to reboot, you’re rebooting the whole unit AFAIK: request system reboot
  6. After the FPC is online, check the PoE version with the show chassis firmware detail command. The PoE version should be the latest version (2.1.1.19.3) after the above steps are completed.
  7. If the version is correct, the PoE devices should look like this:
    Successful PoE firmware upgrade
  8. Repeat the above steps to upgrade the PoE versions on other FPCs in the virtual-chassis setup.

Just like solution #1, note that while it’s doing its upgrade you can see the progress with show poe controller, but at some point it will hang at 95%, then disappear, then come back, and then the process will be complete — in other words…WAIT! You don’t really want to re-apply this whole process, do you?

Solution for PoE Firmware Failure #3 (Update 20200117)

I recently had some more issues, and solution #2 just wasn’t doing the trick, so I offer solution #3, which I’ve had success with, though there’s a caveat/rabbit hole that may come with it. This is the nuke-it-from-orbit approach for when you want to avoid an RMA (or have no choice).

The gist of it: disconnect the switch from the VC (if connected), perform an OAM recovery, zeroize and reboot the switch, then perform the firmware upgrade.

From my experience, there are a few different scenarios that you’ll encounter when you need to use this method:

  • During the firmware upgrade, the process just hangs/stalls. You’ll run show poe controller and at some point the download hangs/stalls like this: Terminal shows download hangs at 50%
  • You receive a DEVICE_FAIL for any reason and nothing resolves it, like this: PoE Device Fail on FPC 8. All but FPC 2 are missing.
  • Your switch is stuck upgrading the firmware. No matter what you run, the switch displays the message Upgrade in progress. In this scenario the switch just thinks it’s still in the process of upgrading, but no matter how long you wait (or if you can’t wait some indefinite period of time for it to finish), the switch won’t upgrade the firmware.

What we need to do at this point is get the switch to a fresh state so that we can upgrade the PoE controller; and believe it or not, this is actually one of the awesome things about Juniper equipment: when one component of the switch is hosed, the entire switch isn’t hosed and can still function normally. For instance, I have had a switch with a failed PoE controller that still operated like a non-PoE switch without issue; i.e., Juniper allows components to be recoverable.

Here’s the solution I came up with:

  • Step 1: Zeroize the switch: request system zeroize
    In this step, we’re just starting fresh and clearing out the configuration, which takes about 10 minutes and then reboots. If the switch still thinks there’s an upgrade in progress for the PoE controller, this clears it out. It’s possible this may fail due to storage issues; if that’s the case, go to the next step, otherwise skip to the third bullet.
  • If step 1 fails: Perform an OAM recovery: request system recover oam-volume
    This is an optional step; I’ve had to do it when zeroize failed. If step 1 fails, run this first, then try the zeroize again. It takes about 10 minutes as it copies the OAM partition and then compresses it for the Junos volume.
    Caveat: EX3400s, even in 18.2 land, still have storage issues sometimes. I have one switch that couldn’t recover from oam-volume, and I’m not sure why. I’ll update this once I have a solution.
  • After the switch reboots, the controller will still come up as failed when you run show poe controller. Go ahead and run the upgrade again:
    request system firmware upgrade poe fpc-slot 1 file /usr/libdata/poe_latest.s19
    It should behave like this after running the command: PoE upgrade process for Juniper
  • The switch should behave normally at this point and the upgrade should proceed normally. If it doesn’t, you’ll likely need to replace the switch (or live without PoE).

And a reminder, just like solutions #1 and #2: while it’s doing its upgrade you can see the progress with show poe controller, but at some point it will hang at 95%, then disappear, then come back, and then the process will be complete — in other words…WAIT! You don’t really want to re-apply this whole process, do you?

Final Thoughts

Here’s the kicker for me: I’ve had this work just fine on stacks and on single switches, and fail on stacks and on single switches — I can’t find the common denominator. Perhaps there’s a hardware build that hits this more than others, but I can’t figure it out. The official documentation doesn’t hint at a best practice for this (other than maintenance windows), so I’m uncertain of the best approach.

(Update) Juniper does have an official bug report for this, which is apparently fixed in 15.1X53-D592, but I had the issue on 18.2R3, so I’m not convinced it’s actually resolved.

Here are some ideas I have for changing my PoE firmware upgrade procedure (unsure if they will help):

  • Turning off PoE on all interfaces
  • Upgrading one at a time.
  • Trying an earlier version of the JTAC software, then going to the latest recommended. Example: I had no problems with 15.1X53-D59.4 or 15.1X53-D590, but the sample size for determining that is small (only two stacks attempted).
  • Update: I can’t find any rhyme or reason, TBH. I’ve had it fail multiple ways, so not sure the above will help.
  • Update 2: I have had some success with the following (but I don’t feel that confident about it yet):
    • Use the 18.2 branch
    • Upgrade one at a time
    • Waiting for a period of time after a software upgrade and reboot. Don’t get upgrade-happy. Give the hardware some time to get back up and going.
    • Cross your fingers. And legs. On a full moon.
  • Update 2: If you have a controller showing DEVICE FAIL, I’ve had success fixing it just by running:
    request system firmware upgrade poe fpc-slot 1 file /usr/libdata/poe_latest.s19 (change fpc-slot # accordingly)

Time will tell.

Hope this helps! If it doesn’t I’d love to know the different experiences others have. Please share if you’ve had success or failures with any of this!

* I swear I saw a message that a reboot is required, but I can’t confirm this (I didn’t screencap it)

** There is a version 3.4.8.0.26, but that’s on the 18.x software version line, and it requires a whole different set of upgrade procedures. This is outside the scope of this post.

ztpgenerator: A ZTP Python Script for Juniper Devices (Maybe more?)

This is a script I’ve been working on for simplifying the ZTP process for Juniper switches.

https://github.com/consentfactory/ztpgenerator

What does it do?

  • Updates ISC-DHCP for ZTP devices (creates DHCP reservations)
  • Creates Juniper configurations based on Jinja2 templates
  • Can also create virtual chassis configurations if desired

There’s some pre-work that needs to be done for setup, but it’s fairly simple to deploy and doesn’t require a lot.

All of the documentation is on GitHub.

Hoping this helps others out there.