Monday, 29 June 2015

Portable AP Power for the modern Wi-Fi Professional

There have been various methods used to power APs over the years when performing a wireless survey. One of the first was very much home-brew: a 12V car battery plus, if you were powering most indoor Cisco APs, an inverter to reach the required 48 volt input. I found this to be a messy, heavy, unscalable solution. You could find a more elegant alternative, such as a sealed motorbike battery, but this was still far from fantastic. Another option was a UPS; this gives you a more complete solution, as the UPS includes the inverter, however you still need to connect either a PoE injector or a power brick to power your AP. On top of that, the weight and size of the UPS are a big concern. Sure, if all your work is in your local area or accessible by car, transport to the site is less of an issue, but moving this equipment around with the AP once you’re on-site and performing the survey is another matter. Various other methods have been used by people over the years but I am yet to see any as elegant as the solution I want to detail in this post.

An all-in-one solution came along in the form of the Terrawave PoE Battery. This has been a favourite amongst WLAN professionals for many years, not least because it is bundled with the popular Caster Tray survey rig. You get a fairly compact piece of equipment, a built-in inverter and a convenient RJ45 output. I’ve been happy with this battery for many years but I feel it’s had its day. It has a special output for Cisco’s first 802.11n AP (the 1250), which shows you it has had a good run, but like the 1250, maybe it is time to put it to rest.

The main reason I started looking for alternatives to the Terrawave was that it is simply too big to travel with in many cases. As I’ve gathered more Wi-Fi tools, the Terrawave has been squeezed out of my survey case; it takes up far too much room and adds too much weight. In addition, there was always a question mark over flying with it. It contains a sealed lead-acid battery and, whilst I put it in the same category as a wheelchair battery (which airlines typically have no issues with), I know at some point I will lose it to airport security – you know how they love random boxes of electronics these days, after all! The last problem is that, due to its size and weight, only in certain circumstances could I justify bringing a second Terrawave. When travelling for Wi-Fi work I like as much redundancy as possible, which a second battery affords. When I did travel with a second Terrawave I had redundancy against any issues with one battery, as well as the ability to survey for extended days (12+ hours) that a single battery couldn’t offer.

My requirements for a Terrawave replacement were as follows:
  1. Small
  2. Lightweight
  3. Airline-friendly
  4. Ability to power an AP for a similar period to the Terrawave (quoted as 6 – 8 hrs depending on the number of radios enabled)
  5. Ability to power a Cisco AP with its non-standard voltage requirements
Fortunately I found a solution in the Energizer XP18000AB external laptop battery – thanks to @keithparsons for pointing me towards it. As I intend this post to be short, I am going to cut to the main highlights of the battery. The main purpose of this post is to demonstrate how this battery can power a Cisco AP (given the non-standard voltage input requirements), so if you need more information on the battery generally, take a look at the numerous online reviews or get in touch. If you just want to see the uptime achieved on both a Cisco and an Aruba AP, scroll to the bottom of the bullet points:

  • It has a USB output, a 9-12V DC output and a 16-20V DC output;
  • It offers additional flexibility over the Terrawave in its ability to power laptops and other devices. Whilst I survey with a laptop that can see through a full day of surveying (assuming a lunch-break charge), an extended day will require more power. Being able to throw one of these into my backpack and tether my laptop to it in order to finish, for example, the last hour of a survey is mighty useful (see the comments on this post);
  • It comes with adapters (referred to as tips) for most common laptops and devices but additional tips can be purchased from Energizer if required;
  • It can power an Apple laptop, though unfortunately this requires a few more adapters, one of which can only be bought second-hand and isn’t particularly cheap;
  • I won’t list the specific dimensions as you can compare them yourself; however, as you can see in the photo below, I can easily fit two of these in my survey kit, whereas I would have to ‘vacate’ a large number of tools in order to fit the Terrawave;

Snug as a... pair of batteries... in a survey case!


  • When it comes to improving on the Terrawave when it comes time to fly, it is a mixed bag. Looks-wise, it is obviously much more likely to fly under the radar; however, as it uses lithium batteries, that presents another set of challenges. Airlines, and specifically the International Air Transport Association (IATA), have started to toughen restrictions on lithium batteries. The specific issue with the Energizer is whether it is considered to be equipment or a spare. If it is considered equipment then it can be carried in checked luggage, whereas if it is considered a spare then it must be packed in carry-on luggage. The major issue is that the spares the IATA refers to are from an era when a spare lithium battery was commonly a spare internal battery for a laptop, complete with exposed connectors. A more recent example would be a spare (internal) mobile phone battery, once again with exposed connectors. It seems clear that an external lithium battery with no ability to power itself on automatically, nor any exposed connectors, should be classed as equipment and not their traditional interpretation of a spare battery, but that doesn’t appear to be the case, at least with some of the airlines I’ve looked at. Note that some airlines I looked at actually had stricter regulations than those dictated by the IATA, though most were in line with the IATA.
  • All of the APs I’ve looked at besides Cisco’s (*sigh*) can be powered via 12V. This includes Aerohive, AirTight, Aruba, Meraki, Motorola and Ruckus. With any luck, you’ll be able to power your non-Cisco AP out of the box with the included tips. At worst, you may need to order an additional tip from Energizer.
  • In order to power the Cisco APs I bought a Tycon TP-DCDC-2448GD-HP PoE injector. Yes, this does mean I lose the ‘all-in-one’ solution offered by the Terrawave, but the numerous other advantages make up for that shortcoming. This PoE injector is unusual in that it steps up from 24V (more correctly, 18 – 36V) to the 56V used to provide 802.3at power. If you only require 802.3af or 100 Mbps support there are other models available, but I think it’s worth paying the extra because I know at some point the Gigabit, 802.3at support will come in handy. Another alternative model, the Tycon TP-DCDC-1248GD-HP, takes a 12V input and could also be used with the Energizer. The PoE injector uses a screw terminal for input, so I bought a DC power cable with a 5.5mm (OD) / 2.1mm (ID) connector at one end and bare wires at the other. The DC connector plugs into one of the Energizer’s included DC power cables and I screwed the bare wires into the injector’s screw terminal.

The Tycon TP-DCDC-2448GD-HP PoE Injector with DC cable attached to screw terminal

  • And finally, the most important information: AP uptime! A few comments first. Obviously the uptime you achieve will depend on the AP model, radio transmit power, number of radios enabled and so on. These numbers give a reasonable indication of what can be achieved across both Cisco and non-Cisco (in this case, Aruba) APs that you may use for an AP-on-a-stick survey, measuring wall attenuation, or whatever other purpose you may have for portable AP power. This was not meant to provide strict scientific results but merely a ballpark figure of what you may be able to achieve. Adjustments to the configuration I used would obviously yield higher or lower overall AP uptime. My test setup was as follows:
    • 2.4 GHz TxPower - 4 dBm
    • 5 GHz TxPower - 14 dBm
    • Wired uplink - Disconnected
    • Uptime determination method – Pinging from associated client to AP (Obviously this additional traffic would lower the uptime of the APs a little so you can expect higher uptime than I achieved during your average passive survey or wall attenuation measurements)

  • Cisco 3602i uptime from a single Energizer XP18000AB: 7.75 hrs (with required PoE injector connected)
  • Aruba IAP-225 uptime from a single Energizer XP18000AB: 10 hrs (without the PoE injector connected, as it is not required)
The complete solution
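For a rough sanity check on uptime figures like these, battery energy capacity divided by average power draw gives a ballpark runtime. Below is a minimal sketch; note that the capacity, AP draw and conversion efficiency are all illustrative assumptions, not measured specs – check your battery’s label and your AP’s datasheet for real numbers.

```python
# Rough AP runtime estimate from a battery's energy capacity.
# All figures are illustrative assumptions, not measured specs.

def estimated_runtime_hours(battery_wh, ap_draw_w, conversion_efficiency=0.85):
    """Hours of AP uptime = usable energy / average power draw."""
    usable_wh = battery_wh * conversion_efficiency  # losses in DC-DC / PoE injector
    return usable_wh / ap_draw_w

# Hypothetical values: a ~68 Wh pack and an AP averaging ~7.5 W
# at the low transmit powers used in the tests above.
print(round(estimated_runtime_hours(68, 7.5), 1))
```

With those assumed numbers the estimate lands in the same 7-8 hour range as the measured Cisco result, which is about all a calculation like this is good for.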


I will keep my existing Terrawave batteries as they can certainly still be useful but, as with a few other pieces of bulky survey equipment, they will remain in the cupboard and only be brought along when required for a specific job.

All in all, I am happy with the two Energizer XP18000AB + Tycon PoE injector solution that comes in at a quarter of the weight of the Terrawave, provides much more versatility and leaves me with much more room in my survey case!

Lastly, I should mention, the homebrew or UPS solutions mentioned at the start of this post do still have a place today. There are always going to be scenarios where a custom power solution may be required and there isn’t anything wrong with homebrew!

Wednesday, 3 June 2015

Band Steering v2.0

It has taken many years since 802.11n was ratified to reach a reasonable penetration of 5 GHz clients out there in the Wi-Fi landscape. In particular, only in the last ~2 years have we started to see a reasonable percentage of the most common BYO devices (smartphones and tablets) shipping with dual-band support. Whilst this has been a long time coming, many of these clients (more correctly, the client’s wireless driver) still either prefer the 2.4 GHz band or will often flip between the bands.

There is a method that has been discussed in Wi-Fi circles recently that is the subject of this post, but before I get to that I want to look at Band Steering v1.0! I have used a number of methods over the years in order to help shift as many dual-band clients over to the 5 GHz band as possible. There are of course times when you may wish to distribute clients between the two bands more evenly (though typically with a higher percentage on the 5 GHz band), but I won’t address those here; in this post I’m assuming you want to move the majority of clients onto 5 GHz. Additionally, this post does not address dual-band clients that you would be better off statically configuring for a single band (this includes most application-specific devices in healthcare, retail, logistics, etc.).

Some of the traditional methods of shifting clients to the 5 GHz band:

  • Separate SSIDs for 2.4 GHz and 5 GHz clients: Pre-band-steering this was one of the few methods available. As band-steering emerged and (at least some) drivers improved, I generally stopped using this method except in the odd corner case;
  • Updated client drivers: Client drivers did become smarter and, where I’ve had control, I ensure that clients’ drivers are up to date. In most cases, however, I can only advise the customer to make this change and, as such, it is rarely done. Unfortunately many clients still prefer the 2.4 GHz band even when they receive a high signal strength from a 5 GHz radio, so this method is far from perfect. Another significant issue lies with the increased adoption of BYO devices – an enterprise ICT team obviously has little, if any, control over these clients, coupled with the poor ability to update the wireless chipset’s driver on the most prevalent types of BYO client (smartphones and tablets).
  • Band Steering: Whilst band steering is the main method we rely on today and it certainly helps, it is still far from perfect. It is a non-standard method and therefore introduces the possibility of causing connectivity issues. Despite this risk, we typically use it as it is quite often our only choice.
  • Band Steering Tweaking: In addition to enabling band steering, you can often tweak its operation in order to increase the number of clients hitting the 5 GHz band. However, the further you stray from the vendor defaults, the more likely you are to end up with client connectivity issues.

In addition to these methods, there are some vendor-specific methods that can help. For example, Meraki hides the broadcast of the SSID on the 2.4 GHz radio when band steering is enabled. Whilst this can help shift more clients to the 5 GHz band, hiding the SSID isn’t typically a great idea. Nonetheless, another available option is always nice, even if you can’t disable the hidden-SSID aspect of the band-steering functionality.

So what other methods are available? Recently in the Wi-Fi world there has been talk of another method to help shift more clients to the 5 GHz band. This method involves ensuring that the transmit power of your 2.4 GHz radios is sufficiently lower than that of your 5 GHz radios. What I absolutely love about this method is that it takes a negative and turns it into a positive, in the same way that MIMO took a negative (multipath) and didn’t just neutralise it (diversity antennas) but actually turned it into a positive (higher throughput!). Some client drivers will prefer a higher signal strength even when the signal strength available from a more preferable radio is lower, yet still very high. For example, the client might see -45 dBm from the 2.4 GHz radio of an AP and -51 dBm from the 5 GHz radio of the same AP. -51 dBm is still a very strong signal, so the many additional benefits offered by the 5 GHz band would ideally result in the client associating to the 5 GHz radio. As many clients are single-minded in their association choices, they often primarily (if not solely) use signal strength when deciding which radio to associate to. There are of course other factors that a client should take into consideration when deciding which AP to associate to, and the radio being 5 GHz shouldn’t be the only consideration. However, these other factors often end up favouring the 5 GHz band anyway (SNR, retries, packet loss, channel utilisation, etc.). So, in short, this method takes the negative (clients preferring a high signal strength above all else) and turns it into a positive (clients associating to the 5 GHz radio, yet still with a high signal strength) by ensuring that the 5 GHz radio’s signal strength, as seen by the client, is sufficiently higher than the 2.4 GHz radio’s.

One of the best sources of information on this design trick is Aruba’s awesome Very High Density VRDs. I do, however, feel the values in the Aruba VRD are insufficient, though perhaps I am missing something ‘Aruba-specific’ as I usually live in the Cisco world (please chime in if you have more information). Aruba states that there should be a 6 dB – 9 dB difference between the 2.4 GHz EIRP and the 5 GHz EIRP. There is already a roughly 6 dB difference in free-space path loss between the two bands, so I don’t feel these values are high enough. Even when not using this method, my 2.4 GHz radio is almost always 6 – 9 dB lower than my 5 GHz radio to compensate for this 6 dB difference (amongst other reasons). Note that this does not factor in AP and client antenna gain and polar plots. In the case of the Cisco 3700 AP that I performed testing on, the antenna gain is identical between the two radios. The polar plots, however, favour the 2.4 GHz radio overall; it varies across the 360 degrees, of course. In addition, some 5 GHz sub-bands are worse off than others (UNII-3 typically faring poorly). Lastly, client antenna gain also often favours the 2.4 GHz and lower 5 GHz sub-bands. I won’t go into the details of 2.4 GHz vs. 5 GHz cell size except to say that in most modern BYO/office deployments you typically want a higher SNR at your cell edge in the 5 GHz band than in the 2.4 GHz band; this is yet another factor requiring a higher AP transmit power/EIRP for the 5 GHz band. These factors, coupled with the testing I’ve performed, suggest a difference closer to 9 – 12 dB is required to be confident of this design trick working well, with 12 dB much preferred, at least with the AP I have tested this on. In the modern Cisco world, this would mean something like a 2.4 GHz radio transmit power of 2 dBm and a 5 GHz radio transmit power of 14 dBm. Of course, these values would need to be tweaked for the individual design.
There are other factors that must be taken into account of course – for example you don’t want to end up with your 5 GHz radio transmit power at an overly high value. More important than the values I have mentioned here is that you test your particular deployment to ensure that your most difficult clients are seeing a 6 – 9 dB difference between the two bands if you wish to make use of this design trick.
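The inherent dB difference between the bands can be checked with the standard free-space path loss formula; at equal distance and EIRP the extra loss at 5 GHz is simply 20·log10(f2/f1), independent of distance. A quick sketch (channel choices here are just examples):

```python
import math

def fspl_db(freq_mhz, dist_m):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return 20 * math.log10(dist_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

# Channel 36 (5180 MHz) vs channel 1 (2412 MHz), same distance:
delta = fspl_db(5180, 10) - fspl_db(2412, 10)
print(round(delta, 1))  # ≈ 6.6 dB extra loss at 5 GHz
```

So the commonly quoted "6 dB" is really 6.5 – 7 dB depending on the channels compared, which only strengthens the argument that a 6 – 9 dB EIRP offset on its own barely breaks even.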

I feel that this design method is a fantastic way to bridge the gap between what band steering has offered us so far and the world we could be living in – that is, where WLANs could be today if 802.11n had mandated 5 GHz-only support! Whilst every design method or trick should be considered before implementation, I think this is one of the most useful to have come along in a while.

I’ll end this post with this quote from the Aruba VRD: “This technique is extraordinarily effective with clients that are available as of the time of writing, including legacy HT clients and newer VHT clients.” That’s good enough praise for me!


Tuesday, 17 June 2014

By Far The Biggest Issue I Encounter in Wi-Fi Deployments Is…

… high airtime utilisation caused by infrastructure support for low data rates. Actually, it is one of the two biggest issues that I see however the solution for this issue is far more plug and play than the other. If you’re interested, the other issue is poor coverage caused by automatic AP transmit power algorithms (RRM) – more on that in a future post.

Now, this will not be a revelation for anyone working in Wi-Fi. Unfortunately the vast majority of WLANs are deployed by people with minimal Wi-Fi knowledge. This post is not a criticism of those folks however. The purpose of this post is also not to delve into the technical details behind this issue but to look at the party that could essentially solve (or at the very least, significantly reduce) this issue overnight – the enterprise Wi-Fi vendors.

Out of the box, all enterprise vendors’ equipment that I’ve looked at ships with the lowest data rates enabled: the 1 and 2 Mbps rates from the original 802.11 standard and the 5.5 and 11 Mbps rates that the 802.11b amendment brought us. These rates came about 17 and 15 years ago, respectively, and yet vendors still ship equipment supporting them by default despite the devastation they cause.

The term ‘junk band’ is sometimes used to describe the 2.4 GHz band that these data rates operate in, and as a reason why Wi-Fi often performs poorly in this band. The huge irony here is that many Wi-Fi deployments are by far the biggest contributors of ‘junk’ in the band; the junk, of course, being these airtime-hogging frames. Yes, non-Wi-Fi interference does consume some airtime (in many cases, less than it did when Wi-Fi was much younger!) and yes, APs outside of the customer’s WLAN also consume a portion of airtime, but often it is these low rates supported by the customer’s own WLAN that consume by far the largest amount of airtime. In addition, most non-Wi-Fi interfering devices do not operate 24/7, so the interference is sporadic. Many low data rate frames are transmitted for as long as clients are using the WLAN (for example, 8 hours in the day), whilst others, such as beacons, are sucking up airtime 24/7/365!
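A back-of-the-envelope calculation shows how expensive those 24/7 frames are. The sketch below estimates beacon airtime alone; the frame size and BSSID count are assumed round numbers, and the 192 µs figure is the long DSSS preamble plus PLCP header that 1 Mbps transmissions carry:

```python
# Back-of-the-envelope beacon airtime at 1 Mbps, to show why low
# mandatory rates hurt. Frame size and BSSID count are assumptions.

PREAMBLE_US = 192           # long DSSS preamble + PLCP header at 1 Mbps
BEACON_BYTES = 250          # assumed beacon frame size (MAC header + body)
BEACON_INTERVAL_MS = 102.4  # common default beacon interval

def beacon_airtime_pct(audible_bssids, rate_mbps=1.0):
    """Percentage of channel airtime consumed purely by beacons."""
    per_beacon_us = PREAMBLE_US + (BEACON_BYTES * 8) / rate_mbps
    beacons_per_sec = audible_bssids * (1000 / BEACON_INTERVAL_MS)
    return beacons_per_sec * per_beacon_us / 1e6 * 100

# e.g. 3 APs audible on the channel, each broadcasting 4 SSIDs:
print(round(beacon_airtime_pct(12), 1))        # beacons at 1 Mbps  -> 25.7
print(round(beacon_airtime_pct(12, 11.0), 1))  # same beacons at 11 Mbps -> 4.4
```

Roughly a quarter of the channel gone to beacons alone at 1 Mbps, before a single byte of user data is sent; moving the beacon rate to 11 Mbps drops the same load to a few percent.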

You may be thinking that it isn’t the vendor’s job to design the WLAN for the customer and that the vendor stresses the importance of disabling these low rates through documentation, training and vendor seminars. Whilst this is all true, it clearly isn’t enough or this wouldn’t be such a massive issue. Wi-Fi’s complexity is matched only by how poorly it is understood. It just isn’t fair to push all of the blame onto the customer. If the WLAN was deployed by a VAR then it may be fair to push some of the blame in their direction; however, once again, the reality is that most VARs, like customers, have minimal Wi-Fi knowledge.

These default rates also hurt the vendors. Numerous times the Wi-Fi vendors are blamed for a performance issue that is simply a result of a poor WLAN design. If these rates were disabled out of the box it would be one less (but significant) issue that uninformed customers could throw back at the vendor.

There are certainly other default, ‘out of the box’ pieces of configuration that I feel should be changed, so why single this one out? Simply put, no other default that I’ve come across causes anywhere near as many issues on such a large scale. Not only does it affect the customer’s WLAN but also neighbouring businesses and home users.

Despite all vendors having qualified staff that realise this is an issue, why haven’t they made a change? Most likely because, yes, there are still some 802.11b (and even some problematic 802.11g/n) clients out there and by disabling these rates out of the box, these clients will be unable to associate. But so what? Out of the box, many things have to be configured to work and this will just be another. For example, there is a good chance that the out of the box WLAN you create will only support WPA2/AES by default. So if you have clients that only support WPA/TKIP they’re not going to be able to associate. You’re going to have to change those defaults to support your legacy clients. How is this any different? In fact it would be preferable if clients could NOT associate due to these issues. At least this way, the problem would be identified and fixed before the WLAN went into production. Most low data rate utilisation issues persist for a long time, often years, many of which will never be fixed. 

It doesn’t have to be a brute-force approach either. I can see a number of options to ease customers into a life of low channel utilisation!
  • A setup wizard used to create the WLAN could ask whether the customer has any 802.11b clients that need to be supported (or problematic 802.11g/n clients that won’t associate with the rates disabled – OK, most customers won’t know this until they flip the switch!).
  • The setup wizard could ask what vertical the WLAN is being deployed in and if one of the likely candidates (retail, warehousing and healthcare) are chosen, suggest that low data rates may need to be left enabled but that the customer should start at 11 Mbps and work backwards. If these verticals are not selected, the low rates are disabled.
  • If customers do enable low data rates, the wizard might suggest that this configuration could have a significant negative impact on their WLAN and that if they must support low rates, to minimise the number of WLANs advertised on each AP.
  • Back in the day Cisco APs shipped with the default SSID of tsunami configured. Cisco obviously realized this was something of a security issue and removed the SSID from the default configuration, shipped with the radios disabled and put a nice bright yellow sticker on the AP box informing the customer of the fact. Maybe APs could have such a sticker or a slip put into the top of the AP or, where applicable, WLC box. 
So now you’re thinking, “oh, but the 5 GHz band will save us!” A recent piece of customer trouble-shooting showed why this is short-sighted; the issue: severe performance problems. One of the first things I checked was the airtime utilisation reported by the APs; the highest I saw was a new record for me, an AP at 93% channel utilisation (beating my old record by 1%!). The rest of the APs weren’t much better. I couldn’t work it out at first though; 80% of the clients were associating to the 5 GHz band, where the utilisation was typically low, so why such big performance issues? A look at client association history showed the majority of clients were fluctuating between bands. Yes, a driver update would likely have helped somewhat, but even the latest clients with the latest drivers may still prefer the 2.4 GHz band – I saw this often with early dual-band 802.11n clients. Whilst it’s been 7 years since those initial 802.11n clients came about and more client vendors have started to prefer 5 GHz over 2.4 GHz, this is not a universal truth. I expect a large enough percentage of 802.11ac clients will still make significant use of the 2.4 GHz band, and therefore the importance of ‘cleaning up’ the band remains. This trouble-shooting experience was certainly not unique; I’ve seen this many times.

Yes, this proposed solution won’t help with all Wi-Fi airtime issues - non-Wi-Fi interference, external and internal ACI and CCI from SOHO rogues, non-overlapping ACI (AP co-location), sticky clients, clients probing at low rates, clients probing for every WLAN they’ve ever associated to, overly high AP density, overly high transmit power… the list goes on. It will however help with one significant Wi-Fi issue and one that has a very simple plug and play solution. Would this have been advisable 10 years ago? Of course not! Even 5 years ago? Perhaps not even then. But 17 years is a long time in technology circles – it’s time we moved on!

Lastly, I need to acknowledge the fantastic proposal from Cisco’s Brian Hart and Andrew Myles. In short, they’re proposing that the Wi-Fi Alliance start looking at making low data rates optional. Whilst I suspect that the impetus behind this proposal may have come from the issues seen in stadium Wi-Fi in recent years (1 Mbps probes + very high client density + very open space = chaos), it would obviously benefit all new Wi-Fi deployments where the equipment had this certification. But this leads to the next logical question: “how about cleaning up Cisco’s backyard first?” Obviously this isn’t a Cisco-specific issue, but even if this proposal sees the light of day, it will likely be several years before we see it bear fruit. Why wait? Take the lead!

Despite the marketing claims of 802.11ac, Wi-Fi in the 2.4 GHz band is going to be around for the foreseeable future and it’s about time the mess was cleaned up!

Sunday, 15 June 2014

You down with TDD (yeah you know me)

Recently I was performing a piece of wireless trouble-shooting and came across something I hadn't seen before. I was called out because of wireless issues. You know; those vague, all-too-common wireless issues!

Fast-forward to me being on-site. Whilst surveying I often try to simultaneously perform as many of the required tasks as is practical. So I performed a survey to check out the customer’s WLAN coverage, looked for internal and external CCI and ACI, and performed a spectrum analysis. Later on came a spot of analysis and sniffing.

In one area I noticed a high level of utilisation on channels 44 and 48.

40 MHz Wi-Fi channel... right?
This was clearly a 40 MHz channel where a file transfer or something similar was occurring… wasn’t it? I looked at my survey results but none of the customer’s APs were on these channels in this area. I then took a look for rogue APs in the vicinity.

Found the culprit?
OK – this looks like it. According to the customer this AP was being used because the corporate Wi-Fi wasn’t working well. Well yes, that was indeed why I was on-site. To confirm what I was seeing I pulled out the Fluke AirCheck.

What the.....
Hold up, what do we have here? 89% utilisation from non-Wi-Fi sources? The customer mentioned they were running a Raspberry Pi. I know nothing of Pis and wondered if it was performing some non-Wi-Fi, Wi-Fi-look-alike transmissions, something along the lines of the Nuts About Nets AirHORN. A few questions later it was established that the Pi was 2.4 GHz-only and that a dual-band Netgear wireless router was also in use. The Netgear was broadcasting the Swifty5 SSID pictured above. So was the Netgear to blame, or was it a bug in the AirCheck reporting Wi-Fi transmissions as non-Wi-Fi? I powered off the Netgear and the non-Wi-Fi utilisation didn’t stop. I continued on with the survey, planning to return later on.

Later, back at my desk, I was going through my notes and remembered a screengrab from the WLAN controller I had taken the day before when doing some pre-visit preparation. I probably should have remembered this earlier, but at least 18 hrs had elapsed! So right there, you can see the problem!

Light bulb moment!

Ah ha! A quick confirmation of AP location and it was confirmed: TDD was the source. Yes, channel 36 is reported, but I later noticed another AP in the area reporting TDD on channel 44 also. I had seen TDD transmitters detected by the AP’s on-board spectrum analyser previously, and had seen reference to it in vendor documentation countless times, but I had never delved any deeper. TDD stands for Time Division Duplex. Just from the name it sounded like something a licensed microwave outdoor P2P link would use, but it was in fact operating in an unlicensed band. I suspected a P2P link mounted on a nearby building shooting a narrow beam of non-Wi-Fi ‘bite me’ through the customer’s building. Further analysis revealed this to be the case.

I suspected that what I could see on channels 36 + 40 in the first spectrum analysis image was another P2P link, albeit one causing lower utilisation. A quick Google later and I suspect this may in fact be FDD (Frequency Division Duplexing), with the uplink and downlink running on 36+40 and 44+48, respectively. Whilst the transmission itself was continuous (100% utilisation when transmitting), it did not operate 100% of the time like some continuous transmitters do. The AirCheck showed it was bursty, which is what you might expect to see on a P2P link.

As you would hope, the result of these interferers is that the RRM algorithm in the wireless infrastructure has chosen to use other channels on this side of the building. I can see that another business on the bottom floor of the building is also running an enterprise WLAN, and those APs have likewise chosen not to use these channels. Losing four channels is not ideal; fortunately the customer is running 20 MHz channels, so another eight are available (supporting UNII-2e is far from plug and play, particularly in this part of the world, so enabling those channels is unlikely). Before discovering this issue I was considering moving the customer to 40 MHz channels, but that may not be worthwhile now.

As for the previously mentioned rogue AP that the customer had decided to use, it just happened to be running on the exact two channels that the interferer was using. This presented a red herring whilst trouble-shooting, due to the very similar signatures (Wi-Fi vs. TDD). It also meant that the customer shot themselves in the foot: the SOHO wireless router remained on the problematic channels despite the high utilisation, whilst the enterprise WLAN performed as you would hope and avoided those channels. A pat on the back for me for having tweaked the WLAN infrastructure’s AP spectrum analysis configuration 12 months earlier ;).

A few closing thoughts
  • Whilst many non-Wi-Fi interferers have unique signatures, some are misleadingly similar.
  • Metageek features I'd love to see in the future:
    • As much as I like Chanalyzer, I hope to see improved hardware from Metageek in the future to allow better signature detection to become a reality.
    • Tabbed support in Chanalyzer – it would really help when examining multiple files, post-capture.
    • Utilisation-specific 802.11 frame analysis; despite this example of severe non-Wi-Fi interference, the majority of interference I see is still from CCI. I’m not talking packet sniffer level stuff; even something as simple as what the AirCheck can do (x% Wi-Fi utilisation / x% non-Wi-Fi utilisation).
  • Whilst you shouldn’t rely on spectrum analysis signatures, they can certainly be helpful. Sure, you can purchase a whole bunch of non-Wi-Fi interferers for your lab in order to learn the different signatures (certainly a worthy venture), but you’re unlikely to ever get your hands on all of them – I’d certainly never have forked out for this interferer in order to learn its signature!
  • AP-based spectrum analysis is not a replacement for a stand-alone spectrum analyser, and vice versa; they complement one another.
  • Although the majority of non-Wi-Fi interference is seen in the 2.4 GHz band, 5 GHz is not immune.
  • In Australia, much like the US, UNII-1 is restricted to indoor-use only. It is likely that a call to the ACMA (FCC equivalent) may be required. The US is in the process of opening up UNII-1 for outdoor use and I expect Australia will follow at some point.
  • When the utilisation was closer to 60% (when I initially noticed the issue) it was my backup device (the AirCheck) that raised the red flag that this utilisation wasn’t from Wi-Fi - my favourite new toy of late! 
  • Finally, a side-by-side – Wi-Fi vs. TDD/FDD. The amplitude differs as expected but the significant difference is the lack of side-lobes on the TDD/FDD.
    Wi-Fi (left) vs. TDD/FDD (right)
  • Despite the images above, in one instance (admittedly out of many) the TDD/FDD did actually show side lobes making it all the more difficult to identify. The 100% utilisation gives it away though.

If you’ve dealt with this type of interferer before and can provide any more detail, please leave a comment or hit me up on Twitter.