It’s time to have “the talk” about wireless (Sep 15)



Now with spherical cows

Everyone gather ’round, it’s time to have “the talk” about wireless. I don’t mean the birds and the bees, even though wireless standards do seem to reproduce at alarming rates. No, what needs discussion is the part where wireless throughput claims are an obvious pack of lies. More importantly, how does this marketing malarkey affect the real-world business of supporting users in an increasingly mobile world?

Let’s start off with a VMworld anecdote. While anecdotal evidence isn’t worth much, it’s illustrative of the actual theory behind how wireless networks work, so it’s worth bringing into the discussion.

VMworld 2014 kicked off on Sunday, August 24th. Despite this, the wireless network outside the main “Solutions Exchange” convention hall was up and running the Friday before.

On Friday, with only a handful of nerds clustered outside the Solutions Exchange, it was entirely possible to pull down 4MiB/sec worth of traffic from the internet. Some things were clearly being traffic-managed, as wrapping the traffic up in a VPN tunnel could increase throughput by up to 10x.

Saturday was more crowded. 100 nerds became 1000, and 4MiB/sec became a fairly consistent 500KiB/sec. Still entirely usable for web browsing, but pulling down 100GB VMDKs from my FTP server back home suddenly got a lot harder.

By early Sunday one critical element had changed: where Friday and Saturday saw maybe three different access points scannable from outside the Solutions Exchange, suddenly there were over 100. Almost all of them were MiFi hotspots, or smartphones with their hotspot feature enabled.

Sunday morning still saw only about 1000 devices vying for access outside the Solutions Exchange, but all these additional MiFi points had turned the 2.4GHz spectrum into completely useless soup. A consistent 500KiB/sec became an erratic, spiky mess ranging from 2KiB/sec to 750KiB/sec, with an average of about 400KiB/sec.

By Sunday afternoon, when the event was in full swing, things had changed considerably. A little under 40,000 people attended the event, and the 1000 people outside the Solutions Exchange became tens of thousands. I stopped tracking individual devices because doing so was draining my battery at an alarming rate, and tracked access points instead.

The 100 MiFi hotspots from Sunday morning became well over 3000, and the 2.4GHz band descended into absolute chaos. What’s more, the Wi-Fi now required you to open a web browser and acknowledge that the event’s Wi-Fi was sponsored by Trend Micro before it would pass packets.

This led to thousands of devices screaming to do their updates but not being able to get a response from the relevant servers, further magnifying traffic. Average throughput dropped to about 40KiB/sec, with lots of periods of zero packets and latency that was all over the place.

VMworld employed some of the best Wi-Fi technology on the planet, set up by some of the most competent engineers one could hope to find. There were fewer than 20,000 people in or around the Solutions Exchange at any given time, and the Moscone is built like a nuclear bunker, so I promise you that people wandering the streets above weren’t imposing on the signal space.

Now consider for a moment what it takes to provide LTE for something like the Greater London area, with over 8 million people. If you manage to achieve – let alone sustain – even a fraction of the advertised speeds, your carrier has done a spectacular job of designing its network.

A small bone of contention

Network contention in a wired world is fairly easy to understand. Everything is switched today, so collisions between the computer and the switch basically don’t happen. Even the crappiest chipsets on the market can deliver 90% of wire speed, and even the cheapest Chinese landfill silicon can handle all ports running at full capacity.

If you have 12 computers on 1Gbit ports and four 1Gbit ports lashed into a trunk between that access switch and the next tier up, then you have a 3:1 contention ratio. That’s more than good enough for most desktop deployment scenarios; outside of certain corner cases, 12 desktop users are unlikely to push or pull 4Gbit of traffic to or from the next tier of network.
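That arithmetic is simple enough to capture in a couple of lines (the function name is my own shorthand, not any vendor’s tooling):

```python
def contention_ratio(access_ports, port_gbit, uplinks, uplink_gbit):
    """Total access-side bandwidth divided by total uplink bandwidth."""
    return (access_ports * port_gbit) / (uplinks * uplink_gbit)

# 12 desktops on 1Gbit ports behind four 1Gbit uplinks in a trunk:
print(contention_ratio(12, 1, 4, 1))  # 3.0, i.e. a 3:1 contention ratio
```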

Turn that around and look at servers. Servers in the datacenter increasingly talk to one another (east-west traffic) instead of following the more traditional pattern of storage –> server –> desktop (north-south). What’s more, servers are virtualised and resources are pushed as close to the redline as we can possibly go. There isn’t much room for “overbuilding” in the datacenter, because spare capacity disappears faster than a bottle of wine in the WBT underground lair on a Friday night.

In the datacenter, even 3:1 becomes an unacceptable contention ratio. Tens of billions of dollars a year are being sunk into research and development to solve that problem in a manner that doesn’t involve feeding Cisco the CTO’s firstborn offspring. We aren’t anywhere close to seeing the end of the software defined network wars, and wired networks are significantly easier to do the maths on than wireless!

Would you like to take a survey?

Wireless network capacity is determined before hardware installation by doing a site survey. This involves looking at all the other signals in the area that want to use the same band. It isn’t a once-and-done affair, and a proper survey can take months.

People go to work, come home, and devices move about. 100,000 cars filled with chatty devices driving past your building twice a day during the commute can have a serious impact on your network design, especially since they all have different capabilities, speak different protocols and standards, and even have different transmit powers.

Consider, for example, my house. If you ran a site survey for only a day you would find a fairly average distribution of 2.4GHz devices for a residential North American neighbourhood. With proper network design and limiting of transmit power, you could expect to obtain and sustain 100Mbit/sec of throughput to the access point using a 300Mbit 802.11n setup.

Of course, this doesn’t take into account the little old lady who lives 3 blocks away across the lake and still uses the same 2.4GHz cordless home phone she bought in the mid-90s. Every single Friday night at 11pm she gets a phone call. When she answers, it absolutely obliterates the 2.4GHz spectrum within a 5 block radius of her house for the 45 minute conversation.

When I considered working from home I ran a site survey for a month, discovered this cute local artifact and triangulated the nice lady’s house. I then went over there, explained the situation and offered her $100 + a new cordless phone if she’d hand over the offending unit for recycling. She agreed and the network has been good ever since.

Sometimes, this sort of diplomacy works, and sometimes it doesn’t. Businesses in crowded areas may not have the ability to do this kind of investigation, and carriers have been known to go to the mattresses over spectrum disputes. There are, however, steps we can all take to make wireless work better for us all.

It’s not the size that counts, it’s how you use it.

It’s easy to spot a wireless rookie. I’m not talking about someone using a consumer device with defaults, but someone who thinks they are “technologically savvy” and has gone into the guts of their wireless gear and started tweaking settings.

The wireless rookie is the guy who has discovered that his high-end (or third-party ROM enabled) WiFi access point comes with a “transmit power” feature, and has cranked it as high as the settings will allow. This is a really stupid idea and it causes problems for everyone.

The ideal network is as close to a wired network as you can get: the endpoint talks to the switch, and no other device can hear the conversation between the two. Two devices won’t be trying to talk at the same time, so collisions functionally won’t exist.

The worst possible scenario is to have some great big loudmouthed git on a pole somewhere screaming and hollering at the top of his lungs, drowning everything else out. It doesn’t matter how loud the access point is if it can’t hear the devices it’s supposed to be talking to; your smartphone doesn’t have a “go to 11” knob, and even if it did you shouldn’t be touching it.

In a crowded urban environment you want a lot of small and quiet access points. You don’t want the signal to go very far and you don’t want too many devices per access point. This lets you get density – more wireless devices per area – and it allows for spectrum reuse.

First, you presume a spherical cow

It’s time for a thought experiment. Let’s say we have a warehouse 100m by 100m covered in radio-shielding paint (or buried deep under several layers of concrete). Signals don’t get in, and they don’t get out.

Now, we want to set up a wireless environment for a few thousand employees. In the 2.4GHz spectrum, for example, there are basically three usable channels: 1, 6 and 11. In a perfect world I would lay out my access points such that any two access points on the same channel – say channel 1 – wouldn’t be able to “hear” each other.

Some of that can be accomplished with directional antennas, but the rest gets done by turning down the transmit power – and optimising spacing – such that two access points on the same channel see the devices they are supposed to service as being much louder than they see each other.
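Sticking with the spherical cow, the channel layout can be sketched as a simple grid colouring. The 4x4 grid and function name below are my own illustration, not a real planning tool:

```python
CHANNELS = [1, 6, 11]  # the only non-overlapping channels in the 2.4GHz band

def channel_plan(rows, cols):
    """Assign channels so no access point shares a channel with the AP
    directly beside, above or below it. Diagonal neighbours do repeat,
    which is one more reason transmit power still has to come down."""
    return [[CHANNELS[(r + c) % 3] for c in range(cols)] for r in range(rows)]

for row in channel_plan(4, 4):
    print(row)
# prints [1, 6, 11, 1]
#        [6, 11, 1, 6]
#        [11, 1, 6, 11]
#        [1, 6, 11, 1]
```

With only three channels to colour with, some reuse is unavoidable; the plan just pushes same-channel APs as far apart as the grid allows.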

Now, you might ask why this is. If everything speaks the same protocol, surely it can all coexist? The answer, of course, is both yes and no.

If you have two wireless devices (and only two wireless devices) in an RF-shielded room chattering away at each other, you might get 80% of listed speed. You might even get 90%, if you did a lot of math.

Throw a third device in there and you won’t end up with each device getting a fair third of that throughput; you’ll end up with total throughput somewhere between 30 and 40% of rated speed, because every now and again more than one device will talk at the same time, a collision will occur, and the devices will have to resend.*

Every new device you add into the environment increases the number of collisions and thus lowers the bandwidth available to everyone. Two access points on the same channel “within earshot” of each other don’t increase your bandwidth or the number of devices you can support. Making your devices quieter, on the other hand, does.
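A toy Monte Carlo model makes the trend visible. This is a deliberately crude sketch (the function name and the 1/n backoff stand-in are my own; real 802.11 arbitration is far smarter, as the footnote says), but the direction it shows is the one that matters:

```python
import random

def aggregate_goodput(n_devices, slots=100_000, seed=1):
    """Fraction of time slots that carry useful data when n saturated
    devices share one channel.

    Toy model: in every slot, each device transmits with probability 1/n
    (a crude stand-in for backoff). A slot is useful only when exactly one
    device talks; any overlap is a collision and the data must be resent.
    """
    rng = random.Random(seed)
    good = 0
    for _ in range(slots):
        talkers = sum(rng.random() < 1 / n_devices for _ in range(n_devices))
        if talkers == 1:
            good += 1
    return good / slots

for n in (2, 3, 10):
    print(n, round(aggregate_goodput(n), 2))
```

Total channel goodput falls as devices are added, and each device’s individual share falls even faster – exactly the behaviour measured outside the Solutions Exchange.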

Now, let’s remove the paint from our theoretical warehouse. Suddenly you not only have to factor in the access points and devices designed to support that one business, but you also need to deal with an unknown number of devices with unknown transmit power that may or may not speak the same protocols and may or may not coexist peacefully with other devices in the same spectrum.

Frequency counts

Trying to use this knowledge practically may be difficult unless you have experience interpreting the output of site survey tools. In a lot of cases – such as when employees are travelling, or when you are trying to work through wireless issues for telecommuters – you may not have much control over the site, making site surveys functionally useless.

We can, however, take some basic theory away from the above that will help us figure out how to deal with mobile connectivity in a fairly sane fashion.

Higher frequencies don’t travel as far through our atmosphere, and they are more easily blocked by various objects. 5GHz, for example, is far better for short-range radios like Wi-Fi than 2.4GHz, while frequencies in the 800MHz or lower spectrum are great for long-distance transmission into rural areas.
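That distance-versus-frequency trade-off is the free-space path loss formula at work. A quick sanity check (the standard textbook formula, assuming ideal free space, which real cities and buildings are not):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# One kilometre of clear air costs a 5GHz signal roughly 16dB more than
# an 800MHz signal -- a factor of about 40 in received power.
print(round(fspl_db(1, 800), 1))   # 90.5 dB
print(round(fspl_db(1, 5000), 1))  # 106.4 dB
```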

This brings me to mobile carriers. If you’re out in the sticks and trying to figure out which carrier will serve you best, look for the one with the lowest-frequency holdings in the areas you care about. A carrier that has only 2GHz or higher holdings is going to have to put so many towers into a rural region that the costs will be astronomical.

Similarly, you don’t want a carrier blasting 700MHz LTE into an urban center, because that signal is going to go on forever. What you ideally want is a carrier with a bunch of 2 or even 3GHz towers on every other building, with the power turned down low.

Bonus points to the carrier that will install pico or femto cells to cover not-spots. The less your smartphone or notebook has to crank up its transmit power to be heard by some distant cell tower, the better.

The take home

The old axiom of “use Wi-Fi wherever possible and cellular only as absolutely needed” isn’t entirely accurate. 5GHz Wi-Fi is still rare; plenty of notebooks, smartphones and access points just don’t have it. If you’re stuck in the middle of downtown Bigcity (or in a convention), then you may well have better luck using LTE than trying to punch a hole through an overcrowded 2.4GHz spectrum.

If your primary internet connectivity is wireless, you will do far better to run your connection to your provider on one frequency and then use a second radio to connect all your other devices to the first one. For example: connect to the hotel Wi-Fi at 2.4GHz, but use the 5GHz radio in your notebook as an access point for your smartphone and tablet.

This is also how a MiFi works: it connects to the cellular provider on one set of frequencies, then offers a (typically very low-powered) Wi-Fi access point for devices to connect to.

Also note that a “proper” MiFi is better than turning your smartphone into a hotspot. MiFi devices are typically low-power and extremely short-range; less than 3 meters is normal. Smartphones in hotspot mode typically have their Wi-Fi set to “loud and proud” and can cause more problems than they solve.

Faster is unquestionably better. When you realise that the majority of issues with wireless networking boil down to “too many things trying to talk at the same time”, the ability of those various “things” to get their conversations over with quickly matters a great deal.

If you sit on your wireless network and move 100GB around over 802.11b, you’re a jerk. Not only is that going to take forever, but you’re ruining the spectrum for everyone else. Get newer, faster equipment, and all your various applications will spend less time demanding data from the network…until you get new applications that want more data, that is.

HSPA+ is probably the slowest connectivity you can reasonably support in any sort of real-world environment that needs more than the occasional e-mail delivered. Drop below that and even just having Skype running in the background periodically polling for updates can start to dangerously crowd towers, if enough users have it running.

LTE is good, LTE advanced is better, and Wi-Fi is better still…but only within the constraints of spectrum crowding. No matter what we want to believe, the LTE network cannot handle all of us streaming Netflix HD at the same time. That isn’t the carrier’s fault, and it isn’t the technology. Physics places some hard limits on what’s possible.

How’s the signal where you are?

*It’s actually way more complicated than this, and decades of technology have gone into trying to solve this problem, but we’re going to keep this to “assume a spherical cow” level of physics for the moment.

About Trevor Pott

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley start-ups better understand systems administrators and how to sell to them.
