Wifi is like oxygen for so many of us these days. I expect to be able to hop on the internet in public spaces even though I don’t have a smartphone. If I’m in a library, coffee shop, conference hall, hotel, or my house and there’s no wifi, I can hardly concentrate on anything else, even if I wasn’t going to do anything on the wifi anyway. And for libraries, as we move more and more of our collections toward e-content, having robust wifi is like having robust shelving for our physical collections: it’s absolutely essential.
But here’s the thing. Robust wifi for people wandering around with laptops in large spaces is technologically really, really hard to accomplish, especially given the history and assumptions behind current wifi protocols. Carleton’s network architect (a networking genius and also amazing at explaining things in words I understand) recently attended a conference where he learned the very latest in how to set up more robust wifi networks, and he invited me to attend the debrief presentation he then gave to his IT colleagues. This presentation went over the main points from a talk by Peter Thornycroft of Aruba Networks, apparently one of the great thinkers on this topic.* Here are some of the salient points that you might want to know when talking to your IT folks about wifi in your libraries.
—
Philosophically, network architects have so far been far more concerned about coverage than about speed. When push comes to shove, they say, “Well, nobody will be working at their very fastest, but there’s plenty of wifi for all.” As it turns out, this has been the wrong decision. The nature of network communication is very “bursty.” Each device gets a time slice of the access point’s attention and then waits while the attention turns to other devices or to the routine housekeeping the access point has to get done. If the network optimizes for speed, then more happens in each time slice, meaning more bursty bits of communication or housekeeping happen each cycle, and therefore more devices get along merrily while sharing the access point. The fast devices will get their work done and out of the way faster and leave some time and attention for the slower devices that would otherwise get crowded out.
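To make the time-slice idea concrete, here’s a toy Python sketch of my own (not from the presentation, and all the numbers are invented). It models an access point handing out round-robin time slices: doubling the bits moved per slice doesn’t just help the fast devices, it cuts the total airtime everybody is competing for.

```python
# Toy model of an access point sharing airtime among devices.
# Each device needs to receive a fixed amount of data; the AP gives
# out time slices round-robin. Higher per-slice throughput means
# each device finishes sooner and stops competing for airtime.

def slices_until_done(devices, bits_per_slice):
    """Count the time slices the AP spends before every device
    has received its data, under simple round-robin scheduling."""
    remaining = dict(devices)  # device name -> bits still to deliver
    slices = 0
    while remaining:
        for name in list(remaining):
            remaining[name] -= bits_per_slice
            slices += 1
            if remaining[name] <= 0:
                del remaining[name]  # done; frees airtime for the rest
    return slices

devices = {"laptop": 1200, "phone": 400, "tablet": 800}  # bits to deliver

# Same devices, same data, but the "fast" network moves twice as many
# bits per time slice -- so the whole job takes half as many slices.
print(slices_until_done(devices, bits_per_slice=100))  # slow: 24 slices
print(slices_until_done(devices, bits_per_slice=200))  # fast: 12 slices
```

In these made-up numbers, the faster network finishes all three devices in half the slices, leaving airtime free for housekeeping or for the next device that wanders in.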
On the device side, all the decisions are made by the manufacturers, and faster wifi rarely makes it onto the priority list. Apple, cell phone companies, etc. prefer to put their money into other parts of their products to keep the overall device price point lower. What’s more, when they list their device specs, they list the results of tests done in ideal environments (no other devices in range, access point ideally placed, etc.). In the real world, devices almost never operate in such a clean network environment. The result: optimizing the network for speed will help get signal to all of these slow devices, and there will never be enough incentive for the solution to come from the device side of the market.
A word about institution- and library-type networks. They are very different from home wifi setups. Your access point at home is made for a stable environment in which it either can or can’t reach your device. It doesn’t know anything about other access points and it doesn’t communicate much about the environment with your client devices. On the other hand, WLANs (wireless local area networks, like those at most of our institutions) are made up of many access points that talk to each other, adjust themselves to maximize their signals as the environment changes, and communicate a lot with the various devices in the area. We’re talking about WLANs here.
When a device moves, a new access point takes over where the old access point’s signal peters out. However, this process is plagued by three issues.
- Device manufacturers market their products based on battery life. It costs (slightly) less battery power to lock onto a single wifi signal, so devices are made to prefer staying in contact with a favorite access point rather than moving from access point to access point throughout the day.
- Networking, in bygone times, was assumed to be useful in an environment where things didn’t move around. Hence the “sticky client” issue I wrote about previously.
- Early on, it was assumed that the client device would be the smartest about its network needs, so the device should choose a favorite access point rather than have the access points choose devices.
How do these issues play out in a WLAN environment?
- The 2.4GHz signal carries farther than the 5GHz signal, so as you approach a building your device may glom onto a far-away 2.4GHz signal. Meanwhile, in all but the fanciest access points, having one device connect at 2.4GHz slows down the entire access point for everyone else.
- Your computer may not want to let go of a particularly satisfying relationship with an old access point that is no longer in range. This means that you may enter a room and find that you have no signal even though other people there are internetting along quite happily. Your device is refusing to try out a new relationship in hopes of remaining faithful to its lost love.
- A well-behaved device would take the access point’s report, “you’re getting a little far away from me, so please transfer to this other access point over here,” and would do exactly that. What usually happens, though, is that the device hangs on until things get dire, then probes every access point in the area (a lot of wasted time slices and energy), and then chooses a new access point based on previous familiarity rather than optimal signal. Meanwhile this whole process bogs down the entire network. (There’s a little sketch of this behavior in code just after this list.)
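To see how “faithful to its lost love” plays out, here’s a small Python caricature of mine (the signal numbers and thresholds are made up, and real devices run far more elaborate logic in firmware) comparing a sticky client to a well-behaved one:

```python
# A caricature of "sticky client" roaming vs. better-behaved roaming.
# Signal strengths are RSSI in dBm (closer to 0 is stronger); all
# numbers here are invented for illustration.

visible_aps = {"lobby-ap": -80, "reading-room-ap": -55, "stacks-ap": -60}

def sticky_choice(current_ap, current_rssi, known_aps):
    """Cling to the current AP until the signal is dire, then fall
    back on a previously used AP -- even if a stronger one exists."""
    if current_rssi > -75:          # "good enough," so don't move
        return current_ap
    for ap in known_aps:            # probe old favorites first
        if ap in visible_aps:
            return ap               # familiarity wins over signal
    return max(visible_aps, key=visible_aps.get)

def well_behaved_choice(current_ap, current_rssi):
    """Move as soon as another AP is meaningfully stronger."""
    best = max(visible_aps, key=visible_aps.get)
    if visible_aps[best] > current_rssi + 10:  # hysteresis margin
        return best
    return current_ap

# The sticky client stays glued to the weak lobby AP it knows...
print(sticky_choice("lobby-ap", -80, ["lobby-ap"]))   # -> lobby-ap
# ...while the well-behaved client moves to the strongest signal.
print(well_behaved_choice("lobby-ap", -80))           # -> reading-room-ap
```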
How to improve performance given all these issues?
- One solution is denser signals. Think of an n×n access point as one that can handle n simultaneous streams of ones and zeros, so bumping up from a 4×4 network to an awesome 8×8 network means more things can happen at once. If Jack has a 3×3 laptop and Jill has both a 1×1 smartphone and a 2×2 tablet, our awesome 8×8 access point could serve all three devices in a single time slice rather than cycling through them one by one (see the first sketch after this list). The catch is that you need more and more signal strength to decode the denser signals, so we’ll need more access points with higher signal strength and greater speed. But more access points means that each access point has more neighbors, and neighbors interfere with each other’s signals. So this will be a delicate balancing game.
- Another solution is to use software to force clingy devices to allow a handoff to a new access point. In this scenario the network tells a device, “Look, you should really move to that access point over there for best results.” If the device doesn’t do as it’s told, the current access point will actually shut off communication, forcing the device to choose a new access point (see the second sketch after this list). Carleton’s network has been running this upgrade for a few months now, and all indications are that it has helped with performance.
- A third solution is to upgrade access points. Older ones simply don’t have the memory or CPU power to handle all the stuff they need to do these days, and they can only handle a third to half as many connected devices as newer ones. People come into our areas these days with a laptop and a phone and maybe a tablet… that’s a lot of devices for a single person. Newer access points can also emit both fast and slow signals so that the entire area doesn’t slow down when one slow device connects.
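Here’s the Jack-and-Jill stream math from the first solution as a few lines of Python (a sketch of the arithmetic only; real multi-user scheduling is much more involved):

```python
# Spatial streams as simple arithmetic, using the Jack-and-Jill
# example above. An n-by-n access point can send n independent
# streams at once; each client only takes what its own radio
# (1x1, 2x2, 3x3...) can receive.

AP_STREAMS = 8  # an 8x8 access point

clients = {"jack-laptop": 3, "jill-phone": 1, "jill-tablet": 2}

needed = sum(clients.values())  # 3 + 1 + 2 = 6 streams
if needed <= AP_STREAMS:
    # All three devices fit in one multi-user time slice.
    print(f"one time slice: {needed} of {AP_STREAMS} streams used")
else:
    # A smaller AP (say, 4x4) would have to cycle through them.
    print("not enough streams -- devices must take turns")
```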
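And here’s a schematic of the “suggest, then force” handoff from the second solution. The function name and thresholds are my own invention; on real enterprise gear this logic lives in vendor firmware on the network side.

```python
# Schematic of "suggest, then force" handoff for one clingy device.
# All names and thresholds are invented for illustration.

SUGGEST_AT = -70   # dBm: politely suggest a better AP
FORCE_AT = -78     # dBm: cut the cord if the device still clings

def manage_handoff(device, rssi, better_ap):
    """Network-side logic for nudging a device to a stronger AP."""
    if rssi > SUGGEST_AT:
        return "stay"                          # signal still fine
    if rssi > FORCE_AT:
        return f"suggest move to {better_ap}"  # well-behaved devices comply
    # Signal is dire and the device hasn't moved: disconnect it so
    # it is forced to pick a new access point on its own.
    return f"disassociate {device}"

print(manage_handoff("jills-tablet", -65, "stacks-ap"))  # stay
print(manage_handoff("jills-tablet", -74, "stacks-ap"))  # suggest
print(manage_handoff("jills-tablet", -82, "stacks-ap"))  # disassociate
```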
So those are the main points that I understood from the presentation. There is clearly not enough here for you or me to start improving our WLANs, but hopefully there is enough here that if you’re in conversations with your network folks you may have some context and even some concrete suggestions to offer.
—
*You can find a lot of useful videos of Peter Thornycroft presenting online. Many of them will be more useful to you if you speak network, but even I was able to get a lot out of the clips Carleton’s network guy showed us. One of the clips we watched was about beamforming.