Wireless boffins boost Wi-Fi hotspot performance 700%
Researchers at US college North Carolina State University claim to have worked out how to allow Wi-Fi hotspots to fling up to 700 per cent more data back and forth, freeing large-scale Wi-Fi networks from the congestion that keeps users waiting for web pages to load and, in the worst cases, leaves them thinking they've been disconnected.
And the technique’s effectiveness rises in direct proportion to the number of clients connected to the base station.
Called WiFox, the NCSU system is essentially an algorithm that monitors the volume of traffic an access point is handling. If the access point is building up a backlog of data to send to clients, WiFox tells it to prioritise the transmission of that data over dealing with new requests. The more data in the queue, the more a given access point is given the go-ahead to clear it.
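The idea, as described above, can be sketched in a few lines. This is a hypothetical illustration of the backlog-to-priority mapping, not the authors' actual algorithm; the function name, queue model and number of priority levels are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a WiFox-style rule (NOT the NCSU team's code):
# the more data queued for transmission at the access point, the higher
# the priority the AP is given for channel access, so it can drain the backlog.

def channel_priority(queue_len, max_queue, levels=4):
    """Map the AP's transmit-queue occupancy to a channel-access priority.

    0 means the AP contends like an ordinary client; higher levels give it
    a progressively greater share of the channel.
    """
    if queue_len <= 0:
        return 0
    # Priority grows with how full the queue is, capped at the top level.
    fraction = min(queue_len / max_queue, 1.0)
    return min(int(fraction * levels) + 1, levels)
```

An empty queue leaves the AP contending normally, while a full queue gives it the maximum boost, mirroring the "more data in the queue, more go-ahead to clear it" behaviour the article describes.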
In boffin-speak, WiFox “adaptively prioritises access points’ channel access over competing STAs avoiding traffic asymmetry... provides a fairness framework alleviating the problem of performance loss due to rate-diversity/fairness and... avoids degradation due to TCP behaviour”.
The down-to-Earth upshot, say the boffins, is that data flows through the system more efficiently, allowing more data to be sent. In practical terms, that means delivering requested web pages and the content they contain much more quickly than before.
The NCSU team trialled WiFox on their own wireless network, capable of supporting up to 45 users. With few users connected, the access points can handle the traffic, just as any home network does, but as the number of clients rises, the requested-data backlog builds up. With WiFox switched on, not only did users experience a better response, but the more users connected, the more data the system was able to keep moving.
Improvements, the scientists say, ranged from 400 per cent with around 25 users to 700 per cent when the network was supporting the maximum number of clients. And the average response time falls by 30-40 per cent.
This is, they say, a major improvement in "goodput" - a wonderfully Orwellian coining if there ever was one.
And the WiFox algorithm can easily be added by vendors to their access point firmware, the team’s leader, Arpit Gupta, a PhD student in computer science, said.
Gupta and fellow coders Jeongki Min and Injong Rhee have written their work up in a paper entitled ‘WiFox: Scaling WiFi Performance for Large Audience Environments’ which will be presented at the ACM CoNEXT 2012 conference in Nice, France next month. ®
In this case, it's a problem with how WiFi is set up, rather than TCP. WiFi is a shared medium, so you're going to get collisions (CSMA/CA attempts to give fair access to the medium). TCP is affected because it sees a collision as a drop, so it scales back, throughput-wise. That's why they're dropping new sessions and giving priority to the existing data flows (it's kind of cheating, throughput-wise).
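The commenter's point about TCP mistaking collisions for congestion can be illustrated with a toy model. This is a deliberately simplified AIMD (additive-increase/multiplicative-decrease) step, not a real TCP stack; the function name and window values are assumptions for illustration only.

```python
# Toy model (NOT a real TCP implementation) of why collision losses hurt:
# TCP's congestion control halves its window on any perceived loss, even
# when the "loss" is a Wi-Fi collision rather than genuine congestion.

def next_cwnd(cwnd, loss, ssthresh=64):
    """One step of simplified AIMD congestion control (window in segments)."""
    if loss:
        return max(cwnd // 2, 1)   # multiplicative decrease on any loss
    if cwnd < ssthresh:
        return cwnd * 2            # slow start: exponential growth
    return cwnd + 1                # congestion avoidance: additive increase
```

Every collision-induced drop halves the sender's window, so on a busy shared medium many flows keep backing off even though the wired side of the path has capacity to spare.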
TCP has quite a few throughput hacks in it (window scaling, SACK, binary vs exponential backoff, etc), and is quite predictable and mature. The real issue here is wireless Ethernet being "non-switched", thus having collisions and packet loss with many users.
I didn't read the entire article, but I got to the part where it said this new "technique" is dependent on how many clients are connected to the AP.
If that's true, I've been performing this technique for several years now, when I would log into my home router and dis-associate my grandfather's laptop so he would stop watching YouTube and I could have the internet connection all to myself.
The article seems to make a distinction between hotspots and access points. I'm not that familiar with WiFi topology and I've always considered the two to be the same, though usage seems to imply that hotspots are public access points for a particular network over a large area such as a city, while access points generally relate to more or less closed networks such as hotels and conferences.
This sounds like network management optimisation, which is unlikely to have much effect with just one device. I'm also not convinced that if everyone is downloading you can increase the yield. This approach sounds like load balancing across access points. Surely, before any effort is made in that direction, you need to make sure that the access points are set up to account for the environment, user density, and interference from each other? Not really up on much of this, so would appreciate an explanation.
Has there been any work done on Bluetooth 3 networks which use Bluetooth as a d-channel to manage clients while data is carried on WiFi?