Cutting the cord: future mobile broadband tech
How internet on the go is going to get much faster
Wireless telephony is undergoing a revolution, with technology and implementation philosophy each holding the other back in turn as the industry struggles towards wireless nirvana.
Engineers have spent the last few decades squeezing more data into the same wireless bandwidth, a process that is deep into the realm of diminishing returns. Meanwhile, regulators are loosening rules about what technologies can be deployed by whom, allowing the engineers to exploit new techniques that are defining the standards that will carry the next few decades.
Mobile telephony was predicated on the ability to make voice calls without wires, but the next generation of wireless standards considers voice calling to be a peripheral function. Instead, data takes centre stage. Voice is, after all, just another kind of data. The wireless standards used for telephony today still have a separate channel for voice connections, but LTE (Long-Term Evolution) and WiMax (Worldwide Interoperability for Microwave Access) are both data networks designed to carry broadband data which might incorporate the occasional voice call too - a clear change of focus for the so-called fourth generation (4G) of mobile technologies.
Downloading some history, very slowly
Data was very much an afterthought in the design of the original mobile telephone - the first car phones could be connected to a modem which would squawk down the line at a horribly slow speed. In the UK, tech corporation ICL fitted all its engineers' cars with such modems, and phones to go with them, but it never managed to get a data service working reliably over a network that was ill equipped to handle mission-critical data.
Second-generation technology GSM wasn't a lot better, so wireless data remained something for the desperate specialist despite the launch of Wap (Wireless Application Protocol). Wap promised to deliver the internet on the move, a promise made through legendary TV ads that have actually been blamed for killing Wap by setting unrealistic expectations.
There followed a sequence of steps in which the industry repeatedly deluded itself that as soon as data connections could be made slightly faster, users would leap to the wireless internet - and be delighted to pay for it.
The original wireless data was based on CSD (Circuit Switched Data) connections: using the connection that would normally carry a voice to transport data. The problem with CSD, apart from its lamentable lack of speed, is that the amount of bandwidth used remains the same even if no data is being transmitted. Early Wap sessions were commonly billed by the minute like phone calls.
From 2001, the 8390: Nokia's first GPRS phone
That changed with GPRS (General Packet Radio Service), which did at least use a digital connection and one that slipped packets into unused voice slots rather than converting data into audible tones. But the speed was still poor and mobile data only expanded from the specialist to the geek - still far from the mobile internet promised in the adverts.
The slots available to GPRS exist because while GSM is an FDD (Frequency Division Duplex) technology it's also TDMA (Time Division Multiple Access). The first term means that one frequency is used for sending and another, simultaneously, for receiving. That's in contrast, incidentally, to TDD (Time Division Duplex) where the same frequency flips between send and receive many times a second.
TDMA means that where more than one user is on the same frequency they are allocated time slots in rotation, up to eight of them in GSM, later upgraded to 16. If fewer than eight people are on the same frequency then GPRS can drop data into those unused time slots.
Handsets have even got quite good at using several slots at once. Up to four are commonly used, giving a speed somewhere in the region of 53.6Kb/s with a following wind, if the empty slots are available - voice still takes precedence every time. That's a huge improvement, but users still proved obstinately slow to embrace mobile data.
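The arithmetic behind that figure is simple enough - a rough sketch in Python, assuming the widely-used CS-2 coding scheme's 13.4Kb/s per timeslot:

```python
# Rough GPRS throughput arithmetic: peak rate is just per-slot rate
# times the number of idle timeslots the handset can grab.
PER_SLOT_KBPS = 13.4  # CS-2 coding scheme, Kb/s per timeslot

def gprs_downlink_kbps(free_slots: int) -> float:
    """Peak downlink rate over `free_slots` unused voice slots."""
    return free_slots * PER_SLOT_KBPS

# Four free slots gives the "following wind" figure above:
print(gprs_downlink_kbps(4))  # 53.6
```

With fewer idle slots - or none, if voice traffic fills the frame - the rate drops accordingly.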
3GPP GSM/GPRS/Edge Timeline
Then we were told that Edge (Enhanced Data rates for GSM Evolution) would deliver what we had been promised. Edge uses better encoding than GPRS to squeeze three times the data into the same slot, usually needing only a basestation and handset software upgrade as everything but the data encoding remains the same.
Edge is far from dead - many operators around the world still haven't deployed 3G technologies, and even in the UK once you venture outside major cities Edge is the best you can hope for. Edge is even evolving to support dual-carrier connections: basically one handset supporting two separate Edge connections, doubling the bandwidth without requiring any network upgrade.
So much for O2's decision to go straight to 3G: Apple highlights the iPhone's use of Edge
But even with Edge users still resolutely refused to start using mobile data in a big way: ironically it was O2 who realised that no one was going to be impressed by Edge, and decided against deploying it while pushing ahead with 3G, a step forward it was obliged to go back on when it got the exclusive deal for Apple's iPhone, which only supported Edge connections. The project was even codenamed 'Bono', on the basis that it was worthless without The Edge.
But back in 2000, the mobile industry's self-delusion continued with the promise that 3G services would take the world by storm, and operators paid billions for some 2.1GHz spectrum in which to run 3G networks - their licences prevent them using the technology anywhere else. The 3G GSM standard is W-CDMA (Wideband Code Division Multiple Access) and offers much better data rates by taking advantage of a technology that was already popular in the US whereby multiple users can share the same frequency without mucking about with time slots.
The American Way
Unlike Europe, which was happy to have a central authority mandating GSM as a wireless standard, US carriers deployed a wide range of technologies on the basis that a free market would reward the best one. This resulted in a bundle of different mobile networks, largely incompatible, that have now pared down to a few, most notably CDMA and GSM. These days, GSM is displacing CDMA, but that's thanks to worldwide deployments driving down the cost while pushing up the pressure for compatibility, rather than any inherent technical superiority.
CDMA 2000 EVDO Timeline
CDMA technology is largely owned by Qualcomm, and is based on the idea that each transmission is identified by an encoding system rather than a time or frequency slot. W-CDMA is an FDD standard, so every connection needs a pair of 5MHz-wide bands: one for transmitting, one for receiving. Each pair can be shared with other users thanks to the code division system.
Each user's data, be it compressed voice or IP packets, is multiplied by what's called a 'spreading code' which is unique to that user. The receiving basestation gets the combined signals from every user on the frequency, but dividing that received data by the user's spreading code reproduces the original data for that specific user. Applying different spreading codes to the same received data can, in W-CDMA, reproduce the communication for almost 200 voice calls - depending on the codecs being used.
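The spreading trick can be sketched in a few lines of Python. This is a toy illustration with two hand-picked orthogonal codes, not W-CDMA's actual code tree:

```python
# Toy direct-sequence CDMA: two users share the channel, separated
# only by orthogonal spreading codes.
code_a = [1, 1, 1, 1]    # user A's spreading code
code_b = [1, -1, 1, -1]  # user B's code, orthogonal to A's

def spread(bits, code):
    """Each data bit (+1/-1) becomes len(code) chips."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Correlate each chip group against one user's code."""
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

tx_a = spread([1, -1], code_a)
tx_b = spread([-1, -1], code_b)
# The basestation hears the sum of both transmissions:
received = [a + b for a, b in zip(tx_a, tx_b)]

print(despread(received, code_a))  # [1, -1]  - user A's bits recovered
print(despread(received, code_b))  # [-1, -1] - user B's bits recovered
```

Because the codes are orthogonal, each user's data falls out cleanly despite both occupying the same frequency at the same time.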
Did anyone really make video calls?
Voice still takes precedence, but W-CDMA can support data rates of up to 384Kb/s with minimal connection times. Slower rates are also possible, as well as maintaining several connections at the same time. When 3G data services were developed, it was commonly thought that users would want multiple data connections billed at different rates: a slow connection for e-mail notifications might be free to use, a slightly-faster one for instant messaging could be always connected as part of a premium tariff, while the full 384Kb/s would be available on demand for video calling and the like.
But it's not just video calling that punters proved reluctant to do - not enough were downloading music from the network operators, or subscribing to video streaming services, or really using any of the whiz-bang services by which the operators had justified the billions spent on the licences. Unable to understand the reluctance of users, the engineers decided that if only the connections were a little faster then the mobile internet would finally happen, and set about squeezing a little more speed out of the W-CDMA standard.
It's all in the angles - OFDM
OFDM (Orthogonal Frequency Division Multiplexing) isn't a new technique, or one unique to the cellular industry: it's also used by Wi-Fi, ADSL broadband and DAB radio. But it is worth understanding, as it provides remarkable capability as long as one has a wide enough band in which to use it.
The main problem OFDM solves is one of timing. A transmitter sends a chunk of data, followed by another one: assuming the chunks arrive sequentially then everything is fine, but if the first chunk bounces off a wall and thus arrives slightly later then it can interfere with the second one as their arrivals coincide. Obviously, this problem can be addressed by leaving a timing gap between the chunks, but OFDM instead puts them in different frequencies so the second can be sent immediately after the first.
That requires a fast radio and generally contiguous spectrum - though OFDM can be used in non-contiguous blocks, it's just more difficult. Orthogonal in this context means the subcarrier frequencies are spaced so that they don't interfere with one another - 'discrete' or 'separate', rather than anything to do with angles or Greeks.
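That orthogonality can be demonstrated in a few lines of Python: sampled over one symbol period, subcarriers at integer frequency spacings correlate to zero against each other, so each can be recovered without interference from its neighbours.

```python
import cmath

# Why OFDM subcarriers don't interfere: over one symbol period,
# complex carriers at integer frequency spacings are orthogonal.
N = 64  # samples per OFDM symbol

def carrier(k):
    """Subcarrier k: a complex exponential, k cycles per symbol."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def correlate(x, y):
    """Normalised correlation between two sampled waveforms."""
    return sum(a * b.conjugate() for a, b in zip(x, y)) / N

print(round(abs(correlate(carrier(3), carrier(3))), 6))  # 1.0 - same carrier
print(round(abs(correlate(carrier(3), carrier(7))), 6))  # 0.0 - orthogonal
```

A real OFDM transmitter does this efficiently with an inverse FFT, but the principle is exactly this correlation.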
In-filling the generation gap
The first improvement was to download speeds, pushed from 384Kb/s up to a theoretical 14Mb/s - a significant increase, though that's what HSDPA (High Speed Downlink Packet Access) technology will support, rather than what operators have yet rolled out.
HSDPA uses improved 3G encoding, in much the same way that Edge improved GPRS connections, but it also requires more intelligence in the basestation to schedule data delivery to take maximum advantage of the bandwidth available. That intelligence also extends to resending missed packets, which reduces latency significantly but at the cost of processing and storage at the basestation.
3GPP UMTS Timeline
Latency on W-CDMA networks runs between 100ms and 200ms, compared to something in the region of 300ms for Edge, but HSDPA can keep the latency well under 100ms - enough to make the difference between parrying the blow and having one's arm lopped off in any decent online world.
For uploading data, the industry standardised on EUL (Enhanced Uplink), also known as HSUPA (High Speed Uplink Packet Access). EUL creates a new uplink channel specifically to carry up to 5.8Mb/s from the handset to the basestation, though most handsets can only manage about 2Mb/s for the moment, which should be enough for most applications.
The combination of HSDPA with EUL is known as HSPA (High Speed Packet Access) and is widely deployed in the UK, at least as long as one doesn't wander far from a major city. Like 3G, HSPA isn't very good at getting into buildings, being stuck at 2.1GHz for historical reasons that will shortly cease to exist. The 2.1GHz band doesn't offer much in the way of range either. Being allowed to deploy 3G at lower frequencies is on the wish-lists of most operators, a wish that regulators around the world are well on the way to granting.
HSPA is also evolving into HSPA+, also known as HSPA Evolution, which promises to deliver download speeds of 42Mb/s, with 11Mb/s on the return path, at least in theory. HSPA already uses a basic form of Mimo (Multiple Input, Multiple Output), but HSPA+ allows for the creation of two independent connections on separate frequencies - thus occupying two 5MHz-wide bands - as well as more-traditional Mimo utilisation.
Sector Throughput (Capacity): HSPA vs LTE
HSPA+ also promises an all-IP architecture, using the Internet Protocol for everything, a technique also applied by 4G technologies. In fact, HSPA+ employs many technologies associated with 4G, with the notable exception of OFDM.
Welcome to the less caring, less conformist generation
Both WiMax and LTE make use of OFDM and Mimo. LTE only uses OFDM for the downlink (to the handset), but its implementation of Mimo is more advanced than WiMax's. The technology is largely irrelevant at this stage, though: it comes down to which standard is going to achieve the economy of scale necessary if it is to dominate.
3GPP Long-Term Evolution Timeline
Over the last few years, a war has been fought, largely between the companies owning the patents on the different technologies. Intel invested heavily in WiMax and has been fighting it out with Qualcomm, whose CDMA interests are incorporated into LTE. Nokia has weighed in on the LTE side - when not fighting Qualcomm over ownership of CDMA. Intel's lack of experience in the mobile industry hasn't helped the WiMax cause, and LTE was specifically designed to appeal to network operators - Ericsson estimates that a WiMax network at 2.6GHz would need between 1.7 and 2.5 times as many basestations as a 3G network on the same frequency, figures which mean a lot to network operators.
WiMax had a time advantage - the first version of the standard was completed back in 2005, though when comparing with the competition the WiMax Forum tends to use the recently ratified release 1.5 (802.16e revision 2) while ignoring the more-recently-published versions of HSPA. But that lead was eroded as regulators struggled to embody a new philosophy that has changed the way that radio spectrum is distributed and used.
Mimo, your Mo, everybody's mo
Mimo (Multiple Input, Multiple Output) is basically the idea of having several aerials receiving different signals simultaneously, either to increase bandwidth, reliability or both. The technique is old news to the Wi-Fi crowd, but still leading edge in cellular.
In 4G networks, Mimo is used for two purposes: to create directional signals, and to increase the bandwidth available by sending multiple signals at the same time to be picked up by separate antennae. LTE, for example, uses the former technique to connect users at the edge of the cell, and the latter to provide greater bandwidth to those nearer the middle, at least in theory.
Huawei's E182: boosting HSPA to HSPA+ with Mimo
The problem with having multiple antennae is that they need to be physically separated, generally by at least a quarter of the wavelength being used. For 2.6GHz that's easy enough - a wavelength of around 12cm makes for aerials at least 3cm apart, ideally 6cm. But reduce the frequency into the digital dividend space and things get more crowded. A transmission at 600MHz has a wavelength of nearly 50cm, putting our antennae more than 12cm apart, which is tricky on a modern handset.
On the basestation things are easier, but only a little. A Mimo basestation can operate with antennae half a wavelength apart, but several times the wavelength works a lot better - up to ten wavelengths isn't considered excessive. That may be practical in some circumstances, though it does make a 600MHz basestation 5m wide.
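The spacing figures above are just wavelength arithmetic - wavelength is the speed of light divided by the frequency, and the minimum Mimo separation a quarter of that:

```python
# Antenna-spacing arithmetic: wavelength = c / frequency, and Mimo
# aerials want to sit at least a quarter-wavelength apart.
C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    return 100 * C / freq_hz

for label, freq in [("2.6GHz", 2.6e9), ("600MHz", 600e6)]:
    wl = wavelength_cm(freq)
    print(f"{label}: wavelength {wl:.1f}cm, "
          f"quarter-wave spacing {wl / 4:.1f}cm")
# 2.6GHz: wavelength 11.5cm, quarter-wave spacing 2.9cm
# 600MHz: wavelength 50.0cm, quarter-wave spacing 12.5cm
```

Which is why the digital dividend bands, attractive as they are for range and building penetration, make handset Mimo such an awkward proposition.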
In the past, governments, through their regulatory bodies, awarded radio spectrum to the groups that could most benefit the citizens governed, ensuring compatibility by mandating technologies and shifting radio spectrum around to provide competition where it was thought beneficial. More recently, governments have begun to see radio spectrum as a nice little earner, with the fall-back ideology that the highest bidder would have the biggest incentive to make efficient use of the spectrum. That process came to a head with the 3G auctions, where companies fell over each other to bid more money than they had for radio spectrum in which they were required to deploy a specific technology, across a specific proportion of the country, within a specific time.
Mobile WiMax Timeline
Many of the bidders failed in their obligations on coverage and deployments, though all followed the technology requirements. In the UK, all the operators, except Vodafone, bid for the 5MHz of spectrum allocated for TDD use. But no one is using TDD networks, and the licence forbids the deployment of anything else in the band, and so that part of the spectrum lies empty.
Examples like that, of which there are several, have helped push the regulator towards selling off radio spectrum without restriction, but it's still not possible for the regulator to avoid revealing a technological leaning. Taking Ofcom's plans for the Digital Dividend as an example: the vast majority of the spectrum will be sold off in paired lumps, ideally suited to an FDD technology like LTE but not ideal for the TDD system currently used by WiMax.
A couple of blocks will be sold off without pairing, but the delays in working all that out - and a couple of conveniently-timed court challenges from T-Mobile and Telefonica - have seen the sale of spectrum that might have been used for WiMax delayed until well after the standard's early lead had ceased to be relevant.
Just as radio spectrum has become more flexible, so the standards have become more encompassing. 4G mobile standards aren't happy just connecting mobile phones to basestations, they want to be used for backhaul too - connecting the microwave relays dotting the nation at very high speed and very high frequencies, not to mention competing with ADSL for fixed broadband connections. Both LTE and WiMax can operate on just about any frequency, and both are expected to be deployed in a wide variety of roles.
Bandwidth and latency: LTE vs HSPA
But LTE is more flexible than WiMax. It's able to operate in increments of spectrum from 1.25MHz up to 20MHz - a bandwidth that can offer 160Mb/s downstream using two antennae, or more than 300Mb/s if you can squeeze in four aerials. On the upstream, LTE promises 86Mb/s eventually, but like the downstream that will be phased in over time, with initial deployments offering closer to a quarter of that. WiMax may already be deployed, notably by Clearwire in the States, but those deployments are only offering 4Mb/s: a speed that's already attainable on a 3G network with HSPA, and hardly an advertisement for the next generation of technology. Rival US operators are committed to deploying LTE over the next few years.
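Those headline rates scale roughly with bandwidth and the number of spatial streams. A back-of-envelope Python sketch, assuming around 4 bits/s/Hz per stream - a figure inferred from the numbers above, not taken from the LTE specification:

```python
# Back-of-envelope LTE peak-rate scaling: bandwidth times spectral
# efficiency times spatial streams. The 4 bits/s/Hz-per-stream figure
# is an assumption chosen to match the rates quoted above; real rates
# depend on modulation, coding and protocol overhead.
BITS_PER_HZ_PER_STREAM = 4.0

def lte_peak_mbps(bandwidth_mhz: float, streams: int) -> float:
    return bandwidth_mhz * BITS_PER_HZ_PER_STREAM * streams

print(lte_peak_mbps(20, 2))  # 160.0 - two antennae in 20MHz
print(lte_peak_mbps(20, 4))  # 320.0 - four antennae (300+ as quoted)
```

The same arithmetic explains why narrow deployments in 1.25MHz or 5MHz slivers of spectrum can't hope to match the headline figures.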
In the UK, WiMax is only used for fixed connections, a space where it will probably continue to be deployed for a few years until the Intel dollars dry up - though LTE has aspirations there too.
Range considerations: WiMax vs HSPA
But nothing is going to happen until the middle of 2010 when Ofcom launches the mega-auction of Digital Dividend spectrum - between 400MHz and 800MHz - along with a chunk of 2.6GHz that should have been auctioned off last year, but couldn't because of T-Mobile and O2 legal actions that are only now being resolved. Once that spectrum is in private hands, we'll start to understand to what use those hands intend to put it, though deployments will probably wait until some time around 2012.
What isn't clear is if we'll see many phone handsets capable of switching between 2G, 3G and 4G technologies. In the US, WiMax is being pitched as a wireless data network for laptop computers, rather than a voice service, and it's probably computers that will drive LTE adoption rather than handsets, at least initially.
Few regions have sufficient 3G coverage to even contemplate switching off 2G services, though with greater spectrum liberalisation allowing 3G deployments at 900MHz and lower, that should change. So the only question is whether network operators will expand their 3G networks and fill the cities with LTE, or simply deploy LTE everywhere and quietly forget that 3G ever existed.
Modem makers are already gearing up for LTE
Image courtesy Mobil.se
WiMax will probably still be lingering around then: Intel has spent millions promoting the standard and won't walk away easily. But LTE is effectively unstoppable now and the standard has been formally endorsed by so many mobile operators as to guarantee its eventual domination of, and possible monopoly on, wireless communications in the long term. ®