WTF is... LTE Advanced?
Data download speeds up to 1Gbps and 500Mbps uploads - but how is it done...
Feature Britain now has a 4G network, run by EE, and others are being rolled out. We’re behind the curve, though.
The world’s first 4G network, based on the LTE (Long-Term Evolution) specification defined by mobile telecommunications standards-setter 3GPP (Third-Generation Partnership Project), went live at the very end of 2009, and many more followed in 2010 and 2011.
The standard hasn’t stayed still, either. LTE was defined in a specification called "3GPP Release 8", which dates back to 2008. The organisation is now thrashing out Release 12, but for 4G watchers, the more interesting specification is 2011’s Release 10. It defines the successor to LTE, LTE Advanced.
Release 10 was published early in 2011, so it’s now more than two years old. No wonder, then, that some of the world’s mobile phone networks have already begun to implement it. Russia’s Yota claimed it was the first, going live in October 2012.
However, there are, as yet, no handsets capable of taking advantage of it - nor any commercial chips on which such handsets could be based - though some chipmakers, such as Qualcomm, have promised to deliver them shortly. Trials are in place, but they use devices with pre-release or custom silicon.
China’s ZTE and China Mobile said they’d tested LTE Advanced and achieved a download speed of 223Mbps, but they didn’t detail the kit they’d used to get it.
Other carriers have already begun talking about putting in place parts of the standard, plans that prompted the 3GPP to insist this past April that operators must not make up LTE-based names for the elements of Release 10 they have cherry-picked. "LTE-A" and "LTE-B" have both been used by certain networks, a practice the 3GPP frowns upon.
Numerical superiority: LTE Advanced vs LTE
The problem is that Release 10 defines a number of complementary technologies, any of which can be adopted to deliver the higher data transfer speeds LTE Advanced is all about. Release 10’s goal is, simply, to get data download speeds up to 1Gbps (a peak spectral efficiency of 30 bits per second per Hertz) and uploads up to 500Mbps (15 bits per second per Hertz).
It uses a number of approaches to make this possible: making better, more efficient use of the spectrum, increasing the available bandwidth, and improving coverage to bring client devices closer to the base stations.
No, not that kind of carrier...
An example is "carrier aggregation", also known as "channel aggregation". In this case, "carrier" means the frequency bands being used to transmit a certain block of data, not the network operator. Each 3GPP Release 8 (and 9) band is subdivided into smaller sub-bands, the "carriers".
They can be set to any of six fixed widths: 1.4, 3, 5, 10, 15 or 20MHz, according to need, the state of the local radio environment, the capabilities of client and base station, and how far apart cell tower and phone are. Each carrier is used to transmit different data sets. The ZTE trial centred on carrier aggregation.
But the widest channel available to Release 8, 20MHz, isn’t sufficient to deliver 1Gbps downloads: you need at least 40MHz. Rather than define a new, wider channel size, LTE Advanced combines existing 3GPP Release 8 channels to ensure backwards compatibility. Aggregation allows a maximum of five "component" carriers to be used to send a single data set, increasing the peak bandwidth from 20MHz to at least 40MHz and, as defined by the standard, potentially 100MHz.
The standard says one of the aggregated channels becomes the primary component carrier, joined by one or more secondary component carriers. It’s the primary channel that hosts the Radio Resource Control (RRC) protocol data used to configure the connection.
The number of channels aggregated for uplinks doesn’t have to match the number of component carriers combined for downloads, though it cannot exceed it. Individual component carriers can be of different widths. Aggregation can be used on both FDD (Frequency Division Duplex) and TDD (Time Division Duplex) networks.
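The aggregation rules described above can be sketched in a few lines of code. This is an illustrative simplification, not code from any real LTE stack: the function name and structure are invented for the example.

```python
# A minimal sketch of the Release 10 carrier-aggregation rules described
# above: component carriers use Release 8/9 channel widths, at most five
# can be combined, and uplink aggregation can never exceed downlink.

ALLOWED_WIDTHS_MHZ = {1.4, 3, 5, 10, 15, 20}  # Release 8/9 channel widths
MAX_COMPONENT_CARRIERS = 5                     # Release 10 aggregation limit

def aggregate(downlink_mhz, uplink_mhz):
    """Validate a carrier-aggregation configuration and return the
    total (downlink, uplink) bandwidth in MHz."""
    for carriers in (downlink_mhz, uplink_mhz):
        if len(carriers) > MAX_COMPONENT_CARRIERS:
            raise ValueError("at most five component carriers")
        if any(w not in ALLOWED_WIDTHS_MHZ for w in carriers):
            raise ValueError("widths must be Release 8 channel sizes")
    # Uplink may aggregate fewer carriers than downlink, never more
    if len(uplink_mhz) > len(downlink_mhz):
        raise ValueError("uplink carriers cannot exceed downlink carriers")
    return sum(downlink_mhz), sum(uplink_mhz)

# Five 20MHz carriers down, two up: the full 100MHz downlink pipe
print(aggregate([20, 20, 20, 20, 20], [20, 20]))  # (100, 40)
```

Note that the component carriers need not be the same width: aggregating a 10MHz and a 20MHz carrier is just as legal as two 20MHz ones.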
Ideally, the 4G base station and the phone, tablet or laptop can agree to use carriers in adjacent frequencies in the same operating band, but that’s not always possible. Device A might want to aggregate carriers X and Y, but if device B, which is only compatible with 3GPP Release 8, is using Y, then A will have to make do with X and Z.
LTE Advanced not only permits the use of non-contiguous carriers within the same LTE band - the example I’ve just described - but also carriers in different bands altogether. Which extra bands can be used depends on which ones are supported by both the base station and the client device, and which carriers within them are available for aggregation.
Aggregating channels that are not adjacent, even if they’re in the same band, requires clients and base stations to incorporate separate transceivers for each non-contiguous component carrier.
Contiguous carriers, on the other hand, can be treated as a single channel from a radio perspective, so will only need one transceiver. Adding transceivers, especially to client devices, adds complexity and cost: the price of the extra transceivers, and of the bigger battery or power management system needed to keep runtime decent while driving them.
Carrier aggregation is LTE Advanced’s central speed-boosting technique, and it’s what will allow the standard to get data flowing at up to 1Gbps. But not without the help of MIMO, the use of multiple antennas to make more efficient use of a given band.
Finding MIMO: But it's NOT universally available
LTE already supports MIMO (Multiple Input, Multiple Output), the trick of using multiple antennas to send and receive multiple data streams on the same carrier in parallel.
LTE supports 4 x 4 MIMO: up to four transmitters and up to four receivers for downloading. Each receiver picks up the signals from all the transmitters, and the client’s radio uses clever signal processing to separate the various parallel data streams and to remove errors from them. Fewer transmission errors mean less data has to be re-sent, effectively speeding up the flow.
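That "clever signal processing" can be illustrated with a toy example. The sketch below is not real LTE baseband code - a genuine receiver also contends with noise, fading and coding - but it shows the core idea: each receive antenna hears a different mix of both transmit streams, and knowing the channel matrix lets the receiver un-mix them. The channel gains here are invented for the illustration.

```python
# Toy 2 x 2 MIMO stream separation: two symbols are sent at once on the
# same carrier, each receive antenna hears a weighted sum of both, and
# the receiver un-mixes them by inverting the 2x2 channel matrix H.

def solve_2x2(h, y):
    """Recover the two transmitted symbols from received mix y = H.x,
    using the closed-form inverse of the 2x2 channel matrix H."""
    (a, b), (c, d) = h
    det = a * d - b * c
    assert det != 0, "channel must be invertible to separate streams"
    x0 = (d * y[0] - b * y[1]) / det
    x1 = (a * y[1] - c * y[0]) / det
    return x0, x1

# Two symbols sent simultaneously on the same carrier...
x = (1.0, -1.0)
# ...through a hypothetical channel: each antenna path has its own gain
H = ((0.9, 0.3),
     (0.2, 0.8))
# Each receive antenna picks up a weighted sum of both streams
y = (H[0][0] * x[0] + H[0][1] * x[1],
     H[1][0] * x[0] + H[1][1] * x[1])
recovered = solve_2x2(H, y)
print(recovered)  # (1.0, -1.0): both streams separated
```

The same principle scales to 4 x 4 and 8 x 8 matrices, at the cost of considerably heavier matrix arithmetic in the client’s radio.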
LTE Advanced takes MIMO further. It supports 4 x 4 MIMO for uploading, but downloading can take place over an 8 x 8 antenna matrix, though that’s not a configuration you’re likely to see in the real world for some time, at least not in mobile devices.
Adding antennas increases the size and complexity of the radio sub-system, which is generally not something manufacturers of handheld devices are keen to do. But 4 x 4 and 8 x 8 MIMO may very well be used to improve point-to-point links between cell towers, sub-cell infrastructure and other fixed pick-ups. The standard has aspirations to replace proprietary microwave links and backhaul standards too, and MIMO is the technique that gives it a chance of doing so.
Aggregating component carriers to give between 40MHz and 100MHz of bandwidth yields peak data rates of 300Mbps to 750Mbps using 2 x 2 MIMO. Release 8 LTE’s peak bandwidth of 20MHz can deliver 300Mbps, but only with 4 x 4 MIMO.
Clearly, LTE Advanced’s channel aggregation can deliver a comparable bit-rate with a much less complex antenna array - and better still as more aerials are connected. Aggregating channels to provide 100MHz of bandwidth and running them over 4 x 4 MIMO delivers that promised peak speed of 1Gbps.
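The arithmetic behind those figures can be reproduced with a back-of-envelope model. The per-stream efficiency below is an assumption derived from Release 8’s quoted 300Mbps over 20MHz with 4 x 4 MIMO; real peak rates depend on modulation, coding and protocol overhead, which this toy calculation ignores.

```python
# Back-of-envelope peak rates: bandwidth x MIMO streams x an assumed
# ~3.75 bit/s/Hz per spatial stream (implied by Release 8's 300Mbps
# over 20MHz with 4 x 4 MIMO). Illustrative only - not a link budget.

PER_STREAM_EFFICIENCY = 3.75  # bit/s/Hz per spatial stream (assumed)

def peak_mbps(bandwidth_mhz, streams):
    """Idealised peak rate in Mbps for a given aggregate bandwidth
    and number of parallel MIMO streams."""
    return bandwidth_mhz * streams * PER_STREAM_EFFICIENCY

print(peak_mbps(20, 4))   # 300.0 - Release 8 LTE, 4 x 4 MIMO
print(peak_mbps(40, 2))   # 300.0 - minimal aggregation, 2 x 2 MIMO
print(peak_mbps(100, 2))  # 750.0 - full 100MHz aggregation, 2 x 2 MIMO
```

Run the same model for 100MHz with four streams and it comfortably clears the 1Gbps goal - the standard’s headline figure allows for real-world overheads this simplification leaves out.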
MIMO is only practical close to the base station
MIMO isn’t universally available: it only kicks in when the base station detects a good signal-to-noise ratio, which is usually the case when the client device is close to the base station. That leaves devices that are further away at a disadvantage, especially those on the edge of the cell.
The obvious solution is to increase the number of base stations, in turn making each cell smaller and so bringing each client closer to a base station. Unfortunately, installing base stations is expensive. A cheaper approach is to implement "Relay Nodes": junior base stations without their own backhaul that work with their cell’s central tower - called the "Donor Node" in the jargon - to improve performance throughout the cell.
LTE Advanced allows LTE networks to make use of these relay stations. They are not repeaters, re-broadcasting the signal sent from the client device, but base stations in their own right, able to decode and process that signal and re-transmit its content to the main base station. The system has to be configured to minimise self-interference - to make sure the receive antenna isn’t picking up what’s being sent out on the transmit antenna. Ideally, the Relay Node is designed to keep these antennas isolated, but that’s not always possible, so nodes can be configured to time-share the links between Relay Node and client, and between Relay Node and Donor Node - a half-duplex relay rather than a full-duplex one.
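The half-duplex arrangement can be pictured as a simple time-slot schedule. The alternating layout below is invented for illustration - real relay subframe configurations are more elaborate - but it captures the principle: the backhaul and access links are never active in the same slot, so the relay never receives and transmits on conflicting paths at once.

```python
# A minimal sketch of half-duplex relay timing: the Relay Node
# alternates between listening to the Donor Node (backhaul link) and
# serving the client (access link). The strict alternation is
# illustrative; real subframe patterns are negotiated per deployment.

def half_duplex_schedule(n_slots):
    """Assign each time slot to either the backhaul link (Donor->Relay)
    or the access link (Relay->Client)."""
    schedule = []
    for slot in range(n_slots):
        link = "backhaul" if slot % 2 == 0 else "access"
        schedule.append((slot, link))
    return schedule

for slot, link in half_duplex_schedule(4):
    print(slot, link)
# Slots 0 and 2 carry backhaul traffic, 1 and 3 carry access traffic:
# the two links never operate simultaneously, avoiding self-interference
```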
Client, Relay and Donor need not operate in the same band, and they don’t even need to appear to the client as a cell in their own right, though they can be set to do so if that’s advantageous to a given installation.
Size doesn't matter
Being closer to the client, Relay Nodes don’t require as much transmit power as a full base station, so they don’t need to be large. And with no need for a wired connection, they can be rolled out more quickly - even, some suppliers suggest, on an almost ad hoc basis, perhaps to support users in a newly provisioned area until a full-scale base station can be put in place.
And while they can be used to improve connectivity at the edge of cell, Relay Nodes can also be used to extend cell coverage in specific directions, with a series of them defining a limb that reaches beyond the edge of the Donor Node’s circular cell, for instance. That can be handy in areas where geography hinders circular cells - down valleys, say.
Relay Nodes are part of a broader LTE Advanced technique called Heterogeneous Networks - or HetNets. Beyond Relay Nodes, 3GPP Release 10 envisages the addition of "picocells", mini base stations with their own backhaul. Release 10 has techniques in place to manage the interference these extra sub-cells generate and to decide which of them - or the primary base station - clients will connect to.
Again, it’s about bringing users closer to base stations to make techniques like 2 x 2 MIMO feasible and to improve performance. In turn, this reduces the number of clients communicating with the prime base station, improving performance for them too. It makes access more fair, HetNet proponents say.
Qualcomm, which is in the picocell business, reckons a 180 per cent increase in a cell’s average data rate can be achieved just by complementing a main cell tower with four randomly placed picocells and by using LTE Advanced’s methods for making the most efficient use of them. Doing the same thing in an LTE environment would only yield a 40 per cent improvement, it says.
Another key advantage of HetNets is the ability to power down the sub-cells when they're not needed. A HetNet covering Wembley Stadium, for example, would normally consist of only a single base station. Add sub-cells - picocells and relay nodes - and these can be powered up when the main station starts to get overloaded when an event is being hosted, and can be shut down at other times.
This makes the infrastructure more responsive to demand - sub-cells are much cheaper to roll out than main cells - without the network having to run at peak capacity at all times.
All well and good, but putting the higher speeds LTE Advanced makes possible into the hands of punters is still some way off. Of the UK networks, only EE has voiced enthusiastic support for LTE Advanced, though it’s surely on all the operators’ roadmaps.
US carrier AT&T has indicated it will begin testing 3GPP Release 10 technologies this year, while T-Mobile has hinted it might be one of the first carriers in the States to implement them commercially, thanks to its relatively late implementation of LTE: its kit is newer and therefore more readily upgradeable. Down Under, Telstra has said it will introduce LTE Advanced to Australians later this year too. ®