This is the first in a series of blog posts in which I and my good friend Dan Picker, CTO of Purewave Networks, will discuss the needs, challenges and huge potential advantages of what is gearing up to be a revolutionary new approach to the design and deployment of broadband cellular networks. These so-called “Small Cell” networks have the potential to support much higher data usage per user than could practically be achieved with existing cellular networks or traditional approaches to network infrastructure build-outs.
The traditional cellular network paradigm of colossal “macro” base stations and large cell radii, in support of high-coverage networks that carry low-bandwidth voice and messaging traffic, was never conceived to serve the demands of today’s mobile broadband users and applications, let alone the predicted exponential data explosion, which has barely begun. As this demand for mobile data delivery has continued to grow, networks have become congested, prompting their continued migration to ever more efficient wireless standards, the latest of these being LTE, or “Long Term Evolution.”
However, even the most cutting-edge wireless technology, such as LTE, can effect only incremental improvements in spectral efficiency, compared to the exponential growth in data throughput demand. Moreover, spectrum is an extremely limited (and expensive) resource that cannot simply be scaled to compensate. Therefore, existing wireless networks are quickly reaching their physical limit, requiring a new approach to wireless network architecture.
While macro base stations continue to provide a good solution for “umbrella coverage,” particularly with respect to voice and high-speed mobility, it is now crystal clear that the most promising and accessible solution to the growing data consumption need is to re-architect broadband networks to employ much smaller cell sizes, which facilitate dramatically higher capacity simply through improved spectral re-use.
To illustrate this, consider a single cell site that has been assigned a particular allocation of spectrum. That chunk of spectrum can carry a corresponding amount of data traffic – the larger the chunk of spectrum, the higher the data capacity it can sustain. Obviously, the larger the cell radius, the larger the population that will share that capacity, which results in a lower average data rate per user. Conversely, the smaller the cell radius, the higher the average data rate per user.
Continuing with the same illustration, if additional, non-overlapping cells are then added to the network, they may re-use the same chunk of spectrum and, in doing so, effectively multiply the number of users that can receive a similar data capacity. Simply stated, the more non-overlapping cells that can be accommodated in a given geographical area, the higher the aggregate data capacity that may be delivered within that area.
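The arithmetic behind this illustration can be sketched in a few lines. The numbers below (a 20 MHz allocation, a flat 1.5 bps/Hz spectral efficiency, and the user counts) are hypothetical placeholders, not measured values; real capacity depends on modulation, interference, scheduling and load.

```python
# Back-of-the-envelope model of spectral re-use. All numbers are
# illustrative assumptions, not real network measurements.

def cell_capacity_mbps(spectrum_mhz, spectral_efficiency_bps_per_hz=1.5):
    """Data capacity a single cell can sustain over its spectrum allocation."""
    return spectrum_mhz * spectral_efficiency_bps_per_hz  # Mbps

def avg_rate_per_user(cell_capacity_mbps, users_in_cell):
    """Average data rate when the cell's capacity is shared among its users."""
    return cell_capacity_mbps / users_in_cell

spectrum = 20.0                            # MHz assigned to each cell
capacity = cell_capacity_mbps(spectrum)    # capacity of one cell, in Mbps

# One macro cell covering 1000 users vs. ten non-overlapping small cells
# covering 100 users each. Each small cell re-uses the same spectrum chunk.
macro_rate = avg_rate_per_user(capacity, 1000)   # per-user rate, macro case
small_rate = avg_rate_per_user(capacity, 100)    # per-user rate, small-cell case
aggregate_small = 10 * capacity                  # total capacity in the area
```

With these assumptions, splitting one macro cell into ten non-overlapping small cells multiplies both the per-user rate and the aggregate area capacity by ten, purely through re-use of the same spectrum.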
The easy conclusion here is that smaller cells allow greater capacity per geographical area. This is, however, an inaccessible advantage if the cost and complexity of such a deployment are prohibitive. In that case, small cells would become a viable possibility only for the most troublesome hotspots and not a general coverage solution. Fortunately this is not the case, and coverage needn’t be sacrificed.
Generally speaking, the lower the RF transmission power, the smaller the cell radius, so it is possible to scale up capacity by lowering transmit power. Another general truth is the lower the transmit power of a base station, the less costly and the smaller its size (less surface area required for convective cooling). This creates an additional win-win scenario. Small size is critical, especially for typically highly visible, high-density urban deployments. Similarly, low cost is a critical enabler to high-density small cell deployments, as they would otherwise be impractical.
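The relationship between transmit power and cell radius can be made concrete with a standard log-distance path-loss model. The parameter values below (receiver sensitivity, reference path loss, path-loss exponent, and the two transmit powers) are illustrative assumptions chosen only to show the trend, not calibrated link-budget figures.

```python
# Sketch of how transmit power sets cell radius, using the log-distance
# path-loss model PL(d) = PL0 + 10*n*log10(d/d0). All parameter values
# are illustrative assumptions, not a real link budget.

def cell_radius_m(tx_power_dbm, rx_sensitivity_dbm=-100.0,
                  pl0_db=40.0, d0_m=1.0, path_loss_exp=3.5):
    """Approximate cell-edge radius: the distance at which received power
    falls to the receiver's sensitivity threshold."""
    max_path_loss_db = tx_power_dbm - rx_sensitivity_dbm
    return d0_m * 10 ** ((max_path_loss_db - pl0_db) / (10 * path_loss_exp))

macro_radius = cell_radius_m(tx_power_dbm=46)   # macro-class power -> km scale
small_radius = cell_radius_m(tx_power_dbm=24)   # small-cell power -> ~250 m
```

Under these assumptions, dropping transmit power from 46 dBm to 24 dBm shrinks the cell-edge radius from roughly a kilometer to a few hundred meters, which is exactly the lever that lets many more non-overlapping cells fit into the same area.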
It makes sense to mention some additional aspects of cost, beyond just equipment cost. Installation is a key component of infrastructure CAPEX, and there will be many more installation points for small cells than for macro sites. However, their small form-factor allows them to be mounted easily and flexibly virtually anywhere – on utility poles, flag poles, rooftops, small buildings, walls and even in indoor enterprise and venue environments. Compare this to macro base stations, which typically involve tower climbs, long and lossy cable runs, much larger and more costly antennas and oftentimes additional infrastructure, such as air-conditioned enclosures. Industry installation time targets for small cells tend to be on the order of an hour or less, compared to macro base station installations, which are often on the order of days.
In terms of OPEX, each small cell site will of course require much less power than traditional macro sites, and rents will generally be lower for this smaller, lighter equipment. Additional benefits can be achieved by including advanced technology such as multi-carrier and even multi-operator support into small cells, and this will be explored in future blog topics.
Finally, network planning and optimization affect both CAPEX and OPEX, and this is where Self-Organizing Networks (SON) step up. SON algorithms offer the promise of not only plug-and-play provisioning, but also continuous optimization and reorganization of the network. This means that as additional sites are brought up, or as external factors arise with the potential to affect network performance, the SON algorithms will continually work to automatically keep performance optimized. This includes the adjustment of parameters such as channel selection and transmission power, among many other things. SON is a very exciting but still evolving technology, which means that early small cell deployments will enjoy continuous performance improvement as time goes on.
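One of the SON behaviors mentioned above, automatic channel selection at power-up, can be sketched as a simple greedy rule: a newly installed small cell listens for its neighbors and picks the channel with the fewest co-channel cells nearby. The function name and structure here are illustrative, not any vendor's actual SON algorithm, and real SON implementations weigh many more factors (measured interference, load, handover relations, and so on).

```python
# Hedged sketch of one SON self-configuration step: automatic channel
# selection on power-up. Purely illustrative, not a standardized algorithm.

from collections import Counter

def pick_channel(neighbor_channels, available_channels):
    """Choose the available channel used by the fewest detected neighbors,
    to minimize co-channel interference when the new cell comes up."""
    usage = Counter(neighbor_channels)           # how busy each channel is
    return min(available_channels, key=lambda ch: usage[ch])

# A new small cell hears neighbors on channels 1, 1 and 2, so it brings
# itself up on the unused channel 3.
channel = pick_channel([1, 1, 2], available_channels=[1, 2, 3])
```

The same pattern extends naturally to the continuous re-optimization described above: periodically re-measure neighbors and switch channels (or trim transmit power) when conditions change.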
Challenges certainly still exist. The biggest impediment to practical and cost-effective small cell deployments can be backhaul, which often needs to be wireless to be affordable, and one size never fits all in this respect (non-line-of-sight vs. line of sight, licensed vs. unlicensed, etc). Fortunately, there is an entire industry working to solve this problem and there are already many choices to satisfy any type of deployment scenario.
Equipment size and cost, although already disruptive, can certainly be greatly improved as new processor, RF, filter and antenna technology is developed. The industry is working very hard on these components of small cell infrastructure and progress is steadily being made. I will discuss some of these efforts and advancements in future blog topics.
Frankly, the biggest impediment to the benefits that can be enjoyed by the wide scale deployment of small cells – both indoor and outdoor – is the tentativeness of many cellular operators in changing the paradigm and diving in. It would appear that most of the world’s largest operators are finally convinced that small cells will be a huge part of their future success. Indeed, several operators are aggressively and enthusiastically making plans and projections for their small cell deployments. However, only a few are starting to dip their toes into the water, and many are dragging their feet and simply watching. For this reason, relatively few large-scale rollouts have yet begun.
Many operators believe that indoor deployments are the place to start, as they pose less of an interference threat to the existing macro network. This is a sensible plan, and much knowledge can be gained from indoor small cell deployments. However, whereas indoor WiFi is quickly becoming ubiquitous in public spaces, outdoor ultra-high-speed data capacity is still sparse on a global scale. It is my view that operators that fail to deploy small cells as a primary part of their 4G network rollouts, and that instead intend to rely upon them for their eventual in-fill needs, will find that they have built a network that will have difficulty profitably scaling to the future data capacity demands of its customers. That is an eventuality no operator will want to realize.
Dr. Eslambolchi, Dr. Picker