VISION 2020 – Projecting Forward, a Computing Perspective

Dr. Hossein Eslambolchi
April, 2012

We have been on the bandwidth bandwagon ever since Claude Shannon developed communication theory.

His theory describes communication between computers or between humans and computers. In either case, his channel-capacity result makes the trade-off explicit: capacity grows linearly with bandwidth but only logarithmically with signal power, so if you are forced to pick between bandwidth and power, you pick bandwidth and compensate for power later. This premise led directly to wire-line and wireless technologies that continue to exceed the pace of Moore’s law of computing. Every person and every business is hungry for bandwidth. When you consider the bandwidth-intensive technologies that are around the corner — think about viewing 3D video on various IP end points — you can understand that hunger.
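
To see why bandwidth wins that trade, here is a minimal numeric sketch of the Shannon–Hartley capacity formula, C = B·log2(1 + S/N); the 1 MHz channel and 20 dB signal-to-noise ratio are arbitrary illustrative values:

    import math

    def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon-Hartley channel capacity in bits per second."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    base = shannon_capacity(1e6, 100)  # 1 MHz channel, SNR = 100 (20 dB)
    print(f"baseline:         {base / 1e6:.1f} Mbps")                        # ~6.7 Mbps
    print(f"double bandwidth: {shannon_capacity(2e6, 100) / 1e6:.1f} Mbps")  # ~13.3 Mbps
    print(f"double power:     {shannon_capacity(1e6, 200) / 1e6:.1f} Mbps")  # ~7.7 Mbps

Doubling the bandwidth doubles the capacity, while doubling the transmit power buys only about 15 percent more, which is exactly why the industry keeps chasing bandwidth.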

We have seen steady improvements in wire-line bandwidth, progressing from dial-up landline speeds to high-speed fiber-to-the-home technologies. As for wireless, we have moved from GPRS, with tens of kbps, to LTE-Advanced, which can provide peak bandwidth of 1 Gbps.

Consider Moore’s law. In its popular form, it states that performance per dollar doubles roughly every 18 to 24 months: you get twice the capability for the same price, or the same capability for half the price. Compounded, that is somewhere between a thirty-fold and a hundred-fold improvement every decade. In 1980, we had about 256 KB of available RAM — even Bill Gates reportedly said in the mid-1980s that we had enough memory for the rest of our lives. Obviously, this did not turn out to be the case. By 2000, standard laptop computers shipped with 256 MB of RAM. If we assume Moore’s law stays on track and use it to project RAM capacity forward, we can clearly see 256 GB of RAM in our future — a massive amount of memory compared to today’s computers, which are typically configured with up to 16 GB of RAM.
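
A minimal sketch of the arithmetic behind that projection, using the RAM figures above (the helper functions and doubling-period assumption are mine, for illustration only):

    import math

    def doubling_period_years(v_start: float, v_end: float, years: float) -> float:
        """Doubling period implied by growth from v_start to v_end over `years` years."""
        return years / math.log2(v_end / v_start)

    def extrapolate(value: float, years: float, period: float) -> float:
        """Project `value` forward by `years` at one doubling every `period` years."""
        return value * 2 ** (years / period)

    KB, MB, GB = 1.0, 1024.0, 1024.0 ** 2          # work in kilobytes

    period = doubling_period_years(256 * KB, 256 * MB, 2000 - 1980)
    ram_2020 = extrapolate(256 * MB, 2020 - 2000, period)

    print(f"implied doubling period: {period:.1f} years")   # ~2.0 years
    print(f"projected 2020 RAM: {ram_2020 / GB:.0f} GB")    # 256 GB

The historical data implies a doubling period of about two years; carrying that rate forward for two more decades lands squarely on 256 GB.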

While it may seem hard to believe, 256 GB of RAM is not out of the question. I believe that when true quantum computing arrives, this number could be even higher than Moore’s law predicts.

Consider disk space. In 1980, we could muster about 10 MB of disk space. Applying Moore’s law predicted 10 GB of disk space by 2000, which is exactly what we wound up with. Moore’s law wins again. Let’s take this example forward another step: by the year 2020, I predict that disk space will have reached 10 TB. With solid-state technologies, we can assume the figure will be higher still, especially with quantum computing.

Finally, let’s look at access bandwidth. In 1980, we all had to deal with analog modems offering 1.2 kbps. By 2000, companies like AT&T, Verizon, and Covad offered cable and DSL technologies that reached 1.2 Mbps. Let’s extrapolate those speeds. We can imagine that, by the year 2020, roughly 1 Gbps will be available over fiber, bonded cable head-end systems such as CMTS 4.0, and wireless systems; LTE-Advanced will offer 1 Gbps of peak throughput.
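
All three curves (RAM, disk, and access bandwidth) grow roughly a thousandfold per two decades, so one back-of-the-envelope extrapolation covers them all. A compact sketch, with starting values taken from the figures above:

    # Each series: (unit, 1980 value, 2000 value), taken from the figures in the text.
    SERIES = {
        "RAM":              ("KB",   256,  256 * 1024),   # 256 KB -> 256 MB
        "disk":             ("MB",   10,   10 * 1024),    # 10 MB  -> 10 GB
        "access bandwidth": ("kbps", 1.2,  1.2 * 1000),   # 1.2 kbps -> 1.2 Mbps
    }

    for name, (unit, v1980, v2000) in SERIES.items():
        growth = v2000 / v1980          # growth over the first two decades
        v2020 = v2000 * growth          # assume the same growth repeats by 2020
        print(f"{name:>16}: {v2020:,.0f} {unit} projected for 2020")

    # Output works out to ~256 GB of RAM, ~10 TB of disk, and ~1.2 Gbps of access bandwidth.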

The nature of access bandwidth is changing for both consumers and businesses. Consumers will have access to what I call the “high-speed train of the future.”

By 2020, consumers will expect broadband to deliver 1+ Gbps. Only fiber and/or narrow-beam fixed wireless will be capable of meeting these requirements (cellular and licensed-spectrum solutions will be too expensive or too short-range). I also predict that in-building connections will be wireless. Continuing to run fiber through the walls, at least in homes, is not a practical approach. The proliferation of connected smart devices, including sensors, will ease us into a totally wireless, virtual world.

In summary, the definition of broadband and its applications is undergoing rapid change, with new technologies and applications on the horizon:

  1. In the mid-80s, I predicted that IP would eat everything and change the model of network deployment. My observation was based on my experience in the circuit-switched industry, which was not as innovative as the IP-based networks of today. For example, IP separates the service layer from the network layer; previously these two layers were intertwined. That is why we had separate wired and wireless systems, each with its own network and back-office infrastructure.
  2. With this separation of the service and network layers, the service layer can be agnostic to the network layer, allowing one service platform to serve many networks — offering the same service over both wired and wireless networks. This is one reason for some of the telecommunications consolidations and synergies that we continue to hear about.
  3. IP is much more robust and cost-efficient now than it once was, improving from just 1.5 nines to 4.5 nines of reliability (see the sketch after this list for what that difference means in annual downtime).
  4. IP, together with market-driven demand, has reduced data costs from cents per bit to less than a few nanocents per bit. The key is scale; this is another factor that contributes to telecommunications consolidation.
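
To make the reliability jump in item 3 concrete, here is a minimal sketch that converts “nines” of availability into expected annual downtime, using the usual convention that N nines means an availability of 1 − 10^(−N):

    MINUTES_PER_YEAR = 365 * 24 * 60

    def downtime_minutes_per_year(nines: float) -> float:
        """Expected annual downtime implied by an availability of `nines` nines."""
        unavailability = 10 ** (-nines)          # e.g. 3 nines -> 99.9% available
        return unavailability * MINUTES_PER_YEAR

    for nines in (1.5, 3.0, 4.5):
        print(f"{nines} nines -> ~{downtime_minutes_per_year(nines):,.0f} minutes of downtime per year")

At 1.5 nines a service is down roughly 16,600 minutes (about eleven and a half days) a year; at 4.5 nines, about 17 minutes.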


Along with these predictions, I believe security will present big issues. If we do not pay attention to it, we will end up embroiled in a cyber-war of biblical proportions. Unfortunately, no one is looking at this problem holistically. It requires legislators to create rules that protect privacy. Information on both government computers and consumer electronics is now open to attack: iPhones and any smartphone or device connected to the Internet — even IPTV — is vulnerable. Can you imagine hackers attacking the IPTV system and shutting down the Super Bowl broadcast? It’s possible!

These same concerns apply to cable, fiber-to-the-home and fiber-to-the-curb technologies.

We need to design a network that is totally cloaked (I freely admit to stealing this phrase from the Klingons of Star Trek fame). If we can make an aircraft invisible by using certain materials to evade radar, then why shouldn’t we focus on cloaking infected computers, ensuring that they cannot affect other machines on the Internet? Current security technologies are all after-the-fact, but I envision a world where security will follow a 3P Model, sketched in code after the list below:

  1. Proactive — Systems proactively scan for flaws and use access control lists (ACLs) to restrict access to software and systems in both enterprises and tier-1 service providers.
  2. Preventative — Networks adopt a set of rules to help prevent any attack on any element connected to the Internet. Stochastic processes will also be a great help. Intrusion Detection Systems (IDS) are practically useless, yet companies continue to sell these technologies even though they offer little real protection.
  3. Predictive — Systems use real-time data and deep-packet inspection to look for patterns of attacks — to get into the minds of hackers and stop them before they launch. I call this knowledge mining, and I have encountered few companies that grasp the concept and apply the principles effectively. Government assets may need to use this technique to prepare for future cyber-attacks, an effort that will require federal legislation. This is the only way to avoid major cyber wars in the 21st century. If not addressed, the negative impact of cyber wars could be colossal for humankind.
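
Here is a rough skeleton of how the three layers might fit together in code. It is a sketch of the idea only; the class, rule set, and signatures below are hypothetical, not a real product or API:

    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        src_ip: str
        dst_port: int
        payload: bytes

    @dataclass
    class ThreePSecurity:
        """Hypothetical illustration of the 3P model: proactive, preventative, predictive."""
        acl_allowed_ports: set = field(default_factory=lambda: {80, 443})
        known_signatures: tuple = (b"exploit-shellcode", b"worm-beacon")   # illustrative patterns
        blocked_sources: set = field(default_factory=set)

        def proactive_scan(self, open_ports: set) -> set:
            # Proactive: flag open ports that the ACL does not explicitly allow.
            return open_ports - self.acl_allowed_ports

        def preventative_filter(self, pkt: Packet) -> bool:
            # Preventative: admit traffic only if it satisfies the ACL and is not from a blocked source.
            return pkt.dst_port in self.acl_allowed_ports and pkt.src_ip not in self.blocked_sources

        def predictive_inspect(self, pkt: Packet) -> bool:
            # Predictive: deep-packet inspection for known attack patterns; block the source on a hit.
            if any(sig in pkt.payload for sig in self.known_signatures):
                self.blocked_sources.add(pkt.src_ip)
                return True
            return False

In practice these three layers would be distributed across scanners, firewalls, and DPI engines rather than a single class, but the division of labor is the same.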


Let me focus for a moment on cloaking, shielding and quarantines.

To protect the network of the future, we must anticipate malicious attacks well in advance. These attacks could be launched against the core infrastructure of the network, edge devices on the network, or both. A virtual quarantine isolates infected elements from the rest of the network, stopping the spread of the infection to other areas.

How can we accomplish this vision for future security?

With access to large volumes of IP data traffic, we can carry out forensic analysis of network traffic using deep packet inspection (DPI). This analysis detects patterns that may be early indicators of a new worm, virus, or malware attack. That information could then be used to shield network elements from the attack in proactive ways and to isolate and quarantine the specific elements that are already infected. This is an attractive proposition, but if it is to work, a difficult question needs to be addressed: how do we communicate with network elements and edge devices that may already be infected?
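
As a toy illustration of the kind of pattern such analysis looks for: a host that suddenly contacts far more distinct destinations than usual is a classic early signal of worm scanning. The window and threshold below are arbitrary assumptions, not tuned values:

    from collections import defaultdict

    FANOUT_THRESHOLD = 50   # distinct destinations per observation window (illustrative)

    def suspicious_sources(flow_records):
        """flow_records: iterable of (src_ip, dst_ip) pairs seen in one time window."""
        fanout = defaultdict(set)
        for src, dst in flow_records:
            fanout[src].add(dst)
        # Flag sources whose fan-out exceeds the threshold: candidate worm scanners.
        return {src for src, dsts in fanout.items() if len(dsts) >= FANOUT_THRESHOLD}

    # Example: one infected host sweeping a /24 stands out against normal traffic.
    flows = [("10.0.0.5", f"192.168.1.{i}") for i in range(200)] + [("10.0.0.9", "192.168.1.1")]
    print(suspicious_sources(flows))   # {'10.0.0.5'}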

My model for successful quarantines begins with the following steps (a rough code sketch follows the list):

  1. A malicious attack is launched.
  2. The networked PC blocks the IP address from which the attack was launched, somewhere on the Internet.
  3. Edge networks isolate newly attached, infected PCs from the rest of the network.
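
A minimal sketch of this flow, collapsing steps 2 and 3 into a single hypothetical edge-network object (all names and structures here are illustrative, not an existing system):

    from dataclasses import dataclass, field

    @dataclass
    class EdgeNetwork:
        blocked_external_ips: set = field(default_factory=set)   # step 2: block attack sources
        quarantined_hosts: set = field(default_factory=set)      # step 3: isolate infected PCs

        def handle_attack(self, attacker_ip: str, infected_hosts: list) -> None:
            # Step 2: block the external address the attack was launched from.
            self.blocked_external_ips.add(attacker_ip)
            # Step 3: quarantine local hosts that are already infected.
            self.quarantined_hosts.update(infected_hosts)

        def allows(self, src_ip: str, local_host: str) -> bool:
            """Traffic passes only if the source is not blocked and the host is not quarantined."""
            return src_ip not in self.blocked_external_ips and local_host not in self.quarantined_hosts

    edge = EdgeNetwork()
    edge.handle_attack("203.0.113.7", ["10.0.0.5"])      # step 1: an attack is detected
    print(edge.allows("203.0.113.7", "10.0.0.8"))        # False: blocked attack source
    print(edge.allows("198.51.100.2", "10.0.0.5"))       # False: quarantined host
    print(edge.allows("198.51.100.2", "10.0.0.8"))       # True:  clean traffic still flows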


Using new high-speed computation and insights culled from DPI, we will be able to execute these security measures on an (almost) real-time basis. Doing so will prevent massive damage and insulate the public and government from enormous problems. And by “enormous”, I mean trillions of dollars lost by businesses and consumers.

Finally, consider the importance of sensor networks. With the advent of near-field communication (NFC) on various smartphones, and the continuing development and deployment of radio-frequency identification (RFID), more information will be bubbling up at the edge of the network. This enormous amount of data will feed back to central sites, like distribution centers and corporate headquarters.

In fact, it’s quite possible that current information-exchange patterns will soon be reversed. For the first time, more information will be sent from the network’s edge back to central sites, rather than the other way around. This shift will automatically drive symmetrical broadband deployment toward the edge of the network. When RFID tagging reaches the individual-unit level, the information flow from sensors could overshadow other network applications such as voice, data, wireless, and video. If and when this happens, novel network architectures that can adapt to this state of affairs will be obligatory.