Playing at the Edge of the Network

When Distance Matters

A new class of low-latency applications is surfacing that will generate unprecedented demand for computing power and real-time data exchange. For example, the latency for refreshing the images on a virtual reality headset must be around 10 milliseconds. To stay within that budget, today's virtual reality gaming headsets need bulky hardware to control sensors and run compute-intensive software.

The rising expectation of rich, immersive experiences means that existing cloud architectures will not be able to efficiently handle the demands of multiplayer gaming, high-definition streaming entertainment, robotics, and other service-heavy delivery use cases. The idea behind edge computing and intelligence is to move beyond the limits of device processing and the latency of cloud computing. For example, Zeiss, a leader in the field of optics and optoelectronics, is working on smart glasses that display data and video in the wearer's field of vision. Other applications are waiting to be designed, including cars that can talk to each other and avoid collisions with greater assurance, or industrial robots that can be manipulated over the network.

The only way to meet this demand is to look past the limits of device processing and the latency of cloud computing, and instead bring compute power closer to the user: to the edge of the network (or internet), which could well be the network operators' domain.

Apart from the obvious latency benefit of offloading computing to a neighborhood data center, edge compute also gives mobile and edge application developers the flexibility to leverage the benefits of cloud computing without having to worry about jitter, congestion, or draining the device battery by computing on the device itself.
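
To make the latency case concrete, here is a minimal back-of-the-envelope sketch in Python comparing placement options against the 10-millisecond refresh budget mentioned above. Every number in it is an illustrative assumption, not a measurement.

```python
# Illustrative latency budget check for a 10 ms refresh target.
# All figures below are assumptions for the sake of the example.
BUDGET_MS = 10.0

# (one-way network latency in ms, compute time in ms) per placement option
options = {
    "on-device":         (0.0, 12.0),  # no network hop, but a weak mobile GPU
    "neighborhood edge": (2.0, 3.0),   # nearby data center with accelerators
    "regional cloud":    (25.0, 3.0),  # same accelerators, distant region
}

for name, (one_way_ms, compute_ms) in options.items():
    total = 2 * one_way_ms + compute_ms  # request + response + processing
    verdict = "within budget" if total <= BUDGET_MS else "misses budget"
    print(f"{name:>17}: {total:5.1f} ms -> {verdict}")
```

Under these assumed numbers, only the neighborhood edge meets the budget: the device is compute-bound, and the regional cloud is network-bound.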

When distance matters between a request and a response, applications must reside close to the point of consumption. However, the edge does not have to be on the headset or in the home. The edge can be on the network, where operators have a unique advantage: they can optimize the hops, including the air interface, to avoid unwanted latency.
Walid Negm, chief technology officer, Aricent

Moving compute to the edge will unleash the potential of latency-sensitive applications. However, there are still some hurdles to overcome:

Attracting developers onto a brand-new platform

Developers will have no reason to build for the edge unless they see a compelling need for "proximity computing" and "low and stable latency", together with a simple, easy way to onboard their applications to an edge compute platform. A vendor's edge platform must attract developers by making it ridiculously easy to onboard applications. For example, the platform should aid in the discovery of proximity data centers and should be able to provision applications rapidly. Application developers will expect a platform that hides the complexities of a service provider architecture and is cloud-native. Finally, developers will want to distribute their applications between edge and cloud, so onboarding will mean some form of automated delivery of content, data, and software across a distributed edge; a sketch of this flow follows below.
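
As a minimal sketch of what developer-side onboarding could look like, the snippet below probes a catalog of candidate edge sites, picks the nearest by measured round-trip time, and hands off to a provisioning step. The site names, hosts, and the deploy call are hypothetical, not an actual platform API.

```python
import socket
import time

# Hypothetical catalog of candidate edge data centers: (name, host, port).
EDGE_SITES = [
    ("metro-east", "edge-east.example.net", 443),
    ("metro-west", "edge-west.example.net", 443),
]

def probe_rtt_ms(host, port, timeout=1.0):
    """Rough proximity check: time a TCP connect to the site."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable sites sort last

def pick_nearest(sites):
    """Discovery step: choose the site with the lowest measured RTT."""
    return min(sites, key=lambda site: probe_rtt_ms(site[1], site[2]))

name, host, port = pick_nearest(EDGE_SITES)
print(f"Onboarding application to {name} ({host})")
# deploy(host, image="my-app:1.0")  # provisioning call, platform-specific
```

A real platform would expose discovery as an API rather than ad hoc probing, but the shape of the developer experience, discover then deploy, is the point.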

Optimizing for the far and near edge 

A discussion of edge computing involves applications and resources provisioned across central or regional carrier clouds, edge data centers, and public or private clouds. Everything needs to work nicely together, and we are not there yet. For example, the network needs to be "autonomous" enough to determine where resources will be provisioned: at the "right" edge. Hardware such as GPUs and FPGA network interface cards will be deployed in edge data centers to train models, run inference, and orchestrate workloads. While traditional data centers are designed for energy efficiency, even greater efficiency is expected at the edge because of real estate constraints. And the software platform that runs in edge data centers must be designed for scale and compute distribution: running OpenStack out of the box will not work. The infrastructure as a service must be optimized for low latency and for rapid deployment of applications through containers or virtual machines; one possible placement heuristic is sketched below.
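
One way to think about choosing the "right" edge is a placement function that filters candidate sites by a workload's latency budget and resource needs, then keeps scarce near-edge capacity for workloads that truly require it. The site data and heuristic below are assumptions for illustration, not a description of any shipping orchestrator.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    rtt_ms: float     # measured round trip from the user's access point
    has_gpu: bool
    free_vcpus: int

@dataclass
class Workload:
    latency_budget_ms: float
    needs_gpu: bool
    vcpus: int

def place(workload, sites):
    """Pick the farthest site that still meets the latency budget,
    saving scarce near-edge capacity for workloads that need it."""
    feasible = [
        s for s in sites
        if s.rtt_ms <= workload.latency_budget_ms
        and s.free_vcpus >= workload.vcpus
        and (s.has_gpu or not workload.needs_gpu)
    ]
    if not feasible:
        return None  # fall back to a regional or central cloud
    return max(feasible, key=lambda s: s.rtt_ms)

sites = [
    Site("near-edge", 3.0, True, 8),
    Site("metro-edge", 9.0, True, 64),
    Site("regional-cloud", 30.0, True, 1024),
]
inference = Workload(latency_budget_ms=10.0, needs_gpu=True, vcpus=4)
print(place(inference, sites).name)  # metro-edge: meets budget, spares near-edge
```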

Bringing intelligence to the edge

Bringing intelligence to the edge platform is critical for application developers looking to leverage low latency. It is essential that the platform provide adequate capabilities in areas such as artificial intelligence, computer vision, and robotics. General-purpose computation needs to run on different kinds of nodes and hardware, which calls for developer kits that let developers use the compute platform without having to navigate the challenges posed by the heterogeneous hardware at the edge. One way such a kit might hide that heterogeneity is sketched below.
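
A common pattern for hiding heterogeneous hardware is a dispatch layer: one logical operation with several backend-specific implementations, selected by whatever accelerators the local edge site actually has. This is a sketch of the idea, not any specific vendor SDK.

```python
# Backend dispatch: one logical op, multiple hardware implementations.
_BACKENDS = {}

def register(op, hardware):
    """Decorator: register an implementation of `op` for `hardware`."""
    def wrap(fn):
        _BACKENDS[(op, hardware)] = fn
        return fn
    return wrap

def run(op, data, available=("fpga", "gpu", "cpu")):
    """Run `op` on the best backend present at this edge site."""
    for hw in available:
        fn = _BACKENDS.get((op, hw))
        if fn is not None:
            return fn(data)
    raise RuntimeError(f"no backend registered for {op}")

@register("resize", "cpu")
def _resize_cpu(frame):
    return f"resize({frame}) on CPU"

@register("resize", "gpu")
def _resize_gpu(frame):
    return f"resize({frame}) on GPU"

# The application asks for the operation; the kit picks the hardware.
print(run("resize", "frame-42", available=("gpu", "cpu")))  # uses the GPU path
```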

Shamik Mishra, director, R&D, Aricent
