“En-caching” the RAN – the AI way

  RAN caching is an intuitive use case for AI. Our report “AI and RAN – How fast will they run?” places caching third in the list of top AI applications in the RAN. There is nothing new about caching per se; in computing, caching is as old as computing itself. The reason caching and RAN are uttered in the same breath is primarily MEC. MEC is a practical concept: it attempts to leverage the distributed nature of RAN infrastructure in response to the explosion in mobile data generation and consumption. Caching looks at practically every point in the RAN as a possible caching destination – base stations, RRHs, BBUs, femtocells, macrocells and even user equipment. The caching dilemma is a multipronged one – what to cache, where to cache and how much to cache. In an ideal world, one could have access to infinite storage and processing capacity interconnected with infinite throughput at zero latency. In the real world however, each of these a
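To make the “what and how much to cache” dilemma concrete, here is a minimal sketch of an edge cache with a fixed capacity budget and a simple least-recently-used (LRU) eviction policy. The class name, capacity figure and policy are purely illustrative assumptions, not a description of any vendor implementation.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU cache for an edge node (base station, BBU, etc.) with a fixed budget."""

    def __init__(self, capacity_items):
        self.capacity = capacity_items
        self.store = OrderedDict()  # key -> content, ordered by recency of use

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as recently used
            return self.store[key]       # cache hit: served from the edge
        return None                      # cache miss: would be fetched from the core

    def put(self, key, content):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = content
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item
```

An AI-driven scheme would replace the blind LRU eviction with a learned popularity predictor, but the capacity constraint it must respect is exactly the one modeled here.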

AI burden in RAN - 5G Wins!

  Does 5G wear the AI burden more lightly in the RAN than the ‘older’ generations? Most certainly, we aver in our upcoming report “AI ML and DL in the Mobile Core”. To be sure, there was no shortage of motivation for the use of AI in 4G, or for that matter in ANY generation of RAN. Let us look at 4G networks alone. 4G networks offered a quantum leap in throughput over their predecessors. That aspect alone would have incentivized telcos to view their RANs through the prism of AI-enabled insights. To be sure, they did. Then there was energy management. Practices and the possibility of using AI and ML technologies in energy management precede the advent of the 5G RAN. The case for traffic optimization in the pre-5G era was advanced greatly by the hybrid mode of deployment – the 5GC coupled with old radios – as the hybrid mode offered a spectacular rise in throughput, enough to keep network planners interested in AI and ML-based constructs. The principal driver for AI however is not t

The Table, The RAN, The AI and The Serving

What is the singularly pivotal value addition that 5G networks bring to the table? Beyond doubt, it is their ability to become all things for everyone. Welcome aboard traffic optimization – better known as network slicing plus edge computing. And who serves traffic optimization in all its flavors? Undoubtedly, AI. No wonder we forecast, in our upcoming report “AI and RAN – How fast will they run?”, that the addressable market for AI in RAN traffic optimization will grow by a whopping 31.5% during 2023-2028. Let us look at network slicing first. To be sure, network slicing is offered to an extent on 4G networks as well. But it is the full flavor of the feature that 5G promises to unleash that is making matters ripe for the intervention of AI and ML technologies. Simply put, network slicing puts forth a plethora of difficult decisions that network planners need to confront. These decisions involve the degree to which granularity in slicing should be achieved and how to optimally ma
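One of those difficult decisions can be sketched in a few lines: dividing a fixed pool of radio resources among slices according to their demands and priorities. The slice names, weights and weighted-fair-share policy below are illustrative assumptions only; a real planner (AI-assisted or not) faces this same shape of problem at far greater scale.

```python
def allocate_capacity(total_prbs, demands):
    """Weighted fair-share split of radio resource blocks (PRBs) across slices.

    demands: dict of slice name -> (requested_prbs, weight).
    Returns slice name -> granted PRBs (capped at the request).
    """
    total_weight = sum(weight for _, weight in demands.values())
    grants = {}
    for name, (requested, weight) in demands.items():
        fair_share = total_prbs * weight / total_weight  # slice's weighted entitlement
        grants[name] = int(min(requested, fair_share))   # never grant more than asked
    return grants

# Hypothetical slices: eMBB weighted double against URLLC and mMTC.
split = allocate_capacity(100, {"embb": (80, 2), "urllc": (30, 1), "mmtc": (50, 1)})
```

Note the simplification: capacity left unclaimed by a lightly loaded slice is not redistributed here, which is precisely the kind of refinement where learned policies earn their keep.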

The “Big O” in the NFVO... and the MANO

  Let us address some existential issues. What is NFVO, or VNFO for that matter, after all? Is it the same as MANO? In the strictest of senses, MANO has a different connotation – a more inclusive one. MANO in its purest sense combines the NFVO, the VNFM and the VIM. Thus, the NFVO orchestrates the VNFs, the VNFM manages the VNFs, while the VIM interfaces with the NFVI. Technically, therefore, NFVO is a subset of MANO. Agreed. So which market is Insight Research covering? Is it the NFVO, or is it the larger MANO? Well, it is the large (not larger) MANO. Let me present an equation to paraphrase our inclusions:

NFVO (as quantified in our report) = MANO (as per ETSI) - VIM

Then why are we calling it NFVO, and why not MANO? The reasons are very simple – better recall and better sense. Explaining better recall is easy. The term NFVO connects to the orchestration of NFs in a way MANO never can. This is as clear as it gets. But MANO is not orchestration alone. It includes ‘management’ as well.
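The equation above is just set subtraction, and reads cleanly when written that way. A tiny sketch, using the three ETSI MANO functional blocks as set members:

```python
# The three functional blocks of ETSI MANO, modeled as a set.
MANO = {"NFVO", "VNFM", "VIM"}

# Our report's scope: NFVO (as quantified) = MANO - VIM.
report_scope = MANO - {"VIM"}
```

That is, the report's “NFVO” keeps the orchestration (NFVO) and VNF management (VNFM) pieces, and drops only the infrastructure-facing VIM.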

The NFVO Product and its Profiles

  Insight Research, in its recent report “The VNFO…Ripe for Change”, has broken down the market by product profile. We consider two profile categories – Direct Open Source and Proprietary. Let us look closely at the categories. Insight Research has a very categorical definition for direct open source NFVOs – those that traverse the development journey directly from the open source community to the end user. Thus, open source offerings that are internalized by OEMs and offered as proprietary NFVOs are not covered under direct open source. Open source initiatives have played a pivotal role in the journey of the orchestrator. ONAP, OSM, Nephio and Kubernetes are some of the prominent contributions of the open source community to VNFO development, and have therefore been profiled at length in the report. What then is ‘direct’ open source? Open source initiatives have been utilized by various stakeholders in different ways. While in some cases they have been used “as-is”; they have also be

Are CNFs, VNFs?

The answer is yes. Our recent report “The VNFO – Ripe for Change” says this loudly and clearly. Pardon the atrocious image, but I hope it encapsulates the dilemma appropriately. It is important to reiterate the reasons for including containers under VNF and, in essence, Kubernetes under VNFO. Both VMs and containers are virtualization methodologies. Thus, network functions synthesized using VMs and containers both qualify as VNFs. VNFs realized using containers are sometimes referred to as cloud-native NFs (CNFs); Insight Research has also employed this term, as early as 2020. Over time however, we have observed that the usage of ‘CNF’ is neither consistent nor uniform. Most ‘traditional’ MANOs such as ONAP, OSM and all proprietary offerings now support containers and Kubernetes. Containers are thus one more means towards achieving the end objective of VNFs. In such situations, Insight Research finds it more appropriate to use VNF as an umbrella term and, under this term, refer to VM or c
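The umbrella-term argument can be stated as a small type model: the runtime (VM or container) is an attribute of a VNF, not a different kind of thing, and “CNF” is merely the container-based subset. The function names below (UPF, MME) are familiar mobile-core examples used purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Runtime(Enum):
    VM = "virtual machine"
    CONTAINER = "container"

@dataclass
class VNF:
    """Umbrella record: a network function is a VNF regardless of its runtime."""
    name: str
    runtime: Runtime

    @property
    def is_cnf(self) -> bool:
        # 'CNF' is just the container-based subset of VNFs.
        return self.runtime is Runtime.CONTAINER

upf = VNF("UPF", Runtime.CONTAINER)  # container-based -> colloquially a 'CNF'
mme = VNF("MME", Runtime.VM)         # VM-based -> a classic VNF
```

Both objects are VNFs; only the `runtime` field distinguishes them, which is exactly the taxonomy our report adopts.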

Future of 5G Core – Putting it in Numbers

What does the future portend for the 5G Core? In our previous blog, we examined the barriers to 5G acceptance. We will now see how the numbers stack up for the 5GC vis-à-vis the EPC. The figure below is excerpted from our latest report “Virtual Core – Gateway to the ‘Real 5G’”. The figure shows the market share progression during 2021-2026 for the EPC and the 5GC.

[Figure: Market Share Progression for the Overall Virtualized Core, by Generation, 2021-2026 (%). Source: Insight Research]

As 5G radios increasingly become the norm, the terms EPC and 5G Core will be used interchangeably with the NSA and SA modes. Expectedly, the market for the 5G core will outpace the EPC market. The most obvious barrier to 5G core acceptance is pricing. Reportedly, for a comparable user base, the 5G core is costlier by a substantial factor, with premium estimates ranging from 50% upward. This is a compellingly steep barrier for many carriers whose ROI from the 4G EPC is unfulfilled to date. The 4G EPC is also able to address the most immediate user exper

What ails 5G-SA?

  Our recent report “Virtual Core – Gateway to the ‘Real 5G’” brought out one thing very clearly: 5G SA is taking longer than anticipated. The reasons are many – telco ennui with the constant architectural flux without commensurate returns being the main one. Telcos have had their fingers burnt with the seemingly never-ending development cycle of a reliable and acceptable MANO. But if the MANO experience was a sobering reality check for telcos, they did not lose hope. In came containers and microservices, with a ready-to-deploy orchestrator in the form of Kubernetes. Notwithstanding all the challenges surrounding the implementation of containers in performance-intensive and latency-sensitive network functions like the mobile core, the value proposition of containers is beyond doubt – and this was established close to the end of the last decade. Containers are therefore no longer the reason for telco reluctance in embracing the SA mode, which lends itself elegantly to SBA. What is the

Stakes are high for SD-WAN - Airtel invests in Lavelle Networks

Yesterday, Airtel acquired a 25 percent stake in SD-WAN specialist Lavelle Networks. Let us try to make sense of this development and answer a few questions. What does it mean for Airtel? What’s in it for Lavelle Networks? Why SD-WAN? Lavelle Networks has developed an SD-WAN suite based on its indigenous ScaleOn network architecture. The company offers a controller, edge port and gateway to complete its SD-WAN portfolio. What is SD-WAN? SD-WAN is a use case of software-defined networking (SDN). SDN decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. SD-WAN, as the term suggests, refers to the application of the SDN paradigm to WANs. SD-WAN is often used interchangeably with bandwidth on demand (BoD). While BoD caters to the specific requirement of bandwidth provisioning, SD-WAN deals with the network at a more fundamental, des
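The control/forwarding decoupling at the heart of SDN can be sketched in a few lines: a centralized controller computes forwarding rules and pushes them down; switches merely match and forward. The class names, rule format and port labels below are illustrative assumptions, not any vendor's (or Lavelle's) actual design.

```python
class Controller:
    """Control plane: decides where traffic should go and pushes rules down."""

    def compute_rules(self, paths):
        # A real controller would run a path-computation algorithm over its
        # topology view; here we simply hand down (destination -> next hop) rules.
        return dict(paths)

class Switch:
    """Data plane: forwards packets using whatever rules the controller installed."""

    def __init__(self, name):
        self.name = name
        self.table = {}

    def install(self, rules):
        self.table = rules  # programmability: behavior changes without new hardware

    def forward(self, dst):
        return self.table.get(dst, "drop")  # no matching rule -> drop

controller = Controller()
edge = Switch("edge1")
edge.install(controller.compute_rules({"10.0.0.2": "port2"}))
```

The switch never decides anything itself; change the rules at the controller and every switch's behavior follows, which is the programmability SD-WAN extends across the WAN.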