Decoding the RAN coding conundrum - with AI!
How important is coding in the RAN? The answer, “very important,” is not going to win you any awards; it is a no-brainer!
Coding is not just about data
representation; it's about optimizing data flow, enhancing error correction,
and ensuring robustness in diverse network conditions.
Let us look at a very specific intent –
error correction. To prevent data from being corrupted by channel interference
and noise, channel coding employs redundancy. At the heart of 5G RAN's coding
mechanisms is the use of Polar Codes and LDPC (Low-Density Parity-Check) codes.
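Before getting to those, here is redundancy at its most naive – a toy three-fold repetition code (nothing like what 5G actually uses, but it shows the principle of trading extra bits for error tolerance):

```python
import random

def encode_repetition(bits, n=3):
    """Repeat each bit n times -- redundancy at its crudest."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(coded, n=3):
    """Majority vote over each group of n copies."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

def noisy_channel(coded, flip_prob=0.05):
    """Flip each transmitted bit with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in coded]

data = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(encode_repetition(data))
print(decode_repetition(received) == data)  # usually True, despite the noise
```

Three transmitted bits for every useful bit is a terrible bargain; Polar and LDPC codes get vastly more protection out of far less redundancy.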
Polar Codes work well in 5G control channels. This is because they are well suited to short, finite block lengths.
LDPC codes work well with longer block lengths. Not surprising, given the “low density” (sparse parity-check) structure of these codes. LDPC essentially ensures that even if some bits are corrupted during transmission, the original data can be reconstructed without retransmission. LDPC codes are therefore employed for data channels. If you are streaming a high-definition video or downloading a large file, LDPC is at work.
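Here is the underlying idea in miniature – parity checks over a matrix, plus a crude bit-flipping decoder. The matrix below is a tiny Hamming(7,4) one, far smaller and denser than a real 5G LDPC base graph, but the reconstruct-without-retransmission principle is the same:

```python
import numpy as np

# Toy parity-check matrix (Hamming(7,4) -- illustrative only; a real 5G
# LDPC base graph is much larger and genuinely sparse).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(received, max_iters=10):
    """Crude bit-flipping decoder: while any parity check fails,
    flip the bit participating in the most failed checks."""
    c = received.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c                    # all checks satisfied
        votes = H.T @ syndrome          # failed checks touching each bit
        c[np.argmax(votes)] ^= 1        # flip the worst offender
    return c

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # satisfies H @ c % 2 == 0
corrupted = codeword.copy()
corrupted[2] ^= 1                            # one bit damaged in transit
print(np.array_equal(bit_flip_decode(corrupted), codeword))  # True
```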
Codes, while necessary, are not without their burdens. They demand significant computational resources. In real-time scenarios, where data needs to be encoded and decoded on the fly, ensuring efficiency without compromising on speed is a challenge.
5G networks are dynamic (or are at least supposed to be, in their truest form). User densities, mobility patterns, and interference levels can and do vary. The coding rate, therefore, needs to be dynamic, which means that the amount of redundancy added for error correction needs to be dynamic.
In a congested urban environment with high interference levels, a lower coding rate (more redundancy) might be necessary to ensure data integrity. In contrast, in a stable, low-interference scenario, a higher coding rate could suffice, allowing for higher data throughput.
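A minimal sketch of such an adaptation policy is below; the SINR thresholds and the (modulation, rate) pairs are illustrative placeholders, not the 3GPP CQI/MCS tables:

```python
def select_coding_rate(sinr_db: float) -> tuple[str, float]:
    """Map channel quality to a (modulation, code rate) pair.
    Thresholds are illustrative, not taken from any 3GPP table."""
    if sinr_db < 5:               # harsh channel: protect heavily
        return ("QPSK", 1 / 3)    # low rate = lots of redundancy
    elif sinr_db < 15:            # moderate channel
        return ("16QAM", 1 / 2)
    else:                         # clean channel: chase throughput
        return ("64QAM", 3 / 4)   # high rate = little redundancy

for sinr in (2.0, 10.0, 22.0):
    print(sinr, select_coding_rate(sinr))
```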
And then there is network slicing. Each
slice, tailored for a specific use case, might have its own set of requirements
in terms of latency, reliability, and throughput. Designing coding strategies
that cater to these diverse needs, while ensuring efficient resource
utilization, is challenging.
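One could imagine per-slice coding profiles along these lines (the slice names follow the usual 5G triad; the numbers are hypothetical, not drawn from any spec):

```python
# Illustrative per-slice coding targets -- hypothetical values only.
SLICE_CODING_PROFILES = {
    "URLLC": {"target_bler": 1e-5, "max_latency_ms": 1,
              "preferred_rate": 1 / 3},   # reliability first
    "eMBB":  {"target_bler": 1e-1, "max_latency_ms": 10,
              "preferred_rate": 2 / 3},   # throughput first
    "mMTC":  {"target_bler": 1e-2, "max_latency_ms": 100,
              "preferred_rate": 1 / 2},   # energy and coverage first
}

def coding_params_for(slice_name: str) -> dict:
    return SLICE_CODING_PROFILES[slice_name]

print(coding_params_for("URLLC"))
```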
It's not as if traditional coding techniques have not served well. They, however, fall short on adaptability.
Enter AI.
The usage of AI in RAN coding is taking shape. Insight Research has identified coding as one of the four principal applications of AI in the RAN in its report, “AI and RAN – How fast will they run?”
Let’s give some credit to NFV, SDN, CNF,
OpenRAN and other RAN disaggregation initiatives for binding AI more tightly
with coding. Coding is traditionally associated with the physical layer and is
indeed a physical layer construct. The disaggregation of the RAN has, however, allowed higher layers to take over a considerable portion of the decision-making associated with coding.
Let's dive into the coding itself. We know that LDPC codes are faster and offer better performance than the turbo codes used in 4G. We also know that they are more complex to encode and typically require more decoding iterations than turbo coding. The ‘LD’ in LDPC can become ‘latency delivered’ if used blindly, without the appropriate tweaks.
Deep Learning can help by:
- identifying LDPC code structures and subsequently reducing the decoding delay (see the sketch after this list)
- developing error correction codes for nonlinear channels
- analyzing the trade-off between LDPC codes and other channel coding schemes
- optimizing (that word again!) the decoding algorithm over a non-convex loss landscape (jargon alert – this only means quickly and reliably finding good operating points in a terrain with many dips and bumps)
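On the first of those points, here is a minimal proof of concept: a small neural network trained to decode the toy Hamming(7,4) code from earlier, directly from noisy channel outputs. Published work in this space (neural min-sum and weighted belief propagation, for instance) is far more sophisticated; this only shows the shape of the idea, assuming PyTorch is available:

```python
import torch
import torch.nn as nn

# Generator matrix matching the parity-check matrix H used above.
G = torch.tensor([[1., 0., 0., 0., 1., 1., 0.],
                  [0., 1., 0., 0., 1., 0., 1.],
                  [0., 0., 1., 0., 0., 1., 1.],
                  [0., 0., 0., 1., 1., 1., 1.]])

def make_batch(n, noise_std=0.5):
    """Random 4-bit messages, encoded, BPSK-modulated, passed through AWGN."""
    data = torch.randint(0, 2, (n, 4)).float()
    codewords = data @ G % 2
    tx = 1 - 2 * codewords                     # BPSK: 0 -> +1, 1 -> -1
    return tx + noise_std * torch.randn_like(tx), data

decoder = nn.Sequential(nn.Linear(7, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):                          # quick training loop
    rx, data = make_batch(256)
    loss = loss_fn(decoder(rx), data)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                          # measure bit error rate
    rx, data = make_batch(10_000)
    bits = (decoder(rx) > 0).float()
print("BER:", (bits != data).float().mean().item())
```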
Experimental studies conducted on LDPC decoders augmented with DNNs have achieved that golden mean of ironclad BER performance WITHOUT compromising on throughput.
Polar codes too have been touched by AI. Neural networks have been explored to optimize polar code constructions. How? Feed a neural network data from various transmission scenarios along with the corresponding optimal polar code constructions, and it can learn to design good codes for new, previously unseen scenarios.
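For intuition, here is the classical (non-AI) way of ranking polarized bit-channels – the Bhattacharyya-parameter recursion for a binary erasure channel. A neural approach would, in effect, learn such a ranking from simulated data and generalize it to channels with no closed form:

```python
def polar_reliabilities(n_bits: int, erasure_prob: float = 0.5) -> list[int]:
    """Rank polarized bit-channels by reliability on a binary erasure
    channel, via the Bhattacharyya-parameter recursion."""
    z = [erasure_prob]
    while len(z) < n_bits:
        # each channel splits into a "worse" (-) and a "better" (+) child
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    # lower Bhattacharyya parameter = more reliable bit-channel
    return sorted(range(n_bits), key=lambda i: z[i])

N, K = 16, 8                            # code length 16, 8 information bits
ranking = polar_reliabilities(N)
info_positions = sorted(ranking[:K])    # K most reliable channels carry data
frozen_positions = sorted(ranking[K:])  # the rest are frozen to zero
print("information bit positions:", info_positions)
```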
This is just the beginning.
Let's look at autoencoders. We know that
they are from the neural network family. Once trained on a dataset representing
various channel conditions, these autoencoders can efficiently encode and
decode data. It is said that they can outperform traditional algorithms in
terms of error rates and computational efficiency.
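A sketch of that end-to-end idea (in the spirit of the well-known channel-autoencoder work by O'Shea and Hoydis), with illustrative sizes and noise levels:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# End-to-end "channel autoencoder": the encoder learns a code, an AWGN
# layer stands in for the channel, and the decoder learns to undo both.
M, n = 16, 7              # 16 possible messages (4 bits) over 7 channel uses

encoder = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
decoder = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def channel(x, noise_std=0.3):
    x = x / x.norm(dim=1, keepdim=True) * n ** 0.5  # average power constraint
    return x + noise_std * torch.randn_like(x)      # additive white noise

for _ in range(3000):
    msgs = torch.randint(0, M, (256,))
    logits = decoder(channel(encoder(F.one_hot(msgs, M).float())))
    loss = F.cross_entropy(logits, msgs)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    msgs = torch.randint(0, M, (5000,))
    pred = decoder(channel(encoder(F.one_hot(msgs, M).float()))).argmax(1)
print("block error rate:", (pred != msgs).float().mean().item())
```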
Impressed?
ML algorithms are trained on historical data. They can then predict the optimal modulation and coding schemes, taking into account all the unpredictables: current network conditions, user mobility patterns, and traffic demand.
In a high-interference scenario, for example, an ML algorithm might recommend a robust modulation scheme with a low coding rate to ensure data integrity. QPSK, anyone?
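A toy version of such a recommender, with synthetic features and labels standing in for the historical data an operator would actually train on:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic "historical data": SINR, mobility and load for 5000 samples.
rng = np.random.default_rng(0)
n = 5000
sinr = rng.uniform(-5, 30, n)       # dB
speed = rng.uniform(0, 120, n)      # km/h, a proxy for mobility
load = rng.uniform(0, 1, n)         # cell load fraction

# Synthetic ground truth: poor SINR, fast users and busy cells all push
# the label toward a more robust (lower) MCS. Purely illustrative.
effective = sinr - 0.05 * speed - 5 * load
mcs = np.clip((effective / 4).astype(int), 0, 7)   # 8 MCS classes

X = np.column_stack([sinr, speed, load])
model = DecisionTreeClassifier(max_depth=6).fit(X, mcs)

# High interference, high mobility -> expect a low, robust MCS class
print(model.predict([[3.0, 90.0, 0.8]]))   # e.g. MCS 0: QPSK, low code rate
```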
In essence, AI can and does deliver the full spectrum of improvements and enhancements in coding mechanisms, ranging from making the existing coding work better to selecting the best coding alternative for relevant scenarios.
The optimism expressed earlier should be
tempered with a dose of realism.
The dual nature of codes and their distributed decision-making centers does impair the speed of AI adoption. The baseband is more amenable to managing the payload code – which is LDPC. This is particularly true in the context of Open RAN, which empowers the centralized baseband with a modular architecture and an enhanced interfacing profile.
The Polar Codes, which manage the control channels, remain predominantly in the ambit of the RU. Since the RU continues to be largely outside the purview of Open or Virtual RANs, the applicability of advanced AI and ML constructs to the coding mechanism is constricted to an extent.
It should be noted that even LDPC,
considered amenable to basebands, is amenable only in the relative sense. It is
only in certain splits that LDPC has been pulled ‘upwards’ – towards the
‘intelligent’ center. Thus, a significant market for coding control remains
locked in the RU.
AI in coding is taking root. We can hope to see it blossom in the next half decade.