
3GPP TSG-RAN WG3 #110-e R3-206873

Online, 2-12 June 2020

Agenda Item: 18.2

Source: Samsung (moderator)

Title: Summary of Offline Discussion on High-Level Principles and Definitions

Document for: Approval

1 Introduction
CB: # 25_EnhDataColl_Princ_Defs
CMCC 6783
The general RAN framework to support AI-enabled network intelligence and automation should be studied and defined in RAN working groups.
A RAN-DCA (RAN data collection and analytics) module/function should be explicitly defined.
Discuss and agree to capture the proposed functional framework of RAN-DCA in TR 37.817.
SS 6041
The details of AI algorithm are out of scope.
The discussion should focus on AI functionality and corresponding input/output.
AI enabled NG-RAN should be based on the existing architecture without impact on 5GC.
AI enabled RAN supports NG-RAN and EN-DC. NG-RAN should be prioritized.
For AI enabled RAN, changes to the UE should be limited.
ZTE, CU 6091
The AI algorithm is out of the scope of 3GPP.
The framework for AI should be defined during the study in an open and standardized way, based on a full understanding of the logical AI functions and of the interpretable data used as AI input/output.
The impact on the current RAN architecture and interfaces should be minimized; the interfaces among network nodes (e.g. the Uu, Xn and NG interfaces) used to enable AI should be open and interoperable.
The robustness and reliability of AI-related data transmission should be further studied.
QC 6170
*On the overall framework
AI/ML includes the following phases:
- Model design
which determines:
Function
Input and output
Model parameters (e.g. model type, NN layers, nodes, initial weights)
- Model training
which includes:
Offline training
Online verification, further training and model update
- Model inference
which includes:
Model deployment
Deriving policy from model inference
Dispatching and executing the policy
*On model training
Offline ML model training is performed outside of gNB based on MDT, SON, QoE and OAM data
Support access to data across all the data collection procedures for ML model training
Allow a single framework to access the data stored e.g., in the MCE, TCE, CU, DU
Support a coordination function for data registration, data query/discovery and data collection to enable the access for ML
model training
*On model running/inference
For centralized model inference outside of NG-RAN, the model-generated policy is sent to the RAN. RAN3 to study whether to
define a new interface or enhance existing interfaces for the policy and configuration delivery.
For distributed model inference inside NG-RAN, RAN3 to study the interface for the delivery of the ML model and the model-generated policy/configuration to the related RAN nodes.
DT 6197
- take the existing output on data analytics from 3GPP SA2 and SA5 into account for this SI as well as the output of other
fora like e.g. ITU-T, ETSI, and O-RAN.
- consider placement of AI/ML applications in the RAN which may cover both near-real time and non-real time
optimization purposes dependent on the use case perspective. This is in contrast to AI/ML applications in the 5GC or OAM
system, where primarily non-real time optimization is expected.
- consider interrogation of primarily non-real time optimization approaches via 5GC NWDAF or OAM MDAS with those
implemented in the RAN for both near- and non-real time purposes.
- The LCM for AI/ML algorithms placed in RAN nodes is still conducted in the OAM system.
- adapt the MDA process utilizing AI/ML technologies as described by SA5 for the OAM system for introducing the
functional framework for RAN intelligence.
+ The LCM for AI/ML applications in the RAN will stay in the OAM system as well as the offline training of models to be
applied in the ML training engine.
+ The ML inference engines using trained models provided by the ML training engine are assumed to be part of logical RAN
nodes/NFs; which nodes is dependent on the selected use case.
+ The ML inference engines are providing outputs of data analysis to an actor in the RAN which may trigger actions on the
basis of those results. An actor may be the same logical RAN node/NF where the ML inference engine is placed or another
one dependent on the selected use case.
- To get a common understanding of the impact of AI/ML-based optimization approaches on the RAN, RAN3 should describe a detailed workflow covering data acquisition and collection from different sources as well as the handling of the different ML model LCM phases (building, training, deployment, execution, and validation).
- The LCM for AI/ML applications in the RAN and the AI/ML-related interfacing/configuration part for ML inference engines
to be implemented in logical RAN nodes/NFs shall be part of the study work.
AT&T 6333
Consider the following two categories for evaluating different RAN-AI approaches and their corresponding requirements
and potential network impacts:
Type-1 RAN-AI: Near-Real Time / Centralized
Type-2 RAN-AI: Real-Time / Distributed RAN-AI
CATT 6338
- AI functionality at least includes AI module training, AI module implementation, and input/output to the AI module; the AI algorithm itself should not be discussed in RAN, and the input for AI could include data coming from the UE, NG-RAN nodes and OAM.
- discuss the location of the AI-related function in NG-RAN nodes, i.e. co-located with the gNB-CU or gNB-DU, or an independent entity.
- consider retrieving the analytics information provided by NWDAF for AI support in RAN.
Nok 6375
Develop a terminology list for use within this study item (TP provided)
Intel 6403
- include in the TR the definitions of three types of AI/ML algorithms: supervised learning, unsupervised learning and
reinforcement learning.
- include in the TR the definitions of AI/ML frameworks: centralized learning, federated learning and distributed learning.
- include in the TR the definitions of two categories of AI-enabled NG-RAN use cases: delay-insensitive use cases and delay-sensitive use cases.
E/// 6438
- agree to the definitions given
- Definition of the AI/ML models applicable to the 5G RAN is left to implementation and is outside the scope of 3GPP
- focus the RAN3 study on AI/ML on the execution of ML-based learned rules and to leave the process of training up to
implementation
- The node hosting an ML model should be able to request, if needed, specific information to be used as inputs to the
model and to avoid reception of unwanted information if not needed.
- The node hosting an ML model should signal the outputs of the model only to nodes that have explicitly requested them,
unless it is agreed that the outputs are believed to be always of interest to the receiving node.
The receiving node of a predicted value should also receive the uncertainty of such prediction
Any new potential input information to an AI/ML model should provide clear advantages in comparison to the absence of such information
HW 6729
The study should establish a common understanding and a definition of the AI/ML function for RAN.
The definition/scope of the AI/ML function for RAN could refer to and take what has been studied in SA2/SA5 as a baseline, and the study should focus on, e.g., the input data collection and the application of the output data.
The study should not attempt to specify the actual AI/ML models/algorithms.
The study should be focused on the current NG-RAN architecture and interfaces.
Chair:
- start _slow_, try to agree on a few basic principles first: attempt to converge around 6041, 6091, 6438, 6729 before proceeding further (seems there is consensus)
- revise if needed, add FFSs as appropriate and agree the terminology list in 6375?
(SS - moderator)
Summary of offline disc R3-206873

2 For the Chairman’s Notes


Propose to capture the following:

3 Discussion
3.1 High Level Principles
In [1]-[11], high-level principles for AI-based RAN intelligence in terms of scope, architecture and essential design constraints were proposed.
Among the proposals, the following principles seem to be common:
a) AI/ML models/algorithms are out of the scope of 3GPP [1][2][3][7][10][11].
b) The study focuses on AI/ML functionality and the corresponding input/output [1][2][3][4][7][11].
c) The study is based on the current RAN architecture and interfaces [2][3][10][11].

It is proposed to agree on the above high-level principles. If a company has a different view, input in the table below is appreciated.
Company Comment
ZTE Yes.
Fujitsu Yes for a), b), c)
Intel We agree with the above proposals, with an additional comment for proposal a). The detailed AI/ML algorithms and models are out of the scope of 3GPP; however, we should discuss (and capture in the TR) the general types/assumptions of AI/ML algorithms and models within the scope of 3GPP to help the discussion of the standards impact in 3GPP.
Deutsche Telekom We agree with the principles listed above and support Intel’s view on a).
Huawei In general, we are fine with the proposal. But for c), in the SID it is clearly written to focus on the existing architecture, i.e., the study will not touch the architecture; we would like to be clear on this point.
Nokia Yes
AT&T Agree with Intel for a). It is not possible to characterize the requirements for b)
and the potential impact on c) without understanding the underlying AI/ML
models/algorithms
Samsung Yes
CMCC We are fine with principle a and b, but not ok with c. It should be defined as a
principle.
Besides the above, the following principles related to the scope of the study were also proposed.
d) The training process is up to implementation [10]
e) Include NG-RAN and EN-DC in the scope; NG-RAN should be prioritized [2].
f) Changes to the UE should be limited [2].
g) Consider placement of AI/ML applications in the RAN, which may cover both near-real-time and non-real-time optimization purposes dependent on the use case perspective [5]
h) The LCM for AI/ML applications in the RAN and the AI/ML-related interfacing/configuration part for ML inference engines to be implemented in logical RAN nodes/NFs shall be part of the study work. The deployment of the AI model is discussed case by case [5].
i) To get a common understanding of the impact of AI/ML-based optimization approaches on the RAN, RAN3 should describe a detailed workflow covering data acquisition and collection from different sources as well as the handling of the different ML model LCM phases (building, training, deployment, execution, and validation) [5].

Companies’ views are appreciated on the above principles.


Company Comment
ZTE d) the processing of ML training and inference belongs to implementation,
however, the location and input/output of these AI functions needs to be
discussed case by case.
e) Fine for us.
f) Agree.
g) Agree.
h) LCM=the AI/ML lifecycle management. Same as g).
i) Agree, the LCM flow can be seen in R3-206092 as below:

Figure 1: AI framework for RAN


Fujitsu Agree d), e), f), g), h), i)
Intel Following comments regarding each point:
d) Partially Disagree. The process of training may include the following two
parts: 1) process inside training model, including how to train a model or detail
algorithms of training; 2) the process of interaction between the training model
and other modules, such as data collection, data/model refresh frequency,
inference model, etc. For 1), we agree that it is up to implementation. For 2), it is
expected such process may bring impact to existing interface and functionality in
NG-RAN, which should be in the scope of this SI.
e) Agree.
f) Disagree. As mentioned by Samsung, the UE is one of the important data sources for the data foundation of the AI model, so it is essential to include the UE in the AI-enabled RAN for data reporting. Moreover, the data reported by the UE is not limited to measurement data; updated AI/ML model parameters are also part of the data collected from the UE by the network. Such information can be treated as aggregated, trained data representing the multiple measurement reports used by legacy methods. To some degree, it helps to reduce the data volume and the number of reports from the UE to the network. Hence, it may also bring the benefit of reducing the UE reporting burden compared with legacy methods.
As for the concern of whether UE performance may be impacted by frequent
updates, from our view, we can limit the report frequency of different types of
data based on use case requirement.
g) Agree.
h) Agree. The LCM for AI/ML applications should consider different
deployment at existing NG-RAN architecture for different use cases.
i) Agree we should study LCM within the scope of RAN3, as well as the
deployment and network node/UE handling of the different ML model LCM
phases.
Deutsche Telekom d) Agree, but the general impact of training processes on RAN architecture and
interfaces has to be considered (to be done via exemplary use cases).
e) Agree
f) Agree, but details with respect to information required from UEs have to be
further evaluated based on use case examples (see Intel’s feedback).
g) Agree
h) Agree
i) Agree. This is related to figures in Section 3.3 of present SoD. Further figures
are also available in other tdocs not assigned to agenda item 18.2, e.g. in R3-
206092. Therefore, also the overlap with agenda item 18.4 (CB #27) has to be
considered here.
Huawei d) yes, training is up to implementation
e) agree
f) agree
g) Maybe we could make things a bit simpler: for the aggregated architecture, we think the function should anyway be located inside the gNB, while for the disaggregated architecture we could have a WA of locating it inside the gNB-CU as a starting point. Whether it is for real-time or non-real-time purposes, both places should be appropriate.
h) LCM should be part of OAM work? And there is an ongoing SI on AI in SA5.
For AI model deployment, it is true that it should be case by case, but it is also
part of implementation.
i) Before talking about a concrete workflow, maybe we need to answer whether there is anything specific/new/different when applying AI/ML in RAN compared to a normal AI/ML procedure or a procedure already adopted in SA2; otherwise maybe we could just reuse the current ones.
Nokia d) We agree with ZTE. Even though Training/Learning process can be up to
implementation its inputs need to be discussed.
e) We agree.
f) We agree but with a small clarification. The UE is a very important source of
information to the network. The existing framework of measurement reporting
and MDT should be enhanced and optimized for AI/ML while at the same time
keeping the effects on both the UE and network side to the minimum.
g) We agree
h) We agree but we think that also training should be part of the study work (not
only LCM and inference).
i) We agree.
AT&T Similar views as Nokia on d), f), h)
For g) we consider real-time applications are also possible for some use cases
Samsung d) The same view as ZTE
e)-h) Agree.
i) To have a common understanding on the work flow for AI/ML-based
optimization is beneficial. However, we don’t need to consider the detail LCM
for different ML model.
CMCC d) AI training process is up to implementation, but the input/output and location
of AI training should be studied.
e) we are fine with this proposal
f) We do not need this principle, and should first focus on discussion of use cases
and solution.
g) Agree
h) Agree
i) Agree, a general framework and workflow for AI/ML optimization should be
defined and captured in the our RAN TR, although similar topic has been
discussed in SA2 or SA5. We cannot simply refer to other groups spec.

The following principles related to input, output, and AI-related data transmission via network interfaces were proposed.
j) The input/output data transmitted between network nodes should be fully interpretable [1][3].
k) The robustness and reliability of AI related data transmission should be further studied [3].
l) The node hosting an ML model should be able to request, if needed, specific information to be
used as inputs to the model and to avoid reception of unwanted information if not needed.
m) The node hosting an ML model should signal the outputs of the model only to nodes that have
explicitly requested them, unless it is agreed that the outputs are believed to be always of
interest to the receiving node.
n) The receiving node of a predicted value should also receive the uncertainty of such prediction
o) Any new potential input information to an AI/ML model should provide clear advantages in comparison to the absence of such information.
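For illustration only, the sketch below shows one way principles l), m) and n) could be reflected in signalling between a node hosting an ML model and a node consuming its outputs. All message and field names are hypothetical and do not correspond to any agreed RAN3 procedure.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical messages illustrating principles l), m) and n);
# names are illustrative only, not proposed signalling.

@dataclass
class InputRequest:
    """Principle l): the node hosting the ML model requests only the inputs it needs."""
    requested_inputs: List[str]              # e.g. ["ue_measurement", "cell_load"]
    reporting_period_ms: Optional[int] = None

@dataclass
class OutputSubscription:
    """Principle m): a node explicitly subscribes to the model outputs it wants."""
    subscribed_outputs: List[str]            # e.g. ["predicted_cell_load"]

@dataclass
class PredictionReport:
    """Principle n): a predicted value is delivered together with its uncertainty."""
    output_name: str
    predicted_value: float
    uncertainty: float                       # e.g. standard deviation of the prediction

def outputs_to_signal(sub: OutputSubscription,
                      reports: List[PredictionReport]) -> List[PredictionReport]:
    # Only outputs that were explicitly subscribed to are signalled (principle m).
    return [r for r in reports if r.output_name in sub.subscribed_outputs]
```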
Companies’ views are appreciated on the above principles.
Company Comment
ZTE j) Partially agree; in Rel-17, in order to simplify the solution, full interpretability of the ML model/data can be limited to a specified range, e.g., vendors under the same vendor code, via some mechanism.
k) Yes.
l) Yes, this is why AI measurement management function needs to be
supported.
m) Do not agree; whether the output needs to be exchanged between nodes does not rely on a request from the receiving node, it depends on the designed solution.
n) It depends on how precise the used ML model is.
o) Yes.
Fujitsu j) Yes, as described in SID, this is required for multi-vendor support
k) Yes, this is an important requirement to study
l), m), n) Too early to decide this
o) Yes
Intel Following comments regarding each point:
j) Agree. We can first focus on data exchange within the same vendor network. For Rel-17, we can wait for SA2's progress on whether to support inter-vendor data/model exchange, and then decide on interpretability between nodes from different vendors. In addition, in order to achieve full interpretability between network nodes, ML models and algorithms should be shared among different network nodes.
k) We wonder how to measure the robustness and reliability of the data transmission. From our view, it is hard to quantify such robustness for AI/ML algorithms. We should focus on what data is needed as input/output first in this release.
l) Too early to discuss
m) same as l), too early to discuss
n) We think there’s no difference between this proposal and k). The
uncertainty/reliability of certain prediction can be discussed later after we
defined input/output data.
o) Disagree. Since we are not going to discuss the actual ML models, we don’t
see how one can prove “clear advantages” of some information needed for the
said algorithms. If missing such information may lead to failure of training or
inference in some use cases, we should also consider such information.
Deutsche Telekom j) Agree, as open interface definition is very important for us as operator.
k) Agree, but this is part of the use case analysis. Furthermore, it has to be
clarified what is meant by robustness and reliability. This should be related to
output of AI/ML algorithms, not to information provided by “usual” data
sources.
l) Agree
m) Agree, but this is part of the design of AI/ML approaches for different use
cases, i.e., the output/input relation may vary between nodes.
n) Agree, if it is feasible by the corresponding AI/ML model.
o) Agree. We should definitely avoid flooding AI/ML algorithms with
information not needed.
Huawei j) if the data here is just collected for training/inference, and as output of
inference, we agree it should be interpretable. For the AI model itself, we think
it should be left for OAM to configure.
k) agree
l) It is a bit of a stage-3 detail. In our understanding, we need to discuss what kind of data/info is needed for a specific use case; all the data input to a trained model should be optional, and it is up to implementation which info/data is included, so maybe there is no need to rush into a conclusion on a data request procedure? Note that we already have measurement/SON-related procedures to request data.
m) Again, it seems to us this is a bit of a stage-3 detail.
n) Not sure the intention, anyway it is up to receiving node to decide how to
use the received info.
o) agree
Nokia j) We agree.
k) It is a bit unclear what is meant by this. Are security issues also covered
under this scope? If so, SA3 should be involved.
l) We think it is too early to get into such details.
m) We think it is too early to get into such details.
n) We think it is too early to get into such details.
o) We disagree. We think that it is impossible to prove clear advantages for
any new potential input information to an AI/ML model unless we introduce
the details of the algorithm describing the model. The latter seems to be out of
the scope of this SI and we therefore disagree to such proposal.
AT&T The need for o) is unclear and seems potentially redundant given l)
Samsung j) Agree. Open interface definition is the normal principle in RAN3.
k) Agree, but it is not easy to measure and evaluate.
l) – n). It’s stage 3 issue regarding how to design the procedures e.g. whether
notification mechanism or request/response. This may be discussed case by
case for different use cases. To have such principles too early may bring
unnecessary restriction to select the best solutions.
o) in general it’s right, we agree. But at this stage, we are not sure whether it is
always possible to compare without touching the AI algorithm.
CMCC j) Yes, the input/output data shall be clearly defined and inter-operable.
k) Yes
l), m), n) are stage-3 details, not principles.
o) Not needed as a principle; it is business as usual and will for sure be considered in the evaluation of the solution.

The following principle related to coordination with other working groups was proposed [5].
p) RAN3 to take the existing output on data analytics from 3GPP SA2 and SA5 into account for
this SI as well as the output of other fora like e.g. ITU-T, ETSI, and O-RAN
Companies’ views are appreciated on the above principle.
Company Comment
ZTE This SI is mainly focused on the 3GPP RAN side impact; all the outputs from other WGs or fora can be regarded as information.
Fujitsu For p) coordination with other 3GPP groups is expected as usual. Companies
are free to include inputs from any sources as usual
Intel Agree with ZTE that all inputs from other standards organizations and WGs can be treated as information to be noted, rather than directly taken as a baseline or input for this SI in RAN3, which focuses on AI-enabled RAN.
Deutsche Telekom As there was already work performed on definitions and AI/ML
LCM/frameworks in different fora inclusive of 3GPP WGs, there is no need to
start from scratch. Especially the interrelation with SA2 and SA5 is important
to avoid diverging approaches within 3GPP. The focus of this SI is on RAN
part, but it has to consider the impact on LCM from e.g. OAM.
Huawei Nothing against, but maybe the study results within 3GPP should be
considered firstly, i.e. other 3GPP WGs.
Nokia There is a lot of work done by other groups, e.g., by SA5 on the data
collection. We believe that RAN3 should coordinate with other working
groups.
Samsung We agree with companies’ views to consider the output on data analytics from other WGs or fora.
CMCC This principle is not needed; it is business as usual. Liaisons can be received from other groups, but RAN3 can take them into account and make its own decisions.

3.2 Definitions
[1], [8]-[11] discussed the terminology which should be defined for RAN intelligence. The following
aspects are included:
- The definition of lifecycle-related terminologies: input/output/ML model/model training/model inference, etc. [1][8][9][10]
o Data collection/repository: Data collected from the gNB, UE or management entity, as
a basis for AI model training or data analytics and inference.
o Input: A range of data that may in full or in part be needed by an ML model to generate
Outputs
o Output: A range of predicted data of which all or part can be generated by an ML model
as output
o ML Model: Model created by applying machine learning techniques to generate a set of
outputs consisting of predicted information based on a set of inputs
o AI/ML Training: An online or offline process to produce an AI/ML model by learning
features and patterns that best present data automatically.
o AI/ML Inference: A process of using a ML model to make a prediction or guide the
decision or an AI process to perform AI analytics and inference based on collected data
and AI model.
o Inference Host: The entity which hosts the ML model during inference phase.
o Training Host: The entity which hosts the training of the ML model.
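As an informal reading aid only (not a proposed definition), the sketch below maps the terms above onto a minimal training/inference flow in code; the toy model and all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# Toy illustration of the terminology above; not a proposed framework.
Input = List[float]       # data that may, in full or in part, be needed by an ML model
Output = List[float]      # predicted data generated by an ML model

@dataclass
class DataRepository:
    """Data collection/repository: data collected from gNB, UE or management entity."""
    samples: List[Input]

MLModel = Callable[[Input], Output]

def training_host(repo: DataRepository) -> MLModel:
    """AI/ML Training: produce a (trivial) model from the collected data."""
    total = sum(sum(s) for s in repo.samples)
    count = sum(len(s) for s in repo.samples)
    mean = total / count if count else 0.0
    return lambda x: [mean for _ in x]       # constant-prediction "model"

def inference_host(model: MLModel, new_input: Input) -> Output:
    """AI/ML Inference: use the trained model to make a prediction."""
    return model(new_input)

repo = DataRepository(samples=[[1.0, 2.0], [3.0]])
model = training_host(repo)
print(inference_host(model, [4.0, 5.0]))     # -> [2.0, 2.0]
```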

Q2-1: Do you think definitions for the above terminologies should be included in the TR? If yes, please give your preferred definitions.
Company Comments
ZTE At least the below terminologies for LCM are necessary:
1) Data collection: Data collected from the NG-RAN node (including
aggregated or dis-aggregated deployments), UE or CN, as a basis for AI
model training or data analytics and inference.
2) Model Training: ML models include supervised learning, unsupervised learning, reinforcement learning and deep neural networks; an appropriate ML model has to be chosen according to the use case.
3) Model Inference: Executing the trained ML model with corresponding
inference data, the output is the input for taking a decision for an action.
4) Action: Performing the actions based on the outputs of the inference
model.
Fujitsu It is good to define basic terminology, exact wording of definitions can be
worked on when drafting TR
Intel We think we should define and agree on what should be included in the LCM
first, then provide definition to each terminology according to each module’s
functionality.
Deutsche Telekom Agree that we need those definitions in the TR.
Huawei As commented, LCM should be part of the OAM work, but we are also fine to capture some terminologies, like ML, ML model, Model training, Model inference, etc.; again, existing 3GPP specs should be good references.
Nokia In our view all the above definitions need to be captured in the TR.
Alignment between definitions from different companies may be needed,
also in relation to AI 18.1.
Samsung Agree. The above terminologies are necessary for the discussion of
input/output, AI functionality, procedure and LCM of AI enabled RAN
intelligence.
CMCC AI/ML is not the expertise of 3GPP, and the focus of the SI is not the AI per
se, so we should start with simple definitions. In our view, the following is
necessary,
- Data collection
- AI/ML model
- AI training
- AI inference
- Input and output
The definitions in R3-206783 can be considered
- The definition of AI/ML algorithms: supervised learning, unsupervised learning and reinforcement learning [9][11]
Q2-2: Do you think the definition for AI/ML algorithm should be included in the TR?
Company Comments
ZTE It is helpful for further study.
Fujitsu This is not necessary
Intel Yes, deployment of LCM in RAN node may be different for the algorithms
listed above.
Deutsche Telekom Agree to have learning structures in the TR (note that this is also related to
CB #24 as this is part of Intel’s ToC proposal in R3-206402).
Huawei Not sure what the definition means, our understanding, the definition should
be already there in academic area.
Nokia Yes
AT&T Yes
Samsung AI/ML algorithm selection and details should be up to implementation and out of scope. Same view as HW: there is a common understanding of these algorithms in academia. Thus, maybe there is no need to include the definitions of AI/ML algorithms in the TR.
CMCC Clarification on the AI algorithm may be beneficial but the pre-condition is
we can reach quick consensus on this.

- AI/ML learning structure: centralized learning, federated learning, distributed learning [9]
o Centralized learning refers to an AI/ML framework which requires all training data
collected by different nodes in RAN to be reported to a centralized node. In centralized
learning, all data resource/storage/training for supervised learning/unsupervised
learning/reinforcement learning are performed in centralized manner in a single node.
o Federated learning is a distributed machine learning framework (not to be confused
with distributed learning) that allows a collective model to be constructed from data that
is distributed across data owners. It brings AI/ML models to the data source, rather than
bringing the data to the model, allowing the local nodes/individual devices to collect
data and train their own copy of the model, thus no need to report source data to the
centralized node. In federated learning, only parameters/weights of AI/ML model need
to be sent back to the centralized node to assist generic model training.
o Distributed learning refers to the concept in which machine learning processes have
been scaled out and deployed across a cluster of nodes. The training model is split up
and shared among multiple simultaneously working nodes, in order to speed up model
training.
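Purely to illustrate the distinction drawn above, in particular that federated learning reports model parameters rather than source data to the centralized node, the following is a minimal federated-averaging sketch; the single-parameter model, the node data and the learning rate are all hypothetical.

```python
import random

# Toy federated averaging: each local node trains its own copy of a
# one-parameter model on its local data; only the updated parameter
# (never the raw data) is reported to the centralized node.

def local_training(local_data, global_weight, lr=0.5):
    # The local node nudges its model copy toward its local data mean.
    local_mean = sum(local_data) / len(local_data)
    return global_weight + lr * (local_mean - global_weight)

def federated_round(global_weight, all_nodes_data):
    local_weights = [local_training(data, global_weight) for data in all_nodes_data]
    # The centralized node only aggregates parameters/weights.
    return sum(local_weights) / len(local_weights)

random.seed(0)
nodes_data = [[random.gauss(0.0, 1.0) for _ in range(20)] for _ in range(3)]
weight = 0.0
for _ in range(5):
    weight = federated_round(weight, nodes_data)
print("aggregated model parameter:", weight)
```

In a centralized learning framework the raw samples themselves would instead be reported to, and trained on at, a single node, while in distributed learning the training of one model would be split across several cooperating nodes.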
Q2-3: Do you want to include the definitions of the learning structure in the TR?
Company Comments
ZTE No. As we agreed to discuss the solutions case by case, it is not necessary to describe the centralized, hybrid or distributed architecture on top of each identified use case. The detailed LCM workflow is more important; all the logical functions can be deployed at different RAN elements based on the solution for each use case.
Fujitsu This also does not seem to be necessary
Intel Yes. We don’t think it is even possible to discuss LCM without having the AI/ML framework definitions first. If we use the training module in LCM as an example: in a centralized learning framework, all training is in the same node (either in the CN or the RAN); in a federated learning framework, training can be deployed at both the CN and the RAN, or at both the RAN and the UE; in a distributed learning framework, training may be deployed at multiple RAN nodes or multiple UEs. Hence, it is essential to consider different AI/ML framework options for the AI-enabled RAN.
Deutsche Telekom We agree to include those definitions, as there are impacts on deployment
aspects of AI/ML algorithms in RAN nodes.
Huawei Maybe no need? When we have consensus on, e.g. work flow, what else
needed for definitions of learning structure?
Nokia Yes, we think it is important to define the different types of learning
(centralized, federated, distributed) since this characterizes the framework
under consideration in more detail. The type of learning will have an effect
on the general signaling over the NG-RAN interfaces.
AT&T Yes
Samsung The learning structure should be discussed case by case. So the general
learning structure may be unnecessary.

- The definitions of two categories of AI-enabled NG-RAN use cases: delay-insensitive use cases and delay-sensitive use cases [9].
Q2-4: Do you want to include two categories of AI-enabled NG-RAN use cases as proposed in
[9] in the TR?
Company Comments
ZTE No. Identify the valid use cases for this SI firstly.
Fujitsu Also not necessary
Intel The proposed delay insensitive and sensitive use case category is the same as
proposal g) (near-real time and non-real time) in section 3.1. We don’t
understand why companies agree with g) but not this proposal.
As for what should be captured in TR, it depends on the conclusion from use
case discussion. However, we suggest considering delay impact/requirement
of those uses cases, which may be helpful to further study on deployment of
training and inference.
Deutsche Telekom In [5] we described it as real time or non- and near-real time with impacts on
function/model placement and required feedback loops. But we see this
discussion as part of CB #26 and it does not need to be considered here.
Huawei Not needed for the moment.
Nokia Yes. In fact, we think that we should identify the representative use cases
such that we cover both delay insensitive and delay sensitive cases.
AT&T This categorization is helpful
Samsung We are fine to consider use cases cover both delay insensitive and delay
sensitive cases. It’s better to discuss use case related in CB#26.
CMCC Not needed, it can be discussed per use case.
3.3 Others
[1], [3], [5], and [8] proposed that it is required to study and define an AI-enabled RAN framework. Several types of frameworks are given in [1], [5] and [8].
Option 1: RAN-DCA [1]

Option 2: [5]

Option 3: [8]

[Framework figure(s) not reproduced. Recoverable elements: source data from UE, DU, CU-CP, CU-UP, OAM, etc.; training data and online data; ML Model Training Host; ML Model deployment; ML Model re-training info; ML Model Inference Host; Output; Actor; Action; Subject of action; (*): UE, DU, CU-CP, CU-UP, OAM.]
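As a reading aid only (not an additional option), the sketch below captures the logical flow that the framework options above have in common: data collection feeds model training, the trained model is used for inference, and the inference output drives an action whose effect becomes new input data. All function names and the toy decision logic are hypothetical, and the placement of each block (OAM, gNB-CU, gNB-DU, etc.) is deliberately left open.

```python
# Illustrative only: the common logical functions of the framework options
# (data collection, model training, model inference, action), with the
# placement of each function intentionally left open.

def data_collection(network_state):
    # Collect measurements from UE / DU / CU / OAM, etc.
    return [network_state["load"], network_state["interference"]]

def model_training(training_data):
    # Training host (e.g. OAM or an NG-RAN node): returns a trivial "model".
    threshold = sum(training_data) / len(training_data)
    return lambda sample: "offload" if sample > threshold else "keep"

def model_inference(model, online_data):
    # Inference host (e.g. an NG-RAN node): derive a decision from the model.
    return model(online_data)

def take_action(decision, network_state):
    # Actor applies the decision; its effect is fed back as new data.
    if decision == "offload":
        network_state["load"] *= 0.8
    return network_state

state = {"load": 0.9, "interference": 0.2}
model = model_training(data_collection(state))
for _ in range(3):
    state = take_action(model_inference(model, state["load"]), state)
print(state)  # load reduced by repeated offloading decisions
```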

Q3-1: Do you think the framework should be captured in the TR? If yes, which framework do you prefer?
Company Comment
ZTE Pls check the LCM flow in R3-206092 as below (we can call this Option 4):
Figure 1: AI framework for RAN
Pls note that the above logical functions in the LCM flow, e.g., Data collection, Model training, Model inference, Action, can be located inside or outside of the RAN node depending on the use case. Option 1 seems to exclude such a possibility.
The same problem exists for Option 2: model training can also be deployed in the RAN node.
For Option 3, the re-training part is not clear: after the action has been performed, the resulting performance can be used to adjust the parameters for Model training, and some performance data can also be used as input data for Model training and Model Inference.
Option 3 and Option 4 only describe the LCM flow, without any limitation on the deployment of each logical function. Data Collection and Action should usually be located in the NG-RAN node for those use cases identified for RAN optimization, while Model Training and Model Inference can both be located in a single place, e.g. the OAM system or the NG-RAN node, or Model Training can be located in the OAM system and Model Inference in the NG-RAN node. How to define the functional components of the AI Entity and the deployment of the AI Entity should be discussed case by case.
Fujitsu Capturing different AI frameworks would be useful
Intel We agree that such AI framework for RAN should be considered, however, we
think we need further discussion on the agreeable AI framework for RAN. At
least, we should first agree the general concepts and types of ML to study, then
which module should be included in LCM, then discuss the functionalities and
located nodes.
Deutsche Telekom We see the need to capture a framework figure and corresponding description in
the TR to have a common basis for use case analysis.
Our preference is on Option 2 and 3 due to high similarity. There is a need for
consistency in possible feedback loops to combine the 2 figures.
Huawei Maybe we should first try to reach some common understanding that the framework here is a general one, and on what the basic function blocks are, etc.; it seems to us that Options 2 and 3 could be taken as baselines for further discussion.
Nokia Yes, we support Option 2 or Option 3 to be captured in the TR. We do not
support Option 1 since this would imply changes in the RAN architecture.
AT&T We are open to capturing different frameworks in the TR. At the same time it
may be useful to first try and define some common functional blocks/definitions
of terms to allow easier comparison between options
Samsung We see the need to capture a framework figure and corresponding description in
the TR to have a common basis for use case analysis.
We are fine to take option 2, option 3 or option 4 as basis for further refinement.
We observed that some minor updates are needed for Option 2, Option 3 or Option 4.
CMCC Similar view as DT, We need to capture a framework figure and corresponding
description in the TR to have a common basis for use case analysis. We can take
option 2 and 3 as baseline.
Additionally, we want to clarify that Option 1 does not imply an architecture change; it just shows a function/module which supports AI in the RAN, and the placement of the components in that function can be discussed case by case.

[4], [5] and [7] proposed that the coordination with OAM and/or NWDAF is required for NG-RAN.
- the function of OAM for providing data source [4][7]
- interrogation of primarily non-real time optimization approaches via 5GC NWDAF or OAM MDAS
with those implemented in the RAN for both near- and non-real time purposes[5]
- The LCM for AI/ML algorithms placed in RAN nodes is still conducted in the OAM system [5]
- adapt the MDA process utilizing AI/ML technologies as described by SA5 for the OAM system for
introducing the functional framework for RAN intelligence
- analytics information provider [7]

Q3-2: Do you think coordination with OAM and/or NWDAF is required for NG-RAN? If
yes, which part of the function needs coordination?
Company Comment
ZTE Co-ordination can be performed later when needed.
Fujitsu Too early to decide this
Intel Yes, it depends on the use cases. we can further discuss and evaluate it after
we defined the use cases within the scope of this SI.
Deutsche Telekom The framework and LCM for AI/ML approaches may be spread across
different network domains. We have to clarify in the SI for the RAN how to
specify the interrelation with e.g. OAM, but we don’t have to describe
details as this is task of SA5. As Intel stated, this can be done after use case
definition/evaluation.
Huawei We think coordination might be needed, but this coordination should be a
natural outcome of the study on use cases and solutions.
Nokia We think that coordination with OAM is required. Benefits of coordination
with NWDAF may need some further justification.
Samsung Coordination can be discussed based on identified use cases if required.
CMCC Coordination can be decided later when we touch the details of the use
cases

[6] proposed two categories for evaluating different RAN-AI approaches and their corresponding
requirements and potential network impacts:
Type 1: RAN-AI: Near-Real Time / Centralized
Type 2: RAN-AI: Real-Time / Distributed RAN-AI
Q3-3: Do you support the two categories for evaluating different RAN-AI approaches and
their corresponding requirements and potential network impacts?
Company Comment
ZTE Not needed. Identify the valid use cases for this SI firstly.
Fujitsu Too early to decide this
Intel From our view, these are two methods of understanding the AI-enabled RAN.
Based on the delay sensitivity of use cases, RAN AI can be deployed as near-real-time or real-time: low-latency use cases, such as channel estimation and MAC scheduling, are delay sensitive, a.k.a. real-time AI-RAN; traffic steering, QoE estimation, etc., are delay insensitive, a.k.a. near-real-time AI-RAN. This near-real-time and real-time categorization is more focused on the use case, rather than on the framework and approaches.
Based on the deployment of the training and inference modules, ML functions in RAN can be separated into centralized/distributed/federated (hybrid) AI-RAN. Centralized or distributed is a relative concept; depending on the use case and the AI/ML model deployment, there are several possibilities:
- CN as the center, RANs are distributed
- RAN-CU as the center, RAN-DUs are distributed
- RAN (CU or DU) as the center, UEs are distributed
Hence, from our view, we suggest to study these two categories
separately.
Deutsche Telekom This question is related to the one listed earlier on delay insensitive and
delay sensitive use cases. See our comments to that question.
Huawei Maybe not for now.
Nokia Yes
AT&T Yes these categories can have vastly different requirements and impacts
on the RAN (which interfaces are involved and data collection
requirements, etc.)
Samsung Prefer to identify the use case firstly.
CMCC Not needed for now, we should discuss use cases first

[4] proposed that RAN3 study the impact of training and inference inside or outside the RAN as follows:
- Training: offline ML model training is performed outside of the gNB based on MDT, SON, QoE and OAM data
- Outside-RAN inference: study whether to define a new interface or enhance existing interfaces for the policy and configuration delivery.
- Inside-RAN inference: study the interface for the delivery of the ML model and model-generated policy/configuration to the related RAN nodes.
Companies are invited to give your views on the above aspects.
Company Comment
ZTE Related to Q3-1, as we said, how to define the functional components of the
AI Entity and the deployment of AI Entity should be discussed case by case.
Fujitsu This may be useful depending on the chosen use cases
Intel The deployment of training and inference module should be discussed
together with LCM and AI-RAN framework, considering different service
requirement from use cases.
Deutsche Telekom These topics are related to the AI/ML framework description and possible
alternatives for implementation/placement of AI/ML model/functions in
the RAN nodes or other NW domains like OAM and should be
considered within that discussion.
Huawei Let’s go step by step, i.e. maybe we should first try to reach consensus on the basic concepts, terminology and framework, and the existing outcome could be taken as a starting point. In general, we are fine that offline training should not be performed inside the RAN, and inference should be inside the RAN.
Nokia Yes, but we think that Training/Learning should be allowed also inside a
gNB.
AT&T Agree with Nokia
Samsung It should depend on the use case. Support to further study after identifying
use cases.
CMCC Training can be inside the gNB or outside the gNB depending on the use cases.

4 Conclusion, Recommendations [if needed]


If needed

5 References
[1] R3-206783, High-level framework for AI enabled RAN optimisation (CMCC), discussion
[2] R3-206041, Discussion on High-Level Principles for RAN Intelligence (Samsung), discussion
[3] R3-206091, General Principles for AI study (ZTE Corporation, China Unicom), discussion
[4] R3-206170, AI/ML framework (Qualcomm Incorporated), discussion
[5] R3-206197, High-level principles and definitions for AI/ML in RAN (Deutsche Telekom AG), discussion
[6] R3-206333, High-Level Overview of Artificial Intelligence in RAN (AT&T), discussion
[7] R3-206338, Considerations on high-level principle and definition for AI support in NG-RAN (CATT), discussion
[8] R3-206375, (TP for TR 37.817) Terminology for functional framework for machine learning and other RAN intelligence (Nokia, Nokia Shanghai Bell)
[9] R3-206403, Use cases, AI/ML algorithms, and general concepts (Intel Corporation), other
[10] R3-206438, Definitions and Working Assumptions for the study on AI/ML for NG-RAN (Ericsson), discussion
[11] R3-206729, Initial discussions on further enhancement for data collection (Huawei), other
