WO2024080746A1 - Methods and apparatus for handling AI/ML data - Google Patents
Methods and apparatus for handling AI/ML data
- Publication number
- WO2024080746A1 (PCT/KR2023/015637)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- entity
- data
- training
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/0268—Traffic management, e.g. flow control or congestion control using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W76/00—Connection management
- H04W76/10—Connection setup
- H04W76/15—Setup of multiple wireless link connections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/082—Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0866—Checking the configuration
- H04L41/0869—Validating the configuration within one network element
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0895—Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5041—Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/80—Actions related to the user profile or the type of traffic
- H04L47/803—Application aware
Definitions
- Certain examples of the present disclosure relate to methods, apparatus and/or systems for handling artificial intelligence / machine learning (AI/ML) data, and in particular handling AI/ML training data. Further, certain examples of the present disclosure relate to methods and apparatus for distinguishing traffic relating to training data and other data. In particular, certain examples relate to classifying traffic relating to training data. Further, certain examples of the present disclosure relate to methods and apparatus for transferring AI/ML model and/or training data between network entities (or network functions). Further, certain examples of the present disclosure relate to the notification and behaviour of network entities (or network functions) based on training status of an AI/ML model.
- AI/ML artificial intelligence / machine learning
- IIoT Industrial Internet of Things
- IAB Integrated Access and Backhaul
- DAPS Dual Active Protocol Stack
- 5G baseline architecture for example, service based architecture or service based interface
- NFV Network Functions Virtualization
- SDN Software-Defined Networking
- MEC Mobile Edge Computing
- multi-antenna transmission technologies such as Full Dimensional MIMO (FD-MIMO), array antennas and large-scale antennas, metamaterial-based lenses and antennas for improving coverage of terahertz band signals, high-dimensional space multiplexing technology using OAM (Orbital Angular Momentum), and RIS (Reconfigurable Intelligent Surface), but also full-duplex technology for increasing frequency efficiency of 6G mobile communication technologies and improving system networks, AI-based communication technology for implementing system optimization by utilizing satellites and AI (Artificial Intelligence) from the design stage and internalizing end-to-end AI support functions, and next-generation distributed computing technology for implementing services at levels of complexity exceeding the limit of UE operation capability by utilizing ultra-high-performance communication and computing resources.
- FD-MIMO Full Dimensional MIMO
- OAM Orbital Angular Momentum
- RIS Reconfigurable Intelligent Surface
- Wireless or mobile (cellular) communications networks in which a mobile terminal (UE, such as a mobile handset) communicates via a radio link with a network of base stations, or other wireless access points or nodes, have undergone rapid development through a number of generations.
- The 3rd Generation Partnership Project (3GPP) designs, specifies and standardises technologies for mobile wireless communication networks.
- Fourth Generation (4G) and Fifth Generation (5G) systems are now widely deployed.
- 3GPP standards for 4G systems include an Evolved Packet Core (EPC) and an Enhanced-UTRAN (E-UTRAN: an Enhanced Universal Terrestrial Radio Access Network).
- EPC Evolved Packet Core
- E-UTRAN Enhanced-UTRAN
- LTE Long Term Evolution
- LTE is commonly used to refer to the whole system including both the EPC and the E-UTRAN, and LTE is used in this sense in the remainder of this document.
- LTE should also be taken to include LTE enhancements such as LTE Advanced and LTE Pro, which offer enhanced data rates compared to LTE.
- 5G NR 5G New Radio
- NR is designed to support the wide variety of services and use case scenarios envisaged for 5G networks, though builds upon established LTE technologies.
- New frameworks and architectures are also being developed as part of 5G networks in order to increase the range of functionality and use cases available through 5G networks.
- One such new framework is the use of artificial intelligence / machine learning (AI/ML), which may be used for the optimisation of the operation of 5G networks.
- AI/ML artificial intelligence / machine learning
- AI/ML models and/or data might be transferred across the AI/ML applications (e.g., application functions (AFs)), the 5GC (5G core), UEs (user equipment), etc.
- AI/ML work can be divided into two main phases: model training and inference. During model training and inference, multiple rounds of interaction may be required.
- An AI/ML model training process is generally computationally complex and may significantly impact power consumption, resources and performance of the model training network entity (that is, a network entity performing the model training; in a non-limiting example, this may be a UE). Additionally, a considerable volume of training data needs to be exchanged between an application residing in the UE and its counterpart within or outside the operator's network, and needs to be sent via radio links, e.g. between the UE and the NG-RAN (next generation RAN (radio access network)).
- NG-RAN next generation RAN (radio access network)
- the AI/ML operation/model is split into multiple parts according to the current task and environment.
- the intention is to offload the computation-intensive, energy-intensive parts to network endpoints, while leaving the privacy-sensitive and delay-sensitive parts at the end device.
- the device executes the operation/model up to a specific part/layer and then sends the intermediate data to the network endpoint.
- the network endpoint executes the remaining parts/layers and feeds the inference results back to the device.
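- Purely as an illustrative sketch of this split operation (the toy model, the chosen split point and the hand-over of intermediate data are assumptions, not part of the disclosure), the device could execute the first layers locally and let a network endpoint finish the computation:

```python
# Minimal sketch of split AI/ML inference between a device and a network endpoint.
# The toy model, split point and "send" mechanism are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-layer model represented as weight matrices.
W = [rng.standard_normal((16, 32)), rng.standard_normal((32, 32)), rng.standard_normal((32, 4))]

def run_layers(x, layers):
    """Apply a slice of the model (ReLU between layers)."""
    for w in layers:
        x = np.maximum(x @ w, 0.0)
    return x

SPLIT_POINT = 1  # device keeps the first layer (privacy/delay-sensitive part)

def device_forward(x):
    # Device executes the model up to the split point...
    intermediate = run_layers(x, W[:SPLIT_POINT])
    # ...and sends the intermediate data to the network endpoint (transport not shown).
    return intermediate

def endpoint_forward(intermediate):
    # Network endpoint executes the remaining, computation-intensive layers
    # and feeds the inference result back to the device.
    return run_layers(intermediate, W[SPLIT_POINT:])

x = rng.standard_normal((1, 16))          # device-side input
result = endpoint_forward(device_forward(x))
print("inference result shape:", result.shape)
```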
- Multi-functional mobile terminals might need to switch the AI/ML model in response to task and environment variations.
- the condition of adaptive model selection is that the models to be selected are available for the mobile device.
- it may be determined not to pre-load all candidate AI/ML models on-board.
- Online model distribution i.e. new model downloading
- NW network
- the model performance at the UE needs to be monitored constantly.
- the cloud server trains a global model by aggregating local models partially trained by each end device.
- a UE performs the training based on the model downloaded from the AI server using the local training data. Then the UE reports the interim training results to the cloud server via 5G UL channels.
- the server aggregates the interim training results from the UEs and updates the global model. The updated global model is then distributed back to the UEs and the UEs can perform the training for the next iteration.
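- A minimal sketch of this training loop, under the assumption of a simple FedAvg-style mean over the interim results (the toy local update and data are illustrative only; real deployments would use the 5G UL/DL channels mentioned above):

```python
# Toy sketch of federated training: UEs train locally, the server aggregates.
import numpy as np

rng = np.random.default_rng(1)
global_model = np.zeros(8)                      # global model parameters held by the server

def local_training(model, local_data, lr=0.1):
    """One round of local training at a UE (toy gradient step towards the local data mean)."""
    grad = model - local_data.mean(axis=0)
    return model - lr * grad                    # interim training result reported via UL

ue_datasets = [rng.standard_normal((20, 8)) + i for i in range(3)]   # per-UE local data

for _ in range(5):
    # 1) UEs download the current global model and train on local data.
    interim_results = [local_training(global_model.copy(), d) for d in ue_datasets]
    # 2) Server aggregates the interim results (FedAvg-style mean; an assumption here).
    global_model = np.mean(interim_results, axis=0)
    # 3) The updated global model is distributed back for the next iteration.

print("global model after 5 rounds:", np.round(global_model, 2))
```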
- The present disclosure relates to wireless communication systems and, more specifically, to methods and apparatus for handling AI/ML data.
- a first entity in a communications network comprising: a transmitter; a receiver; and at least one processor configured to: while a first connection with a user equipment (UE) is established, transmit information relating to an artificial intelligence/machine learning (AI/ML) model to a second entity in the communications network, wherein the information is usable for obtaining the AI/ML model and/or data relating to the AI/ML model; transmit, to the UE, a first message to trigger the UE to establish a second connection with the second entity, wherein the first message indicates that the second entity has obtained the AI/ML model and/or the data relating to the AI/ML model; and forward AI/ML data for the AI/ML model to the second entity.
- UE user equipment
- AI/ML artificial intelligence/machine learning
- the information relating to the AI/ML model comprises: the AI/ML model, an update to the AI/ML model, the data relating to the AI/ML model, assistance information usable for obtaining the AI/ML model and/or the data relating to the AI/ML model, or assistance information for replacing or updating previously transmitted information relating to the AI/ML model.
- the assistance information comprises one or more of: AI/ML model ID; AI/ML model deployment; AI/ML model training; AI/ML model training status; AI/ML model transfer; AI/ML model update; AI/ML model use case; network-UE collaboration level; training type; training session ID; training update; a training version number; training validity; or AI/ML model inference.
- the AI/ML model deployment indicates one of the UE, the first entity, the second entity, or a combination of the first entity and the second entity;
- the AI/ML model training indicates one of the UE, the first entity, the second entity, or a combination of the first entity and the second entity;
- the AI/ML model training status indicates one of completed, untrained or partially trained;
- the AI/ML model transfer indicates one of full or partial;
- the AI/ML model update indicates one of the first entity, the second entity, a combination of the first entity and the second entity, a core network (CN) or operations, administration and maintenance (OAM);
- the AI/ML model use case indicates one of load balancing, energy saving, mobility optimisation, CSI feedback enhancement, beam management, and positioning accuracy enhancements;
- the network-UE collaboration level indicates one of the UE, the first entity, the second entity, or a collaboration between two or more of the UE, the first entity and the second entity;
- the training type indicates offline or online;
- the training update indicates one of UE-initiated, first entity-initiated, second entity-initiated, or core network (CN)-initiated.
- the at least one processor is configured to: receive data from a user plane function (UPF); and identify at least part of the data received from the UPF as the AI/ML data.
- UPF user plane function
- the at least one processor is configured to identify and/or classify the at least part of the received data as the AI/ML data based on one or more of: a label assigned to the at least part of the received data by another entity, the other entity having split the data into the AI/ML data and other data; a 5G Quality of Service (QoS) Indicator (5QI) for the AI/ML data; one or more QoS parameters of the at least part of the received data; an ID assigned to packets in the at least part of the received data; a volume, data structure and/or data format of the at least part of the received data; assistant information related to the AI/ML data received from a third entity; a QoS flow(s) or protocol data unit (PDU) session(s) used for the at least part of the received data; or stored information on one or more of frequency, size or time or frequency pattern of the AI/ML data.
- QoS Quality of Service
- PDU protocol data unit
- a remaining part of the received data includes user data; and the AI/ML data is distinct from the user data.
- one or more of: the 5QI for the AI/ML data is different to a 5QI for the user data; the one or more QoS parameters for the at least part of the received data are different to corresponding one or more QoS parameters for the remaining part of the received data; the ID comprises a training session ID and/or an ID for the AI/ML model; the QoS flow(s) or PDU session(s) used for the at least part of the received data is different to a QoS flow(s) or PDU session(s) used for the remaining part of the received data; or the stored information is obtained based on observing previous data comprising previous AI/ML data and previous user data as received by the first entity.
- the at least one processor is configured to: split traffic, received from core network (CN), into the AI/ML data and other data; and/or wherein the AI/ML data includes training data.
- CN core network
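- As a hedged illustration of the identification criteria above (the 5QI values, session IDs, packet fields and size threshold used below are hypothetical, not standardised), received data might be sorted into AI/ML data and other data roughly as follows:

```python
# Sketch: classify received packets as AI/ML (training) data or other/user data.
# The 5QI values, session IDs and packet fields below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

AI_ML_5QIS = {90, 91}                 # hypothetical 5QI values reserved for AI/ML training traffic
KNOWN_TRAINING_SESSIONS = {"train-001"}

@dataclass
class Packet:
    five_qi: Optional[int] = None
    training_session_id: Optional[str] = None
    qos_flow_id: Optional[int] = None
    size_bytes: int = 0

def is_ai_ml_data(pkt: Packet, training_qos_flows: set) -> bool:
    """Apply the criteria in order: 5QI, assigned IDs, QoS flow, then a crude size heuristic."""
    if pkt.five_qi in AI_ML_5QIS:
        return True
    if pkt.training_session_id in KNOWN_TRAINING_SESSIONS:
        return True
    if pkt.qos_flow_id in training_qos_flows:
        return True
    return pkt.size_bytes > 50_000     # volume/pattern heuristic (assumption)

packets = [Packet(five_qi=90, size_bytes=1200),
           Packet(five_qi=9, training_session_id="train-001"),
           Packet(five_qi=9, qos_flow_id=3, size_bytes=800)]
ai_ml, other = [], []
for p in packets:
    (ai_ml if is_ai_ml_data(p, training_qos_flows={7}) else other).append(p)
print(len(ai_ml), "AI/ML packets,", len(other), "other packets")
```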
- the first message is a first radio resource control (RRC) message; and the at least one processor is configured to: receive, from the UE, a second RRC message; and facilitate establishment of the second connection.
- RRC radio resource control
- the first message is an RRC reconfiguration message and the second message is an RRC reconfiguration complete message.
- the at least one processor is configured to: receive, from the second entity, a second message indicating the second entity has obtained the AI/ML model, the data relating to the AI/ML model and/or the information relating to the AI/ML model; and/or transmit, to the second entity, the information relating to the AI/ML model in response to a request for transfer of the AI/ML model and/or the data relating to the AI/ML model received from the second entity.
- the at least one processor is configured to: perform training of the AI/ML model based on other AI/ML data for the AI/ML model and/or data stored in the first entity; and/or receive, from the second entity, an updated AI/ML model and/or other data relating to the AI/ML model.
- the first entity is configured to jointly perform training of the AI/ML model with the second entity; and/or the first entity is configured to: perform model inference or jointly perform model inference with the second entity based on the AI/ML model or an updated AI/ML model, wherein the updated AI/ML model results from performing the training of the AI/ML model.
- the at least one processor is configured to: receive, from the second entity, a model training status based on training of the AI/ML model, or a notification of completion of the training; and/or upon completion of the training, cause release of the second entity to be triggered.
- a dedicated channel is used for forwarding the AI/ML data to the second entity; and the dedicated channel allows the AI/ML data to be processed differently to other data.
- the at least one processor is configured to: forward further AI/ML data for the AI/ML model to the second entity; and receive, from the second entity, a reject message indicating a cause of failure.
- a second entity in a communications network comprising: a transmitter; a receiver; and at least one processor configured to: receive, from a first entity in the communications network and having a first connection with a user equipment (UE), information relating to an artificial intelligence/machine learning (AI/ML) model, wherein the information is usable for obtaining the AI/ML model and/or data relating to the AI/ML model; obtain the AI/ML model and/or the data relating to the AI/ML model, based on the information; establish a second connection with the UE; receive, from the first entity, AI/ML data for the AI/ML model; and perform, based on the obtained AI/ML model and/or the data relating to the AI/ML model, training of the AI/ML model to provide an updated AI/ML model based on the second connection.
- UE user equipment
- AI/ML artificial intelligence/machine learning
- the information relating to the AI/ML model comprises: the AI/ML model, an update to the AI/ML model, the data relating to the AI/ML model, assistance information usable for obtaining the AI/ML model and/or the data relating to the AI/ML model, or assistance information for replacing or updating previously received information relating to the AI/ML model.
- the assistance information comprises one or more of: AI/ML model ID; AI/ML model deployment; AI/ML model training; AI/ML model training status; AI/ML model transfer; AI/ML model update; AI/ML model use case; network-UE collaboration level; training type; training session ID; training update; a training version number; training validity; or AI/ML model inference.
- the AI/ML model deployment indicates one of the UE, the first entity, the second entity, or a combination of the first entity and the second entity;
- the AI/ML model training indicates one of the UE, the first entity, the second entity, or a combination of the first entity and the second entity;
- the AI/ML model training status indicates one of completed, untrained or partially trained;
- the AI/ML model transfer indicates one of full or partial;
- the AI/ML model update indicates one of the first entity, the second entity, a combination of the first entity and the second entity, a core network (CN) or operations, administration and maintenance (OAM);
- the AI/ML model use case indicates one of load balancing, energy saving, mobility optimisation, CSI feedback enhancement, beam management, and positioning accuracy enhancements;
- the network-UE collaboration level indicates one of the UE, the first entity, the second entity, or a collaboration between two or more of the UE, the first entity and the second entity;
- the training type indicates offline or online;
- the training update indicates one of UE-initiated, first entity-initiated, second entity-initiated, or core network (CN)-initiated.
- the at least one processor is configured to transmit the AI/ML data to the UE for model training, and/or perform the training using the AI/ML data; and/or wherein the AI/ML data includes training data.
- a first dedicated channel is used for transmitting the AI/ML data to the UE and/or a second dedicated channel is used for receiving the AI/ML data from the first entity; and the first dedicated channel and the second dedicated channel allow the AI/ML data to be processed differently to other data.
- the at least one processor is configured to: transmit the AI/ML data using one or more of: best effort, non-guaranteed bit rate (non-GBR), or low-QoS-value data radio bearers (DRBs); or a modulation and coding scheme, a security and protection level, an energy requirement, a reliability requirement, a bandwidth part (BWP), a carrier, or a carrier group, that differs from that used for other data (e.g. the other data may be user data or non-AI/ML data).
- non-GBR non-guaranteed bit rate
- DRB data radio bearer
- the AI/ML data is transmitted or processed with a different priority than that of the other data.
- the at least one processor is configured to transmit, to the first entity, a message indicating the second entity has obtained the AI/ML model, the data relating to the AI/ML model and/or the information relating to the AI/ML model; the at least one processor is configured to transmit, to the first entity, a model training status based on the training of the AI/ML model, or a notification of completion of the training of the AI/ML model; and/or upon completion of the training, release of the second entity is triggered.
- the at least one processor is configured to: transmit, to the first entity, a request for transfer of the AI/ML model and/or the data relating to the AI/ML model; and receive, from the first entity, the information relating to the AI/ML model in response to the request.
- the at least one processor is configured to: receive further AI/ML data for the AI/ML model from the first entity; determine that the further AI/ML data is not supported; and transmit, to the first entity, a reject message indicating a cause of failure.
- the first entity is a first next generation radio access network (NG-RAN), a master NG-RAN (M-NG-RAN) node, a first secondary NG-RAN (S-NG-RAN) node, or a first next generation Node B (gNB);
- the second entity is a second NG-RAN, a second S-NG-RAN node, or a second gNB; and/or the communications network is a 5G network.
- a method in a communications network comprising a first entity and a second entity, the method comprising: while a first connection with a user equipment (UE) is established, transmitting, by the first entity, information relating to an artificial intelligence/machine learning (AI/ML) model to a second entity, wherein the information is usable for obtaining the AI/ML model and/or data relating to the AI/ML model; obtaining, by the second entity, the AI/ML model and/or the data relating to the AI/ML model, based on the received information relating to the AI/ML model; transmitting, by the first entity to the UE, a first message to trigger the UE to establish a second connection with the second entity, wherein the first message indicates that the second entity has obtained the AI/ML model and/or the data relating to the AI/ML model; establishing, by the second entity, a second connection with the UE; forwarding, by the first entity, AI/ML data for the AI/ML model to the second entity; and performing, by the second entity, based on the obtained AI/ML model and/or the data relating to the AI/ML model, training of the AI/ML model to provide an updated AI/ML model based on the second connection.
- AI/ML artificial intelligence/machine learning
- a method of a first entity in a communications network comprising: while a first connection with a user equipment (UE) is established, transmitting information relating to an artificial intelligence/machine learning (AI/ML) model to a second entity in the communications network, wherein the information is usable for obtaining the AI/ML model and/or data relating to the AI/ML model; transmitting, to the UE, a first message to trigger the UE to establish a second connection with the second entity, wherein the first message indicates that the second entity has obtained the AI/ML model and/or the data relating to the AI/ML model; and forwarding AI/ML data for the AI/ML model to the second entity.
- AI/ML artificial intelligence/machine learning
- a method of a second entity in a communications network comprising: receiving, from a first entity in the communications network and having a first connection with a user equipment (UE), information relating to an artificial intelligence/machine learning (AI/ML) model, wherein the information is usable for obtaining the AI/ML model and/or data relating to the AI/ML model; obtaining the AI/ML model and/or the data relating to the AI/ML model, based on the information; establishing a second connection with the UE; receiving, from the first entity, AI/ML data for the AI/ML model; and performing, based on the obtained AI/ML model and/or the data relating to the AI/ML model, training of the AI/ML model to provide an updated AI/ML model based on the second connection.
- AI/ML artificial intelligence/machine learning
- methods of the first entity and methods of the second entity including operations and/or features corresponding to the operations and/or features of the respective one of the first entity and the second entity which are described in any of the various examples disclosed above.
- a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out a method according to any of the examples described above.
- a network comprising a first entity according to any of the examples described above and a second entity according to any of the examples described above.
- Methods and apparatus for handling AI/ML data are provided.
- Figure 1 shows a representation of a method or call flow according to an example of the present disclosure.
- Figure 2 shows a representation of a method or call flow according to another example of the present disclosure.
- Figure 3 shows a representation of a method or call flow according to another example of the present disclosure.
- Figure 4 shows a representation of a method or call flow according to another example of the present disclosure.
- Figure 5 shows a representation of a method or call flow according to another example of the present disclosure.
- Figure 6 is a block diagram illustrating an example structure of a network entity or network function in accordance with certain examples of the present disclosure.
- X for Y (where Y is some action, process, operation, function, activity or step and X is some means for carrying out that action, process, operation, function, activity or step) encompasses means X adapted, configured or arranged specifically, but not necessarily exclusively, to do Y.
- Certain examples of the present disclosure provide methods, apparatus and/or systems for: differentiating traffic (or data packet(s)) associated with AI/ML training data; classifying traffic (or data packet(s)) as being associated with AI/ML training data; providing an indication that traffic (or data packet(s)) are associated with AI/ML training data; transferring or transmitting an AI/ML model and/or assistance information associated with training for an AI/ML model between network entities/functions; controlling notification and/or behaviour of a network entity/function based on traffic (or data packet(s)) being associated with AI/ML training data; and controlling notification and/or behaviour of a network entity/function based on training status of a AI/ML model.
- the present disclosure is not limited to these examples, and includes other examples.
- 3GPP 5G 3rd Generation Partnership Project
- the techniques disclosed herein are not limited to these examples or to 3GPP 5G, and may be applied in any suitable system or standard, for example one or more existing and/or future generation wireless communication systems or standards.
- the techniques disclosed herein may be applied in any existing or future releases of 3GPP 5G NR or any other relevant standard.
- the functionality of the various network entities and other features disclosed herein may be applied to corresponding or equivalent entities or features in other communication systems or standards.
- Corresponding or equivalent entities or features may be regarded as entities or features that perform the same or similar role, function, operation or purpose within the network.
- a particular network entity may be implemented as a network element on a dedicated hardware, as a software instance running on a dedicated hardware, and/or as a virtualised function instantiated on an appropriate platform, e.g. on a cloud infrastructure.
- One or more of the messages in the examples disclosed herein may be replaced with one or more alternative messages, signals or other type of information carriers that communicate equivalent or corresponding information.
- One or more non-essential elements, entities and/or messages may be omitted in certain examples.
- Information carried by a particular message in one example may be carried by two or more separate messages in an alternative example.
- Information carried by two or more separate messages in one example may be carried by a single message in an alternative example.
- the transmission of information between network entities is not limited to the specific form, type and/or order of messages described in relation to the examples disclosed herein.
- an apparatus/device/network entity configured to perform one or more defined network functions and/or a method therefor.
- Such an apparatus/device/network entity may comprise one or more elements, for example one or more of receivers, transmitters, transceivers, processors, controllers, modules, units, and the like, each element configured to perform one or more corresponding processes, operations and/or method steps for implementing the techniques described herein.
- an operation/function of X may be performed by a module configured to perform X (or an X-module).
- Certain examples of the present disclosure may be provided in the form of a system (e.g., a network) comprising one or more such apparatuses/devices/network entities, and/or a method therefor.
- examples of the present disclosure may be realized in the form of hardware, software or a combination of hardware and software.
- Certain examples of the present disclosure may provide a computer program comprising instructions or code which, when executed, implement a method, system and/or apparatus in accordance with any aspect, example and/or embodiment disclosed herein.
- Certain embodiments of the present disclosure provide a machine-readable storage storing such a program.
- 3GPP agreed a new Release 18 "Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface”.
- the initial set of use cases under study in 3GPP Technical Specification Group (TSG) RAN1 (RAN Working Group 1 (WG1)) includes: CSI feedback enhancement, Beam management, and positioning accuracy enhancements.
- 3GPP TSG RAN2 (RAN Working Group 2 (WG2)) is also involved in this study and will be addressing the following protocol aspects:
- RAN1 agreed the following different levels of collaboration between the network and UE:
- RAN1 made the following working assumption on the general aspects of AI/ML framework:
- RAN3: specify data collection enhancements and signalling support within existing NG-RAN interfaces and architecture (including non-split architecture and split architecture) for AI/ML-based Network Energy Saving, Load Balancing and Mobility Optimization.
- Xn interface
- AI/ML related information e.g., predicted information
- the new procedure for reporting AI/ML related information should operate in a requested (on-demand) manner, like the resource status reporting procedure.
- the new procedure over Xn used for AI/ML related information should be non-UE-associated as a starting point.
- an AI/ML model training process is generally computationally complex and may significantly impact power consumption, resources and performance of the model training network entity. Additionally, a considerable volume of training data may need to be exchanged between an application(s) residing in the model training network entity (e.g., a UE) and its counterpart(s) within or outside the (operator's) network, and it may need to be sent via radio links, e.g. between the UE (model training network entity) and a next generation radio access network (NG-RAN).
- NG-RAN next generation radio access network
- an NG-RAN would treat user data and training data similarly, i.e. in terms of radio protocol procedures, resulting in similar assigned power and computational resources.
- certain examples of the present disclosure provide apparatus, system(s), network(s) and/or method(s) to distinguish training data (e.g., AI/ML training data, application training data etc.) from other data (e.g., user data, or data not associated with training of a AI/ML model).
- classifying data may allow for the distinguishing of training data in the network.
- distinguishing the training data is based on use of one or both of two new classes of 5G Quality of Service Identifiers (5QIs), e.g., a 5QI associated with AI/ML training GBR (guaranteed bit rate), and a 5QI associated with AI/ML training non-GBR.
- 5QIs 5G Quality of Service Identifiers
- certain examples of the present disclosure provide apparatus, system(s), network(s) and/or method(s) to differentiate the treatment of (that is, to treat differently) such training data and the other data (e.g., user data) at a network entity (or network function) and/or UE, based in part on the classification (or distinguishing) of the data.
- a network entity/function and a UE may treat data differently depending on whether the data is identified to be AI/ML training data or user data, based on a classification method for distinguishing the training data.
- a network entity may be a UE, a network node, or an application etc.
- certain examples of the present disclosure provide apparatus, system(s), network(s) and/or method(s) to enable a network entity and/or a UE to perform model training in another network entity in a dual-connectivity scenario (e.g., E-UTRAN New Radio - Dual Connectivity (EN-DC), or multi-RAT Dual Connectivity (MR-DC) etc.).
- a dual-connectivity scenario e.g., E-UTRAN New Radio - Dual Connectivity (EN-DC), or multi-RAT Dual Connectivity (MR-DC) etc.
- certain examples of the present disclosure provide apparatus, system(s), network(s) and/or method(s) relating to notification and behaviour of network entities based on model training status.
- a network entity may explicitly or implicitly notify/report, to another network entity, a training status of an AI/ML model, and may also optionally transmit the model to the other network entity.
- examples described herein may apply to dual connectivity (DC) scenarios/cases and standalone (SA) scenarios/cases, with or without modification as appropriate. Additionally, a person skilled in the art would appreciate that examples described herein may apply to one or more, or all, DC scenarios/cases (e.g., EN-DC, MR-DC) and related network entities/functions (e.g., for EN-DC: S-eNB (secondary eNB), SN (secondary node), MN (master node), S-GW (serving gateway), MME (mobility management entity); for MR-DC with 5GC (5G core): NG-eNB (next generation eNB), gNB, MN, SN, UPF (user plane function), AMF (access and mobility management function)).
- DC scenarios/cases e.g., EN-DC, MR-DC
- a network function may be an example of a network entity.
- traffic or data packets associated with, or related to, training data or similar; it will be appreciated that included within the scope of such traffic or data packets are cases of the traffic or data packets being training data, as well as cases where the traffic or data packets are associated with, or related to, or correspond to, training data.
- examples may refer to classifying traffic, determining whether traffic is related to training data, operating differently if the traffic relates to training data etc.; it will be appreciated that the present disclosure should be considered to equivalently include examples where the term "traffic" is replaced by "data packet(s)" (it is also noted that some examples of the present disclosure refer to both traffic and/or data packets, thereby demonstrating that the present disclosure consistently considers both cases of traffic and cases of data packet(s)).
- apparatus, system(s), network(s) and/or method(s) to distinguish training data e.g., AI/ML training data, application training data etc.
- training data e.g., AI/ML training data, application training data etc.
- other data e.g., user data
- a method is provided for distinguishing AI/ML training data from user data in a network.
- the network classifies traffic/data packets into application training data and/or user data.
- the method of classification may depend on whether the training data is labelled by an application function (AF) in a way that the (mobile) network can understand that data in this class are training data.
- AF application function
- the network may distinguish training data from user data based on one or more of the following methods/arrangements:
- The QoS profile/parameters/characteristics for training data are different from those for user data.
- a network (entity/function) and/or an AF may assign for the training data:
- a new 5QI class of service identifier/value i.e. new 5QI index in the 5QI table
- new 5QI index in the 5QI table
- Packets are assigned a training session ID, AI/ML model ID, or other ID.
- this ID may be assigned to the packet by an internal or external network entity/function and/or an AF (or an application).
- an AF may directly or via another network entity/function (e.g., for 5GC case: AMF, SMF (session management function), or other entity/function), provide to the traffic classification entity (e.g. for 5GC case: UPF, NG-RAN, or other entity) assistant information related to training data packets/streams to assist training data classification at this entity.
- another network entity/function e.g., for 5GC case: AMF, SMF (session management function), or other entity/function
- the traffic classification entity e.g. for 5GC case: UPF, NG-RAN, or other entity
- assistant information related to training data packets/streams to assist training data classification at this entity.
- training data packets are differentiated from user data packets at the UPF entity, based on one or multiple criteria mentioned above and/or any other suitable traffic classification criteria.
- training data packets are differentiated from the user data packets at the NG-RAN node, based on one or multiple criteria mentioned above and/or any other suitable traffic classification criteria.
- training data packets/streams may be sorted into QoS flows (or PDU (protocol data unit) session(s)) that are different to the QoS flows (or PDU session(s)) that are used for user data.
- QoS flows or PDU (protocol data unit) session(s)
- PDU protocol data unit
- training data packets/traffic/streams may be combined with the user data packets into the same QoS flows (or PDU session(s)).
- the UPF can apply different QoS parameters to the training data and user data.
- IP headers e.g., different ToS (type of service) field / DS (differentiated services) field; and/or use of ECN (explicit congestion notification) bits to indicate training data vs. user data
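- As one illustrative sketch of the IP-header-based option (the DSCP codepoint chosen for training data and the reuse of the ECN bits are assumptions, not standardised markings):

```python
# Sketch: distinguish training data from user data via the IPv4 ToS/DS byte.
# The DSCP codepoint used for training data (0x2E here) is an illustrative assumption.
TRAINING_DSCP = 0x2E   # hypothetical codepoint marking AI/ML training traffic

def classify_by_tos(tos_byte: int) -> str:
    dscp = tos_byte >> 2        # upper 6 bits: differentiated services codepoint
    ecn = tos_byte & 0x03       # lower 2 bits: explicit congestion notification
    if dscp == TRAINING_DSCP:
        return "training-data"
    # The ECN bits could also be (re)used network-internally to flag training data (assumption).
    if ecn == 0b01:
        return "training-data (ECN-flagged)"
    return "user-data"

for tos in (0xB8, 0x00, 0x01):
    print(hex(tos), "->", classify_by_tos(tos))
```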
- classification can be done as follows:
- the network may also classify, or identify, training data based on assistance information from the CN, e.g., based on analytics and/or predictions obtained from NWDAF (Network Data Analytics Function), or other network (NW) entity/function.
- NWDAF Network Data Analytics Function
- a network entity or network function arranged to differentiate between traffic or data packets associated with training data, such as AI/ML model training data, and traffic or data packets associated with other data, such as user data. That is, a network entity/function may be configured to identify (or detect, or determine etc.) that traffic or data packets are related to training data.
- a network entity/function in accordance with some examples of the present disclosure may classify (or label, or denote) the traffic or data packets as being associated with training data.
- the traffic or data packets may be labelled as being associated with training data, or information may be included with (or linked to) the traffic or data packets, where the information indicates that the traffic or data packets are associated with training data, or information indicating that the traffic or data packets are associated with training data may be sent separately.
- the network entity may generate an indicator to inform another network entity/function that the traffic or data packets are associated with training data, and may transmit this indicator to the other network entity/function, for example at, before or after the time of forwarding the traffic or data packets to the other network entity/function.
- training data may be one type of traffic, while user data may be another type of traffic.
- the network entity/function or another network entity/function may perform one or more operations. Said one or more operations may differ compared to an operation(s) performed in a case where the traffic or data packets do not relate to training data. This will be discussed in more detail below.
- the network entity/function which classifies the traffic or data packets as training data may receive the traffic or data packets from another network entity/function in the network, or may itself generate the traffic or data packets.
- the network entity/function may be configured to determine that the traffic or data packets are associated with training data using one of the methods described above (e.g., based on use of a new or existing 5QI, based on at least one characteristic of the traffic or data packets etc.); and in the case of the latter, the network entity/function may classify or otherwise indicate the traffic or data packets as being related to training data, in such a way that the network (e.g., another entity/function in the network, such as the entity/function to perform the model training or requiring/requesting the training data) may identify that the traffic or data packets are related to training data.
- the network e.g., another entity/function in the network, such as the entity/function to perform the model training or requiring/requesting the training data
- certain examples of the present disclosure provide (or generate, or assign) such new types/classes of 5QI values, which may be for training data traffic (that is, traffic related to training data).
- a new 5QI value or a new type/class of 5QI value, is provided for indicating training data traffic.
- new 5QI values, or new types/classes of 5QI value are provided to distinguish between different types, or classes, of training data traffic. For example, there may be provided either or both of the following types/classes of 5QI values:
- AI/ML training data GBR e.g., training samples/data used for Real-time /(near) Real-time/Online AI/ML model training service
- AI/ML training data non-GBR e.g., training samples/data used for Offline AI/ML model training service.
- Table 1 shows the above-defined new types/classes of training data traffic in relation to 3GPP standardised 5QI, with the information relating to 5QI values 1, 5 and 82 shown in Table 5.7.4-1 of section 5.7.4 of TS 23.501 [4] (it will be appreciated that Table 1 omits some information shown in Table 5.7.4-1 of section 5.7.4 of TS 23.501 [4] for brevity).
- the above newly-introduced traffic classes and related QoS parameters are used only as one example; in other examples, other QoS parameters values may also apply to the newly defined classes, in addition to or instead of one or more of the QoS parameters shown in Table 1. It will be appreciated that, in certain examples, the newly-introduced traffic classes relate to one or more of the QoS parameters shown in Table 1, as opposed to all of those shown in Table 1.
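- For illustration only, the new classes could sit alongside existing 5QI entries roughly as sketched below; the two new 5QI values (90 and 91) and their parameters are assumptions, and the existing rows merely paraphrase entries of Table 5.7.4-1 of TS 23.501 and may not be exact:

```python
# Sketch of a 5QI lookup; the two "AI/ML training" rows use hypothetical values.
five_qi_table = {
    # 5QI: (resource type, default priority, packet delay budget in ms, example services)
    1:  ("GBR",                20, 100, "conversational voice"),
    5:  ("non-GBR",            10, 100, "IMS signalling"),
    82: ("delay-critical GBR", 19, 10,  "discrete automation"),
    # Hypothetical new classes proposed for training traffic (values are assumptions):
    90: ("GBR",                55, 300, "AI/ML training data, real-time/online training"),
    91: ("non-GBR",            80, 500, "AI/ML training data, offline training"),
}

def lookup(five_qi: int):
    return five_qi_table.get(five_qi, ("unknown", None, None, "unspecified"))

print(lookup(91))
```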
- the network e.g., network entity, or network function
- the AF may assign one or more existing 5QI values (for example, one or more of the 3GPP standardised 5QI values as shown in Table 5.7.4-1 of section 5.7.4 of TS 23.501 [4]) for the training data traffic or data packets.
- the assigned existing value may be different to one which may be assigned for user data.
- certain examples of the present disclosure include a network entity/function which is configured to apply a (new or appropriately-selected existing) 5QI value, or generate an indication of such a 5QI value, for traffic or data packets which is determined to be associated with or related to training data.
- the indication of the 5QI value may be sent to another network entity/function to allow the other network entity/function to determine that the traffic or data packets (sent or to be sent to the other network entity/function) are related to training data for AI/ML model (e.g., thereby allowing the other network entity/function to react to the traffic or data packets in a different manner to if the traffic or data packets were associated with user data).
- a network entity may determine to process the traffic or data packets differently, such that transmission properties/parameters specified for training data (or data associated with training data) are used for the traffic (or data packets).
- the user data includes (or is) one or more of: user-generated data, application generated data, non-overhead data, data originating from outside the network and/or application (such as user-generated, or generated by another application) etc.
- certain examples of the present disclosure relate to one or more operations of the network based on the classification, distinction or identification of the traffic (e.g., based on identifying traffic to be training data traffic, and/or identifying (other) traffic to be user data).
- the network may be configured to split (or divide, or apportion) the traffic or data packets, e.g., into training data or user data, at different entities or functions in the network (for example: 5GC, NG-RAN, other).
- this splitting may be based on classifying traffic into training data and user data; that is, the network identifies that traffic is associated with training data and other traffic is associated with user data, based on the training data being distinguished.
- a UPF may split a PDU session during PDU Session Resource Setup, in order to enable separation of training data traffic from user data traffic.
- the UPF sends the training data to the desired NG-RAN node using the user plane (e.g. the UPF sends training data to the SN for model training at the SN).
- a UPF may split a PDU session during PDU Session Resource Modify in order to enable separation of data traffic from the user data traffic.
- the UPF sends the training data to the desired NG-RAN node using the user plane (e.g. the UPF sends training data to the SN for model training at the SN).
- a NG-RAN may split a PDU session and forward training traffic (e.g. QoS flows carrying training data) to another NG-RAN for model training at this other NG-RAN.
- training traffic e.g. QoS flows carrying training data
- a MN may perform SN (S-NG-RAN (secondary next generation radio access network) node) Addition procedure, split a PDU session, and forward training data, received from the network (e.g. UPF via UP), to the newly added SN that performs model training.
- MN e.g., M-NG-RAN (master next generation radio access network) node
- SN secondary next generation radio access network
- a MN may perform SN (S-NG-RAN node) Addition procedure, optionally including assistant information on the desired AI/ML model(s) to a SN, split a PDU session, and forward training data, received from the network (e.g. UPF via UP), to the newly added SN that performs model training.
- Addition procedure optionally including assistant information on the desired AI/ML model(s) to a SN, split a PDU session, and forward training data, received from the network (e.g. UPF via UP), to the newly added SN that performs model training.
- a MN may perform SN (S-NG-RAN node) Modification Request procedure, optionally including assistant information on the desired AI/ML model(s) to a SN, split a PDU session, and forward training data, received from the network (e.g. UPF via UP), to the newly added SN that performs model training.
- SN S-NG-RAN node
- Modification Request procedure optionally including assistant information on the desired AI/ML model(s) to a SN, split a PDU session, and forward training data, received from the network (e.g. UPF via UP), to the newly added SN that performs model training.
- a MN may send the received training data to a SN (S-NG-RAN node), for example, using the F1-C Traffic Transfer message.
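- A minimal sketch of this splitting and forwarding idea (the flow identifiers, the training flag and the forwarding stub are illustrative assumptions, not the 3GPP-defined procedures):

```python
# Sketch: split a PDU session's QoS flows so that training flows go to the SN for training.
# Flow IDs, the "is_training" flag and the forward() stub are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QosFlow:
    qfi: int
    is_training: bool   # set by the classification step described earlier

def split_pdu_session(flows):
    """Separate training flows from user-data flows within one PDU session."""
    training = [f for f in flows if f.is_training]
    user = [f for f in flows if not f.is_training]
    return training, user

def forward(flows, target):
    # Placeholder for user-plane forwarding (e.g. MN -> newly added SN).
    print(f"forwarding QFIs {[f.qfi for f in flows]} to {target}")

session = [QosFlow(1, False), QosFlow(2, True), QosFlow(3, True)]
training_flows, user_flows = split_pdu_session(session)
forward(training_flows, "SN (model training)")   # training data towards the SN
forward(user_flows, "MN (user data)")            # user data stays on the MN leg
```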
- a network entity that received training data (e.g., from the core network) may perform the training (e.g., the AI/ML model training) and/or forward the training data to another network entity or network function (e.g., a UE) in order for the other entity/function to perform the model training.
- a network entity may perform the training (e.g., the AI/ML model training) and/or forward the training data to another network entity or network function (e.g., a UE) in order for the other entity/function to perform the model training.
- a UE may receive the training data from another network entity or a network function and, assuming the UE has already downloaded or received the AI/ML model, the UE may perform the model training.
- the network e.g., a network entity/function included in the network
- the processing may include or result in one or more of the following operations/methods/arrangements:
- Training data may be offered one or more of: different modulation and coding schemes, different security and protection level, lower delay, lower energy and/or lower reliability requirements (e.g. less robust transmission parameters). That is, fewer radio resources (e.g. fewer headers; fewer retransmissions; and/or fewer bits spent on channel coding etc.).
- the network e.g., a NG-RAN
- the network entity/function may send the data training packets to the other network entity/function (e.g., the UE) using best-effort / non-GBR / low QoS values DRBs (data radio bearers).
- the network e.g., a NG-RAN
- the network entity/function may use at least one of different bandwidth parts, different carriers (CA), or even carrier groups (MCG/SCG) for the training data.
- CA carrier
- MCG/SCG carrier groups
- the network e.g., a NG-RAN
- the network entity/function may use the same carrier but have different LCP (logical channel prioritization) values configured, and/or different L1/L2 transmission parameters configured (e.g. channel coding, HARQ (Hybrid Automatic Repeat Request)) for the training data.
- LCP logical channel prioritization
- L1/L2 transmission parameters e.g. channel coding, HARQ (Hybrid Automatic Repeat Request) for the training data.
- the NG-RAN may combine the training data and user data in the same DRBs or separate DRBs.
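- As an illustrative sketch only (the bearer types, LCP priority values and HARQ settings below are assumptions), a node might select less robust, lower-priority transmission settings for training data than for user data:

```python
# Sketch: choose per-DRB transmission settings depending on the data type.
# Parameter names and values (bearer type, LCP priority, HARQ retransmissions) are assumptions.
def drb_config(data_type: str) -> dict:
    if data_type == "training":
        return {"bearer": "non-GBR / best effort",
                "lcp_priority": 14,        # lower priority than user data
                "harq_max_retx": 1,        # fewer retransmissions tolerated
                "coding": "high rate (less robust)"}
    return {"bearer": "GBR or default",
            "lcp_priority": 6,
            "harq_max_retx": 4,
            "coding": "robust"}

print(drb_config("training"))
print(drb_config("user"))
```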
- a network entity/function such as an NG-RAN node (e.g., MN) may transfer an AI/ML model and/or related training data (e.g. received from the CN, an AF, OAM (Operations, Administration and Maintenance), or other), to another network entity/function, such as another NG-RAN node (e.g. SN), to perform model training.
- a network entity/function e.g.
- an NG-RAN entity such as MN
- MN may assist another entity (e.g., a SN) in downloading or obtaining the desired AI/ML model and/or training data from another network entity/function (and/or OAM, AF, other).
- the SN obtains the model and/or training data from the network (e.g., directly from CN or via NG-RAN, or other) using control plane (CP) signalling/interfaces/procedure/messages.
- CP control plane
- the network entity/function may send information (e.g., assistance information) to the other network entity/function.
- information e.g., assistance information
- the NG-RAN (e.g., MN or SN) sends one or more of the following assistance information elements (or sends assistance information comprising one or more of the following, or an indication thereof) to the newly added or modified NG-RAN (e.g., SN); an illustrative sketch of such a structure follows this list:
- the AI/ML model deployment e.g., UE-side, MN-side, SN-side, joint/split-deployment (MN-SN), or other;
- the AI/ML model training e.g., UE-side, MN-side, SN-side, Joint/split-training (MN-SN), or other;
- the AI/ML model training status (e.g., completed, untrained, partially-trained, or other);
- AI/ML model transfer e.g., Full, Partial, or some model parameters
- the AI/ML model update (e.g., MN, SN, MN-SN, CN, OAM, or other);
- the AI/ML model use case e.g., load balancing, Energy saving, Mobility Optimisation, CSI feedback enhancement, Beam management, and Positioning accuracy enhancements, or other;
- the network-UE collaboration level (e.g., UE-side, MN-side, SN-side, UE-MN, UE-SN, UE-MN-SN (joint/multiple node-UE collaboration));
- Training type e.g., Online, Offline, or other
- Training update e.g., UE-initiated, MN-initiated, SN-initiated, CN-initiated, or other;
- Training validity (e.g., period, or location, or other);
- the AI/ML model inference (e.g., UE-side, SN-side, MN-side, joint inference MN-SN, UE-MN-SN, or other); and
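- Purely as an illustration of the assistance information listed above, it might be collected in a container such as the following sketch (the field names and example values are assumptions, not a defined information element):

```python
# Sketch of an "AI/ML Model Assistant Information" container mirroring the list above.
# Field names and the enumerated example values are illustrative assumptions, not an ASN.1 IE.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiMlModelAssistantInfo:
    model_id: str
    deployment: str = "SN-side"            # UE-side / MN-side / SN-side / joint (MN-SN)
    training: str = "SN-side"              # where training happens
    training_status: str = "untrained"     # completed / untrained / partially trained
    transfer: str = "full"                 # full / partial / some model parameters
    update_owner: str = "MN"               # MN / SN / MN-SN / CN / OAM
    use_case: str = "load balancing"
    collaboration_level: str = "UE-SN"
    training_type: str = "offline"         # online / offline
    training_session_id: Optional[str] = None
    training_version: int = 0
    validity: str = "unrestricted"         # e.g. a period or location constraint
    inference: str = "SN-side"

info = AiMlModelAssistantInfo(model_id="model-42", training_session_id="train-001")
print(info)
```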
- the network entity/function may send the AI/ML model(s) and any related assistance information. For example using:
- Figure 1 shows a representation of a method or call flow according to an example of the present disclosure.
- Fig. 1 shows an example of an update to the SN Addition procedure to include assistance information, for example "AI/ML Model Assistant Information". That is, Fig. 1 shows an example of the MN sending assistance information on AI/ML model(s) in the SN Addition Request message (as part of the SN Addition procedure), the SN providing acknowledgement and/or an indication of reception of the Assistant Information, and the MN informing the UE of the transfer of AI/ML model Assistant Information to the SN (e.g. transfer of the AI/ML model and any related information, such as training data or other).
- Fig. 1 shows a UE 11, a MN 13, a SN 15 and a UPF 17.
- this combination of network entities/functions is merely to provide an exemplary embodiment of the present disclosure and should not be considered as limiting.
- a person skilled in the art would appreciate that any of these entities/functions may be replaced by another (suitable) entity/function. As such, it may be helpful to consider numerals 11, 13, 15 and 17 to instead refer to first to fourth network entities/function.
- Fig. 1 shows a number of steps or states. It will be appreciated that one or more of these steps or states may be modified (e.g., two or more steps or states may be combined), omitted (e.g., one or more of the steps or states may not be included in Fig. 1) or moved (e.g., the one or more steps, or a combination thereof, may be provided in a different order), in the procedure, if desired and appropriate, as would be understood by the skilled person. Additionally, it will be appreciated that additional steps or states may be added, or additional actions/operations performed in each described step or state.
- MN 13 is assumed to have the AI/ML model(s) to be transferred (with any relevant info) to SN 15 (to be added). In another example, MN 13 may not have the AI/ML model(s), but may optionally assist SN 15 in downloading or transferring the model from another network entity/function, or OAM.
- MN 13 may send SN Addition Request message including "AI/ML model Assistant Information" (e.g. AI/ML model(s) trained/untrained, and/or information related to model training, other). For example, MN 13 may transmit, to SN 15, a message to add SN 15, where SN 15 may perform model training as seen later.
- SN 15 may store the AI/ML model(s), received from (or transferred by, or downloaded from) MN 13. In another example, SN 15 may store the AI/ML model(s), received from (or transferred by, or downloaded from) the other NW entity/function or OAM.
- SN 15 may acknowledge reception of the AI/ML model and/or any other assistance information (e.g., information related to training).
- MN 13 may inform UE 11 of transfer of the AI/ML model to the SN 15.
- S105 (figure text: “5. RRC reconfiguration complete message”)
- S106 (figure text: “6. SN Reconfiguration Complete”)
- S107 (figure text: "7. Random Access Procedure”):
- UE 11 may establish a connection with SN 15.
- UPF 17 may transfer the training data for the AI/ML model (at SN 15) using the UP.
- the training data may be forwarded to SN 15 for model training. It is assumed that UPF 17 (or another NW entity/function) has already, in a previous step (not shown in Fig. 1), distinguished the training data from user data, for example in accordance with one or more of the methods described above.
- UE 11 may send measurement reports to SN 15.
- SN 15 may perform model training.
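The Fig. 1 flow described above can be summarised by the following simplified sketch; the Python classes, method names and message dictionaries are illustrative assumptions only and do not model real NG-RAN signalling.

```python
class SN:
    def __init__(self):
        self.stored_models = {}
        self.training_data = []

    def on_sn_addition_request(self, model_id, model, assistant_info):
        # Store the AI/ML model and its assistant information received from the MN
        self.stored_models[model_id] = (model, assistant_info)
        return {"msg": "SN Addition Request Acknowledge", "assistant_info_received": True}

    def on_training_data(self, samples):
        # Training data delivered over the user plane (e.g., forwarded by the UPF)
        self.training_data.extend(samples)

    def train(self, model_id):
        model, _ = self.stored_models[model_id]
        return f"{model} trained on {len(self.training_data)} samples"

class UE:
    def notify(self, text):
        print("UE informed:", text)

class MN:
    def __init__(self, sn):
        self.sn = sn

    def add_sn_with_model(self, ue, model_id, model, assistant_info):
        # SN Addition Request carrying "AI/ML Model Assistant Information"
        ack = self.sn.on_sn_addition_request(model_id, model, assistant_info)
        if ack["assistant_info_received"]:
            # MN informs the UE that the model (and related info) went to the SN
            ue.notify("AI/ML model transferred to SN")
        return ack

class UPF:
    def forward_training_data(self, sn, samples):
        # Assumes the training data was already distinguished from user data
        sn.on_training_data(samples)

ue, sn, upf = UE(), SN(), UPF()
mn = MN(sn)
mn.add_sn_with_model(ue, "model-42", "csi-feedback-model", {"training_status": "untrained"})
upf.forward_training_data(sn, [0.1, 0.2, 0.3])
print(sn.train("model-42"))
```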
- an AI/ML model may be transferred between network entities/functions (e.g., between RAN nodes) using a newly defined procedure (e.g., a newly defined Class 1 procedure).
- an MN or SN can use a newly defined procedure (e.g. Xn signaling / messages) to transfer AI/ML model(s) (and/or any related information of AI/ML model(s)) between the MN and SN.
- the procedure can be UE associated or non-UE associated.
- Figure 2 shows a representation of a method or a call flow according to an example of the present disclosure.
- Fig. 2 shows an example of a newly defined Class 1 procedure to transfer/exchange AI/ML model(s) (trained, untrained, partially trained, or having other status) and/or other information related to the AI/ML model(s) between network entities. In the example of Fig. 2, these are an MN (M-NG-RAN node) and an SN (S-NG-RAN node); however, the present disclosure is not limited thereto, and numerals 21 and 23 may instead refer to other network entities or network functions, as appropriate.
- a M-NG-RAN node or S-NG-RAN node 21 performs AI/ML model transfer and/or AI/ML model assistant information transfer with a S-NG-RAN node or M-NG-RAN node 23.
- M-NG-RAN node 23 may send the AI/ML Model Transfer Request to S-NG-RAN node 21, or the S-NG-RAN node 21 may send the AI/ML Model Transfer Required message to the M-NG-RAN node 23, or vice versa.
- the S-NG-RAN node 21 or M-NG-RAN node 23 sends AI/ML model transfer acknowledge ("ACKNOWLEDGE") to the M-NG-RAN node 23 or S-NG-RAN node 21 including an indication/ACK of reception of AI/ML model assistant information (e.g., if sent in S200).
- MN 23 may transfer the AI/ML model and the training data to SN 21, or vice versa.
- the SN 21 may transfer the trained model back to the MN 23.
- MN 23 may request to transfer model to SN 21, and SN 21 may acknowledge in response.
- SN 21 may require to transfer a model (e.g. trained model) to MN 23, and MN 23 may acknowledge in response.
- SN 21 may send an AI/ML model transfer required message to MN 23, to trigger transfer of different model(s), an updated model, or a new model from MN 23. MN 23 would then send an AI/ML model transfer request and SN 21 would acknowledge (ACK).
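A minimal sketch of such a Class 1 request/required/acknowledge exchange is shown below; the node class, method names and message contents are assumptions for illustration and are not the actual Xn application protocol.

```python
class NgRanNode:
    def __init__(self, name):
        self.name = name
        self.models = {}
        self.peer = None

    # Request/acknowledge pair: push a model (and optional assistant info) to the peer
    def send_model_transfer_request(self, model_id, model, assistant_info=None):
        return self.peer.receive_model_transfer_request(model_id, model, assistant_info)

    def receive_model_transfer_request(self, model_id, model, assistant_info):
        self.models[model_id] = model
        return {"msg": "AI/ML MODEL TRANSFER ACKNOWLEDGE",
                "from": self.name,
                "assistant_info_received": assistant_info is not None}

    # "Transfer Required": ask the peer node to initiate a transfer towards us
    def send_model_transfer_required(self, model_id):
        return self.peer.receive_model_transfer_required(model_id)

    def receive_model_transfer_required(self, model_id):
        if model_id in self.models:
            # Trigger a transfer of the requested (possibly updated or new) model
            return self.send_model_transfer_request(model_id, self.models[model_id])
        return {"msg": "AI/ML MODEL TRANSFER FAILURE", "cause": "model unknown"}

mn, sn = NgRanNode("M-NG-RAN node"), NgRanNode("S-NG-RAN node")
mn.peer, sn.peer = sn, mn

mn.models["model-7"] = "beam-management-model-v1"
print(mn.send_model_transfer_request("model-7", mn.models["model-7"], {"status": "untrained"}))
print(sn.send_model_transfer_required("model-7"))  # SN asks MN to (re)send the model
```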
- the M-NG-RAN node initiates the procedure by sending the S-NODE ADDITION REQUEST message to the S-NG-RAN node, including assistant information on an AI/ML model to be transferred to, and trained at, the S-NG-RAN node.
- the S-NG-RAN node shall, if supported, send an indication or acknowledgement to the M-NG-RAN node, in the S-NODE ADDITION REQUEST ACKNOWLEDGE message, that it has received the Assistant Information on AI/ML model.
- If the S-NG-RAN node is not able to accept the transfer of the AI/ML model and/or training of the AI/ML model, or a failure occurs during the S-NG-RAN node Addition Preparation, the S-NG-RAN node sends the S-NODE ADDITION REQUEST REJECT message with an appropriate cause value to the M-NG-RAN node, for example a new cause value "AI/ML model not supported", "Model training not supported", or any other suitable cause value.
- the M-NG-RAN node initiates the procedure by sending the S-NODE MODIFICATION REQUEST message to the S-NG-RAN node, including assistant information on the AI/ML model trained at the S-NG-RAN node.
- the information may optionally include updates of AI/ML model(s), previously transferred to S-NG-RAN, and/or any new AI/ML model(s) to be transferred to S-NG-RAN.
- the S-NG-RAN node shall, if supported, send an indication or acknowledgement to the M-NG-RAN node, in the S-NODE MODIFICATION REQUEST ACKNOWLEDGE message, that it has received the Assistant Information on AI/ML model.
- If the S-NG-RAN node is not able to accept the update or modification of the AI/ML model and/or training of the AI/ML model, the S-NG-RAN node sends the S-NODE MODIFICATION REQUEST REJECT message to the M-NG-RAN node including an appropriate cause value, for example a new cause IE value "Model update not supported", "Model training update not supported", or any other suitable cause value.
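The accept/reject handling described above for the S-NODE ADDITION/MODIFICATION REQUEST could be sketched as follows; the function, its capability flags and the message dictionaries are illustrative assumptions, while the cause strings follow the examples given in the text.

```python
def handle_snode_request(request, supports_aiml_model=True, supports_training=True):
    """Return the response an S-NG-RAN node might send (illustrative only)."""
    has_model_info = "aiml_assistant_info" in request
    is_mod = request["type"] == "MODIFICATION"
    reject_msg = "S-NODE MODIFICATION REQUEST REJECT" if is_mod else "S-NODE ADDITION REQUEST REJECT"
    ack_msg = "S-NODE MODIFICATION REQUEST ACKNOWLEDGE" if is_mod else "S-NODE ADDITION REQUEST ACKNOWLEDGE"

    if has_model_info and not supports_aiml_model:
        cause = "Model update not supported" if is_mod else "AI/ML model not supported"
        return {"msg": reject_msg, "cause": cause}

    wants_training = has_model_info and request["aiml_assistant_info"].get("train_at_sn", False)
    if wants_training and not supports_training:
        cause = "Model training update not supported" if is_mod else "Model training not supported"
        return {"msg": reject_msg, "cause": cause}

    # Accept: acknowledge and indicate reception of the assistant information, if present
    return {"msg": ack_msg, "assistant_info_received": has_model_info}

print(handle_snode_request(
    {"type": "ADDITION", "aiml_assistant_info": {"train_at_sn": True}},
    supports_training=False))
```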
- notification and/or behaviour of network entities/functions may differ based on model training status.
- an SN may explicitly notify/report to an MN (or UE, and/or another network entity/function) the model training status (e.g., complete, failed, updated, or other).
- the SN may also send the trained AI/ML model to the MN (M-NG-RAN node).
- an SN may implicitly notify/report to an MN (or UE, and/or another network entity/function) the completion of model training by sending the trained model to MN (M-NG-RAN node).
- an SN may notify/report to an MN (or UE, and/or another network entity/function) the (successful) completion of model training, send the trained model to MN (M-NG-RAN node), and (optionally) trigger SN initiated SN Release.
- an MN after receiving the model training status (e.g. successfully completed) and/or trained model, may trigger MN initiated SN Release.
- an MN after receiving a trained AI/ML model, may perform model inference based on input data for inference.
- an SN after completion of model training, may perform the model inference based on input data for inference.
- an SN and MN may jointly perform model training and/or model inference.
- an MN may explicitly notify/report to an SN (or UE, and/or another network entity/function) the model training status (complete, failed, updated, other).
- the MN may also send the trained AI/ML model to the SN (S-NG-RAN node).
- an MN after receiving the trained AI/ML model, may trigger the MN initiated SN Modification procedure to provide any AI/ML model updates (or any other modified AI/ML model parameters) to an SN.
- the SN/MN may exchange notification of model training status and/or (optionally) the trained model (and/or any assistant information related to the trained/updated model) using, for example:
- an existing EN-DC (or MR-DC) related procedure, e.g. the SN Modification (MN/SN initiated) procedure; or
- a newly defined procedure (e.g. a Class 2 Training Notification procedure, such as that shown in Fig. 5).
- an MN after receiving the trained AI/ML model, may trigger the MN initiated SN release.
- an MN after receiving the trained AI/ML model, may trigger the MN initiated SN Modification procedure to provide any AI/ML model updates (or any other modified AI/ML model parameters) to the SN.
- Fig. 5 shows a representation of a call flow according to an example of the present disclosure.
- Fig. 5 shows an example of a new Class 2 procedure to notify MN/SN of training status at SN(A) / MN(B).
- In Fig. 5 there is shown an MN 53 and an SN 55. It will be appreciated that only one of S500 and S501 may be performed, e.g., depending on whether SN 55 or MN 53 initiates the notification.
- S500 Training Notification (SN initiated)
- the purpose of S500 is for the SN 55 to indicate to the MN 53 the (successful) completion of training. The SN 55 may optionally send the trained model to the MN 53 for inference, may optionally perform inference itself and send the indication of training completion to the MN 53 together with the inference result, may send only the trained model or only the inference result, or any combination of the previous.
- S501 Training Notification (MN initiated)
- SN 55 or MN 53 may trigger/initiate the release of the SN 55 (e.g. the SN Release procedure - MN initiated / SN initiated, or a newly defined SN release procedure), a Secondary Node Change (MN/SN initiated), a Conditional SN Change procedure (MN/SN initiated), or any suitable existing or newly defined EN-DC or MR-DC procedure. It will be appreciated that, for simplicity, those procedures are not shown in Figure 5; however, these procedures are still within the scope of certain examples of the present disclosure.
- Figure 5 shows a non-limiting example and does not show (for simplicity of demonstration) other network entities/functions and/or UE that may also be included in the communications network.
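A simplified sketch of the Fig. 5 Training Notification and a possible MN reaction is given below; the function names, message fields and the list of follow-up actions are assumptions used only to illustrate the described behaviour.

```python
def build_training_notification(initiator, status, trained_model=None, inference=None):
    """Class 2-style notification: a single message, no response expected."""
    return {"msg": "TRAINING NOTIFICATION", "initiator": initiator, "status": status,
            "trained_model": trained_model, "inference": inference}

def mn_on_training_notification(notification):
    """Possible MN behaviour on receiving an SN-initiated notification (S500)."""
    actions = []
    if notification["status"] == "completed":
        if notification["trained_model"] is not None:
            actions.append("store trained model / use it for inference")
        if notification["inference"] is not None:
            actions.append("use received inference result")
        actions.append("optionally trigger MN-initiated SN Release")
    elif notification["status"] == "failed":
        actions.append("keep SN / re-send model or training data")
    return actions

note = build_training_notification("SN 55", "completed", trained_model="model-42-trained")
print(mn_on_training_notification(note))
```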
- the examples above handle one or more of classification, differentiation, splitting of training data for AI/ML in a network (or core network or 5G CN (core network)).
- a gNB itself can perform or undertake classification/differentiation/splitting of training data for AI/ML.
- An example of this is a gNB operating an edge computing function for AI/ML methods.
- training and AI/ML application may be performed between the gNB and UE.
- the training data can be labelled by the gNB.
- the labelling can be indicated in one of the L2 headers (SDAP, PDCP or MAC headers) and can also be indicated by an RRC message (e.g. by QFI or LCID (Logical Channel Identifier)).
- the RAN may further differentiate them (the classified or differentiated or split AI/ML related data) and may process them in a variety of ways as follows:
- an RRC (Radio Resource Control) message that a gNB transmits to a UE for configuration may indicate QFIs as carrying AI/ML training data - this may not require the pre-defined QFI described in the prior example above.
- an SDAP control PDU may be defined to indicate the QFI to be used for AI/ML related data.
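To illustrate the QFI-based labelling described above, the following sketch packs an assumed training-data QFI into a one-byte SDAP-like header and checks it at the receiver; the chosen QFI value, the helper names and the simplified header layout are assumptions and not the actual SDAP format.

```python
AIML_TRAINING_QFIS = {63}   # QFI(s) signalled by RRC as carrying AI/ML training data (assumed)

def build_sdap_like_header(qfi: int) -> bytes:
    """Pack a QFI (0..63) into the low 6 bits of a single header byte."""
    assert 0 <= qfi <= 63
    return bytes([qfi & 0x3F])

def is_training_data(header: bytes) -> bool:
    """Receiver side: recover the QFI and check it against the configured set."""
    qfi = header[0] & 0x3F
    return qfi in AIML_TRAINING_QFIS

sdu = b"\x01\x02\x03"                      # some AI/ML training payload
pdu = build_sdap_like_header(63) + sdu     # label by carrying the training QFI
print(is_training_data(pdu[:1]))           # True -> route to training-data handling
```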
- the network can allocate a (or the) separate carrier or channel for the UE(s) because the training data itself is not urgent data. It would be reasonable to have a separate carrier (or channel), i.e., one or more training-dedicated carriers or channels, to handle this.
- the carrier can be a BWP (Bandwidth Part), a SUL (Supplementary Uplink) or an SCell, which only transmits and receives AI/ML related data (e.g. training data or control data).
- the network (or gNB) may configure the separate carrier (or channel) for AI/ML related data transmission and reception.
- the training-dedicated channel may be a logical channel, i.e., AI/ML related data can be allocated an LCID, which enables the gNB (or UE) to process such data with different criteria (e.g. priority, PBR (Prioritised Bit Rate), subcarrier spacing, etc.) from those used for user data, and which can be configured by an RRC message generated by the network (or gNB).
- the MAC (Medium Access Control) entity may multiplex the training data and user data into a MAC PDU, but the training data may be processed in separate RLC (Radio Link Control) and PDCP (Packet Data Convergence Protocol) entities, different from those used for user data.
- the MAC entity may limit multiplexing only to the training data while multiplexing user data separately, which may enable the network to prioritize radio resources (frequency or time resources) per MAC PDU. It also enables the network/MAC entity to apply a separate HARQ (Hybrid Automatic Repeat Request) process to AI/ML training data.
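The following sketch illustrates the combination of a training-dedicated LCID with its own priority/PBR and a MAC-level rule that keeps training data and user data in separate MAC PDUs (so that, for example, separate HARQ processes could be used); all identifiers and numeric values are assumptions for illustration.

```python
# Assumed logical channel configuration (values are illustrative, not a real RRC config)
LOGICAL_CHANNELS = {
    4:  {"name": "user-data DRB",  "priority": 5,  "pbr_kbps": 512, "is_training": False},
    10: {"name": "AI/ML training", "priority": 14, "pbr_kbps": 8,   "is_training": True},
}

def build_mac_pdus(sdus):
    """sdus: list of (lcid, payload). Group SDUs so training and user data never share a PDU."""
    pdus = {"user": [], "training": []}
    for lcid, payload in sdus:
        key = "training" if LOGICAL_CHANNELS[lcid]["is_training"] else "user"
        pdus[key].append((lcid, payload))
    # The training-only PDU could then be mapped to its own grant / HARQ process
    return pdus

sdus = [(4, b"voip"), (10, b"gradients"), (4, b"web"), (10, b"activations")]
print(build_mac_pdus(sdus))
```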
- an RLC entity can be configured in RLC AM (Acknowledged Mode), TM (Transparent Mode) or UM (Unacknowledged Mode), including UM for uni-directional or UM for bi-directional operation.
- the configuration of RLC may be limited to RLC UM mode, which does not incur an unnecessary ARQ mechanism in the RLC entities.
- the configuration of RLC can be limited to RLC TM mode, which does not incur an unnecessary ARQ mechanism and can reduce the header overhead in the RLC entities.
- the above proposals can be extended to integrated access and backhaul (IAB) nodes by allocating a different backhaul RLC channel (e.g. RLC channel identifier) for AI/ML related data on a specific hop, and/or a different overall path to the destination for AI/ML data than for user data.
- an IAB donor may configure the RLC Channel IDs and/or path IDs to IAB node(s) by RRC message or F1AP (F1 application protocol) message.
- the IAB donor may also configure multiple paths for user data and only a single path for AI/ML training data (offering less redundancy and support in case RLF (radio link failure) occurs).
- the network may configure, or only configure, BSR (buffer status report) and/or pre-emptive BSR for logical channel groups containing user data.
- This different treatment of AI/ML data can apply to scheduling assistance data other than BSR.
- the network may only configure flow control feedback for backhaul channels carrying user data.
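For the IAB case, the separation described above might be sketched as follows, with AI/ML training data mapped to a single dedicated backhaul RLC channel and path and excluded from BSR-style reporting; the identifiers and the routing table are illustrative assumptions.

```python
# Assumed backhaul routing configuration (IDs are illustrative only)
BH_ROUTING = {
    "user":     {"bh_rlc_channel_ids": [1, 2], "path_ids": [0, 1]},   # redundancy allowed
    "training": {"bh_rlc_channel_ids": [7],    "path_ids": [2]},      # single path only
}
BSR_ENABLED_KINDS = {"user"}   # no (pre-emptive) BSR configured for training traffic

def route(packet_kind):
    cfg = BH_ROUTING[packet_kind]
    return {"bh_rlc_channel": cfg["bh_rlc_channel_ids"][0],
            "path": cfg["path_ids"][0],
            "bsr_reported": packet_kind in BSR_ENABLED_KINDS}

print(route("training"))   # dedicated channel/path, excluded from BSR reporting
print(route("user"))
```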
- the PDCP layer may differentiate the training data from user data by inspecting PDCP header or upper layer headers and may apply different security protection to the PDCP SDU corresponding to training data.
- the PDCP entity may not perform ciphering or integrity protection function to the training data because some training data have no personal or infrastructure-sensitive information based on the type of AI/ML model - this may reduce processing load (e.g. in this example the type of AI/ML model may be a federated learning model).
- the PDCP entity may only perform ciphering or only perform integrity protection to the training data to reduce the processing burden.
- the PDCP layer may indicate, with an indication in the PDCP header, whether to perform ciphering for a PDCP SDU (Service Data Unit) (e.g., training data) or integrity protection for a PDCP PDU (Protocol Data Unit) (e.g., training data).
- the PDCP layer may only process AI/ML related data (or training data or control data) when a dedicated RLC channel or DRB is used for AI/ML related data and is configured by an RRC message.
- the PDCP layer may apply security protection to PDCP SDUs differently from the PDCP entity corresponding to a DRB handling user data.
- the PDCP entity does not perform ciphering or integrity protection function to the training data because some training data have no personal information based on the type of AI/ML model and it can reduce processing load (e.g., the AI/ML model may be of a federated learning model type in this case).
- the RRC message can configure, for this DRB (or this PDCP entity), whether to perform the ciphering or integrity protection function.
- the RRC message referred to above can be RRCReconfiguration or RRCResume or RRCSetup or RRCRelease messages or a newly-defined message.
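A minimal sketch of the differentiated PDCP security handling, assuming an RRC-like per-DRB configuration, is given below; the placeholder "ciphering" and "integrity" operations stand in for real PDCP security algorithms, which are not modelled here.

```python
def pdcp_process(sdu: bytes, is_training: bool, cfg: dict) -> bytes:
    """cfg mimics an RRC-configured per-DRB/PDCP-entity security setting (assumed fields)."""
    out = sdu
    apply_ciphering = cfg["ciphering"] and not (is_training and cfg["skip_for_training"])
    apply_integrity = cfg["integrity"] and not (is_training and cfg["skip_for_training"])
    if apply_ciphering:
        out = bytes(b ^ 0x5A for b in out)      # placeholder for real ciphering
    if apply_integrity:
        out = out + b"\x00\x00\x00\x00"         # placeholder 4-byte MAC-I
    return out

drb_cfg = {"ciphering": True, "integrity": True, "skip_for_training": True}
print(pdcp_process(b"user payload", is_training=False, cfg=drb_cfg))
print(pdcp_process(b"federated-learning update", is_training=True, cfg=drb_cfg))
```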
- a UE may apply different data processing to the AI/ML training data separately in MAC or RLC or PDCP or SDAP layers.
- Fig. 6 is a block diagram illustrating an exemplary network entity 600 (or electronic device 600, or network node 600 etc.) that may be used in examples of the present disclosure.
- any of the network entities, functions, nodes etc. may be implemented by or comprise network entity 600 (or be in combination with a network entity 600) such as illustrated in Fig. 6.
- the network entity 600 may comprise a controller 605 (or at least one processor) and at least one of a transmitter 601, a receiver 603, or a transceiver (not shown).
- receiver 603 may be configured to be used in the process(es) of one or more of: receiving an indication of transfer of the AI/ML model to the SN 15 from the MN 13, establishing a connection with SN 15, and, optionally, model deployment/update, model inference, action and/or feedback;
- transmitter 601 may be configured to be used in the process(es) of one or more of: establishing the connection with SN 15, sending measurement reports to SN 15, and, optionally, model deployment/update, model inference, action and/or feedback;
- controller 605 may be configured to be used in the process(es) of one or more of: performing one of the aforementioned operations and/or controlling the receiver 603 and/or the transmitter 601 in performing one of the aforementioned operations.
- the transmitter 601 may be configured to be used in the process(es) of one or more of: sending an SN addition request message to SN 15 (which may include assistance information (i.e., information related to training)), informing UE 11 of the transfer of the AI/ML model to SN 15, establishing a connection between UE 11 and SN 15, forwarding training data to SN 15 using the UP, and, optionally, model deployment/update, model inference, action and/or feedback; the receiver 603 may be configured to be used in the process(es) of one or more of: receiving an SN acknowledgement of reception of the AI/ML model and/or any assistance information from SN 15, establishing a connection between UE 11 and SN 15, and, optionally, model deployment/update, model inference, action and/or feedback; and controller 605 may be configured to be used in the process(es) of one or more of: performing one of the aforementioned operations and/or controlling the receiver 603 and/or the transmitter 601 in performing one of the aforementioned operations.
- the receiver 603 may be configured to be used in the process(es) of one or more of: receiving an SN addition request message from MN 13 (which may include assistance information (i.e., information related to training)), establishing a connection between UE 11 and SN 15, receiving training data for the AI/ML model via the UP, and, optionally, model deployment/update, model inference, action and/or feedback;
- the transmitter 601 may be configured to be used in the process(es) of one or more of: transmitting an SN acknowledgement of reception of the AI/ML model and/or any assistance information to MN 13, and, optionally, model deployment/update, model inference, action and/or feedback;
- controller 605 may be configured to be used in the process(es) of one or more of: performing one of the aforementioned operations and/or controlling the receiver 603 and/or the transmitter 601 in performing one of the aforementioned operations.
- At least one of the transmitter 601, the receiver 603 and the controller 605 may be configured to be used in the process of forwarding the AI/ML model training data to SN 15.
- a first network entity or network function included in a communications network, wherein the first network entity or network function is configured to: determine whether traffic or a data packet is associated with training data for an AI/ML model.
- the traffic or data packet is associated with training data for the AI/ML model if the traffic or data packet is labelled as and/or inferred as training data for the AI/ML model.
- the first network entity or network function is configured to: based on determining that the traffic or data packet is associated with training data, process the traffic or data packet according to one or more parameters.
- the one or more parameters include one or more of: a modulation and coding scheme, a security level, a protection level, a transmission parameter, a delay parameter, an energy parameter, a network configuration or a reliability parameter.
- the first network entity or network function is configured to: based on determining that the traffic or data packet is not associated with training data, differently process the traffic or data packet according to the one or more parameters. For example, a modulation and coding scheme used by the first network entity or network function for the traffic or data packet associated with training data is different from a modulation and coding scheme used by the first network entity or network function for traffic or a data packet which is not associated with training data.
- the first network entity or network function is configured to transmit the processed traffic or data packet to a second network entity or network function, included in the communications network.
- the first network entity or network function is configured to: determine whether the traffic or data packet is associated with training data for an AI/ML model based on information received from a third network entity or network function included in the communication network, or based on at least one characteristic of the traffic or data packet.
- the information may be assistance information related to training data packets/streams to assist in training data classification at the first network entity or network function.
- the characteristic may include one or more of: frequency of the training data, size of the training data, pattern of the training data, burst length of the traffic or data packet, periodicity of the traffic or data packet, on-off periods of the traffic or data packets, and a QoS flow or PDU session used for the traffic or data packet.
- the first network entity or network function is a UPF or NG-RAN, wherein the second network entity is a SN or UE, and wherein the third network entity is an AF, AMF or SMF, and wherein the wireless communications network is a 5G NR network or a 6G network.
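By way of illustration, the first network entity's determination and differentiated processing could be sketched as follows; the thresholds, field names and parameter values are assumptions chosen only to show the decision structure (assistance information first, traffic characteristics as a fallback).

```python
TRAINING_QOS_FLOWS = {9}   # assistance information: QoS flows assumed to carry training data

def is_training_traffic(pkt, assistance_flows=TRAINING_QOS_FLOWS):
    if pkt.get("qos_flow_id") in assistance_flows:          # explicit assistance information
        return True
    # Fallback: infer from characteristics (e.g., large, periodic, delay-tolerant bursts)
    return pkt.get("size_bytes", 0) > 10_000 and pkt.get("periodic", False)

def processing_parameters(pkt):
    if is_training_traffic(pkt):
        return {"mcs": "robust-low-rate", "drb": "non-GBR best effort",
                "delay_budget_ms": 1000, "integrity_protection": False}
    return {"mcs": "link-adapted", "drb": "GBR",
            "delay_budget_ms": 50, "integrity_protection": True}

print(processing_parameters({"qos_flow_id": 9, "size_bytes": 200}))
print(processing_parameters({"qos_flow_id": 5, "size_bytes": 50_000, "periodic": True}))
```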
- a first network entity or network function included in a communications network, wherein the first network entity or network function is configured to: classify traffic or a data packet as being associated with training data for an AI/ML model.
- the first network entity or network function is configured to classify the traffic or data packet as being associated with training data based on a predetermined 5QI value. According to another example, the first network entity or network function is configured to classify the traffic or data packet as being associated with training data based on a predetermined 5QI value and label the data accordingly.
- the first network entity or network function is configured to transmit, to a second network entity or network function, an indication that the traffic or data packet is associated with training data based on the predetermined 5QI value.
- a second network entity or network function included in a communications network is configured to receive, from a first network entity or network function included in the communications network, an indication that traffic or a data packet, received at the second network entity or network function, is associated with training data.
- a network entity or network function included in a communications network is configured to, based on determining that a part of traffic or data packets are associated with training data, split the traffic or data packets into first traffic or data packets associated with training data and second traffic or data packets associated with non-training data.
- the non-training data may be user data.
- the user data includes one or more of user-generated data, application generated data, non-overhead data, data originating from outside the network and/or application (such as user-generated, or generated by another application) etc.
- the network entity or network function may split a PDU session to enable separation of the first traffic or data packets and the second traffic or data packets.
- the network entity or network function may split the PDU session during PDU session resource setup or during PDU session resource modify.
- the network entity or network function is an MN and is configured to add a SN, and forward the first traffic or data packets to the SN for the SN to perform model training.
- a network entity or network function included in a communications network is configured to, upon determining that traffic or a data packet is associated with training data, process the traffic or data packet based on the traffic or data packet being associated with training data.
- the processing may be different to a case where the traffic or data packet is not associated with training data.
- the processing may include one or more of: providing modulation and coding schemes, security and protection level, or lower delay, energy and/or reliability requirements.
- the processing may include one or more of: transmitting the traffic or data packet associated with training data to another network entity or network function using best-effort, non-GBR, or low QoS values DRBs; using different bandwidth parts, different carriers or different carrier groups for the traffic or data packets associated with the training data; using different LCP values configured for the traffic or data packets associated with the training data (e.g., compared to LCP values used for traffic or data packets not associated with the training data), using different L1/L2 transmission parameters configured for the traffic or data packets associated with the training data (e.g., compared to L1/L2 transmission parameters used for traffic or data packets not associated with the training data); and combining the traffic or data packets associated with the training data with traffic or data packets not associated with the training data in the same DRBs or separate DRBs.
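The splitting of mixed traffic into training and non-training parts, e.g. so that training data can be forwarded to an added SN while user data is kept on the original leg, could be sketched as follows; the 5QI value and the splitting function are hypothetical and used only for illustration.

```python
TRAINING_5QI = 200   # hypothetical "predetermined 5QI value" reserved here for training data

def split_pdu_session(packets):
    """Return (training_packets, user_packets) based on each packet's 5QI."""
    training = [p for p in packets if p["fiveqi"] == TRAINING_5QI]
    user = [p for p in packets if p["fiveqi"] != TRAINING_5QI]
    return training, user

packets = [{"fiveqi": 9, "payload": b"web"},
           {"fiveqi": TRAINING_5QI, "payload": b"training batch 1"},
           {"fiveqi": TRAINING_5QI, "payload": b"training batch 2"}]
training, user = split_pdu_session(packets)
print(len(training), "packets forwarded to the SN for model training;",
      len(user), "user packets kept on the original leg")
```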
- a first network entity or network function included in a communication network is configured to transmit, to another network entity or network function included in the communication network, assistance information for transferring an AI/ML model and/or training data to the other network entity or network function.
- the assistance information comprises one or more of: an ID of the AI/ML model, information on deployment of the AI/ML model, information on training of the AI/ML model, information on training status of the AI/ML model (e.g., completed, untrained, or partially-trained), information on transfer of the AI/ML model (e.g., full or partial), information on update of the AI/ML model, information on a use case of the AI/ML model (e.g., load balancing, energy saving, mobility optimisation, CSI feedback enhancement, beam management, and/or positioning accuracy enhancements), information on a network-UE collaboration level for the AI/ML model, a training type (e.g., online or offline), information on a training session ID, information on a training update, information on training validity, and information on the AI/ML model inference.
- the first network entity or network function is configured to transmit the assistance information using SN addition procedure or SN modification procedure, or a newly defined Class 1 or Class 2 procedure.
- the present disclosure also includes methods as performed by the entities and functions etc. described above.
- the present disclosure includes a method by a network entity or network function included in a communications network, wherein the method comprises classifying traffic or data packets as being associated with training data for an AI/ML model.
- the present disclosure should also be seen to disclose computer-readable storage media comprising instructions which, when executed by a processor (for example, a processor corresponding to controller 605 of a network entity/function), cause the processor to perform any method in accordance with the above.
- a first network entity/function classifies or labels traffic (or data packets) as training data (or being associated with training data), for example using a 5QI value
- a second network entity/function upon reception of the traffic and an indication of the classification of the traffic, determines that the traffic is training data based on the indication, and processes the traffic based on determining it as being the training data (e.g., configures one or more transmission parameters for the training data based on the determination, such as by using an indicated 5QI value to determine corresponding QoS characteristics)
- a third network entity/function upon receiving the processed traffic, uses the training data for model training based on the processed traffic being the training data.
- Such an apparatus and/or system may be configured to perform a method according to any aspect, embodiment, or example disclosed herein.
- Such an apparatus may comprise one or more elements, for example one or more of receivers, transmitters, transceivers, processors, controllers, modules, units, and the like, each element configured to perform one or more corresponding processes, operations and/or method steps for implementing the techniques described herein.
- an operation/function of X may be performed by a module configured to perform X (or an X-module).
- the one or more elements may be implemented in the form of hardware, software, or any combination of hardware and software.
- examples of the present disclosure may be implemented in the form of hardware, software or any combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage, for example a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape or the like.
- the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement certain examples of the present disclosure. Accordingly, certain examples provide a program comprising code for implementing a method, apparatus or system according to any example, embodiment, and/or aspect disclosed herein, and/or a machine-readable storage storing such a program. Still further, such programs may be conveyed electronically via any medium, for example a communication signal carried over a wired or wireless connection.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23877670.2A EP4584988A4 (fr) | 2022-10-12 | 2023-10-11 | Procédés et appareil de gestion de données ai/ml |
| KR1020257011939A KR20250068709A (ko) | 2022-10-12 | 2023-10-11 | Ai/ml 데이터를 처리하는 방법 및 장치 |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2215059.3 | 2022-10-12 | ||
| GBGB2215059.3A GB202215059D0 (en) | 2022-10-12 | 2022-10-12 | Methods and apparatus for handling ai/ml data |
| GB2314266.4 | 2023-09-18 | ||
| GB2314266.4A GB2624512A (en) | 2022-10-12 | 2023-09-18 | Methods and apparatus for handling AI/ML data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024080746A1 true WO2024080746A1 (fr) | 2024-04-18 |
Family
ID=84818005
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2023/015637 Ceased WO2024080746A1 (fr) | 2022-10-12 | 2023-10-11 | Procédés et appareil de gestion de données ai/ml |
Country Status (4)
| Country | Link |
|---|---|
| EP (1) | EP4584988A4 (fr) |
| KR (1) | KR20250068709A (fr) |
| GB (3) | GB202215059D0 (fr) |
| WO (1) | WO2024080746A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2025169363A1 (fr) * | 2024-02-07 | 2025-08-14 | 株式会社Nttドコモ | Composant de réseau et procédé de communication sans fil |
| WO2025203011A1 (fr) * | 2024-05-09 | 2025-10-02 | Lenovo (Singapore) Pte. Ltd. | Appareil et procédé de communication d'informations ai/ml |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240049003A1 (en) * | 2020-07-13 | 2024-02-08 | Telefonaktiebolaget Lm Ericsson (Publ) | Managing a wireless device that is operable to connect to a communication network |
| CN116711373A (zh) * | 2021-01-14 | 2023-09-05 | 联想(北京)有限公司 | 用于执行pscell改变过程的方法及设备 |
- 2022-10-12 GB GBGB2215059.3A patent/GB202215059D0/en not_active Ceased
- 2023-09-18 GB GB2314266.4A patent/GB2624512A/en active Pending
- 2023-09-18 GB GBGB2500280.9A patent/GB202500280D0/en active Pending
- 2023-10-11 KR KR1020257011939A patent/KR20250068709A/ko active Pending
- 2023-10-11 WO PCT/KR2023/015637 patent/WO2024080746A1/fr not_active Ceased
- 2023-10-11 EP EP23877670.2A patent/EP4584988A4/fr active Pending
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021029799A1 (fr) * | 2019-08-14 | 2021-02-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Équipement d'utilisateur, nœud d'accès cible et procédés dans un réseau de communications sans fil |
Non-Patent Citations (5)
| Title |
|---|
| CATT: "TP on TS 38.300 for AI/ML", 3GPP TSG-RAN WG3 #117-E, R3-224660, 9 August 2022 (2022-08-09), XP052264827 * |
| MODERATOR (ERICSSON): "Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement (Final Summary)", 3GPP TSG-RAN WG1 #110, R1-2208161, 28 August 2022 (2022-08-28), XP052276084 * |
| NVIDIA: "Evaluation of AI and ML for positioning enhancements", 3GPP TSG-RAN WG1 MEETING #110BIS-E, R1-2209629, 30 September 2022 (2022-09-30), XP052259102 * |
| See also references of EP4584988A4 * |
| VIVO: "Evaluation on AI/ML for positioning accuracy enhancement", 3GPP TSG RAN WG1 #110, R1-2206036, 12 August 2022 (2022-08-12), XP052273969 * |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2624512A (en) | 2024-05-22 |
| GB202314266D0 (en) | 2023-11-01 |
| EP4584988A4 (fr) | 2025-12-24 |
| GB202500280D0 (en) | 2025-02-26 |
| KR20250068709A (ko) | 2025-05-16 |
| GB202215059D0 (en) | 2022-11-23 |
| EP4584988A1 (fr) | 2025-07-16 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23877670; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 20257011939; Country of ref document: KR; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023877670; Country of ref document: EP; Ref document number: 1020257011939; Country of ref document: KR |
| | ENP | Entry into the national phase | Ref document number: 2023877670; Country of ref document: EP; Effective date: 20250411 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWP | Wipo information: published in national office | Ref document number: 2023877670; Country of ref document: EP |