GB2630961A - Offloading of processing to a network according to a subscription
- Publication number
- GB2630961A (application GB2308930.3A / GB202308930A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- processing
- network
- offloading
- energy efficient
- subscription
- Prior art date
- Legal status: Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/59—Providing operational support to end devices by off-loading in the network or by emulation, e.g. when they are unavailable
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L41/0826—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for reduction of network costs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L41/0833—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for reduction of network energy consumption
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1012—Server selection for load balancing based on compliance of requirements or conditions with available server resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1036—Load balancing of requests to servers for services different from user content provisioning, e.g. load balancing across domain name servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/289—Intermediate processing functionally located close to the data consumer application, e.g. in same machine, in same home or in same sub-network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/2895—Intermediate processing functionally located close to the data provider application, e.g. reverse proxies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W52/00—Power management, e.g. Transmission Power Control [TPC] or power classes
- H04W52/02—Power saving arrangements
- H04W52/0209—Power saving arrangements in terminal devices
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/509—Offload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Apparatus that performs: determining, based at least on a processing-offloading subscription for a User Equipment, UE, and an energy efficient offloading policy, at least one candidate energy efficient host for handling processing to be offloaded by the UE; and transmitting a respective identifier of the at least one determined candidate energy efficient host to a network function of a network, where the network function is configured to cause offloading of the processing to the network. The apparatus is a Management Service producer, MnS producer, such as an Energy Efficient Offloading Host Recommendation, EEOHR, producer, and the recommendation may be provided to an Operations and Management Function, OAM, as MnS consumer. The MnS producer may be embodied as an MDAF, Management Data Analytics Function, or NWDAF, Network Data Analytics Function, or other network entity. The processing offloaded may be rendering for an extended or augmented reality application or processing of a trained machine learning model. The energy efficiency policy may comprise an energy efficiency criterion, an energy cost criterion, or a requirement that hosts be powered by a specified energy type. The offloading subscription may comprise a guaranteed or maximum processing rate, a maximum processing volume, or an offloading charging subscription.
Description
Intellectual Property Office Application No. GB2308930.3 RTM Date: 29 November 2023. The following terms are registered trade marks and should be read as such wherever they occur in this document: WiMAX, Wi-Fi, Delphi, Java, Javascript, Pascal, Python, Visual Basic. Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
OFFLOADING OF PROCESSING TO A NETWORK
ACCORDING TO A SUBSCRIPTION
FIELD
[0001] Various example embodiments relate generally to wireless networks and, more particularly, to offloading of processing to wireless networks.
BACKGROUND
[0002] Wireless networking provides significant advantages for user mobility. A user's ability to remain connected while on the move provides advantages not only for the user, but also provides greater efficiency and productivity for society as a whole. As user expectations for connection reliability, processing power, data speed, and device battery life, become more demanding, technology for wireless networking must also keep pace with such expectations. Accordingly, there is continuing interest in improving wireless networking technology.
SUMMARY
[0003] In an aspect of the present disclosure, an apparatus includes one or more processors and at least one memory storing instructions. The instructions, when executed by the one or more processors, cause the apparatus at least to perform: determining, based at least on a processing-offloading subscription for a UE and an energy efficient offloading policy, at least one candidate energy efficient host for handling processing to be offloaded by the UE; and transmitting a respective identifier of the at least one candidate energy efficient host to a network function of a network, where the network function is configured to cause offloading of the processing to the network.
[0004] In an aspect of the apparatus, the processing-offloading subscription for the UE may include at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
[0005] In an aspect of the apparatus, the processing to be offloaded to the network from the UE may include processing of data of an extended reality application for rendering at the UE.
[0006] In an aspect of the apparatus, the energy efficient offloading policy may include at least one of the following: a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy efficiency criterion, a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy cost criterion, or a policy that the processing shall be assigned to hosts that are powered by a specified type of energy.
[0007] In an aspect of the apparatus, the energy efficient offloading policy may include: a policy that the processing shall be assigned to the at least one candidate host from among a plurality of candidate hosts in a manner that fulfills requirements of the processing even when one or more of the plurality of candidate hosts are powered off or are scaled down in processing resources.
[0008] In an aspect of the apparatus, the instructions, when executed by the processor, may further cause the apparatus at least to perform: determining, based on processing-offloading subscriptions of a plurality of user equipment apparatuses (UEs), priority levels of the plurality of UEs for offloading of processing, wherein the plurality of UEs includes the UE; and determining, based on the priority levels of the plurality of UEs, that the UE is prioritized.
[0009] In an aspect of the apparatus, the instructions, when executed by the processor, may further cause the apparatus at least to perform: obtaining an analysis of mobility of the UE, where the analysis includes a predicted location of the UE, where the determining the at least one candidate energy efficient host is further based on the predicted location of the UE, and where at least one component of the at least one candidate energy efficient host is located in a region that includes the predicted location of the UE.
[0010] In an aspect of the apparatus, the at least one component of the at least one candidate energy efficient host may include at least one of the following: a network function, a network node, or a radio access network (RAN) logical entity.
[0011] In an aspect of the apparatus, the instructions, when executed by the processor, may further cause the apparatus at least to perform: receiving feedback data regarding whether an offload of the processing to the at least one candidate energy efficient host fulfilled the energy efficient offloading policy.
[0012] In an aspect of the apparatus, the processing to be offloaded to the network may include executing at least a portion of a processing model.
[0013] In an aspect of the apparatus, the instructions, when executed by the processor, may further cause the apparatus at least to perform: transmitting, to a processing model repository of the network, information on the processing model; and receiving, from the processing model repository, one or more options for splitting execution of the processing model among a plurality of hosts of the network.
[0014] In an aspect of the apparatus, the instructions, when executed by the processor, may further cause the apparatus at least to perform: selecting one option of the one or more options for splitting the execution of the processing model; and transmitting, to the network function configured to cause the offloading, the selected one option for splitting execution of the processing model.
[0015] In an aspect of the apparatus, the processing model is a trained machine learning model.
[0016] In accordance with aspects of the present disclosure, a method includes: determining, based at least on a processing-offloading subscription for a UE and an energy efficient offloading policy, at least one candidate energy efficient host for handling processing to be offloaded by the UE; and transmitting a respective identifier of the at least one candidate energy efficient host to a network function of a network, where the network function is configured to cause offloading of the processing to the network.
[0017] In an aspect of the method, the processing-offloading subscription for the UE may include at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
[0018] In an aspect of the method, the processing to be offloaded to the network from the UE may include processing of data of an extended reality application for rendering at the UE.
[0019] In an aspect of the method, the energy efficient offloading policy may include at least one of the following: a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy efficiency criterion, a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy cost criterion, or a policy that the processing shall be assigned to hosts that are powered by a specified type of energy.
[0020] In an aspect of the method, the energy efficient offloading policy may include: a policy that the processing shall be assigned to the at least one candidate host from among a plurality of candidate hosts in a manner that fulfills requirements of the processing even when one or more of the plurality of candidate hosts are powered off or are scaled down in processing resources.
[0021] In an aspect of the method, the method may further include: determining, based on processing-offloading subscriptions of a plurality of user equipment apparatuses (UEs), priority levels of the plurality of UEs for offloading of processing, where the plurality of UEs includes the UE; and determining, based on the priority levels of the plurality of UEs, that the UE is prioritized.
[0022] In an aspect of the method, the method may further include: obtaining an analysis of mobility of the UE, where the analysis includes a predicted location of the UE, where the determining the at least one candidate energy efficient host is further based on the predicted location of the UE, and where at least one component of the at least one candidate energy efficient host is located in a region that includes the predicted location of the UE.
[0023] In an aspect of the method, the at least one component of the at least one candidate energy efficient host may include at least one of the following: a network function, a network node, or a radio access network (RAN) logical entity.
[0024] In an aspect of the method, the method may further include: receiving feedback data regarding whether an offload of the processing to the at least one candidate energy efficient host fulfilled the energy efficient offloading policy.
[0025] In an aspect of the method, the processing to be offloaded to the network may include executing at least a portion of a processing model.
[0026] In an aspect of the method, the method may further include: transmitting, to a processing model repository of the network, information on the processing model; and receiving, from the processing model repository, one or more options for splitting execution of the processing model among a plurality of hosts of the network.
[0027] In an aspect of the method, the method further includes: selecting one option of the one or more options for splitting the execution of the processing model; and transmitting, to the network function configured to cause the offloading, the selected one option for splitting execution of the processing model.
[0028] In an aspect of the method, the processing model is a trained machine learning model.
[0029] According to some aspects, there is provided the subject matter of the independent claims. Some further aspects are defined in the dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] Some example embodiments will now be described with reference to the accompanying drawings.
[0031] FIG. 1 is a diagram of an example embodiment of wireless networking between a network system and a user equipment apparatus (UE), according to one illustrated aspect of the disclosure;
[0032] FIG. 2 is a diagram of example components of a network system, according to one illustrated aspect of the disclosure;
[0033] FIG. 3 is a flow diagram of example operations of a user equipment apparatus, according to one illustrated aspect of the disclosure;
[0034] FIG. 4 is a flow diagram of example operations of a network apparatus, according to one illustrated aspect of the disclosure;
[0035] FIG. 5A and FIG. 5B are diagrams of example signals and operations of a network system, according to one illustrated aspect of the disclosure; and
[0036] FIG. 6 is a diagram of an example embodiment of components of a UE or of a network apparatus, according to one illustrated aspect of the present disclosure.
DETAILED DESCRIPTION
[0037] In the following description, certain specific details are set forth in order to provide a thorough understanding of disclosed aspects. However, one skilled in the relevant art will recognize that aspects may be practiced without one or more of these specific details or with other methods, components, materials, etc. In other instances, well-known structures associated with transmitters, receivers, or transceivers have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the aspects.
[0038] Reference throughout this specification to "one aspect" or "an aspect" means that a particular feature, structure, or characteristic described in connection with the aspect is included in at least one aspect. Thus, the appearances of the phrases "in one aspect" or "in an aspect" in various places throughout this specification are not necessarily all referring to the same aspect. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more aspects.
[0039] Embodiments described in the present disclosure may be implemented in wireless networking apparatuses, such as, without limitation, apparatuses utilizing Worldwide Interoperability for Microwave Access (WiMAX), Global System for Mobile communications (GSM, 2G), GSM EDGE Radio Access Network (GERAN), General Packet Radio Service (GPRS), Universal Mobile Telecommunication System (UMTS, 3G) based on basic wideband-code division multiple access (W-CDMA), high-speed packet access (HSPA), Long Term Evolution (LTE), LTE-Advanced, enhanced LTE (eLTE), 5G New Radio (5G NR), 5G Advanced, 6G (and beyond), and 802.11ax (Wi-Fi 6), among other wireless networking systems. The term 'eLTE' here denotes the LTE evolution that connects to a 5G core. LTE is also known as evolved UMTS terrestrial radio access (EUTRA) or as evolved UMTS terrestrial radio access network (EUTRAN).
[0040] Aspects of the present disclosure relate to offloading of processing from user equipment apparatuses to a network. As used herein, the term "processing" refers to any execution of instructions by one or more processors. Examples of processing include, without limitation, rendering operations for virtual reality (VR), augmented reality (AR), and/or extended reality (XR) applications, machine learning computations, and/or algorithmic or mathematical computations, among other processing. Any type of processing may be offloaded by a UE, for example, if the UE has low battery and would like to save battery charge by offloading processing. In aspects, processing in a UE may be offloaded to energy efficient hosts in a network based on an energy efficient offloading policy maintained by the network. In aspects, a UE may have an offloading subscription with the network, and the processing in a UE may be offloaded to the network based on the UE's subscription. Such and other aspects will be described in more detail later herein.
[0041] The present disclosure may use the term "serving network device" to refer to a network node or network device (or a portion thereof) that services a UE. As used herein, the terms "transmit to," "receive from," and "cooperate with," (and their variations) include communications that may or may not involve communications through one or more intermediate devices or nodes. The term "acquire" (and its variations) includes acquiring in the first instance or reacquiring after the first instance. The term "connection" may mean a physical connection or a logical connection.
[0042] The present disclosure uses 5G NR as an example of a wireless network and may use smartphones and/or extended reality headsets as an example of UEs. It is intended and shall be understood that such examples are merely illustrative, and the present disclosure is applicable to other wireless networks and user equipment apparatuses.
[0043] FIG. 1 is a diagram depicting an example of wireless networking between a network system 100 and a user equipment apparatus (UE) 150. The network system 100 may include one or more network nodes 120, one or more servers 110, and/or one or more network equipment 130 (e.g., test equipment). The network nodes 120 will be described in more detail below. As used herein, the term "network apparatus" may refer to any component of the network system 100, such as the server 110, the network node 120, the network equipment 130, any component(s) of the foregoing, and/or any other component(s) of the network system 100. Examples of network apparatuses include, without limitation, apparatuses implementing aspects of 5G NR, among others. The present disclosure describes embodiments related to 5G NR and embodiments that involve aspects defined by 3rd Generation Partnership Project (3GPP). However, it is contemplated that embodiments relating to other wireless networking technologies are encompassed within the scope of the present disclosure.
[0044] The following description provides further details of examples of network nodes. In a 5G NR network, a gNodeB (also known as gNB) may include, e.g., a node that provides NR user plane and control plane protocol terminations towards the UE and that is connected via an NG interface to the 5G core (5GC), e.g., according to 3GPP TS 38.300 V16.6.0 (2021-06) section 3.2, which is hereby incorporated by reference herein.
[0045] A gNB supports various protocol layers, e.g., Layer 1 (L1) (physical layer), Layer 2 (L2), and Layer 3 (L3).
[0046] The layer 2 (L2) of NR is split into the following sublayers: Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP), and Service Data Adaptation Protocol (SDAP), where, e.g.:
- The physical layer offers transport channels to the MAC sublayer;
- The MAC sublayer offers logical channels to the RLC sublayer;
- The RLC sublayer offers RLC channels to the PDCP sublayer;
- The PDCP sublayer offers radio bearers to the SDAP sublayer;
- The SDAP sublayer offers quality of service (QoS) flows to the 5GC;
- Control channels include the broadcast control channel (BCCH) and the paging control channel (PCCH).
[0047] Layer 3 (L3) includes, e.g., radio resource control (RRC), e.g., according to 3GPP TS 38.300 V16.6.0 (2021-06) section 6, which is hereby incorporated by reference herein.
[0048] A gNB central unit (gNB-CU) includes, e.g., a logical node hosting, e.g., radio resource control (RRC), service data adaptation protocol (SDAP), and packet data convergence protocol (PDCP) protocols of the gNB, or RRC and PDCP protocols of the en-gNB, that controls the operation of one or more gNB distributed units (gNB-DUs). The gNB-CU terminates the F1 interface connected with the gNB-DU. A gNB-CU may also be referred to herein as a CU, a central unit, a centralized unit, or a control unit.
[0049] A gNB Distributed Unit (gNB-DU) includes, e.g., a logical node hosting, e.g., radio link control (RLC), medium access control (MAC), and physical (PHY) layers of the gNB or en-gNB, and its operation is partly controlled by the gNB-CU. One gNB-DU supports one or multiple cells. One cell is supported by only one gNB-DU. The gNB-DU terminates the F1 interface connected with the gNB-CU. A gNB-DU may also be referred to herein as a DU or a distributed unit.
[0050] As used herein, the term "network node" may refer to any of a gNB, a gNB-CU, or a gNB-DU, or any combination of them. A RAN (radio access network) node or network node such as, e.g., a gNB, gNB-CU, or gNB-DU, or parts thereof, may be implemented using, e.g., an apparatus with at least one processor and/or at least one memory with processor-readable instructions ("program") configured to support and/or provision and/or process CU and/or DU related functionality and/or features, and/or at least one protocol (sub-)layer of a RAN (radio access network), e.g., layer 2 and/or layer 3. Different functional splits between the central and distributed unit are possible. An example of such an apparatus and components will be described in connection with FIG. 6 below.
[0051] The gNB-CU and gNB-DU parts may, e.g., be co-located or physically separated. The gNB-DU may even be split further, e.g., into two parts, e.g., one including processing equipment and one including an antenna. A central unit (CU) may also be called baseband unit/radio equipment controller/cloud-RAN/virtual-RAN (BBU/REC/C-RAN/V-RAN), open-RAN (O-RAN), or part thereof. A distributed unit (DU) may also be called remote radio head/remote radio unit/radio equipment/radio unit (RRH/RRU/RE/RU), or part thereof. Hereinafter, in various example embodiments of the present disclosure, a network node, which supports at least one of central unit functionality or a layer 3 protocol of a radio access network, may be, e.g., a gNB-CU. Similarly, a network node, which supports at least one of distributed unit functionality or a layer 2 protocol of the radio access network, may be, e.g., a gNB-DU.
[0052] A gNB-CU may support one or multiple gNB-DUs. A gNB-DU may support one or multiple cells and, thus, could support a serving cell for a user equipment apparatus (UE) or support a candidate cell for handover, dual connectivity, and/or carrier aggregation, among other procedures.
[0053] The user equipment apparatus (UE) 150 may be or include a wireless or mobile device, an apparatus with a radio interface to interact with a RAN (radio access network), a smartphone, an in-vehicle apparatus, an IoT device, or an M2M device, among other types of user equipment. Such a UE 150 may include: at least one processor; and at least one memory including computer program code; where the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform certain operations, such as, e.g., RRC connection to the RAN. An example of components of a UE will be described in connection with FIG. 6. In embodiments, the UE 150 may be configured to generate a message (e.g., including a cell ID) to be transmitted via radio towards a RAN (e.g., to reach and communicate with a serving cell). In embodiments, the UE 150 may generate, transmit, and receive RRC messages containing one or more RRC PDUs (packet data units). Persons skilled in the art will understand RRC protocol as well as other procedures a UE may perform.
[0054] With continuing reference to FIG. 1, in the example of a 5G NR network, the network system 100 provides one or more cells, which define a coverage area of the network system 100. As described above, the network system 100 may include a gNB of a 5G NR network or may include any other apparatus configured to control radio communication and manage radio resources within a cell. As used herein, the term "resource" may refer to radio resources, such as a resource block (RB), a physical resource block (PRB), a radio frame, a subframe, a time slot, a sub-band, a frequency region, a sub-carrier, a beam, etc. In embodiments, the network node 120 may be called a base station.
[0055] FIG. 1 provides an example and is merely illustrative of a network system 100 and a UE 150. Persons skilled in the art will understand that the network system 100 includes components not illustrated in FIG. 1 and will understand that other user equipment apparatuses may be in communication with the network system 100.
[0056] FIG. 2 is a block diagram of example components of the network system 100 of FIG. 1. A 5G NR network may be described as an example of the network system 100, and it is intended that aspects of the following description shall be applicable to other types of network systems, as well. The network system may operate in accordance with the signals and connections shown in FIG. 1 such that the UE 150 is in communication with the network system 100 through the radio access network 225. Additionally, the network system may be divided into user plane components and functions and control plane components and functions, as shown and described herein. Unless indicated otherwise, the terms "component", "function", and "service" may be used interchangeably herein, and they may refer to and be implemented by instructions executed by one or more processors.
[0057] Example functions of the components are described below. The example functions are merely illustrative, and it shall be understood that additional operations and functions may be performed by the components described herein. Additionally, the connections between components may be virtual connections over service-based interfaces such that any component may communicate with any other component. In this manner, any component may act as a service "producer" for any other component that is a service "consumer," to provide services for network functions.
[0058] For example, a core network 210 is described in the control plane of the network system. The core network 210 includes an authentication server function (AUSF) 211, an access and mobility function (AMF) 212, and a session management function (SMF) 213. The core network 210 also includes a network slice selection function (NSSF) 214, a network exposure function (NEF) 215, a network repository function (NRF) 216, and a unified data management function (UDM) 217, which may include a uniform data repository (UDR) 224.
[0059] Additional components and functions of the core network 210 include an application function (AF) 218, policy control function (PCF) 219, network data analytics function (NWDAF) 220, analytics data repository function (ADRF) 221, management data analytics function (MDAF) 222, and operations and management function (OAM) 223.
[0060] The user plane includes the UE 150, a radio access network (RAN) 225, a user plane function (UPF) 226, and a data network (DN) 227. The RAN 225 may include one or more components described in connection with FIG. 1, such as one or more network nodes. However, the RAN 225 may not be limited to such components. The UPF 226 provides connection for data being transmitted over the RAN 225. The DN 227 identifies services from service providers, Internet access, and third party services, for example.
[0061] The AMF 212 processes connection and mobility tasks. The AUSF 211 receives authentication requests from the AMF 212 and interacts with the UDM 217 to authenticate and validate network responses for determination of successful authentication. The SMF 213 conducts packet data unit (PDU) session management, as well as manages session context with the UPF 226.
[0062] The NSSF 214 may select a network slicing instance (NSI) and determine the allowed network slice selection assistance information (NSSAI). This selection and determination is utilized to set the AMF 212 to provide service to the UE 150. The NEF 215 secures access to network services for third parties to create specialized network services. The NRF 216 acts as a repository to store network functions to allow the functions to register with and discover each other.
[0063] The UDM 217 generates authentication vectors for use by the AUSF 211 and AMF 212 and provides user identification handling. The UDM 217 may be connected to the UDR 224, which stores data associated with authentication, applications, or the like. The AF 218 provides application services to a user (e.g., streaming services, etc.). The PCF 219 provides policy control functionality. For example, the PCF 219 may assist in network slicing and mobility management, as well as provide quality of service (QoS) and charging functionality.
[0064] The NWDAF 220 collects data (e.g., from the UE 150 and the network system) to perform network analytics and provide insight to functions that utilize the analytics in providing services. The ADRF 221 allows the storage, retrieval, and removal of data and analytics by consumers. The MDAF 222 provides additional data analytics services for network functions. The OAM 223 provides provisioning and management processing functions to manage elements in or connected to the network (e.g., UE 150, network nodes, etc.).
[0065] FIG. 2 is merely an example of components of a network system, and variations are contemplated to be within the scope of the present disclosure. In embodiments, the network system may include other components not illustrated in FIG. 2. In embodiments, the network system may not include every component illustrated in FIG. 2. In embodiments, the components and connections may be implemented with different connections than those illustrated in FIG. 2. Such and other embodiments are contemplated to be within the scope of the present disclosure.
[0066] The following will describe example operations for offloading processing from a UE to a network. FIG. 3 shows example operations of a UE, FIG. 4 shows example operations of a network apparatus, and FIG. 5A and FIG. 5B show example signals and operations of a network system.
[0067] FIG. 3 is a flow chart of example operations of a UE for offloading processing to a network. In accordance with aspects of the present disclosure, a network system may maintain a subscription type that is dedicated to offloading of processing, and the UE may subscribe to such an offloading-dedicated subscription with the network system. The offloading-dedicated subscription may be referred to herein as "processing-offloading subscription" or simply as a subscription, and the two terms may be used interchangeably herein. The UE's subscription may be stored by the network system in the unified data repository (UDR).
[0068] In various embodiments, a subscription may specify a guaranteed processing rate that the network may guarantee for processing that is offloaded to the network. A processing rate may specify, for example, number of CPU/GPU cores, CPU frequency, and/or FLOPS, among others. Such examples are merely illustrative, and other metrics for processing rate are within the scope of the disclosure. Such a subscription level may be beneficial for processing that is consistently needed by a UE, such as for rendering operations for VR, AR, and/or XR, or real-time applications, among other situations. The guaranteed processing rate may be specified in the subscription information stored in the UDR.
[0069] In various embodiments, a subscription may specify a maximum processing rate that the network may guarantee for processing that is offloaded to the network. Such a subscription level may be beneficial for processing that has a known demand or that can tolerate a capped processing rate, among other situations. The maximum processing rate may be specified in the subscription information stored in the UDR.
[0070] In various embodiments, a subscription may specify a maximum processing volume for processing that is offloaded to the network. A processing volume may specify, for example, maximum amount of processed data/number of processed tasks per day/week/month and/or the maximum time of processing per day/week/month. Such examples are merely illustrative, and other time frames and/or metrics for processing volume are within the scope of the disclosure. Such a subscription level may be beneficial for processing that has a known volume or that can tolerate a capped processing volume, among other situations. The maximum processing volume may be specified in the subscription information stored in the UDR.
[0071] In various embodiments, a subscription may specify a pay-as-you-go processing for processing that is offloaded to the network. Such a subscription may specify, for example, cost per processing rate or cost per processing volume, among others. Such examples are merely illustrative, and other pay-as-you-go cost metrics are within the scope of the disclosure. Such a subscription level may be beneficial for subscribers that generally do not offload processing and that do so only for emergencies, among other situations. The cost metric may be specified in the subscription information stored in the UDR.
[0072] In various embodiments, the subscriptions may account for processing across quality of service (QoS) Flows for a data network name (DNN) and single-network slice selection assistance information (S-NSSAI). Network slicing is a technique used to divide a physical network infrastructure into multiple virtual networks, each with its own resources and quality of service (QoS) requirements. Each virtual network is identified by a unique Data Network Name (DNN), which is used by the core network to route traffic to the appropriate network slice. In the example of 5G NR, a mobile operator can create a dedicated network slice for a particular enterprise customer with specific QoS requirements and can assign a unique DNN to that network slice. This enables the enterprise customer to have a dedicated and secure network that is customized to their specific needs. In various embodiments, the subscriptions may account for processing in other ways, which are contemplated to be within the scope of the present disclosure.
[0073] The subscriptions described above may be usable with each other and may be combined in various ways. Such combinations are contemplated to be within the scope of the present disclosure. Table 1 summarizes examples of various subscriptions.
- Subscribed GPR (guaranteed processing rate): Guaranteed aggregate processing rate that can be offloaded to the network, e.g., across all quality of service (QoS) Flows for the data network name (DNN) and single-network slice selection assistance information (S-NSSAI). This may be expressed in number of CPU/GPU cores, CPU frequency, FLOPS, etc., dedicated to a user. (Note: this is not an exhaustive list.)
- Subscribed MPR (maximum processing rate): Maximum aggregate processing rate that can be offloaded to the network, e.g., across all QoS Flows for the DNN and S-NSSAI. This may be expressed in number of CPU/GPU cores, CPU frequency, FLOPS, etc., dedicated to a user. (Note: this is not an exhaustive list.)
- Subscribed MPV (maximum processing volume): Maximum aggregate processing volume that can be offloaded to the network, e.g., across all QoS Flows for the DNN and S-NSSAI. This may be expressed by maximum amount of processed data/number of processed tasks per day/week/month or the maximum time of processing per day/week/month. (Note: this is not an exhaustive list.)
- Offloading charging: Charging subscription for processing offloading, e.g., pay-as-you-go subscription.

Table 1. Examples of Offloading Subscriptions.

[0074] With continuing reference to FIG. 3, prior to block 310, the UE already has a processing-offloading subscription with a network. The processing-offloading subscription may include one or more of the subscriptions shown in Table 1 and/or may include another type of dedicated subscription for offloading processing to the network.
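Purely as a non-normative illustration of how the Table 1 options might be carried in a single UDR-style record, the following Python sketch can be considered. Every class and field name here (including the choice of GFLOPS and task-hours as units) is a hypothetical assumption for this document, not a 3GPP-defined structure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProcessingOffloadingSubscription:
    """Hypothetical UDR record mirroring Table 1 (illustrative only)."""
    ue_id: str                     # e.g., a SUPI
    dnn: str                       # data network name the subscription applies to
    s_nssai: str                   # network slice the subscription applies to
    gpr_gflops: Optional[float] = None  # Subscribed GPR: guaranteed aggregate rate
    mpr_gflops: Optional[float] = None  # Subscribed MPR: maximum aggregate rate
    mpv_task_hours_per_month: Optional[float] = None  # Subscribed MPV: volume cap
    pay_as_you_go: bool = False         # offloading charging subscription

# Example: a UE with a guaranteed 50 GFLOPS for XR rendering on one slice.
sub = ProcessingOffloadingSubscription(
    ue_id="imsi-001010000000001", dnn="xr.example", s_nssai="01-ABCDEF",
    gpr_gflops=50.0, mpv_task_hours_per_month=100.0)
print(sub)
```

A network function could then, for instance, inspect sub.gpr_gflops to decide whether a guaranteed rate applies before admitting an offload request.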
[0075] At block 310, the UE determines, based on the processing-offloading subscription, processing to be offloaded to the network. In various embodiments, the processing that is to be offloaded may be processing that is appropriate for the subscription.
[0076] In accordance with aspects of the present disclosure, the processing to be offloaded may be VR/AR/XR processing. XR services typically have significant computational requirements for rendering. Such rendering may be 1) performed on the UE, 2) performed in the network, or 3) split between the UE and the network. Ideally, the rendering is performed at the UE. However, UEs are typically battery driven and may have limitations regarding the hardware/software capabilities for rendering, so at least some of the computationally intensive rendering may need to be offloaded to the network. In some situations, a drawback of this approach is higher network usage (e.g., high bandwidth, low latency usage) between the UE and the network. A solution to this drawback may be split rendering, where part of the processing is performed at the UE and part on the network, such that a desired trade-off between power consumption at the device and bandwidth requirements is achieved. In accordance with aspects of the present disclosure, the processing to be offloaded to the network may be full rendering processing or may be split rendering processing.
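As a minimal sketch of the trade-off just described (not the claimed method itself), a UE could choose between local, split, and full network rendering by weighing remaining battery against the bandwidth cost of shipping frames. The function name, thresholds, and inputs below are illustrative assumptions:

```python
def choose_rendering_mode(battery_pct: float,
                          local_render_capable: bool,
                          uplink_mbps: float) -> str:
    """Illustrative policy trading device power against network usage.

    Returns one of "local", "split", or "network".
    """
    if local_render_capable and battery_pct > 50.0:
        return "local"        # enough battery and capability: render on the UE
    if uplink_mbps >= 100.0:
        return "network"      # link can carry fully rendered frames
    return "split"            # balance power consumption against bandwidth

# Example: low battery but a modest link favours split rendering.
print(choose_rendering_mode(battery_pct=20.0,
                            local_render_capable=True,
                            uplink_mbps=40.0))  # -> "split"
```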
[0077] In accordance with aspects of the present disclosure, the processing that is to be offloaded may be processing for executing at least a portion of a machine learning (ML) model. For example, the UE may have an application that requires an ML model or that executes an ML model, such as an AR or XR application that involves object recognition using an ML model. Accordingly, the UE may determine that processing relating to the ML model needs to be offloaded to the network. Offloading of processing relating to an ML model will be described in more detail in connection with FIG. 5B. Machine learning is merely an example, and other types of processing may be offloaded to the network, as well.
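For the ML case, the model repository described later (FIG. 5B, and paragraphs [0013]-[0014]) may return several options for splitting execution among hosts. The following hedged sketch picks, among hypothetical options, the one that minimizes uplink traffic while respecting the compute the UE is willing to spend; the option structure and numbers are invented for illustration:

```python
# Each split option says how many of the model's layers run on the UE
# before the intermediate activations are sent to a network host.
split_options = [
    {"split_after_layer": 0,  "ue_gflops": 0.0, "uplink_mb_per_frame": 2.4},
    {"split_after_layer": 4,  "ue_gflops": 1.2, "uplink_mb_per_frame": 0.6},
    {"split_after_layer": 12, "ue_gflops": 4.8, "uplink_mb_per_frame": 0.1},
]

def pick_split(options, max_ue_gflops: float):
    """Keep options within the UE's compute budget, then minimize
    uplink volume (illustrative selection criterion)."""
    feasible = [o for o in options if o["ue_gflops"] <= max_ue_gflops]
    return min(feasible, key=lambda o: o["uplink_mb_per_frame"])

print(pick_split(split_options, max_ue_gflops=2.0))
# -> splits after layer 4: 0.6 MB/frame at 1.2 GFLOPS on the UE
```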
[0078] At block 320, the UE transmits, to a network apparatus of the network, a request to offload the processing to at least one energy efficient host of the network. As described above, the term "transmit to" includes communications that may or may not involve communications through one or more intermediate devices or nodes. Additionally, the network apparatus may be a network node (or any portion thereof) or may be any part of a network system. In various embodiments, the network apparatus may be a network node, and the UE may transmit the request to the network node. In various embodiments, the network apparatus may be a function or component of a core network or may be a function or component of another part of the network.
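The request of block 320 is not tied to any particular encoding. Purely as an assumed example, it might carry fields such as the following; none of these names are 3GPP-defined message fields:

```python
# Hypothetical payload for the block-320 request (illustrative only).
offload_request = {
    "ue_id": "imsi-001010000000001",
    "task_type": "xr_split_rendering",      # or, e.g., "ml_model_execution"
    "required_rate_gflops": 12.0,           # what the offloaded task needs
    "dnn": "xr.example",
    "s_nssai": "01-ABCDEF",
    "prefer_energy_efficient_hosts": True,  # request EE hosts per the policy
}
print(offload_request)
```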
[0079] After block 320, if the network has appropriate resources, the UE may offload the processing to the network, and the network may perform the processing and communicate the results to the UE (not shown).
[0080] The operations of FIG. 3 are merely illustrative, and variations are contemplated to be within the scope of the present disclosure. In embodiments, the operations may include other blocks not illustrated in FIG. 3. In embodiments, the operations may not include every block illustrated in FIG. 3. In embodiments, the operation may be implemented in a different order than that illustrated in FIG. 3. Such and other embodiments are contemplated to be within the scope of the present disclosure.
[0081] FIG. 4 is a flow chart of example of operations of a network apparatus for handling offloading of processing from a UE. As mentioned above, a network apparatus may be any part of a network system. In various embodiments, the network apparatus may provide management service (MnS) producer functionality, and the operations of FIG. 4 may be performed by MDAF or by NWDAF functioning as an MnS producer. In various embodiments, the operations of FIG. 4 may be performed by another service or function of the network.
[0082] The operations of FIG. 4 involve a processing-offloading subscription, such as the processing-offloading subscription described in connection with FIG. 3. The operations of FIG. 4 also involve an energy efficient offloading policy, which may be a policy maintained by the network for offloading UE processing to network resources that are more energy efficient. Generally, an energy efficient offloading policy may specify that offloaded processing is to be performed, when possible, by the most energy efficient processing hosts, such as, without limitation, hosts meeting certain energy efficiency criteria, hosts meeting certain energy cost criteria, and/or hosts meeting certain criteria relating to energy type (e.g., solar, wind, geothermal, green, and/or renewable, etc.). Such examples of policies are merely illustrative, and other energy efficient offloading policies are contemplated to be within the scope of the present disclosure. In various embodiments, the policy may also specify that hosts are to be selected in a manner such that processing that is offloaded is timely performed as required even if some hosts power down or are scaled down.
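To make the policy shape concrete, here is a minimal, assumed representation of an energy efficient offloading policy and a check of whether a given host satisfies it. Host attributes, units, and thresholds are invented for illustration and are not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class EnergyEfficientOffloadingPolicy:
    """Illustrative policy record; each field is an optional criterion
    of the kinds listed in paragraph [0082] (names are assumptions)."""
    min_gflops_per_watt: Optional[float] = None      # energy efficiency criterion
    max_cost_per_gflop_hour: Optional[float] = None  # energy cost criterion
    allowed_energy_types: Set[str] = field(default_factory=set)  # e.g., {"solar"}

def host_satisfies(policy: EnergyEfficientOffloadingPolicy, host: dict) -> bool:
    """Check one hypothetical host record against the policy."""
    if (policy.min_gflops_per_watt is not None
            and host["gflops_per_watt"] < policy.min_gflops_per_watt):
        return False
    if (policy.max_cost_per_gflop_hour is not None
            and host["cost_per_gflop_hour"] > policy.max_cost_per_gflop_hour):
        return False
    if (policy.allowed_energy_types
            and host["energy_type"] not in policy.allowed_energy_types):
        return False
    return True

policy = EnergyEfficientOffloadingPolicy(min_gflops_per_watt=5.0,
                                         allowed_energy_types={"solar", "wind"})
host = {"gflops_per_watt": 7.5, "cost_per_gflop_hour": 0.02, "energy_type": "wind"}
print(host_satisfies(policy, host))  # -> True
```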
[0083] Prior to block 410, the UE already has a processing-offloading subscription with the network. The processing-offloading subscription may include one or more of the subscriptions shown in Table 1 and/or may include other subscriptions.
[0084] At block 410, the network apparatus accesses an energy efficient offloading policy relating to offloading of user equipment apparatus processing to the network. The energy efficient offloading policy may include one or more of the policies described above and/or may include another policy. In various embodiments, the energy efficient offloading policy may be stored by an MnS service consumer (e.g., OAM), and the network apparatus may access the energy efficient offloading policy from the MnS service consumer.
[0085] At block 420, the network apparatus identifies the processing-offloading subscription for the UE having the processing to be offloaded to the network. In various embodiments, the processing-offloading subscription may be stored in the UDR, and the network apparatus may identify the processing-offloading subscription for the UE by accessing the UDR.
[0086] At block 430, the network apparatus identifies, based at least on the processing-offloading subscription for the UE, and based on the energy efficient offloading policy, at least one candidate energy efficient host for handling the processing to be offloaded to the network from the UE. The operation of block 430 may consider available processing resources (e.g., hosts) in the network, among other information, in identifying the candidate hosts. Such information will be described in more detail in connection with FIG. 5A and FIG. 5B.
[0087] At block 440, the network apparatus transmits identifier(s) of the candidate energy efficient host(s) to a function of the network that is configured to implement the offloading of the processing. In various embodiments, the identifier(s) of the candidate energy efficient host(s) may be transmitted to an MnS consumer, which may be the OAM, for example. In various embodiments, the identifier(s) of the candidate energy efficient host(s) may be transmitted to another function or service of the network.
[0088] The operations of FIG. 4 are merely illustrative, and variations are contemplated to be within the scope of the present disclosure. In embodiments, the operations may include other blocks not illustrated in FIG. 4. In embodiments, the operations may not include every block illustrated in FIG. 4. In embodiments, the operation may be implemented in a different order than that illustrated in FIG. 4. Such and other embodiments are contemplated to be within the scope of the present disclosure.
[0089] FIG. 5A and FIG. 5B are diagrams of example signals and operations of a network system relating to offloading of processing. The following paragraphs will describe various signals and operations. It will be understood that a described signal may have associated operations and a described operation may have associated signals. Accordingly, a described signal may also be an operation and a described operation may also be a signal. Additionally, FIG. 5A and FIG. 5B will describe signals between and operations performed by various network components, such as those shown at the top of FIG. 5A. Specifically, the components include a radio access network (RAN), core network (CN) network functions (NF), a provisioning MnS producer, a fault supervision MnS producer, a performance assurance MnS producer, a unified data repository (UDR), a machine learning (ML) model repository (which may be an ADRF), an energy efficient offloading host recommendation MnS producer (EEOHR MnS producer) (which may be an MDAF), and an energy efficient offloading host recommendation MnS consumer (EEOHR MnS consumer) (which may be an OAM). As described above, an MnS producer is a component that provides a management service, and an MnS consumer is a component that consumes/uses a management service. The network components are illustrative, and it is contemplated that other components may be involved in the signals or may perform the operations.
[0090] In connection with the description below, any processing that is to be offloaded from a UE to a network may be rendering tasks for a VR, AR, and/or XR application. However, it is contemplated that other processing may be offloaded, and the following description shall apply to any processing, as well.
[0091] Prior to signal 501, the UE already has a processing-offloading subscription with the network. The processing-offloading subscription may include one or more of the subscriptions shown in Table 1 and/or may include other subscriptions, and the processing-offloading subscription may be stored in the UDR.
[0092] At signal 501, the EEOHR MnS consumer (e.g., OAM) transmits an energy efficient offloading policy to the EEOHR MnS producer (e.g., MDAF), and the EEOHR MnS producer receives the energy efficient offloading policy from the EEOHR MnS consumer. As mentioned above, an energy efficient offloading policy may specify that offloaded processing is to be performed, when possible, by the most energy efficient processing hosts (e.g., one or more hosts), such as, without limitation, hosts meeting certain energy efficiency criteria, hosts meeting certain energy cost criteria, and/or hosts meeting certain criteria relating to energy type (e.g., solar, wind, geothermal, green, and/or renewable, etc.). Such examples of policies are merely illustrative, and other energy efficient offloading policies are contemplated to be within the scope of the present disclosure.
[0093] At signals 502-507, the EEOHR MnS producer requests and receives data to be used for determining a recommendation for the most energy efficient offloading host(s). This data may include, but is not limited to: lifecycle management (LCM), connection management (CM), performance management (PM), fault management (FM), energy saving strategies, and energy consumption and efficiency key performance indicators (KPIs) of various network functions, among others. In various embodiments, the MnS producer may collect load information including virtualized resource load/usage measurements.
[0094] Specifically, at signal 502, the EEOHR MnS producer subscribes to LCM and CM data, which may be provided by the provisioning MnS producer. At signal 503, the EEOHR MnS producer subscribes to FM data, which may be provided by the fault supervision MnS producer. At signal 504, the EEOHR MnS producer creates a measurement job and collects PM data and energy efficiency KPIs from the performance assurance MnS producer. At signal 505, the provisioning MnS producer transmits the LCM and CM data to the EEOHR MnS producer, and the EEOHR MnS producer receives the LCM and CM data. At signal 506, the fault supervision MnS producer transmits the FM data to the EEOHR MnS producer, and the EEOHR MnS producer receives the FM data from the fault supervision MnS producer. At signal 507, the performance assurance MnS producer transmits the PM data, including virtualized resource and virtual CPU usage measurements, to the EEOHR MnS producer, and the EEOHR MnS producer receives the PM data from the performance assurance MnS producer.
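Illustratively, the EEOHR MnS producer could fold the data from signals 502-507 into one record per candidate host. The field names below are assumptions chosen to mirror the data sources named above, not actual managed-object attributes of the provisioning, fault supervision, or performance assurance MnS producers:

```python
from dataclasses import dataclass

@dataclass
class HostTelemetry:
    """Hypothetical per-host view built from signals 502-507."""
    host_id: str
    cm_operational: bool           # from LCM/CM data (signal 505)
    fm_active_alarms: int          # from FM data (signal 506)
    pm_vcpu_usage_pct: float       # from PM data (signal 507)
    ee_kpi_gflops_per_watt: float  # energy efficiency KPI (signals 504/507)
    energy_type: str               # e.g., "solar", "grid"

hosts = [
    HostTelemetry("edge-host-1", True, 0, 35.0, 8.2, "solar"),
    HostTelemetry("edge-host-2", True, 2, 90.0, 6.1, "grid"),
]
# A host with active alarms or near-full vCPUs is a weak offloading candidate.
print([h.host_id for h in hosts if h.fm_active_alarms == 0])  # -> ['edge-host-1']
```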
[0095] The EEOHR MnS producer operates to determine which UEs have a dedicated subscription for processing offloading to the network and details of the subscriptions. In various embodiments, this information may be stored in the UDR. At signal 508, the EEOHR MnS producer retrieves, from the UDR, the information on the subscriptions for the UEs that have processing to be offloaded to the network. As described above, the subscriptions may include one or more of the subscriptions shown in Table 1 or may include other subscriptions.
[0096] After signal 508, the EEOHR MnS producer determines, based on the subscription information, the UEs which will be prioritized for processing offloading to the network.
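For illustration only, such prioritization could rank UEs by attributes of their subscriptions (e.g., those of Table 1). The attribute names and the ordering used in the following Python sketch are assumptions; the disclosure does not mandate a particular ranking.

```python
def prioritize_ues(subscriptions):
    """Rank UE identifiers so that UEs with richer offloading subscriptions come first."""
    def priority(sub):
        return (
            sub.get("guaranteed_processing_rate", 0.0),  # guaranteed-rate UEs first
            sub.get("max_processing_rate", 0.0),         # then by permitted rate
            sub.get("max_processing_volume", 0.0),       # then by permitted volume
        )
    return sorted(subscriptions, key=lambda ue: priority(subscriptions[ue]), reverse=True)

# Example: a UE with a guaranteed processing rate outranks a best-effort UE.
subs = {
    "ue-1": {"guaranteed_processing_rate": 5.0},
    "ue-2": {"max_processing_rate": 10.0},
}
assert prioritize_ues(subs)[0] == "ue-1"
```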
[0097] At signal 509, having determined which UEs are prioritized for the offloading, the EEOHR MnS producer subscribes to analytics, from the NWDAF, related to such UEs. The analytics may include, for example, analytics and predictions on observed service experience, UE communication and mobility analytics (e.g., traffic volume, mobility statistics, among others), and predictions of the UEs (e.g., UE communication predictions and UE location predictions, among others).
[0098] At signal 510, the NWDAF provides the requested analytics and predictions and may indicate the time (e.g., start time) and duration for which the analytics and predictions are valid. In various embodiments, for the UE communication analytics and prediction, one or more of the following outputs may be provided:

| Information | Description |
|---|---|
| > Start time | Start time observed (average and variance) |
| > Duration | Duration of communication (average and variance) |
| > Traffic characterization | S-NSSAI, DNN, ports, other useful information |
| > Traffic volume | Volume UL/DL (average and variance) |
| > Ratio | Percentage of UEs in the group (in the case of a UE group) |
| Applications (0..max) | List of applications in use |
| > Application Id | Identification of the application |
| > Start time | Start time of the application |
| > Duration time | Duration interval time of the application |
| > Occurrence ratio | Proportion for the application used by the UE during the requested period |
| > Spatial validity | Area where the service behaviour applies. If Area of Interest information was provided in the request or subscription, spatial validity may be a subset of the requested Area of Interest. |

Table 2.
[0099] In various embodiments, for the UE mobility prediction, one or more of the following information elements may be provided:

| Information | Description |
|---|---|
| UE group ID or UE ID | Identifies a UE or a group of UEs, e.g. internal group ID, or SUPI |
| Time slot entry (1..max) | List of predicted time slots |
| > Time slot start | Time slot start time within the Analytics target period |
| > Duration | Duration of the time slot. If a Temporal granularity size was provided in the request or subscription, the Duration is greater than or equal to the Temporal granularity size. |
| > UE location (1..max) | Predicted location during the Analytics target period |
| >> UE location | TA or cells where the UE or UE group may move into (location coordinates) |
| >> Confidence | Confidence of this prediction |
| >> Ratio | Percentage of UEs in the group (in the case of a UE group) |
| >> UE's geographical distribution | The geographical distribution of the UEs that can be selected by the AF for application service |
| > UE's direction | The direction of the UEs in the coverage area |

Table 3.
[0100] At signal 511, the EEOHR MnS producer subscribes to NF load analytics and predictions, from the NWDAF, for the NFs in the area where the prioritized UEs are predicted to be located (based on signal 510) and in the areas where expected application usage (e.g., XR application usage) is predicted (based on signal 510).
[0101] At signal 512, the NWDAF transmits the NF load analytics and predictions to the EEOHR MnS producer, and the EEOHR MnS producer receives the NF load analytics and predictions from the NWDAF.
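By way of illustration, the area scoping of signal 511 could be derived from the signal-510 mobility predictions as follows. The record layout loosely mirrors Table 3, and the field names and confidence threshold in this Python sketch are hypothetical.

```python
def areas_for_nf_load_analytics(mobility_predictions, confidence_threshold=0.7):
    """Collect the TAs/cells where prioritized UEs are predicted with sufficient confidence."""
    areas = set()
    for prediction in mobility_predictions:
        for slot in prediction.get("time_slots", []):
            for location in slot.get("ue_locations", []):
                # Keep only locations predicted with at least the required confidence.
                if location.get("confidence", 0.0) >= confidence_threshold:
                    areas.update(location.get("cells", []))
    return areas
```

The resulting set of areas would then scope the NF load analytics subscription of signal 511 to the NFs serving those areas.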
[0102] At operations 513 and 514, the EEOHR MnS producer determines, based on the energy efficient offloading policy (signal 501) and the data collected regarding the NFs and UEs that are prioritized for offloading (signals 502-512), a recommendation of one or more energy efficient offloading hosts. This may be performed in two steps by first determining the hosts with available resources (operation 513) and then selecting the most energy efficient hosts among them (operation 514).
[0103] Specifically, at operation 513, the EEOHR MnS producer determines the potential host(s) that have resource availability at NFs, and at operation 514, the EEOHR MnS producer determines the offloading host(s) among them that best comply with the energy efficient offloading policy. Operation 514 may be performed in various ways. For example, each host may be associated with information relating to the energy efficient offloading policy, and the hosts that satisfy the greatest number of criteria in the policy may be selected. In various examples, the energy efficient offloading policy may include numerical criteria (e.g., cost of energy, etc.), and the hosts that best satisfy the numerical criteria (e.g., lowest cost of energy) may be selected. Such examples are merely illustrative, and other ways of performing operation 514 are contemplated to be within the scope of the present disclosure.
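A minimal sketch of this two-step selection follows, building on the hypothetical HostInfo and EnergyEfficientOffloadingPolicy records sketched after paragraph [0092]. The load threshold and the tie-breaking by lowest energy cost are assumptions chosen to mirror the illustrative numerical criterion above, not requirements of the disclosure.

```python
def recommend_offloading_hosts(hosts, nf_loads, policy, load_threshold=0.8, top_n=3):
    """Two-step recommendation: filter by resource availability, then rank by policy."""
    # Operation 513: keep only hosts whose associated NF load leaves spare capacity.
    available = [h for h in hosts if nf_loads.get(h.host_id, 1.0) < load_threshold]
    # Operation 514: among those, keep hosts satisfying the policy and prefer
    # the lowest energy cost (one illustrative numerical criterion).
    compliant = sorted(
        (h for h in available if policy.is_satisfied_by(h)),
        key=lambda h: h.energy_cost,
    )
    return compliant[:top_n]
```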
[0104] At signal 515, the EEOHR MnS producer transmits the recommendation on the most energy efficient offloading host(s) (e.g., at least one candidate host) to the EEOHR MnS consumer, and the EEOHR MnS consumer receives the recommendation of the candidate host(s) that will provide processing offloading in the most energy efficient manner in accordance with the energy efficient offloading policy.
[0105] Signals and operations 516-519 are optional and will be addressed below.
[0106] After signal 515, the processing is offloaded from the UE to the hosts, and the hosts perform the processing. At signal 520, the EEOHR MnS consumer (e.g., OAM) may monitor the network KPIs (e.g., energy consumption, QoS/QoE, UE mobility, NF load, etc.) and may provide feedback on whether the selected offloading host satisfied the energy efficient offloading policy and/or feedback on whether the offloading host needs to be changed. Additionally, the EEOHR MnS producer may provide feedback to the EEOHR MnS consumer on whether the indicated energy efficient offloading policy is feasible given the current network state.
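As a sketch of this feedback step, the consumer might compare the monitored KPIs against the policy and flag whether a host change is warranted. The KPI keys and thresholds below are purely hypothetical and are not taken from the disclosure.

```python
def offloading_feedback(kpis, max_energy_per_task_j, load_change_threshold=0.9):
    """Summarize whether the selected host met the policy and whether it should change."""
    policy_satisfied = kpis.get("energy_per_task_j", float("inf")) <= max_energy_per_task_j
    return {
        "policy_satisfied": policy_satisfied,
        # Recommend a change when the policy was missed or the host's NF is overloaded.
        "host_change_recommended": (not policy_satisfied)
                                   or kpis.get("nf_load", 0.0) > load_change_threshold,
    }
```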
[0107] Returning now to signals and operations 516-519, these signals and operations relate to the case where ML model execution may be performed at least partially by the UE and at least a portion of the execution is to be offloaded to one or more hosts of the network. In various embodiments, execution of the entire ML model may be offloaded to one or more hosts of the network. The ML model may be any ML model, such as, without limitation, a neural network, a decision tree, or a support vector machine, among others.
[0108] At signal 516, the EEOHR MnS producer requests, from the ML model repository, information on the available options for splitting the execution of the ML model to be offloaded (if any splitting options for the ML model are available). The ML model repository generates the available split options based on the ML model structure. At signal 517, the ML model repository transmits a report on the ML model split options and their characteristics to the EEOHR MnS producer, and the EEOHR MnS producer receives the report from the ML model repository. For example, an ML model split option may allow a certain portion of the ML model execution to be performed by one network host while the remaining portion(s) may be performed by one or more other network hosts. In various embodiments, the ML model repository may determine that multiple ML model split options are available. In various embodiments, the ML model repository may determine that no ML model split options are available. The flow chart assumes that at least one option is available and is reported at signal 517.
[0109] At operation 518, the EEOHR MnS producer selects the most suitable split option for the offloading hosts selected at operation 514. For example, certain split options may be more or less suitable for the network hosts due to various factors, such as, without limitation, the processing capabilities of the network hosts, the approximate delay between the hosts and the UE, the available throughput between the hosts and the UE (e.g., in offloading tasks for augmented reality and virtual reality), and/or the amount of data exchanged among the splits, among other factors. At signal 519, the EEOHR MnS producer transmits the selected ML model split option to the EEOHR MnS consumer, and the EEOHR MnS consumer receives the selected ML model split option. After signal 519, the ML model split option may be performed by the network host(s), and at signal 520 feedback on the performance may be communicated as described above.
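One possible way to realize operation 518 is to score each reported split option against the factors listed above. The cost model and field names in this Python sketch are assumptions (the disclosure does not define a scoring function); the UE-host delay, being common to all options on a given host, would matter chiefly when comparing options across different hosts.

```python
def select_split_option(split_options, link_profile):
    """Pick the reported split option with the lowest combined compute and transport cost."""
    def cost(option):
        # Fraction of the host's compute budget the host-side portion would use.
        compute = option["host_flops_required"] / link_profile["host_flops_available"]
        # Time to move UE-host traffic and inter-split traffic over the available links.
        transport = (option["ue_host_traffic_bytes"] * 8 / link_profile["ue_host_throughput_bps"]
                     + option["inter_split_traffic_bytes"] * 8 / link_profile["inter_host_throughput_bps"])
        return compute + transport
    return min(split_options, key=cost)
```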
[0110] The ML model split described in connection with signals and operations 516-519 is illustrative. Any processing model may be split, and it is intended that aspects of signals and operations 516-519 shall be applicable to any option for splitting any processing model, such that the EEOHR MnS producer may select the split option that best matches the hosts selected at operation 514. Accordingly, processing model execution may be performed at least partially by the UE, and at least a portion (or the entirety) of the execution may be offloaded to one or more hosts of the network. The processing model execution offloaded to the network may be split among multiple hosts of the network.
[0111] The signals and operations of FIG. 5A and FIG. 5B are merely illustrative, and variations are contemplated to be within the scope of the present disclosure. In embodiments, the signals and operations may include others not illustrated in FIG. 5A and FIG. 5B. In embodiments, the signals and operations may not include every signal and operation illustrated in FIG. 5A and FIG. 5B. In embodiments, the signals and operations may be implemented in a different order than that illustrated in FIG. 5A and FIG. 5B. Such and other embodiments are contemplated to be within the scope of the present disclosure.
[0112] Referring now to FIG. 6, there is shown a block diagram of example components of a UE or a network apparatus. The apparatus includes an electronic storage 610, a processor 620, a memory 650, and a network interface 640. The various components may be communicatively coupled with each other. The processor 620 may be or include any type of processor, such as a single-core central processing unit (CPU), a multi-core CPU, a microprocessor, a digital signal processor (DSP), a System-on-Chip (SoC), or any other type of processor. The memory 650 may be a volatile type of memory, e.g., RAM, or a non-volatile type of memory, e.g., NAND flash memory. The memory 650 includes processor-readable instructions that are executable by the processor 620 to cause the apparatus to perform various operations, including those mentioned herein, such as the operations of FIGS. 3-5.
[0113] The electronic storage 610 may be or include any type of electronic storage used for storing data, such as a hard disk drive, a solid state drive, and/or an optical disc, among other types of electronic storage. The electronic storage 610 stores processor-readable instructions for causing the apparatus to perform its operations and stores data associated with such operations, such as data relating to 5G NR standards, among other data. The network interface 640 may implement wireless networking technologies such as 5G NR and/or other wireless networking technologies.
[0114] The components shown in FIG. 6 are merely examples, and persons skilled in the art will understand that an apparatus includes other components not illustrated and may include multiples of any of the illustrated components. Such and other embodiments are contemplated to be within the scope of the present disclosure.
[0115] Further embodiments of the present disclosure include the following examples.
[0116] Example 1.1. A user equipment apparatus comprising: at least one processor; and at least one memory storing instructions which, when executed by the processor, cause the user equipment apparatus at least to perform: determining, based on a processing-offloading subscription, processing to be offloaded to a network; and transmitting, to a network apparatus coupled with the network, a request to offload the processing to at least one host of the network.
[0117] Example 1.2. The user equipment apparatus of Example 1.1, wherein the processing-offloading subscription comprises at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
[0118] Example 1.3. The user equipment apparatus of Example 1.1 or Example 1.2, wherein the processing to be offloaded to the network comprises rendering for an extended reality application.
[0119] Example 1.4. The user equipment apparatus of any of the preceding Examples, wherein the processing to be offloaded to the network comprises executing at least a portion of a processing model.
[0120] Example 1.5. The user equipment apparatus of Example 1.4, wherein the processing model is a trained machine learning model.
[0121] Example 2.1. A method comprising: determining, based on a processing-offloading subscription, processing to be offloaded to a network; and transmitting, to a network apparatus coupled with the network, a request to offload the processing to at least one host of the network.
[0122] Example 2.2. The method of Example 2.1, wherein the processing-offloading subscription comprises at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
[0123] Example 2.3. The method of Example 2.1 or Example 2.2, wherein the processing to be offloaded to the network comprises rendering for an extended reality application.
[0124] Example 2.4. The method of any one of the preceding Examples 2.1-2.3, wherein the processing to be offloaded to the network comprises executing at least a portion of a processing model.
[0125] Example 2.5. The method of Example 2.4, wherein the processing model is a trained machine learning model.
[0126] Example 3.1. A user equipment apparatus comprising: means for determining, based on a processing-offloading subscription, processing to be offloaded to a network; and means for transmitting, to a network apparatus coupled with the network, a request to offload the processing to at least one host of the network.
[0127] Example 3.2. The user equipment apparatus of Example 3.1 wherein the processing-offloading subscription comprises at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
[0128] Example 3.3. The user equipment apparatus of Example 3.1 or Example 3.2, wherein the processing to be offloaded to the network comprises rendering for an extended reality application.
[0129] Example 3.4. The user equipment apparatus of any one of the preceding Examples 3.1-3.3, wherein the processing to be offloaded to the network comprises executing at least a portion of a processing model.
[0130] Example 3.5. The user equipment apparatus of Example 3.4, wherein the processing model is a trained machine learning model.
[0131] Example 4.1. An apparatus comprising: means for determining, based at least on a processing-offloading subscription for a UE and an energy efficient offloading policy, at least one candidate energy efficient host for handling processing to be offloaded by the UE; and means for transmitting a respective identifier of the at least one candidate energy efficient host to a network function of a network, the network function configured to cause offloading of the processing to the network.
[0132] Example 4.2. The apparatus of Example 4.1, wherein the processing-offloading subscription for the UE comprises at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
[0133] Example 4.3. The apparatus of Example 4.1 or Example 4.2, wherein the processing to be offloaded to the network from the UE comprises processing of data of an extended reality application for rendering at the UE.
[0134] Example 4.4. The apparatus of any one of the preceding Examples 4.1-4.3, wherein the energy efficient offloading policy comprises at least one of the following: a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy efficiency criterion, a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy cost criterion, or a policy that the processing shall be assigned to hosts that are powered by a specified type of energy.
[0135] Example 4.5. The apparatus of Example 4.3 or Example 4.4, wherein the energy efficient offloading policy comprises: a policy that the processing shall be assigned to the at least one candidate host from among a plurality of candidate hosts in a manner that fulfills requirements of the processing even when one or more of the plurality of candidate hosts are powered off or are scaled down in processing resources.
[0136] Example 4.6. The apparatus of any one of the preceding Examples 4.1-4.5 further comprising: means for determining, based on processing-offloading subscriptions of a plurality of user equipment apparatuses (UEs), priority levels of the plurality of UEs for offloading of processing, wherein the plurality of UEs comprises the UE; and means for determining, based on the priority levels of the plurality of UEs, that the UE is prioritized.
[0137] Example 4.7. The apparatus of any one of the preceding Examples 4.1-4.6, further comprising: means for obtaining an analysis of mobility of the UE, the analysis comprising a predicted location of the UE, wherein the determining the at least one candidate energy efficient host is further based on the predicted location of the UE, and wherein at least one component of the at least one candidate energy efficient host is located in a region that comprises the predicted location of the UE.
[0138] Example 4.8. The apparatus of Example 4.7, wherein the at least one component of the at least one candidate energy efficient host comprises at least one of the following: a network function, a network node, or a radio access network (RAN) logical entity.
[0139] Example 4.9. The apparatus of any one of the preceding Examples 4.1-4.8, further comprising: means for receiving feedback data regarding whether an offload of the processing to the at least one candidate energy efficient host fulfilled the energy efficient offloading policy.
[0140] Example 4.10. The apparatus of any one of the preceding Examples 4.1-4.9, wherein the processing to be offloaded to the network comprises executing at least a portion of a processing model.
[0141] Example 4.11. The apparatus of Example 4.10, further comprising: means for transmitting, to a processing model repository of the network, information on the processing model; and means for receiving, from the processing model repository, one or more options for splitting execution of the processing model among a plurality of hosts of the network.
[0142] Example 4.12. The apparatus of Example 4.11, further comprising: means for selecting one option of the one or more options for splitting the execution of the processing model; and means for transmitting, to the network function configured to cause the offloading, the selected one option for splitting execution of the processing model.
[0143] Example 4.13. The apparatus of any of Examples 4.10-4.12, wherein the processing model is a trained machine learning model.
[0144] The embodiments and aspects disclosed herein are examples of the present disclosure and may be embodied in various forms. For instance, although certain embodiments herein are described as separate embodiments, each of the embodiments herein may be combined with one or more of the other embodiments herein. Specific structural and functional details disclosed herein are not to be interpreted as limiting, but as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure. Like reference numerals may refer to similar or identical elements throughout the description of the figures.
[0145] The phrases "in an aspect," "in aspects," "in various aspects," "in some aspects," or "in other aspects" may each refer to one or more of the same or different aspects in accordance with the present disclosure. The phrase "a plurality of" may refer to two or more.
[0146] The phrases "in an embodiment," "in embodiments," "in various embodiments," "in some embodiments," or "in other embodiments" may each refer to one or more of the same or different embodiments in accordance with the present disclosure. A phrase in the form "A or B" means "(A), (B), or (A and B)." A phrase in the form "at least one of A, B, or C" means "(A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C)."
[0147] Any of the herein described methods, programs, algorithms or codes may be converted to, or expressed in, a programming language or computer program. The terms "programming language" and "computer program," as used herein, each include any language used to specify instructions to a computer, and include (but are not limited to) the following languages and their derivatives: Assembler, Basic, Batch files, BCPL, C, C+, C++, Delphi, Fortran, Java, JavaScript, machine code, operating system command languages, Pascal, Perl, PL1, Python, scripting languages, Visual Basic, metalanguages which themselves specify programs, and all first, second, third, fourth, fifth, or further generation computer languages. Also included are database and other data schemas, and any other metalanguages. No distinction is made between languages which are interpreted, compiled, or use both compiled and interpreted approaches. No distinction is made between compiled and source versions of a program. Thus, reference to a program, where the programming language could exist in more than one state (such as source, compiled, object, or linked), is a reference to any and all such states. Reference to a program may encompass the actual instructions and/or the intent of those instructions.
[0148] While aspects of the present disclosure have been shown in the drawings, it is not intended that the present disclosure be limited thereto, as it is intended that the present disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular aspects. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.
Claims (26)
- WHAT IS CLAIMED IS: 1. An apparatus comprising: at least one processor; and at least one memory storing instructions which, when executed by the processor, cause the apparatus at least to perform: determining, based at least on a processing-offloading subscription for a UE and an energy efficient offloading policy, at least one candidate energy efficient host for handling processing to be offloaded by the UE; and transmitting a respective identifier of the at least one candidate energy efficient host to a network function of a network, the network function configured to cause offloading of the processing to the network.
- 2. The apparatus of claim 1, wherein the processing-offloading subscription for the UE comprises at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
- 3. The apparatus of claim 1 or claim 2, wherein the processing to be offloaded to the network from the UE comprises processing of data of an extended reality application for rendering at the UE.
- 4. The apparatus of any of the preceding claims, wherein the energy efficient offloading policy comprises at least one of the following: a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy efficiency criterion, a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy cost criterion, or a policy that the processing shall be assigned to hosts that are powered by a specified type of energy.
- 5. The apparatus of claim 3 or claim 4, wherein the energy efficient offloading policy comprises: a policy that the processing shall be assigned to the at least one candidate host from among a plurality of candidate hosts in a manner that fulfills requirements of the processing even when one or more of the plurality of candidate hosts are powered off or are scaled down in processing resources.
- 6. The apparatus of any of the preceding claims, wherein the instructions, when executed by the processor, further cause the apparatus at least to perform: determining, based on processing-offloading subscriptions of a plurality of user equipment apparatuses (UEs), priority levels of the plurality of UEs for offloading of processing, wherein the plurality of UEs comprises the UE; and determining, based on the priority levels of the plurality of UEs, that the UE is prioritized.
- 7. The apparatus of any of the preceding claims, wherein the instructions, when executed by the processor, further cause the apparatus at least to perform: obtaining an analysis of mobility of the UE, the analysis comprising a predicted location of the UE, wherein the determining the at least one candidate energy efficient host is further based on the predicted location of the UE, and wherein at least one component of the at least one candidate energy efficient host is located in a region that comprises the predicted location of the UE.
- 8. The apparatus of claim 7, wherein the at least one component of the at least one candidate energy efficient host comprises at least one of the following: a network function, a network node, or a radio access network (RAN) logical entity.
- 9. The apparatus of any of the preceding claims, wherein the instructions, when executed by the processor, further cause the apparatus at least to perform: receiving feedback data regarding whether an offload of the processing to the at least one candidate energy efficient host fulfilled the energy efficient offloading policy.
- 10. The apparatus of any of the preceding claims, wherein the processing to be offloaded to the network comprises executing at least a portion of a processing model.
- 11. The apparatus of claim 10, wherein the instructions, when executed by the processor, further cause the apparatus at least to perform: transmitting, to a processing model repository of the network, information on the processing model; and receiving, from the processing model repository, one or more options for splitting execution of the processing model among a plurality of hosts of the network.
- 12. The apparatus of claim 11, wherein the instructions, when executed by the processor, further cause the apparatus at least to perform: selecting one option of the one or more options for splitting the execution of the processing model; and transmitting, to the network function configured to cause the offloading, the selected one option for splitting execution of the processing model.
- 13. The apparatus of any of claims 10-12, wherein the processing model is a trained machine learning model.
- 14. A method comprising: determining, based at least on a processing-offloading subscription for a UE and an energy efficient offloading policy, at least one candidate energy efficient host for handling processing to be offloaded by the UE; and transmitting a respective identifier of the at least one candidate energy efficient host to a network function of a network, the network function configured to cause offloading of the processing to the network.
- 15. The method of claim 14, wherein the processing-offloading subscription for the UE comprises at least one of the following: a guaranteed processing rate subscription, a maximum processing rate subscription, a maximum processing volume subscription, or an offloading charging subscription.
- 16. The method of claim 14 or claim 15, wherein the processing to be offloaded to the network from the UE comprises processing of data of an extended reality application for rendering at the UE.
- 17. The method of any one of claims 14-16, wherein the energy efficient offloading policy comprises at least one of the following: a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy efficiency criterion, a policy that the processing shall be offloaded to at least one candidate host that satisfies a specified energy cost criterion, or a policy that the processing shall be assigned to hosts that are powered by a specified type of energy.
- 18. The method of claim 16 or claim 17, wherein the energy efficient offloading policy comprises: a policy that the processing shall be assigned to the at least one candidate host from among a plurality of candidate hosts in a manner that fulfills requirements of the processing even when one or more of the plurality of candidate hosts are powered off or are scaled down in processing resources.
- 19. The method of any one of claims 14-18, further comprising: determining, based on processing-offloading subscriptions of a plurality of user equipment apparatuses (UEs), priority levels of the plurality of UEs for offloading of processing, wherein the plurality of UEs comprises the UE; and determining, based on the priority levels of the plurality of UEs, that the UE is prioritized.
- 20. The method of any one of claims 14-19, further comprising: obtaining an analysis of mobility of the UE, the analysis comprising a predicted location of the UE, wherein the determining the at least one candidate energy efficient host is further based on the predicted location of the UE, and wherein at least one component of the at least one candidate energy efficient host is located in a region that comprises the predicted location of the UE.
- 21. The method of claim 20, wherein the at least one component of the at least one candidate energy efficient host comprises at least one of the following: a network function, a network node, or a radio access network (RAN) logical entity.
- 22. The method of any one of claims 14-21, further comprising: receiving feedback data regarding whether an offload of the processing to the at least one candidate energy efficient host fulfilled the energy efficient offloading policy.
- 23. The method of any one of claims 14-22, wherein the processing to be offloaded to the network comprises executing at least a portion of a processing model.
- 24. The method of claim 23, further comprising: transmitting, to a processing model repository of the network, information on the processing model; and receiving, from the processing model repository, one or more options for splitting execution of the processing model among a plurality of hosts of the network.
- 25. The method of claim 24, further comprising: selecting one option of the one or more options for splitting the execution of the processing model; and transmitting, to the network function configured to cause the offloading, the selected one option for splitting execution of the processing model.
- 26. The method of any one of claims 23-25, wherein the processing model is a trained machine learning model.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2308930.3A GB2630961A (en) | 2023-06-15 | 2023-06-15 | Offloading of processing to a network according to a subscription |
| CN202480034747.0A CN121368884A (en) | 2023-06-15 | 2024-04-24 | Offloading processing to a network according to subscriptions |
| PCT/EP2024/061212 WO2024256074A1 (en) | 2023-06-15 | 2024-04-24 | Offloading of processing to a network according to a subscription |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2308930.3A GB2630961A (en) | 2023-06-15 | 2023-06-15 | Offloading of processing to a network according to a subscription |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| GB2630961A true GB2630961A (en) | 2024-12-18 |
Family
ID=90829408
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2308930.3A Pending GB2630961A (en) | 2023-06-15 | 2023-06-15 | Offloading of processing to a network according to a subscription |
Country Status (3)
| Country | Link |
|---|---|
| CN (1) | CN121368884A (en) |
| GB (1) | GB2630961A (en) |
| WO (1) | WO2024256074A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022031555A1 (en) * | 2020-08-03 | 2022-02-10 | Intel Corporation | Compute offload services in 6g systems |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017099548A1 (en) * | 2015-12-11 | 2017-06-15 | Lg Electronics Inc. | Method and apparatus for indicating an offloading data size and time duration in a wireless communication system |
- 2023-06-15: GB application GB2308930.3A filed; published as GB2630961A (active, status: Pending)
- 2024-04-24: CN application CN202480034747.0A filed; published as CN121368884A (active, status: Pending)
- 2024-04-24: PCT application PCT/EP2024/061212 filed; published as WO2024256074A1 (active, status: Pending)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022031555A1 (en) * | 2020-08-03 | 2022-02-10 | Intel Corporation | Compute offload services in 6g systems |
Non-Patent Citations (4)
| Title |
|---|
| 2021 10th International Conference on Information and Automation for Sustainability (ICIAfS), "Application based energy optimization for computation offloading in hierarchical MEC network", Thananjeyan S et al. * |
| IEEE Access, October 2019, "A deep learning approach for energy efficient computational offloading in mobile edge computing", Zaiwar Ali et al. * |
| IEEE Communications and Surveys & Tutorials, vol. 24, no. 4, Fourth quarter 2022, "Machine and deep learning for resource allocation in multi-access edge computing: A survey", Hamza Djigal et al. * |
| IEEE Transactions on Mobile Computing, vol. 20, No. 3, March 2021, "Joint resource allocation for device-to-device communication assisted fog computing", Changyan Yi et al. * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024256074A1 (en) | 2024-12-19 |
| CN121368884A (en) | 2026-01-20 |