
WO2025176384A1 - Machine learning task transfer - Google Patents

Machine learning task transfer

Info

Publication number
WO2025176384A1
Authority
WO
WIPO (PCT)
Prior art keywords
energy
processor
machine learning
entity
cause
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/050956
Other languages
French (fr)
Inventor
Emmanouil Pateromichelakis
Dimitrios Karampatsis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo International Cooperatief UA
Original Assignee
Lenovo International Cooperatief UA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo International Cooperatief UA filed Critical Lenovo International Cooperatief UA
Publication of WO2025176384A1 publication Critical patent/WO2025176384A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/906 Clustering; Classification
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/00 Computing arrangements based on biological models

Definitions

  • the present disclosure relates to wireless communications, and more specifically to machine learning task transfer.
  • a wireless communications system may include one or multiple network communication devices, such as base stations, which may support wireless communications for one or multiple user communication devices, which may be otherwise known as user equipment (UE), or other suitable terminology.
  • the wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communication system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers, or the like)).
  • the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G)).
  • the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Further, as used herein, including in the claims, a “set” may include one or more elements.
  • Some implementations of the method and apparatuses described herein may further include a user equipment (UE) for wireless communication, comprising at least one memory and at least one processor coupled with the at least one memory and configured to cause the UE to commence execution of a machine learning operation, detect a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
  • the trigger event may indicate that an energy usage indicator has satisfied, is expected to satisfy, or is predicted to satisfy a predetermined threshold during execution of the machine learning operation.
  • the at least one processor can be further configured to cause the UE to transmit information indicating the trigger event to a network entity to initiate the transfer, and receive, from the network entity, a task transfer notification comprising an indication of the target entity.
  • the at least one processor can be further configured to cause the UE to obtain energy status information of one or more entities comprising the target entity, rank the one or more entities based on the energy status information, and select the target entity from the one or more entities based on the rank.
  • the at least one processor can be further configured to cause the UE to send, to the one or more entities, a query related to energy capability information for the one or more entities, and receive the energy status information from the one or more entities in response to the query.
  • the at least one processor can be further configured to cause the UE to update a repository (e.g. the ML repository) to indicate that the UE is unavailable to perform machine learning operations.
  • the at least one processor can be further configured to cause the UE to monitor the energy usage indicator at the UE.
  • the energy usage indicator may comprise one of an energy credit level, an energy budget, a battery status, an application traffic schedule, or a pattern from an application, for example.
  • the pattern (or traffic pattern) can be similar to a traffic schedule, including the type of traffic (e.g. bursty traffic).
  • a traffic pattern may also be used in cases of discontinuous transmission / reception based on UE or operator policies.
  • the at least one processor can be further configured to cause the UE to detect the trigger event based on retrieving online energy status information from a charging function.
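As a non-normative illustration of the trigger detection described in the preceding bullets, the following sketch (all names hypothetical, not drawn from the disclosure) checks whether an energy usage indicator such as the energy credit level has satisfied, or is predicted to satisfy, a predetermined threshold before the ML operation completes:

```python
# Illustrative sketch only: detecting a trigger event when an energy usage
# indicator satisfies, or is predicted to satisfy, a predetermined threshold.

def detect_trigger_event(credit_level: float,
                         predicted_drain_per_step: float,
                         remaining_steps: int,
                         credit_limit: float) -> bool:
    """Return True if the energy credit limit is already satisfied or is
    predicted to be reached before the ML operation completes."""
    if credit_level <= credit_limit:            # threshold already satisfied
        return True
    predicted_level = credit_level - predicted_drain_per_step * remaining_steps
    return predicted_level <= credit_limit      # predicted to satisfy threshold

# e.g. 100 credits left, 2 credits per training step, 60 steps remaining,
# limit of 10 credits: the transfer should be triggered.
assert detect_trigger_event(100.0, 2.0, 60, 10.0) is True
assert detect_trigger_event(100.0, 0.5, 60, 10.0) is False
```

A real implementation would obtain the drain estimate and remaining-step count from the ML framework or from online energy status information (e.g. a charging function), as described above.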
  • Some implementations of the method and apparatuses described herein may further include a processor for wireless communication, comprising at least one controller coupled with at least one memory and configured to cause the processor to commence execution of a machine learning operation, detect a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
  • the at least one controller can be further configured to cause the processor to provide information indicating the trigger event to a network entity to initiate the transfer, and obtain, from the network entity, a task transfer notification comprising an indication of the target entity.
  • the at least one controller can be further configured to cause the processor to obtain energy status information of one or more entities comprising the target entity, rank the one or more entities based on the energy status information, and select the target entity from the one or more entities based on the rank.
  • Some implementations of the method and apparatuses described herein may further include a method performed by a user equipment (UE), the method comprising commencing execution of a machine learning operation, detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
  • the method may further comprise transmitting information indicating the trigger event to a network entity to initiate the transfer, and receiving, from the network entity, a task transfer notification comprising an indication of the target entity.
  • the method may further comprise obtaining energy status information of one or more entities comprising the target entity, ranking the one or more entities based on the energy status information, and selecting the target entity from the one or more entities based on the rank.
  • the trigger event may indicate that an energy usage indicator has satisfied, is expected to satisfy, or is predicted to satisfy a predetermined threshold during execution of the machine learning operation.
  • the at least one processor can be further configured to cause the network entity to select the target entity by retrieving and comparing energy status information of a plurality of candidate entities.
  • the at least one processor can be further configured to cause the network entity to retrieve the energy status information of at least one of the plurality of candidate entities from a repository.
  • the at least one processor can be further configured to cause the network entity to update a repository to indicate that the entity is unavailable to perform machine learning operations.
  • Figure 1 illustrates an example of a wireless communications system in accordance with aspects of the present disclosure.
  • Figure 2 illustrates an on-network AIMLE functional model of AIML enablement.
  • Figure 3 illustrates a process for transferring an ML operation.
  • Figure 4 illustrates another process for transferring an ML operation.
  • Figure 5 illustrates an example of a user equipment (UE) 500 in accordance with aspects of the present disclosure.
  • Figure 6 illustrates an example of a processor 600 in accordance with aspects of the present disclosure.
  • Figure 7 illustrates an example of a network equipment (NE) 700 in accordance with aspects of the present disclosure.
  • Figure 8 illustrates a flowchart of a method performed by a UE in accordance with aspects of the present disclosure.
  • a wireless communications system including one or more communication devices may be enabled (e.g. configured) to support machine learning (ML), and more generally artificial intelligence (AI) (referred to collectively as AIML), for various applications or services associated with the wireless communications system.
  • an AI/ML member (e.g., an AIMLE client or a VAL client) is configured to perform or participate in the performance of an AI/ML task (also referred to as an “ML process” herein).
  • the AI/ML member (the source AI/ML member) can transfer the intermediate AI/ML information (e.g., the intermediate AI/ML operation status and results) to a target AI/ML member, i.e., another AI/ML member, for further operations to complete the AI/ML task.
  • a problem is how to efficiently facilitate offloading of an ongoing AI/ML operation from a UE (e.g. a UE comprising the source AI/ML client) to another application entity (at edge, cloud or another UE) using energy criteria, and in particular an energy usage indicator such as the energy efficiency, energy consumption, energy budget or energy credit level, while ensuring that the AI/ML operation performance requirements are met.
  • An AI/ML task transfer relates to an AI/ML task (also referred to as AI/ML operation or process), such as the ML model training or inference which is running and ongoing.
  • during an AI/ML operation, the entity performing the operation may identify that the operation needs to migrate to another entity, in particular another VAL UE or an edge/cloud application (an AIML Enablement (AIMLE) server, a VAL server, or an Edge Application Server (EAS)).
  • the AI/ML task transfer may originate from an AI/ML member (referred to as the source AIML member), which can be an AIMLE client (at a source VAL UE) that identifies that the task cannot be completed.
  • One key reason may be the energy status of the VAL UE, which may run out of battery due to the high energy consumption of the AI operation, or the expectation or prediction that this may happen.
  • the training aspect of an AI/ML operation may be particularly energy intensive. High energy consumption may be detected by monitoring an energy usage indicator such as the energy credit level of the respective UE. A low remaining credit means a higher probability of disruption of the AI operation if it continues at the source AI/ML member.
  • a VAL UE (denoted as the source AI/ML member) commences an ML operation, which can be ML model training or ML model inference.
  • the source AI/ML member (e.g. an AIMLE client) can detect a trigger event related to the energy usage indicator, e.g. the energy credit level.
  • the source AI/ML member can send event trigger information to an AIMLE server for transferring the (incomplete) ML operation.
  • the AIMLE server can authorize the request and check the capabilities of other available entities, such as AIMLE clients or edge AIMLE servers, that could potentially undertake the incomplete task.
  • the AIMLE server may also check the energy information (e.g. an energy profile) of one or more of the VAL UEs with the respective AIMLE clients, and may rate them based on the performance and energy cost trade-off.
  • the AIMLE server can cause an AI/ML task transfer from the source VAL UE to another entity (either VAL UE or another AIMLE/VAL server) that can undertake the operation taking associated energy status information into account.
  • the energy status information of the plurality of candidate entities may comprise an energy usage indicator and/or an energy usage threshold.
  • the energy usage indicator may relate to consumption and/or energy efficiency and/or an energy credit level or an energy credit status. As explained in more detail below an energy credit level is merely an example of an energy usage indicator. In other implementations, the energy usage indicator may be an energy usage parameter, an energy efficiency parameter, or an energy consumption parameter, and the UE (or an application running on the UE) is associated with a corresponding energy usage threshold, limit or target.
  • the AIMLE server can then remove or suspend or mark as unavailable the source VAL UE. This may include sending a notification to an ML repository to update the energy usage indicator or an energy status for the VAL UE in a register of entities configured for AI/ML operations.
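The AIMLE-server-side steps above (checking candidate entities, rating them on a performance and energy cost trade-off, and selecting a target) can be sketched as follows. This is an illustrative assumption about one possible scoring scheme, not the claimed implementation, and all identifiers are hypothetical:

```python
# Hypothetical sketch of server-side target selection for an ML task transfer.
from dataclasses import dataclass

@dataclass
class Candidate:
    entity_id: str
    energy_credit_level: float   # remaining energy budget of the entity
    performance_score: float     # capability to run the ML operation

def select_target(candidates, min_credit: float, energy_weight: float = 0.5):
    """Filter out entities below the energy threshold, then rank the rest
    by a weighted performance / energy trade-off (higher is better)."""
    eligible = [c for c in candidates if c.energy_credit_level >= min_credit]
    if not eligible:
        return None   # no entity can undertake the transferred operation
    return max(eligible,
               key=lambda c: (1 - energy_weight) * c.performance_score
                             + energy_weight * c.energy_credit_level)

cands = [Candidate("ue-2", 20.0, 0.9),
         Candidate("edge-aimle-1", 80.0, 0.7),
         Candidate("ue-3", 5.0, 1.0)]      # below threshold, excluded
target = select_target(cands, min_credit=10.0)
assert target.entity_id == "edge-aimle-1"
```

After selection, the server would notify the target entity and update the ML repository to mark the source VAL UE as unavailable, as described above.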
  • Embodiments of the present disclosure are described in the context of a wireless communications system.
  • FIG. 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure.
  • the wireless communications system 100 may include one or more NE 102, one or more UE 104, and a core network (CN) 106.
  • the wireless communications system 100 may support various radio access technologies.
  • the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-Advanced (LTE-A) network.
  • the wireless communications system 100 may be a NR network, such as a 5G network, a 5G-Advanced (5G-A) network, or a 5G ultrawideband (5G-UWB) network.
  • the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20.
  • The wireless communications system 100 may support radio access technologies beyond 5G, for example, 6G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc.
  • the one or more NE 102 may be dispersed throughout a geographic region to form the wireless communications system 100.
  • One or more of the NE 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a network function, a network entity, a radio access network (RAN), a NodeB, an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology.
  • An NE 102 and a UE 104 may communicate via a communication link, which may be a wireless or wired connection.
  • an NE 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
  • An NE 102 may provide a geographic coverage area for which the NE 102 may support services for one or more UEs 104 within the geographic coverage area.
  • an NE 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies.
  • an NE 102 may be moveable, for example, a satellite associated with a non-terrestrial network (NTN).
  • different geographic coverage areas associated with the same or different radio access technologies may overlap, but the different geographic coverage areas may be associated with different NE 102.
  • the one or more UE 104 may be dispersed throughout a geographic region of the wireless communications system 100.
  • a UE 104 may include or may be referred to as a remote unit, a mobile device, a wireless device, a remote device, a subscriber device, a transmitter device, a receiver device, or some other suitable terminology.
  • the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples.
  • the UE 104 may be referred to as an Internet-of-Things (IoT) device, an Internet-of-Everything (IoE) device, or a machine-type communication (MTC) device, among other examples.
  • a UE 104 may be able to support wireless communication directly with other UEs
  • a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link.
  • the communication link may be referred to as a sidelink.
  • a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
  • An NE 102 may support communications with the CN 106, or with another NE 102, or both.
  • an NE 102 may interface with other NE 102 or the CN 106 through one or more backhaul links (e.g., S1, N2, or another network interface).
  • the NE 102 may communicate with each other directly or indirectly (e.g., via the CN 106).
  • one or more NE 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC).
  • An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs).
  • the CN 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions.
  • the CN 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)).
  • the control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more NE 102 associated with the CN 106.
  • the CN 106 may communicate with a packet data network over one or more backhaul links (e.g., via an S1, N2, or another network interface).
  • the packet data network may include an application server.
  • one or more UEs 104 may communicate with the application server.
  • a UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the CN 106 via an NE 102.
  • the CN 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server using the established session (e.g., the established PDU session).
  • the PDU session may be an example of a logical connection between the UE 104 and the CN 106 (e.g., one or more network functions of the CN 106).
  • the NEs 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications).
  • the NEs 102 and the UEs 104 may support different resource structures.
  • the NEs 102 and the UEs 104 may support different frame structures.
  • the NEs 102 and the UEs 104 may support a single frame structure.
  • the NEs 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures).
  • the NEs 102 and the UEs 104 may support various frame structures based on one or more numerologies.
  • One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix.
  • For example, a first subcarrier spacing (e.g., 15 kHz) may be associated with a normal cyclic prefix.
  • a time interval of a resource may be organized according to frames (also referred to as radio frames).
  • Each frame may have a duration, for example, a 10 millisecond (ms) duration.
  • each frame may include multiple subframes.
  • each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration.
  • each frame may have the same duration.
  • each subframe of a frame may have the same duration.
  • a time interval of a resource may be organized according to slots.
  • a subframe may include a number (e.g., quantity) of slots.
  • the number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
  • Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols).
  • the number (e.g., quantity) of slots for a subframe may depend on a numerology.
  • For a normal cyclic prefix, a slot may include 14 symbols.
  • For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing), a slot may include 12 symbols.
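The numerology relationships above (with a 1 ms subframe, a subcarrier spacing of 15 * 2^mu kHz yields 2^mu slots per subframe, and a slot carries 14 or 12 symbols depending on the cyclic prefix) can be expressed as a small sketch:

```python
# Illustrative sketch of the numerology relationships described above.

def slots_per_subframe(subcarrier_spacing_khz: int) -> int:
    """With a 1 ms subframe, a subcarrier spacing of 15 * 2**mu kHz
    yields 2**mu slots per subframe."""
    mu = (subcarrier_spacing_khz // 15).bit_length() - 1
    assert subcarrier_spacing_khz == 15 * 2**mu, "unsupported spacing"
    return 2**mu

def symbols_per_slot(cyclic_prefix: str) -> int:
    """A slot includes 14 symbols for a normal cyclic prefix and 12 symbols
    for an extended cyclic prefix (applicable for 60 kHz spacing)."""
    return 14 if cyclic_prefix == "normal" else 12

assert slots_per_subframe(15) == 1     # first subcarrier spacing
assert slots_per_subframe(60) == 4
assert symbols_per_slot("normal") == 14
assert symbols_per_slot("extended") == 12
```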
  • an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc.
  • the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz - 7.125 GHz), FR2 (24.25 GHz - 52.6 GHz), FR3 (7.125 GHz - 24.25 GHz), FR4 (52.6 GHz - 114.25 GHz), FR4a or FR4-1 (52.6 GHz - 71 GHz), and FR5 (114.25 GHz - 300 GHz).
  • the NEs 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
  • FR1 may be used by the NEs 102 and the UEs 104, among other equipment or devices, for cellular communications traffic (e.g., control information, data).
  • FR2 may be used by the NEs 102 and the UEs 104, among other equipment or devices, for short-range, high data rate capabilities.
  • FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies).
  • FR2 may be associated with one or multiple numerologies (e.g., at least two numerologies).
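The frequency range designations listed above can be summarized in a small lookup sketch; the handling of shared band edges (e.g., 7.125 GHz between FR1 and FR3) is an illustrative choice:

```python
# Sketch mapping a carrier frequency to the FR designations listed above
# (boundaries in GHz; FR4a / FR4-1 covers the lower part of FR4).

def frequency_range(freq_ghz: float) -> str:
    if 0.410 <= freq_ghz <= 7.125:
        return "FR1"
    if 7.125 < freq_ghz < 24.25:
        return "FR3"
    if 24.25 <= freq_ghz <= 52.6:
        return "FR2"
    if 52.6 < freq_ghz <= 114.25:
        return "FR4"   # 52.6 - 71 GHz is also designated FR4a / FR4-1
    if 114.25 < freq_ghz <= 300.0:
        return "FR5"
    raise ValueError("outside the listed operating frequency bands")

assert frequency_range(3.5) == "FR1"    # a typical mid-band carrier
assert frequency_range(28.0) == "FR2"
assert frequency_range(140.0) == "FR5"
```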
  • 3GPP SA6 is the application enablement and critical communications applications group for vertical markets.
  • the main objective of SA6 is to provide application layer architecture specifications for 3GPP verticals, including architecture requirements and functional architecture for supporting the integration of verticals into 3GPP systems.
  • enablers for vertical applications (e.g., automotive)
  • service frameworks (e.g. the Common API Framework, the Service Enabler Architecture Layer (SEAL), and Edge Application enablement)
  • The Application Data Analytics Enablement Service (ADAES), studied in 3GPP TR 23.700-36, is an enablement service (which can be part of SEAL) providing new potential application data analytics services (stats/predictions) to optimize the application service operation by notifying the application specific layer, and potentially the 5GS, of expected/predicted application service parameter changes, considering both on-network and off-network deployments (e.g., related to application QoS parameters).
  • The AIMLE functional model specified in 3GPP TS 23.482 and TS 23.434 is illustrated in Figure 2.
  • Figure 2 illustrates an on-network AIMLE functional model of AIML enablement.
  • a UE 104 may comprise a UE modem and one or more of the following functionalities: an application client (e.g. a VAL client), an application enablement client, an edge enablement client, a SEAL client (e.g. an AIMLE client), a vertical application.
  • the UE 104 shown in Figure 2 comprises a vertical application layer (VAL) client 202, a SEAL client in the form of an AIMLE client 204, and a UE modem (not shown in Figure 2), and therefore may be termed a VAL UE.
  • the VAL client 202 is a vertical application client, for example an IoT application or a V2X application.
  • the VAL client 202 communicates with the VAL server 206 over the VAL-UU reference point.
  • VAL-UU supports both unicast and multicast delivery modes.
  • the AIMLE functional entities on the UE 104 and the server are grouped into AIMLE client(s) 204 and AIMLE server(s) 208 respectively.
  • the AIMLE server 208 is a type of SEAL server which includes a common set of services for comprehensive enablement of AIML functionality.
  • the AIMLE server 208 defines or otherwise supports the following group of capabilities:
  • HFL/VFL operations including FL member registration, FL grouping and FL-related events notification, VFL feature alignment, HFL training.
  • the AIMLE client 204 communicates with the AIMLE server(s) 208 over one or more AIML-UU reference points.
  • the AIMLE client 204 provides functionality to the VAL client(s) 202 over the AIML-C reference point.
  • the VAL server(s) 206 communicate with the AIMLE server(s) 208 over AIML-S reference points.
  • the AIMLE server(s) 208 communicate with the underlying 3GPP network systems using the respective 3GPP interfaces specified by the 3GPP network system.
  • the AIML-E reference point enables interactions between two AIMLE servers (e.g. central and edge AIMLE servers).
  • the AIMLE client 204 is a functional entity which acts as an application client supporting AIMLE services.
  • the AIMLE server 208 interacts with a ML repository 210 which serves as (i) a registry for ML/FL members (e.g. application layer entities participating in an AI/ML operation) and (ii) as a repository for application layer ML model related information.
  • a UE monitors an energy usage indicator and based on this monitoring detects a trigger event which indicates that an energy usage indicator has satisfied (e.g. reached), is expected to satisfy, or is predicted to satisfy a predetermined threshold (e.g. an energy usage limit) during execution of the ML operation by a module on the UE.
  • An example of an energy usage indicator is an energy credit level.
  • Energy related issues are considered in the 5G core as part of TR 23.700-66, which identifies enhancements including network energy related information exposure, subscription, and policy control to enable energy as a service criterion, to improve energy efficiency, and to support energy saving in the network. Energy enhancements also consider the use of renewable energy and control of carbon dioxide emissions. Energy as a serving criterion can be applied at different granularities, including UE level, PDU session, QoS flow or application, slice, service, and network function (NF).
  • the energy credit level may be or may be representative of a quantity of credit associated with a subscriber that can be used for credit control by the 5G system.
  • energy credit can be associated with the following five concepts related to new energy events and energy event monitoring: a) the ability for the network operator to create a 'maximum energy credit' policy, after which services are gated, b) the ability for the network operator to inform an AS of the 'maximum energy credit expired' event, c) the ability for the 5G system to calculate 'energy credit' use, d) the ability to monitor and provide to the AS the use of 'energy credits' (or other energy 'quantum'), e) support for a new policy that establishes the energy consequence for charging control, either charging for use of energy or establishing an 'energy credit limit' for enforcement by the 5G system.
  • Energy credit control relates to comparing an energy credit level (indicating energy usage) against an energy credit limit (e.g. a threshold).
  • the result of energy credit control may include, e.g., gating, increased charging rates, data throttling, or change of QoS class, etc.
  • the energy credit limit may be associated with a UE by the means of subscription, i.e., as a maximum energy credit limit. Energy credit can also be introduced in the context of a network slice, i.e., per UE per DNN for S-NSSAI level.
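The credit control comparison described above, with example enforcement outcomes such as gating, throttling, or a QoS change, can be sketched as follows (the 90% throttling margin is an illustrative assumption, not taken from the disclosure):

```python
# Hypothetical illustration of energy credit control: compare the energy
# credit used against an energy credit limit and pick an enforcement action.

def credit_control(energy_used: float, credit_limit: float) -> str:
    if energy_used >= credit_limit:
        return "gate"          # services are gated once the limit is reached
    if energy_used >= 0.9 * credit_limit:
        return "throttle"      # e.g. data throttling or change of QoS class
    return "allow"

assert credit_control(120.0, 100.0) == "gate"
assert credit_control(95.0, 100.0) == "throttle"
assert credit_control(40.0, 100.0) == "allow"
```

In a deployment, the limit would come from the subscription or slice-level policy described above, and the enforcement result could also include increased charging rates.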
  • the energy credit limit (e.g. for an application or per UE) referred to herein may be defined in a similar manner as the term energy credit in the 5G core (e.g. as per TR 22.882 and TS 22.261); however, embodiments of the present disclosure are not limited to this definition, and the energy credit limit may be any form of energy usage allowance or budget for an application, or aggregately for an application service provider providing an application service for one UE, a group of UEs, or a given service area (e.g. a multiplayer VR game).
  • the energy credit limit can be coupled with the charging of the application for utilizing the mobile communications system capabilities and in particular the energy demand for the user plane and control plane capabilities involved with the application.
  • Such an energy credit limit may be configured by Service Level Agreement (SLA) or by the service agreement between the Mobile Network Operator (MNO) and the vertical / Application Service Provider (ASP).
  • an energy credit level is merely an example of an energy usage indicator.
  • the energy usage indicator may be an energy usage parameter, an energy efficiency parameter, or an energy consumption parameter and the UE (or an application running on the UE) is associated with a corresponding energy usage limit or target.
  • responsibility for execution of the (remainder) of the ML operation may be optimally offloaded from the source UE to another (target) UE based on a trigger event caused by energy usage of the source UE, while ensuring that the performance requirements of the ML operation are met.
  • the energy usage of a UE comprises the energy usage of the constituent functionalities of the UE (UE modem functionalities, application clients, etc.) as well as the communication with the network for supporting the operation of such functionalities.
  • Embodiments of the present disclosure relate to the detection of a trigger event caused by energy usage of a user equipment that is involved in execution of an ML operation, and the subsequent identification of one or more entities to offload the execution of this stage to.
  • the cause of the handover of the ML operation may be a trigger event comprising an energy usage indicator (of the user equipment, of an application running on the UE, or of the ML operation which the UE is executing) having reached, being expected to reach, or being predicted to reach, a predetermined threshold during execution of the ML operation.
• the information of the target AI/ML member 304 (e.g. another AIMLE Client or VAL Client different from the source AI/ML member 302) is unknown at the source AI/ML member 302.
  • the source AI/ML member 302 decides that AIMLE server based AI/ML task transfer is needed.
  • the AIMLE server 208 may be aware of the energy status information such as the credit limit or energy efficiency/consumption target for all AIMLE or VAL clients e.g. based on the stored information in the ML repository 210 or based on the registration of the AIMLE clients.
• the AIMLE client profile (as provided in the AIMLE client registration request, which is specified in 3GPP TS 23.482, clauses 8.7.2.2 and 8.7.2.3) may comprise further information.
  • An example of content of the AIMLE client profile is shown in Table 1 below.
  • the source AI/ML member 302 detects a trigger event caused by energy usage of the associated entity or application.
  • the trigger event may indicate that an energy usage indicator has satisfied (e.g. reached), is expected to satisfy, or is predicted to satisfy a predetermined threshold (e.g. an energy usage limit) during execution of the ML operation by the source AI/ML member 302.
  • Step S306 may be performed in a number of different ways.
• An energy usage indicator is referred to as being expected to satisfy (e.g. expected to reach) a predetermined threshold to mean that the energy usage indicator is anticipated to reach the predetermined threshold imminently, e.g. within a predetermined time period.
• an energy usage indicator is referred to as being predicted to satisfy (e.g. predicted to reach) a predetermined threshold to mean that an analytics function outputs a prediction, with a certain confidence level, that the threshold will be reached.
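• The three trigger conditions above (an indicator that has satisfied, is expected to satisfy, or is predicted to satisfy a threshold) may be sketched as follows; the linear extrapolation used for the "expected" case and the confidence cut-off for the "predicted" case are assumptions of this sketch, not requirements of the disclosure:

```python
from typing import Optional

def trigger_event(indicator: float, threshold: float,
                  rate_per_s: float = 0.0, horizon_s: float = 0.0,
                  predicted: Optional[float] = None, confidence: float = 0.0,
                  min_confidence: float = 0.9) -> Optional[str]:
    """Classify the trigger event for an energy usage indicator:
    'satisfied' - the indicator has already reached the threshold;
    'expected'  - extrapolating the current usage rate, the threshold is
                  anticipated within the given time horizon;
    'predicted' - an analytics function predicts the threshold will be
                  reached with sufficient confidence."""
    if indicator >= threshold:
        return "satisfied"
    if rate_per_s > 0 and indicator + rate_per_s * horizon_s >= threshold:
        return "expected"
    if predicted is not None and predicted >= threshold and confidence >= min_confidence:
        return "predicted"
    return None
```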
• the source AI/ML member 302 may detect the trigger event based on monitoring the energy status at the source entity (e.g. a source UE). This can be done by monitoring the battery status, by monitoring the application traffic schedule or application traffic pattern from an application (e.g. a VAL application), which can also be detected locally if the source AI/ML member 302 is used for the distribution of the application messages, or based on energy-related monitoring from e.g. the UE modem (up to implementation).
  • the detection of the trigger event by the source AI/ML member 302 at step S306 may be based on application layer AI/ML member capability Analytics (e.g. as described in TS 23.436 clause 8.16).
  • This step requires (i) the addition of energy criteria (e.g. an energy usage indicator and/or an energy usage threshold) per VAL UE or VAL/AIMLE client in ADAES analytics service; and (ii) AIMLE client (directly or via AIMLE server or via VAL client) to be a consumer of such analytics.
• the source AIML member 302 may alternatively or additionally detect the trigger event by fetching online energy status information, such as energy credit information, from a charging function in the operator’s charging domain (e.g. via the AIMLE server 208 or via the network). The interaction with the charging function may occur when the source AIML member 302 is deployed by the network operator. Alternatively, the source AIML member 302 may obtain the energy credit status from a charging domain of the service provider (e.g. platform provider, vertical).
  • the source AI/ML member 302 transmits information indicating the trigger event.
  • the source AI/ML member 302 sends an event trigger message to the AIMLE server 208.
  • the information transmitted to the AIMLE server 208 at step S306 may indicate:
• Energy status information may be comprised in the AIMLE client profile in the ML repository 210.
• the AIMLE server 208 can request energy status information from an energy monitoring function 212, such as the Energy Information Function (EIF) (or another energy monitoring function at operations, administration, and management (OAM) or the application enablement layer), to obtain the energy status information such as the energy credit level for VAL UEs of the respective candidate AIMLE members.
  • the AIMLE server 208 may also collect analytics by extending Application Layer AI/ML Member Capability Analytics (as described in 3GPP TS 23.436 clause 8.16).
  • the AIMLE server 208 may evaluate (by rating or ranking) the candidate AIML members (e.g. candidate AIMLE clients) for executing the ML operation.
• the AIMLE server 208 may evaluate based on the energy status information (e.g. energy credit level) as well as the performance of the ML operation and an energy sustainability factor (e.g. whether the new selection will be sustainable until the end of the session and completion of the ML operation). For the sustainability assessment, further inputs may be required, such as the application traffic schedule and the UE mobility.
  • the AIMLE server 208 can determine the entity to serve as target AI/ML member from the candidate AIML members based on the evaluation and/or the energy criteria.
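• One possible way to implement the evaluation and selection described above is to filter candidates whose remaining energy credit covers the estimated energy needed to complete the ML operation and whose performance meets the requirement, and then rank them; the dictionary keys and ranking criteria here are hypothetical, not taken from any specification:

```python
def rank_candidates(candidates, required_energy, min_performance):
    """Rank candidate AI/ML members for taking over the ML operation.

    Each candidate is a dict with hypothetical keys:
      'id', 'credit_level', 'credit_limit', 'performance' (0..1).
    A candidate is kept only if its remaining energy credit (headroom)
    covers the estimated energy needed to finish the operation
    (sustainability) and its performance meets the minimum requirement."""
    eligible = []
    for c in candidates:
        headroom = c["credit_limit"] - c["credit_level"]
        if headroom >= required_energy and c["performance"] >= min_performance:
            eligible.append((headroom, c["performance"], c["id"]))
    # Prefer the largest energy headroom, then the best performance.
    eligible.sort(reverse=True)
    return [cid for _, _, cid in eligible]
```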
  • the AIMLE server 208 can transmit a message as response/notification to the source AI/ML member 302.
  • the response may provide candidate AIML members including the target AIML member 304.
  • the AIMLE server 208 may transmit a command to the target AI/ML member 304 comprising an indication of selection.
  • the command may comprise the energy cause for the transfer (e.g. the energy usage indicator of the source AIML member 302) and optionally any operation context that may be required to complete the transfer and the incomplete ML operation.
  • the source AI/ML member 302 can then perform AI/ML task transfer to the target AI/ML member 304 via the AIMLE server 208.
• the AIMLE server 208 can transmit a message to the ML repository 210 to remove, suspend, or mark as unavailable the source VAL UE. This may include sending a notification to the ML repository 210 to update the status of the VAL UE.
  • the task transfer from the source AIML member 302 to the target AIML member 304 may be determined and facilitated by the source AIML member 302. Less input may then be required from the AIMLE server 208 in order to facilitate the task transfer.
• Figure 4 shows a process 400 for transferring an ML operation in accordance with aspects of the present disclosure.
  • the process 400 may, for example, be implemented within the architecture shown in Figure 2.
  • the AI/ML task transfer trigger is the energy usage indicator (e.g. an energy credit level, energy budget, energy usage threshold, energy efficiency threshold), and in particular e.g. the energy credit level per VAL UE (or per application) which is reaching a threshold.
  • the source AI/ML member 302 determines that the ongoing ML process should be transferred to a target AI/ML member 304 which may be another AIMLE client.
  • the information of candidate AI/ML members comprising the target AIML member 304 may be known at the source AI/ML member 302, or the source AI/ML member 302 may be configured to obtain the candidate AI/ML member information from the ML repository 210.
  • the ML repository 210 is aware of the energy status information (e.g. the energy credit limit or energy efficiency/consumption thresholds of AIMLE or VAL clients) based on the stored information in the ML repository 210.
  • the source AI/ML member 302 detects a trigger event caused by energy usage of the associated entity or application.
  • the trigger event may indicate that an energy usage indicator has satisfied (e.g. reached), is expected to satisfy, or is predicted to satisfy a predetermined threshold (e.g. an energy usage limit) during execution of the ML operation by the source AI/ML member 302.
  • Step S406 may be performed in a number of different ways.
• the source AI/ML member 302 may detect the trigger event based on monitoring the energy status at the source entity (e.g. a source UE). This can be done by monitoring the battery status, by monitoring the application traffic schedule or application traffic pattern from an application (e.g. a VAL application), which can also be detected locally if the source AI/ML member 302 is used for the distribution of the application messages, or based on energy-related monitoring from e.g. the UE modem (up to implementation).
  • the detection of the trigger event by the source AI/ML member 302 at step S406 may be based on application layer AI/ML member capability Analytics (e.g. as described in TS 23.436 clause 8.16).
  • This step requires (i) the addition of energy criteria (e.g. an energy usage indicator and/or an energy usage threshold) per VAL UE or VAL/AIMLE client in ADAES analytics service; and (ii) AIMLE client (directly or via AIMLE server or via VAL client) to be a consumer of such analytics.
• the source AIML member 302 may alternatively or additionally detect the trigger event by fetching online energy credit information from a charging function in the operator’s charging domain (e.g. via the AIMLE server 208 or via the network). The interaction with the charging function may occur when the source AIML member 302 is deployed by the network operator. Alternatively, the source AIML member 302 may obtain the energy credit status from a charging domain of the service provider (e.g. platform provider, vertical).
  • the source AIML member 302 may transmit a query to one or more candidate AIMLE clients comprising the target AIMLE member 304 to obtain an energy usage indicator and/or other energy status information (e.g. energy capability information such as the energy credit level or energy budget for the respective VAL UEs).
  • the AIMLE clients (after interacting with the VAL clients) can transmit the requested information to the source AIML member 302.
• an AIMLE client may request the AIMLE server 208, or an energy monitoring function 212 such as the EIF (via the network exposure function, NEF) or another energy monitoring function at OAM or the application enablement layer, to provide the energy credit status for the list of alternative VAL UEs with the respective AIMLE clients.
  • the AIMLE server 208 receives the energy usage indicator and/or other energy status information of the VAL UEs of the candidate AIML members.
  • source AIML member 302 may alternatively or in addition retrieve online energy status information from a charging function in the operator’s charging domain (e.g. via AIMLE server or via the network) for the respective VAL UE(s) which are determined as candidate to undertake the ML task.
  • the source AIML member 302 may evaluate (by rating or ranking) the candidate AIML members (e.g. candidate AIMLE clients) for executing the ML operation.
• the source AIML member may evaluate based on the energy status information and energy usage indicator (e.g. energy credit level) as well as the performance of the ML operation and an energy sustainability factor (e.g. whether the new selection will be sustainable until the end of the session and completion of the ML operation). For the sustainability assessment, further inputs may be required, such as the application traffic schedule and the UE mobility.
  • the source AIML member 302 can determine the entity to serve as target AI/ML member 304 from the candidate AIML members based on the evaluation (e.g. the ranking) and/or otherwise based on the energy criteria.
  • the source AIMLE member 302 sends a command or request to the target AI/ML member 304 that is selected with an indication of selection.
  • the command or request may include the energy cause for the transfer and optionally any operation context required for the transfer and/or for completion of the ML operation.
  • the source AI/ML member 302 performs AI/ML task transfer to the target AI/ML member 304 directly.
• the source (or target) AIMLE member transmits, via the AIMLE server 208, a message to the ML repository 210 to remove, suspend, or mark as unavailable the source VAL UE based on the energy criteria. This may include sending a notification to the ML repository 210 to update the stored energy profile of the VAL UE comprising the source AIML member 302.
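• The transfer command exchanged in the steps above (carrying the energy cause for the transfer and any operation context needed to complete the incomplete ML operation) may be sketched as a simple message structure; all field names are illustrative and not taken from any 3GPP specification:

```python
from dataclasses import dataclass, field

@dataclass
class TaskTransferRequest:
    """Illustrative transfer command sent to the selected target member."""
    source_member: str
    target_member: str
    ml_operation_id: str
    energy_cause: str            # e.g. "energy usage indicator reached threshold"
    operation_context: dict = field(default_factory=dict)

def build_transfer_request(source, target, op_id, indicator, threshold, context=None):
    # Record the energy cause for the transfer alongside any operation
    # context required by the target to resume the ML operation.
    cause = f"energy usage indicator {indicator} reached threshold {threshold}"
    return TaskTransferRequest(source, target, op_id, cause, context or {})
```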
  • FIG. 5 illustrates an example of a UE 500 in accordance with aspects of the present disclosure.
  • the UE 500 may include a processor 502, a memory 504, a controller 506, and a transceiver 508.
  • the processor 502, the memory 504, the controller 506, or the transceiver 508, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
  • the processor 502, the memory 504, the controller 506, or the transceiver 508, or various combinations or components thereof may be implemented in hardware (e.g., circuitry).
  • the hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
• the processor 502 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 502 may be configured to operate the memory 504. In some other implementations, the memory 504 may be integrated into the processor 502. The processor 502 may be configured to execute computer-readable instructions stored in the memory 504 to cause the UE 500 to perform various functions of the present disclosure.
  • the memory 504 may include volatile or non-volatile memory.
• the memory 504 may store computer-readable, computer-executable code including instructions that, when executed by the processor 502, cause the UE 500 to perform various functions described herein.
• the code may be stored in a non-transitory computer-readable medium such as the memory 504 or another type of memory.
• Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
• a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
  • the processor 502 and the memory 504 coupled with the processor 502 may be configured to cause the UE 500 to perform one or more of the functions described herein (e.g., executing, by the processor 502, instructions stored in the memory 504).
  • the processor 502 may support wireless communication at the UE 500 in accordance with examples as disclosed herein.
  • the UE 500 may be configured to support a means for commencing execution of a machine learning operation, detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
  • the controller 506 may manage input and output signals for the UE 500.
  • the controller 506 may also manage peripherals not integrated into the UE 500.
  • the controller 506 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems.
  • the controller 506 may be implemented as part of the processor 502.
  • the UE 500 may include at least one transceiver 508. In some other implementations, the UE 500 may have more than one transceiver 508.
  • the transceiver 508 may represent a wireless transceiver.
  • the transceiver 508 may include one or more receiver chains 510, one or more transmitter chains 512, or a combination thereof.
  • a receiver chain 510 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
• the receiver chain 510 may include one or more antennas for receiving the signal over the air or wireless medium.
  • the receiver chain 510 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal.
• the receiver chain 510 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
• the receiver chain 510 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
  • a transmitter chain 512 may be configured to generate and transmit signals (e.g., control information, data, packets).
  • the transmitter chain 512 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
  • the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM).
  • the transmitter chain 512 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
  • the transmitter chain 512 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
  • FIG. 6 illustrates an example of a processor 600 in accordance with aspects of the present disclosure.
  • the processor 600 may be an example of a processor configured to perform various operations in accordance with examples as described herein.
  • the processor 600 may include a controller 602 configured to perform various operations in accordance with examples as described herein.
  • the processor 600 may optionally include at least one memory 604, which may be, for example, an L1/L2/L3 cache. Additionally, or alternatively, the processor 600 may optionally include one or more arithmetic-logic units (ALUs) 606.
  • One or more of these components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses).
  • the processor 600 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein.
• the processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 600) or other memory (e.g., random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others)).
  • the controller 602 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 604 and determine subsequent instruction(s) to be executed to cause the processor 600 to support various operations in accordance with examples as described herein.
• the controller 602 may be configured to track the memory addresses of instructions associated with the memory 604.
  • the controller 602 may be configured to decode instructions to determine the operation to be performed and the operands involved.
  • the controller 602 may be configured to interpret the instruction and determine control signals to be output to other components of the processor 600 to cause the processor 600 to support various operations in accordance with examples as described herein.
  • the controller 602 may be configured to manage flow of data within the processor 600.
  • the controller 602 may be configured to control transfer of data between registers, arithmetic logic units (ALUs), and other functional units of the processor 600.
• the memory 604 may include one or more caches (e.g., memory local to or included in the processor 600 or other memory, such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc.). In some implementations, the memory 604 may reside within or on a processor chipset (e.g., local to the processor 600). In some other implementations, the memory 604 may reside external to the processor chipset (e.g., remote to the processor 600).
  • the memory 604 may store computer-readable, computer-executable code including instructions that, when executed by the processor 600, cause the processor 600 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
  • the controller 602 and/or the processor 600 may be configured to execute computer-readable instructions stored in the memory 604 to cause the processor 600 to perform various functions.
• the processor 600 and/or the controller 602 may be coupled with or to the memory 604; the processor 600, the controller 602, and the memory 604 may be configured to perform various functions described herein.
  • the processor 600 may include multiple processors and the memory 604 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions herein.
  • the one or more ALUs 606 may be configured to support various operations in accordance with examples as described herein.
  • the one or more ALUs 606 may reside within or on a processor chipset (e.g., the processor 600).
  • the one or more ALUs 606 may reside external to the processor chipset (e.g., the processor 600).
  • One or more ALUs 606 may perform one or more computations such as addition, subtraction, multiplication, and division on data.
  • one or more ALUs 606 may receive input operands and an operation code, which determines an operation to be executed.
• One or more ALUs 606 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 606 may support logical operations such as AND, OR, exclusive-OR (XOR), not-OR (NOR), and not-AND (NAND), enabling the one or more ALUs 606 to handle conditional operations, comparisons, and bitwise operations.
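• The operand/opcode dispatch described above may be sketched as a table of arithmetic and logical operations; the opcode names and the 8-bit masking of the NAND/NOR results are illustrative only:

```python
# Map an operation code to the arithmetic or logical circuit it selects.
ALU_OPS = {
    "ADD":  lambda a, b: a + b,
    "SUB":  lambda a, b: a - b,
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: ~(a & b) & 0xFF,  # mask to an 8-bit result
    "NOR":  lambda a, b: ~(a | b) & 0xFF,
}

def alu(opcode: str, a: int, b: int) -> int:
    """Apply the operation selected by the opcode to the input operands."""
    return ALU_OPS[opcode](a, b)
```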
  • the processor 600 may support wireless communication in accordance with examples as disclosed herein.
  • the processor 600 may be configured to or operable to support a means for commencing execution of a machine learning operation, detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
  • FIG. 7 illustrates an example of a NE 700 in accordance with aspects of the present disclosure.
  • the NE 700 may include a processor 702, a memory 704, a controller 706, and a transceiver 708.
  • the processor 702, the memory 704, the controller 706, or the transceiver 708, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
  • the processor 702, the memory 704, the controller 706, or the transceiver 708, or various combinations or components thereof may be implemented in hardware (e.g., circuitry).
  • the hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
• the processor 702 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 702 may be configured to operate the memory 704. In some other implementations, the memory 704 may be integrated into the processor 702. The processor 702 may be configured to execute computer-readable instructions stored in the memory 704 to cause the NE 700 to perform various functions of the present disclosure.
  • the memory 704 may include volatile or non-volatile memory.
• the memory 704 may store computer-readable, computer-executable code including instructions that, when executed by the processor 702, cause the NE 700 to perform various functions described herein.
• the code may be stored in a non-transitory computer-readable medium such as the memory 704 or another type of memory.
• Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
• a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
  • the processor 702 and the memory 704 coupled with the processor 702 may be configured to cause the NE 700 to perform one or more of the functions described herein (e.g., executing, by the processor 702, instructions stored in the memory 704).
  • the processor 702 may support wireless communication at the NE 700 in accordance with examples as disclosed herein.
  • the NE 700 may be configured to support a means for receiving information indicating a trigger event caused by energy usage of an entity involved in execution of a machine learning operation, determining, based on the information, that a part of the machine learning operation is to be transferred to another entity, selecting a target entity for transferring the part of the machine learning operation based on energy status information of the target entity, and transmitting a task transfer notification indicating the target entity.
  • the controller 706 may manage input and output signals for the NE 700.
  • the controller 706 may also manage peripherals not integrated into the NE 700.
  • the controller 706 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems.
  • the controller 706 may be implemented as part of the processor 702.
  • the NE 700 may include at least one transceiver 708. In some other implementations, the NE 700 may have more than one transceiver 708.
  • the transceiver 708 may represent a wireless transceiver.
  • the transceiver 708 may include one or more receiver chains 710, one or more transmitter chains 712, or a combination thereof.
  • a receiver chain 710 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
• the receiver chain 710 may include one or more antennas for receiving the signal over the air or wireless medium.
  • the receiver chain 710 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal.
  • the receiver chain 710 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
  • the receiver chain 710 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
  • a transmitter chain 712 may be configured to generate and transmit signals (e.g., control information, data, packets).
  • the transmitter chain 712 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
  • the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM).
  • the transmitter chain 712 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
  • the transmitter chain 712 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
  • Figure 8 illustrates a flowchart of a method in accordance with aspects of the present disclosure.
  • the operations of the method may be implemented by a UE as described herein.
  • the UE may execute a set of instructions to control the function elements of the UE to perform the described functions.
  • the method may include commencing execution of a machine learning operation.
  • the operations of 802 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 802 may be performed by a UE as described with reference to Figure 5.
  • the method may include detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation.
  • the operations of 804 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 804 may be performed by a UE as described with reference to Figure 5.
  • the method may include transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
  • the operations of 806 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 806 may be performed by a UE as described with reference to Figure 5.
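The three operations of the Figure 8 method can be sketched as a single UE-side loop. The sketch below is illustrative only: the names `UEState` and `run_ml_operation`, the per-step energy cost, and the mapping of the 802/804/806 steps onto the loop are assumptions for the example, not part of any specified API.

```python
from dataclasses import dataclass

@dataclass
class UEState:
    energy_credit: float   # remaining energy credits (hypothetical indicator)
    threshold: float       # predetermined trigger threshold

def run_ml_operation(steps, ue, step_cost, select_target):
    """Run the ML operation step by step (802); before each step, detect
    whether it would breach the energy threshold (804); if so, transfer
    the remainder to a target entity and report where to resume (806)."""
    for i, step in enumerate(steps):
        if ue.energy_credit - step_cost < ue.threshold:
            # 804/806: trigger detected -> hand over the unfinished part
            return {"transferred_to": select_target(), "resume_at": i}
        step()                          # 802: execute one unit of the task
        ue.energy_credit -= step_cost
    return {"completed": True}
```

With a budget of 10 credits, a threshold of 5, and a cost of 2 per step, the UE completes two steps and then transfers the rest rather than dropping below the threshold.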

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Various aspects of the present disclosure relate to a user equipment (UE) for wireless communication, comprising at least one memory and at least one processor coupled with the at least one memory and configured to cause the UE to commence execution of a machine learning operation, detect a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.

Description

MACHINE LEARNING TASK TRANSFER
TECHNICAL FIELD
[0001] The present disclosure relates to wireless communications, and more specifically to machine learning task transfer.
BACKGROUND
[0002] A wireless communications system may include one or multiple network communication devices, such as base stations, which may support wireless communications for one or multiple user communication devices, which may be otherwise known as user equipment (UE), or other suitable terminology. The wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communication system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers, or the like). Additionally, the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G)).
SUMMARY
[0003] An article “a” before an element is unrestricted and understood to refer to “at least one” of those elements or “one or more” of those elements. The terms “a,” “at least one,” “one or more,” and “at least one of one or more” may be interchangeable. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of” or “one or both of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Further, as used herein, including in the claims, a “set” may include one or more elements.
[0004] Some implementations of the method and apparatuses described herein may further include a user equipment (UE) for wireless communication, comprising at least one memory and at least one processor coupled with the at least one memory and configured to cause the UE to commence execution of a machine learning operation, detect a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
[0005] The trigger event may indicate that an energy usage indicator has satisfied, is expected to satisfy, or is predicted to satisfy a predetermined threshold during execution of the machine learning operation.
[0006] The at least one processor can be further configured to cause the UE to transmit information indicating the trigger event to a network entity to initiate the transfer, and receive, from the network entity, a task transfer notification comprising an indication of the target entity.
[0007] The at least one processor can be further configured to cause the UE to obtain energy status information of one or more entities comprising the target entity, rank the one or more entities based on the energy status information, and select the target entity from the one or more entities based on the rank.
[0008] The at least one processor can be further configured to cause the UE to send, to the one or more entities, a query related to energy capability information for the one or more entities, and receive the energy status information from the one or more entities in response to the query.
[0009] The at least one processor can be further configured to cause the UE to update a repository (e.g. the ML repository) to indicate that the UE is unavailable to perform machine learning operations.
[0010] The at least one processor can be further configured to cause the UE to monitor the energy usage indicator at the UE. The energy usage indicator may comprise one of an energy credit level, an energy budget, a battery status, an application traffic schedule, or a pattern from an application, for example. The pattern (or traffic pattern) can be similar to a traffic schedule, including the type of traffic (e.g. bursty traffic). A traffic pattern may also be used in cases of discontinuous transmission / reception based on UE or operator policies.
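As a rough illustration of how a trigger event on an energy usage indicator might be evaluated, the sketch below checks both the current and the predicted energy credit level against a predetermined threshold, covering the "has satisfied, is expected to satisfy, or is predicted to satisfy" cases. `EnergyStatus`, `detect_trigger`, and the field names are hypothetical; the disclosure does not prescribe a concrete data model.

```python
from dataclasses import dataclass

@dataclass
class EnergyStatus:
    credit_level: float      # remaining energy credits of the UE / application
    predicted_drain: float   # credits the ongoing ML operation is predicted to consume

def detect_trigger(status: EnergyStatus, threshold: float) -> bool:
    """Trigger when the energy credit level has already dropped to the
    threshold, or is predicted to do so before the operation completes."""
    already_low = status.credit_level <= threshold
    predicted_low = (status.credit_level - status.predicted_drain) <= threshold
    return already_low or predicted_low
```

A UE with 10 credits and a threshold of 5 would not trigger for a task predicted to cost 1 credit, but would trigger for one predicted to cost 6.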
[0011] The at least one processor can be further configured to cause the UE to detect the trigger event based on retrieving online energy status information from a charging function.
[0012] Some implementations of the method and apparatuses described herein may further include a processor for wireless communication, comprising at least one controller coupled with at least one memory and configured to cause the processor to commence execution of a machine learning operation, detect a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
[0013] The at least one controller can be further configured to cause the processor to provide information indicating the trigger event to a network entity to initiate the transfer, and obtain, from the network entity, a task transfer notification comprising an indication of the target entity.
[0014] The at least one controller can be further configured to cause the processor to obtain energy status information of one or more entities comprising the target entity, rank the one or more entities based on the energy status information, and select the target entity from the one or more entities based on the rank.
[0015] Some implementations of the method and apparatuses described herein may further include a method performed by a user equipment (UE), the method comprising commencing execution of a machine learning operation, detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
[0016] The method may further comprise transmitting information indicating the trigger event to a network entity to initiate the transfer, and receiving, from the network entity, a task transfer notification comprising an indication of the target entity.
[0017] The method may further comprise obtaining energy status information of one or more entities comprising the target entity, ranking the one or more entities based on the energy status information, and selecting the target entity from the one or more entities based on the rank.
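The obtain/rank/select steps above could be implemented, for example, by scoring each candidate on an energy trade-off and picking the top-ranked entry. The scoring function and dictionary keys below are illustrative assumptions only; the disclosure leaves the concrete ranking criterion open.

```python
def rank_candidates(candidates):
    """Rank candidate entities on an illustrative energy score:
    higher remaining credit and lower expected task cost rank first."""
    return sorted(candidates,
                  key=lambda c: c["credit_level"] - c["expected_cost"],
                  reverse=True)

def select_target(candidates):
    """Select the top-ranked candidate, or None if there are no candidates."""
    ranked = rank_candidates(candidates)
    return ranked[0]["entity_id"] if ranked else None
```

A candidate edge server with 10 credits and an expected cost of 2 would outrank a UE with 3 credits and a cost of 1 under this score.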
[0018] Some implementations of the method and apparatuses described herein may further include a network entity for wireless communication, comprising at least one memory, and at least one processor coupled with the at least one memory and configured to cause the network entity to receive information indicating a trigger event caused by energy usage of an entity involved in execution of a machine learning operation, determine, based on the information, that a part of the machine learning operation is to be transferred to another entity, select a target entity for transferring the part of the machine learning operation based on energy status information of the target entity, and transmit a task transfer notification indicating the target entity.
[0019] The trigger event may indicate that an energy usage indicator has satisfied, is expected to satisfy, or is predicted to satisfy a predetermined threshold during execution of the machine learning operation.
[0020] The at least one processor can be further configured to cause the network entity to select the target entity by retrieving and comparing energy status information of a plurality of candidate entities.
[0021] The at least one processor can be further configured to cause the network entity to retrieve the energy status information of at least one of the plurality of candidate entities from a repository.
[0022] The at least one processor can be further configured to cause the network entity to update a repository to indicate that the entity is unavailable to perform machine learning operations.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] Figure 1 illustrates an example of a wireless communications system in accordance with aspects of the present disclosure.
[0024] Figure 2 illustrates an on-network AIMLE functional model of AIML enablement.
[0025] Figure 3 illustrates a process for transferring an ML operation.
[0026] Figure 4 illustrates another process for transferring an ML operation.
[0027] Figure 5 illustrates an example of a user equipment (UE) 500 in accordance with aspects of the present disclosure.
[0028] Figure 6 illustrates an example of a processor 600 in accordance with aspects of the present disclosure.
[0029] Figure 7 illustrates an example of a network equipment (NE) 700 in accordance with aspects of the present disclosure.
[0030] Figure 8 illustrates a flowchart of a method performed by a UE in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION
[0031] A wireless communications system, including one or more communication devices, may be enabled (e.g. configured) to support machine learning (ML), and more generally artificial intelligence (AI) (referred to collectively as AIML), for various applications or services associated with the wireless communications system.
[0032] In some instances (e.g., due to changes of available resources, changes of available time), an AI/ML member (e.g., AIMLE Client, VAL Client) may not be able to finish an assigned AI/ML task. An AI/ML member is configured to perform or participate in the performance of an AI/ML task (also referred to as an “ML process” herein). To save resources and time, the AI/ML member (source AI/ML member) can transfer the intermediate AI/ML information (e.g., the intermediate AI/ML operation status and results) to another AI/ML member (target AI/ML member) for further operations to complete the AI/ML task.
[0033] A problem is how to efficiently facilitate offloading of an ongoing AI/ML operation from a UE (e.g. comprising the source AI/ML client) to another application entity (at edge, cloud or other UE) using energy criteria, and in particular an energy usage indicator such as the energy efficiency, energy consumption, energy budget or energy credit level, while ensuring that the AI/ML operation performance requirements are met.
[0034] An AI/ML task transfer relates to an AI/ML task (also referred to as an AI/ML operation or process), such as ML model training or inference, which is running and ongoing. During the AI/ML operation, the entity performing the operation identifies that the operation needs to migrate to another entity, in particular another VAL UE or an edge / cloud application (AIML Enablement (AIMLE) server, VAL server, or Edge Application Server (EAS)). The AI/ML task transfer may originate from an AI/ML member (mentioned as the source AIML member), which can be an AIMLE client (at a source VAL UE), which identifies that the task cannot be completed. One key reason may be the energy status of the VAL UE, which may run out of battery due to the high consumption of the AI operation, or due to the expectation or prediction that this may happen. Especially the training aspect of an AI/ML operation may be particularly energy intensive. High energy consumption may be detected by monitoring an energy usage indicator such as the energy credit level of the respective UE. A low remaining credit means a higher probability of disruption of the AI operation if it continues at the source AI/ML member.
[0035] The present disclosure relates to the following procedure:
[0036] Detect a trigger event related to the energy usage indicator (e.g. the energy credit level) of an application or a VAL UE (denoted as the source AI/ML member) that is performing an ML operation, which can be ML model training or ML model inference.
[0037] In response, the source AI/ML member (e.g. AIMLE client) can send event trigger information to an AIMLE server for transferring the (incomplete) ML operation.
[0038] The AIMLE server can authorize the request and check other available entities, such as AIMLE clients / edge AIMLE server capabilities, that could potentially undertake the incomplete task. The AIMLE server may also check the energy information (e.g. an energy profile) of one or more of the VAL UEs with the respective AIMLE clients, and may rate them based on the performance and energy cost trade-off.
[0039] The AIMLE server can cause an AI/ML task transfer from the source VAL UE to another entity (either a VAL UE or another AIMLE/VAL server) that can undertake the operation, taking associated energy status information into account. The energy status information of the plurality of candidate entities may comprise an energy usage indicator and/or an energy usage threshold. The energy usage indicator may relate to consumption and/or energy efficiency and/or an energy credit level or an energy credit status. As explained in more detail below, an energy credit level is merely an example of an energy usage indicator. In other implementations, the energy usage indicator may be an energy usage parameter, an energy efficiency parameter, or an energy consumption parameter, and the UE (or an application running on the UE) is associated with a corresponding energy usage threshold, limit or target.
[0040] The AIMLE server can then remove, suspend, or mark as unavailable the source VAL UE. This may include sending a notification to an ML repository to update the energy usage indicator or an energy status for the VAL UE in a register of entities configured for AI/ML operations.
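Putting the server-side steps together, one possible sketch of the AIMLE server handling is: check that the source is registered, filter candidates on their energy status, pick a target, mark the source unavailable in the ML repository, and return a task transfer notification. The `MLRepository` class and all field names are hypothetical stand-ins for the entities described above, not specified interfaces.

```python
class MLRepository:
    """Minimal in-memory stand-in for the ML repository: a register of
    entities configured for AI/ML operations and their availability."""
    def __init__(self, members):
        self.members = dict(members)   # entity_id -> available (bool)

    def is_registered(self, entity_id):
        return entity_id in self.members

    def mark_unavailable(self, entity_id):
        self.members[entity_id] = False

def handle_task_transfer(request, repository, candidates):
    """Sketch of server-side handling: authorize the request, keep only
    candidates whose energy credit exceeds their threshold, pick the best
    one, mark the source unavailable, and build a transfer notification."""
    if not repository.is_registered(request["source_id"]):
        return None                    # unknown source: reject the request
    eligible = [c for c in candidates if c["credit_level"] > c["threshold"]]
    if not eligible:
        return None                    # no entity can undertake the task
    target = max(eligible, key=lambda c: c["credit_level"])
    repository.mark_unavailable(request["source_id"])
    return {"notification": "task_transfer", "target": target["entity_id"]}
```

In this sketch, a candidate below its own energy threshold is never selected, and the source VAL UE is marked unavailable only once a transfer target has actually been found.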
[0041] Embodiments of the present disclosure are described in the context of a wireless communications system.
[0042] Figure 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure. The wireless communications system 100 may include one or more NE 102, one or more UE 104, and a core network (CN) 106. The wireless communications system 100 may support various radio access technologies. In some implementations, the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE -Advanced (LTE-A) network. In some other implementations, the wireless communications system 100 may be a NR network, such as a 5G network, a 5G- Advanced (5G-A) network, or a 5G ultrawideband (5G-UWB) network. In other implementations, the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20. The wireless communications system 100 may support radio access technologies beyond 5G, for example, 6G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc.
[0043] The one or more NE 102 may be dispersed throughout a geographic region to form the wireless communications system 100. One or more of the NE 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a network function, a network entity, a radio access network (RAN), a NodeB, an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology. An NE 102 and a UE 104 may communicate via a communication link, which may be a wireless or wired connection. For example, an NE 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
[0044] An NE 102 may provide a geographic coverage area for which the NE 102 may support services for one or more UEs 104 within the geographic coverage area. For example, an NE 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies. In some implementations, an NE 102 may be moveable, for example, a satellite associated with a non-terrestrial network (NTN). In some implementations, different geographic coverage areas associated with the same or different radio access technologies may overlap, but the different geographic coverage areas may be associated with different NE 102.
[0045] The one or more UE 104 may be dispersed throughout a geographic region of the wireless communications system 100. A UE 104 may include or may be referred to as a remote unit, a mobile device, a wireless device, a remote device, a subscriber device, a transmitter device, a receiver device, or some other suitable terminology. In some implementations, the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples. Additionally, or alternatively, the UE 104 may be referred to as an Internet-of-Things (IoT) device, an Internet-of-Everything (IoE) device, or a machine-type communication (MTC) device, among other examples.
[0046] A UE 104 may be able to support wireless communication directly with other UEs 104 over a communication link. For example, a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link. In some implementations, such as vehicle-to-vehicle (V2V) deployments, vehicle-to-everything (V2X) deployments, or cellular-V2X deployments, the communication link may be referred to as a sidelink. For example, a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
[0047] An NE 102 may support communications with the CN 106, or with another NE 102, or both. For example, an NE 102 may interface with other NE 102 or the CN 106 through one or more backhaul links (e.g., S1, N2, N3, or another network interface). In some implementations, the NE 102 may communicate with each other directly. In some other implementations, the NE 102 may communicate with each other indirectly (e.g., via the CN 106). In some implementations, one or more NE 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC). An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs).
[0048] The CN 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions. The CN 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management functions (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). In some implementations, the control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more NE 102 associated with the CN 106.
[0049] The CN 106 may communicate with a packet data network over one or more backhaul links (e.g., via an S1, N2, N3, or another network interface). The packet data network may include an application server. In some implementations, one or more UEs 104 may communicate with the application server. A UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the CN 106 via an NE 102. The CN 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server using the established session (e.g., the established PDU session). The PDU session may be an example of a logical connection between the UE 104 and the CN 106 (e.g., one or more network functions of the CN 106).
[0050] In the wireless communications system 100, the NEs 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications). In some implementations, the NEs 102 and the UEs 104 may support different resource structures. For example, the NEs 102 and the UEs 104 may support different frame structures. In some implementations, such as in 4G, the NEs 102 and the UEs 104 may support a single frame structure. In some other implementations, such as in 5G and among other suitable radio access technologies, the NEs 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures). The NEs 102 and the UEs 104 may support various frame structures based on one or more numerologies.
[0051] One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix. A first numerology (e.g., μ=0) may be associated with a first subcarrier spacing (e.g., 15 kHz) and a normal cyclic prefix. In some implementations, the first numerology (e.g., μ=0) associated with the first subcarrier spacing (e.g., 15 kHz) may utilize one slot per subframe. A second numerology (e.g., μ=1) may be associated with a second subcarrier spacing (e.g., 30 kHz) and a normal cyclic prefix. A third numerology (e.g., μ=2) may be associated with a third subcarrier spacing (e.g., 60 kHz) and a normal cyclic prefix or an extended cyclic prefix. A fourth numerology (e.g., μ=3) may be associated with a fourth subcarrier spacing (e.g., 120 kHz) and a normal cyclic prefix. A fifth numerology (e.g., μ=4) may be associated with a fifth subcarrier spacing (e.g., 240 kHz) and a normal cyclic prefix.
[0052] A time interval of a resource (e.g., a communication resource) may be organized according to frames (also referred to as radio frames). Each frame may have a duration, for example, a 10 millisecond (ms) duration. In some implementations, each frame may include multiple subframes. For example, each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration. In some implementations, each frame may have the same duration. In some implementations, each subframe of a frame may have the same duration.
[0053] Additionally or alternatively, a time interval of a resource (e.g., a communication resource) may be organized according to slots. For example, a subframe may include a number (e.g., quantity) of slots. The number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100. For instance, the first, second, third, fourth, and fifth numerologies (i.e., μ=0, μ=1, μ=2, μ=3, μ=4) associated with respective subcarrier spacings of 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may utilize a single slot per subframe, two slots per subframe, four slots per subframe, eight slots per subframe, and 16 slots per subframe, respectively. Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols). In some implementations, the number (e.g., quantity) of slots for a subframe may depend on a numerology. For a normal cyclic prefix, a slot may include 14 symbols. For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing), a slot may include 12 symbols. The relationship between the number of symbols per slot, the number of slots per subframe, and the number of slots per frame for a normal cyclic prefix and an extended cyclic prefix may depend on a numerology. It should be understood that reference to a first numerology (e.g., μ=0) associated with a first subcarrier spacing (e.g., 15 kHz) may be used interchangeably between subframes and slots.
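The slot and subframe relationships described above follow a simple pattern: the subcarrier spacing is 15·2^μ kHz, a 1 ms subframe holds 2^μ slots, a 10 ms frame therefore holds 10·2^μ slots, and a slot carries 14 symbols (12 with an extended cyclic prefix). A small helper (illustrative only) makes the arithmetic explicit:

```python
def numerology_params(mu: int, extended_cp: bool = False):
    """Return (subcarrier spacing in kHz, slots per 1 ms subframe,
    slots per 10 ms frame, OFDM symbols per slot) for numerology mu."""
    scs_khz = 15 * (2 ** mu)            # 15, 30, 60, 120, 240 kHz
    slots_per_subframe = 2 ** mu        # 1, 2, 4, 8, 16
    slots_per_frame = 10 * slots_per_subframe
    symbols_per_slot = 12 if extended_cp else 14
    return scs_khz, slots_per_subframe, slots_per_frame, symbols_per_slot
```

For example, μ=4 gives 240 kHz subcarrier spacing, 16 slots per subframe, and 160 slots per 10 ms frame, matching the figures in the paragraph above.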
[0054] In the wireless communications system 100, an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc. By way of example, the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz - 7.125 GHz), FR2 (24.25 GHz - 52.6 GHz), FR3 (7.125 GHz - 24.25 GHz), FR4 (52.6 GHz - 114.25 GHz), FR4a or FR4-1 (52.6 GHz - 71 GHz), and FR5 (114.25 GHz - 300 GHz). In some implementations, the NEs 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands. In some implementations, FR1 may be used by the NEs 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data). In some implementations, FR2 may be used by the NEs 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
[0055] FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies). For example, FR1 may be associated with a first numerology (e.g., μ=0), which includes 15 kHz subcarrier spacing; a second numerology (e.g., μ=1), which includes 30 kHz subcarrier spacing; and a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing. FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies). For example, FR2 may be associated with a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing; and a fourth numerology (e.g., μ=3), which includes 120 kHz subcarrier spacing.
[0056] 3GPP SA6 is the application enablement and critical communications applications group for vertical markets. The main objective of SA6 is to provide application layer architecture specifications for 3GPP verticals, including architecture requirements and functional architecture for supporting the integration of verticals to 3GPP systems. With respect to application enablement, the main focus is on enablers for vertical applications (e.g., automotive) and service frameworks (e.g. Common API Framework, Service Enabler Architecture Layer (SEAL), Edge Application enablement).
[0057] ADAES (Application Data Analytics Enablement Service), e.g. as described in 3GPP TR 23.700-36, is an enablement service (which can be part of SEAL) and discusses new potential application data analytics services (stats/predictions) to optimize the application service operation by notifying the application specific layer, and potentially 5GS, of expected/predicted application service parameter changes, considering both on-network and off-network deployments (e.g., related to application QoS parameters).
[0058] One SEAL service defined in Rel-19 is the AIML Enablement (AIMLE) service, as described in 3GPP TS 23.482 and TS 23.434, which is illustrated in Figure 2. Figure 2 illustrates an on-network AIMLE functional model of AIML enablement.
[0059] The devices shown in Figure 2 may be implemented by aspects of the wireless communications system 100 described herein with reference to Figure 1. For example, the UE 104 shown in Figure 2 may be an example of a UE 104 as described herein with reference to Figure 1. Furthermore, the 3GPP system shown in Figure 2 may include one or more NE 102 and/or the CN 106 described herein with reference to Figure 1.
[0060] A UE 104 may comprise a UE modem and one or more of the following functionalities: an application client (e.g. a VAL client), an application enablement client, an edge enablement client, a SEAL client (e.g. an AIMLE client), a vertical application.
[0061] The UE 104 shown in Figure 2 comprises a vertical application layer (VAL) client 202, a SEAL client in the form of an AIMLE client 204, and a UE modem (not shown in Figure 2), and therefore may be termed a VAL UE. The VAL client 202 is a vertical application client, for example an IoT application or a V2X application. The VAL client 202 communicates with the VAL server 206 over the VAL-UU reference point. VAL-UU supports both unicast and multicast delivery modes. The AIMLE functional entities on the UE 104 and the server are grouped into AIMLE client(s) 204 and AIMLE server(s) 208 respectively.
[0062] The AIMLE server 208 is a type of SEAL server which includes a common set of services for comprehensive enablement of AIML functionality. The AIMLE server 208 defines or otherwise supports the following group of capabilities:
Support for application-layer ML model related aspects, including model retrieval, model training, model monitoring, model selection, model update and model storage or discovery.
Assistance in AI/ML task transfer and split AI/ML operations.
Support HFL/VFL operations, including FL member registration, FL grouping and FL-related events notification, VFL feature alignment, HFL training.
Support for AIMLE client registration, discovery, participation, and selection.
[0063] The AIMLE client 204 communicates with the AIMLE server(s) 208 over one or more AIML-UU reference points. The AIMLE client 204 provides functionality to the VAL client(s) 202 over the AIML-C reference point. The VAL server(s) 206 communicate with the AIMLE server(s) 208 over AIML-S reference points. The AIMLE server(s) 208 communicate with the underlying 3GPP network systems using the respective 3GPP interfaces specified by the 3GPP network system. The AIML-E reference point enables interactions between two AIMLE servers (e.g. central and edge AIMLE servers).
[0064] The AIMLE client 204 is a functional entity which acts as an application client supporting AIMLE services. [0065] The AIMLE server 208 interacts with an ML repository 210 which serves as (i) a registry for ML/FL members (e.g. application layer entities participating in an AI/ML operation) and (ii) a repository for application layer ML model related information.
[0066] In some embodiments of the present disclosure a UE monitors an energy usage indicator and based on this monitoring detects a trigger event which indicates that an energy usage indicator has satisfied (e.g. reached), is expected to satisfy, or is predicted to satisfy a predetermined threshold (e.g. an energy usage limit) during execution of the ML operation by a module on the UE. One example of an energy usage indicator is an energy credit level. [0067] Energy related issues are considered in the 5G core as part of TR 23.700-66, which identifies enhancements including network energy related information exposure, subscription, and policy control to enable energy as a service criterion to improve energy efficiency and to support energy saving in the network. The energy enhancements also consider the use of renewable energy and control of carbon dioxide emissions. Energy as a serving criterion can be applied at different granularities, including UE level, PDU session, QoS flow or application, slice, service, and network function (NF).
[0068] The energy credit level may be or may be representative of a quantity of credit associated with a subscriber that can be used for credit control by the 5G system. In particular, energy credit can be associated with the following five concepts related to new energy events and energy event monitoring: a) the ability for the network operator to create a 'maximum energy credit' policy, after which services are gated; b) the ability for the network operator to inform an AS of the 'maximum energy credit expired' event; c) the ability for the 5G system to calculate 'energy credit' use; d) the ability to monitor and provide to the AS the use of 'energy credits' (or other energy 'quantum'); and e) the ability to support a new policy that establishes the energy consequence for charging control, either charging for use of energy or establishing an 'energy credit limit' for enforcement by the 5G system.
[0069] Energy credit control relates to comparing an energy credit level (indicating energy usage) against an energy credit limit (e.g. a threshold). The result of energy credit control may include, e.g., gating, increased charging rates, data throttling, or a change of QoS class.
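The credit control comparison described above can be sketched in illustrative Python; the function, the 80% throttle margin, and the action names are assumptions made for illustration only and are not part of any 3GPP specification:

```python
from enum import Enum


class CreditAction(Enum):
    """Illustrative outcomes of energy credit control."""
    NONE = "none"          # within budget, no action
    THROTTLE = "throttle"  # e.g. data throttling, higher charging rate, QoS change
    GATE = "gate"          # services gated once the limit is reached


def credit_control(energy_credit_used: float, energy_credit_limit: float,
                   throttle_fraction: float = 0.8) -> CreditAction:
    """Compare an energy credit level (indicating energy usage) against an
    energy credit limit and select a resulting action."""
    if energy_credit_used >= energy_credit_limit:
        return CreditAction.GATE
    if energy_credit_used >= throttle_fraction * energy_credit_limit:
        return CreditAction.THROTTLE
    return CreditAction.NONE
```

In this sketch the milder actions are applied as the limit is approached, and gating only once it is reached, mirroring the range of consequences listed above.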
[0070] The energy credit limit may be associated with a UE by means of subscription, i.e., as a maximum energy credit limit. Energy credit can also be introduced in the context of a network slice, i.e., per UE per DNN at S-NSSAI level.
[0071] An energy credit limit can be calculated either by: (i) a new dedicated 5G core network function (NF), or (ii) the charging function (CHF), and provided to the 5G core where needed, e.g., to the Policy Control Function (PCF) or Session Management Function (SMF). The energy credit limit can be communicated to/from the 5G core to the respective Application Function (AF). Alternatively, the energy credit limit can be communicated to the UE by an SMS. For each UE an energy credit profile can be provisioned as subscription information. [0072] If a UE’s energy credit level has reached zero or dropped to a predetermined level, the PCF can take a policy decision, e.g. to reject establishing a PDU Session for that UE, i.e., the PDU Session response shall include, e.g., rejection cause 'no energy credit'.
[0073] The energy credit limit (e.g. for an application or per UE) referred to herein may be defined in a similar manner as the term energy credit in the 5G core (e.g. as per TR 22.882 and TS 22.261); however, embodiments of the present disclosure are not limited to this definition and the energy credit limit may be any form of energy usage allowance or budget for an application, or aggregately for an application service provider providing an application service for one UE or a group of UEs or for a given service area (e.g. a multiplayer VR game).
[0074] The energy credit limit can be coupled with the charging of the application for utilizing the mobile communications system capabilities and in particular the energy demand for the user plane and control plane capabilities involved with the application. Such an energy credit limit may be configured by Service Level Agreement (SLA) or by the service agreement between the Mobile Network Operator (MNO) and the vertical / Application Service Provider (ASP).
[0075] As explained in more detail below an energy credit level is merely an example of an energy usage indicator. In other implementations, the energy usage indicator may be an energy usage parameter, an energy efficiency parameter, or an energy consumption parameter and the UE (or an application running on the UE) is associated with a corresponding energy usage limit or target.
[0076] In embodiments of the present disclosure, during execution of an ML operation by a module on a UE, responsibility for execution of the (remainder of the) ML operation may be optimally offloaded from the source UE to another (target) UE based on a trigger event caused by energy usage of the source UE, while ensuring that the performance requirements of the ML operation are met. The energy usage of a UE comprises the energy usage of the constituent functionalities of the UE (UE modem functionalities, application clients, etc.) as well as the communication with the network for supporting the operation of such functionalities.
[0077] Embodiments of the present disclosure relate to the detection of a trigger event caused by energy usage of a user equipment that is involved in execution of an ML operation, and the subsequent identification of one or more entities to offload the execution of the ML operation to. The cause of the handover of the ML operation may be a trigger event comprising an energy usage indicator (of the user equipment, of an application running on the UE, or of the ML operation which the UE is executing) having reached, being expected to reach, or being predicted to reach a predetermined threshold during execution of the ML operation.
[0078] Figure 3 shows a process 300 for transferring an ML operation in accordance with aspects of the present disclosure. The process 300 may, for example, be implemented within the architecture shown in Figure 2.
[0079] In this embodiment, the trigger for the AI/ML task transfer is the energy usage indicator, and in particular the energy credit level per VAL UE (or per application) reaching a threshold. Hence, the source AI/ML member, with the assistance of the AIMLE server, offloads the ongoing ML task to a target AI/ML member, which can be an edge AIMLE server or another AIMLE client.
[0080] In advance of the process 300 being performed, several pre-conditions may be satisfied. The information of the target AI/ML member 304 (e.g. another AIMLE Client or VAL Client different from the source AI/ML member 302) is unknown at the source AI/ML member 302. The source AI/ML member 302 decides that AIMLE server based AI/ML task transfer is needed. The AIMLE server 208 may be aware of the energy status information, such as the credit limit or energy efficiency/consumption target, for all AIMLE or VAL clients, e.g. based on the information stored in the ML repository 210 or based on the registration of the AIMLE clients. In this case, the AIMLE client profile (as provided in the AIMLE client registration request which is specified in 3GPP TS 23.482, clauses 8.7.2.2 and 8.7.2.3) may comprise further information. An example of the content of the AIMLE client profile is shown in Table 1 below.
Table 1
[0081] At step S306, the source AI/ML member 302 detects a trigger event caused by energy usage of the associated entity or application. The trigger event may indicate that an energy usage indicator has satisfied (e.g. reached), is expected to satisfy, or is predicted to satisfy a predetermined threshold (e.g. an energy usage limit) during execution of the ML operation by the source AI/ML member 302. Step S306 may be performed in a number of different ways. An energy usage indicator is referred to as being expected to satisfy (e.g. expected to reach) a predetermined threshold to mean that the energy usage indicator is anticipated to reach the predetermined threshold imminently, e.g. within a predetermined time period. Additionally, or alternatively, an energy usage indicator is referred to as being predicted to satisfy (e.g. predicted to reach) a predetermined threshold to mean that there is a prediction, with a certain confidence level, output by an analytics function.
[0082] Alternatively, or additionally, the source AI/ML member 302 may detect the trigger event based on monitoring the energy status at the source entity (e.g. a source UE). This can be done by monitoring the battery status, the application traffic schedule or application traffic pattern from an application, e.g. a VAL application, (this can be also detected locally if the source AI/ML member 302 is used for the distribution of the application messages) or based on energy-related monitoring from e.g. the UE modem (up to implementation).
[0083] Alternatively, or additionally, the detection of the trigger event by the source AI/ML member 302 at step S306 may be based on application layer AI/ML member capability analytics (e.g. as described in TS 23.436 clause 8.16). This step requires (i) the addition of energy criteria (e.g. an energy usage indicator and/or an energy usage threshold) per VAL UE or VAL/AIMLE client in the ADAES analytics service; and (ii) the AIMLE client (directly, or via the AIMLE server or VAL client) to be a consumer of such analytics. Thus it can be seen that step S306 can use an additional or alternative way of obtaining the energy status information that is based on analytics, e.g. predictions or statistics.
[0084] At step S306, the source AIML member 302 may alternatively or complementarily detect the trigger event by fetching online energy status information, such as energy credit information, from a charging function in the operator’s charging domain (e.g. via the AIMLE server 208 or via the network). The interaction with the charging function may occur when the source AIML member 302 is deployed by the network operator. Alternatively, the source AIML member 302 may obtain the energy credit status from a charging domain of the service provider (e.g. platform provider, vertical).
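The three ways in which the predetermined threshold of step S306 can be satisfied (actually reached, expected to be reached imminently, or predicted with a confidence level by analytics) can be combined in a short illustrative sketch; the function, its parameter names, and the 0.7 default minimum confidence are assumptions for illustration only:

```python
from typing import Optional


def trigger_event(
    actual: float,
    threshold: float,
    expected: Optional[float] = None,    # anticipated value within a look-ahead window
    predicted: Optional[float] = None,   # output of an analytics function
    confidence: float = 0.0,             # confidence attached to the prediction
    min_confidence: float = 0.7,         # assumed minimum confidence to act on
) -> bool:
    """Return True when the energy usage indicator has satisfied, is expected
    to satisfy, or is predicted (with sufficient confidence) to satisfy the
    predetermined threshold (e.g. an energy usage limit)."""
    if actual >= threshold:
        return True  # indicator has satisfied (reached) the threshold
    if expected is not None and expected >= threshold:
        return True  # expected to reach the threshold imminently
    if predicted is not None and predicted >= threshold and confidence >= min_confidence:
        return True  # predicted to reach the threshold by analytics
    return False
```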
[0085] At step S308, in response to the detection of the trigger event at step S306, the source AI/ML member 302 transmits information indicating the trigger event. In particular, the source AI/ML member 302 sends an event trigger message to the AIMLE server 208. The information transmitted to the AIMLE server 208 at step S308 may indicate: (i) that an energy usage threshold (e.g. an energy credit limit) is expected to be reached; (ii) a high predicted or actual or expected energy usage indicator for the VAL UE or application, wherein the predicted/actual/expected energy usage indicator is (or is predicted or expected to be) higher than an energy usage threshold or within a threshold range from an energy usage limit; (iii) a low predicted or actual or expected energy credit for a UE or application, wherein the predicted/actual/expected energy credit is (or is predicted or expected to be) within a threshold range from a zero credit balance or less than a predefined credit balance; and/or (iv) a low predicted or actual or expected energy efficiency parameter, wherein the energy efficiency parameter is (or is predicted or expected to be) lower than an energy efficiency threshold.
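A hypothetical encoding of the event trigger message of step S308, covering indications (i) to (iv), might look as follows; the class, field names, and threshold parameters are illustrative assumptions, not a message format specified by 3GPP:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EnergyTriggerMessage:
    """Illustrative event trigger message from a source AI/ML member to the
    AIMLE server; fields are hypothetical, not a 3GPP-defined format."""
    source_member_id: str
    limit_expected_to_be_reached: bool = False       # indication (i)
    energy_usage_indicator: Optional[float] = None   # indication (ii)
    energy_credit_remaining: Optional[float] = None  # indication (iii)
    energy_efficiency: Optional[float] = None        # indication (iv)

    def indications(self, usage_threshold: float, credit_floor: float,
                    efficiency_threshold: float) -> List[str]:
        """Evaluate which of indications (i)-(iv) the message carries."""
        out = []
        if self.limit_expected_to_be_reached:
            out.append("i")
        if (self.energy_usage_indicator is not None
                and self.energy_usage_indicator >= usage_threshold):
            out.append("ii")   # high energy usage indicator
        if (self.energy_credit_remaining is not None
                and self.energy_credit_remaining <= credit_floor):
            out.append("iii")  # low energy credit
        if (self.energy_efficiency is not None
                and self.energy_efficiency < efficiency_threshold):
            out.append("iv")   # low energy efficiency
        return out
```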
[0086] At step 310, the AIMLE server 208 may authenticate and authorize the request from the source AI/ML member 302 and determine to initiate a task transfer based on the trigger event. If the request is authorized, the AIMLE server 208 may determine the availability and capability (e.g. credit limit level) of one or more candidate AI/ML member(s) for the AI/ML task transfer. The determination may be based on energy criteria and energy status information associated with the candidate AI/ML members.
[0087] If the energy usage indicator is the cause for the transfer request, at the determination of candidate AIML members an energy usage indicator or capability (e.g. energy credit limit) of the candidate AIMLE clients can be fetched or obtained from the ML repository or from the AIMLE clients themselves. Energy status information may be comprised by the AIMLE client profile in the ML repository 210.
[0088] At step 312, the AIMLE server 208 may also send a request to one or more prospective AIMLE clients to serve as candidate AIML members for the ML task, to fetch the energy capability information, which may in particular comprise the energy credit level or energy budget for the respective VAL UEs. The AIMLE clients (after interacting with the VAL clients) send the requested information on the energy capability and/or status to the AIMLE server 208.
[0089] At step 314, alternatively (for VAL UEs), or complementary to the determination in step S310 and the retrieval of information from other UEs in step 312, the AIMLE server 208 can request energy status information from an energy monitoring function 212, such as the Energy Information Function (EIF) (or another energy monitoring function at operations, administration, and management (OAM) or the application enablement layer), to obtain the energy status information, such as the energy credit level, for the VAL UEs of the respective candidate AIMLE members.
[0090] At step 316, for the candidate AIMLE members, the AIMLE server 208 may also collect analytics by extending Application Layer AI/ML Member Capability Analytics (as described in 3GPP TS 23.436 clause 8.16).
[0091] At step 318, the AIMLE server 208 may evaluate (by rating or ranking) the candidate AIML members (e.g. candidate AIMLE clients) for executing the ML operation. The AIMLE server 208 may evaluate based on the energy status information (e.g. energy credit level) as well as the performance of the ML operation and an energy sustainability factor (e.g. whether the new selection will be sustainable until the end of the session and completion of the ML operation). For the sustainability, further inputs may be required, such as the application traffic schedule and the UE mobility.
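The filter-and-rank evaluation of step 318 can be sketched as follows; the dictionary keys, the eligibility rules, and the 0.6/0.4 weighting of performance against energy headroom are illustrative assumptions, not values taken from the disclosure:

```python
from typing import Dict, List


def rank_candidates(candidates: List[Dict], energy_needed: float) -> List[Dict]:
    """Filter and rank candidate AI/ML members for taking over an ML operation.

    Each candidate is a dict with illustrative keys: 'id',
    'energy_credit_remaining', 'perf_score' (0..1 estimate of ML operation
    performance), and 'sustainable' (whether the member is estimated to last
    until the operation completes). Members that are not sustainable, or whose
    remaining energy credit cannot cover the remaining work, are excluded; the
    rest are ordered by a weighted score of performance and energy headroom.
    """
    eligible = [
        c for c in candidates
        if c["sustainable"] and c["energy_credit_remaining"] >= energy_needed
    ]

    def score(c: Dict) -> float:
        headroom = c["energy_credit_remaining"] - energy_needed
        # weights are arbitrary illustrative choices
        return 0.6 * c["perf_score"] + 0.4 * min(1.0, headroom / max(energy_needed, 1e-9))

    return sorted(eligible, key=score, reverse=True)
```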
[0092] At step 320, the AIMLE server 208 can determine the entity to serve as target AI/ML member from the candidate AIML members based on the evaluation and/or the energy criteria.
[0093] At step 322, the AIMLE server 208 can transmit a message as response/notification to the source AI/ML member 302. The response may provide candidate AIML members including the target AIML member 304. Additionally, the AIMLE server 208 may transmit a command to the target AI/ML member 304 comprising an indication of selection. The command may comprise the energy cause for the transfer (e.g. the energy usage indicator of the source AIML member 302) and optionally any operation context that may be required to complete the transfer and the incomplete ML operation. The source AI/ML member 302 can then perform AI/ML task transfer to the target AI/ML member 304 via the AIMLE server 208.
[0094] At step 324, the AIMLE server 208 can transmit a message to the ML repository 210 to remove or suspend or mark as unavailable the source VAL UE. This may include sending a notification to the ML repository to update the status of the VAL UE.
[0095] In another example, the task transfer from the source AIML member 302 to the target AIML member 304 may be determined and facilitated by the source AIML member 302. Less input may then be required from the AIMLE server 208 in order to facilitate the task transfer.
[0096] Figure 4 shows a process 400 for transferring an ML operation in accordance with aspects of the present disclosure. The process 400 may, for example, be implemented within the architecture shown in Figure 2.
[0097] In this embodiment, the AI/ML task transfer trigger is the energy usage indicator (e.g. an energy credit level, energy budget, energy usage threshold, energy efficiency threshold), and in particular, e.g., the energy credit level per VAL UE (or per application) reaching a threshold. Hence, the source AI/ML member 302 determines that the ongoing ML process should be transferred to a target AI/ML member 304, which may be another AIMLE client.
[0098] In advance of the process 400 being performed, several pre-conditions may be satisfied. The information of candidate AI/ML members comprising the target AIML member 304 (e.g., other AIMLE Clients or VAL Clients different from the source AI/ML member) may be known at the source AI/ML member 302, or the source AI/ML member 302 may be configured to obtain the candidate AI/ML member information from the ML repository 210. In this case, the ML repository 210 is aware of the energy status information (e.g. the energy credit limit or energy efficiency/consumption thresholds of AIMLE or VAL clients) based on the stored information in the ML repository 210.
[0099] At step S406, the source AI/ML member 302 detects a trigger event caused by energy usage of the associated entity or application. The trigger event may indicate that an energy usage indicator has satisfied (e.g. reached), is expected to satisfy, or is predicted to satisfy a predetermined threshold (e.g. an energy usage limit) during execution of the ML operation by the source AI/ML member 302. Step S406 may be performed in a number of different ways. We refer to an energy usage indicator being expected to satisfy (e.g. expected to reach) a predetermined threshold to mean that the energy usage indicator is anticipated to reach the predetermined threshold imminently, e.g. within a predetermined time period. We refer to an energy usage indicator being predicted to satisfy (e.g. predicted to reach) a predetermined threshold to mean that there is a prediction, with a certain confidence level, output by an analytics function.
[0100] Alternatively or additionally, the source AI/ML member 302 may detect the trigger event based on monitoring the energy status at the source entity (e.g. a source UE). This can be done by monitoring the battery status, the application traffic schedule or application traffic pattern from an application, e.g. a VAL application, (this can be also detected locally if the source AI/ML member 302 is used for the distribution of the application messages) or based on energy-related monitoring from e.g. the UE modem (up to implementation).
[0101] Alternatively or additionally, the detection of the trigger event by the source AI/ML member 302 at step S406 may be based on application layer AI/ML member capability analytics (e.g. as described in TS 23.436 clause 8.16). This step requires (i) the addition of energy criteria (e.g. an energy usage indicator and/or an energy usage threshold) per VAL UE or VAL/AIMLE client in the ADAES analytics service; and (ii) the AIMLE client (directly, or via the AIMLE server or VAL client) to be a consumer of such analytics. Thus it can be seen that step S406 can use an additional or alternative way of obtaining the energy status information that is based on analytics, e.g. predictions or statistics.
[0102] At step S406, the source AIML member 302 may alternatively or complementarily detect the trigger event by fetching online energy credit information from a charging function in the operator’s charging domain (e.g. via the AIMLE server 208 or via the network). The interaction with the charging function may occur when the source AIML member 302 is deployed by the network operator. Alternatively, the source AIML member 302 may obtain the energy credit status from a charging domain of the service provider (e.g. platform provider, vertical).
[0103] At step S408, the source AIML member 302 may transmit a query to one or more candidate AIMLE clients comprising the target AIMLE member 304 to obtain an energy usage indicator and/or other energy status information (e.g. energy capability information such as the energy credit level or energy budget for the respective VAL UEs). The AIMLE clients (after interacting with the VAL clients) can transmit the requested information to the source AIML member 302.
[0104] At step S410, for VAL UEs, alternatively or complementary to the retrieval of information from other UEs in step S408, an AIMLE client may request the AIMLE server 208, or an energy monitoring function 212 such as the EIF (via the network exposure function, NEF) or another energy monitoring function at the OAM or application enablement layer, to provide the energy credit status for the list of alternative VAL UEs with the respective AIMLE clients. The AIMLE server 208 receives the energy usage indicator and/or other energy status information of the VAL UEs of the candidate AIML members. In this step, the source AIML member 302 may alternatively or in addition retrieve online energy status information from a charging function in the operator’s charging domain (e.g. via the AIMLE server or via the network) for the respective VAL UE(s) which are determined as candidates to undertake the ML task.
[0105] At step 412, the source AIML member 302 may evaluate (by rating or ranking) the candidate AIML members (e.g. candidate AIMLE clients) for executing the ML operation. The source AIML member may evaluate based on the energy status information and energy usage indicator (e.g. energy credit level) as well as the performance of the ML operation and an energy sustainability factor (e.g. whether the new selection will be sustainable until the end of the session and completion of the ML operation). For the sustainability, further inputs may be required, such as the application traffic schedule and the UE mobility.
[0106] At step 414, the source AIML member 302 can determine the entity to serve as target AI/ML member 304 from the candidate AIML members based on the evaluation (e.g. the ranking) and/or otherwise based on the energy criteria.
[0107] At step 416, the source AIMLE member 302 sends a command or request, comprising an indication of selection, to the target AI/ML member 304 that is selected. The command or request may include the energy cause for the transfer and optionally any operation context required for the transfer and/or for completion of the ML operation.
[0108] At step S418, the source AI/ML member 302 performs the AI/ML task transfer to the target AI/ML member 304 directly. [0109] At step S420, the source (or target) AIMLE member transmits, via the AIMLE server 208, a message to the ML repository 210 to remove or suspend or mark as unavailable the source VAL UE based on the energy criteria. This may include sending a notification to the ML repository 210 to update the stored energy profile of the VAL UE comprising the source AIML member 302.
[0110] Figure 5 illustrates an example of a UE 500 in accordance with aspects of the present disclosure. The UE 500 may include a processor 502, a memory 504, a controller 506, and a transceiver 508. The processor 502, the memory 504, the controller 506, or the transceiver 508, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
[0111] The processor 502, the memory 504, the controller 506, or the transceiver 508, or various combinations or components thereof may be implemented in hardware (e.g., circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
[0112] The processor 502 may include an intelligent hardware device (e.g., a general- purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 502 may be configured to operate the memory 504. In some other implementations, the memory 504 may be integrated into the processor 502. The processor 502 may be configured to execute computer-readable instructions stored in the memory 504 to cause the UE 500 to perform various functions of the present disclosure.
[0113] The memory 504 may include volatile or non-volatile memory. The memory 504 may store computer-readable, computer-executable code including instructions that, when executed by the processor 502, cause the UE 500 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as the memory 504 or another type of memory. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
[0114] In some implementations, the processor 502 and the memory 504 coupled with the processor 502 may be configured to cause the UE 500 to perform one or more of the functions described herein (e.g., executing, by the processor 502, instructions stored in the memory 504). For example, the processor 502 may support wireless communication at the UE 500 in accordance with examples as disclosed herein. The UE 500 may be configured to support a means for commencing execution of a machine learning operation, detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
[0115] The controller 506 may manage input and output signals for the UE 500. The controller 506 may also manage peripherals not integrated into the UE 500. In some implementations, the controller 506 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems. In some implementations, the controller 506 may be implemented as part of the processor 502.
[0116] In some implementations, the UE 500 may include at least one transceiver 508. In some other implementations, the UE 500 may have more than one transceiver 508. The transceiver 508 may represent a wireless transceiver. The transceiver 508 may include one or more receiver chains 510, one or more transmitter chains 512, or a combination thereof.
[0117] A receiver chain 510 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium. For example, the receiver chain 510 may include one or more antennas for receiving the signal over the air or wireless medium. The receiver chain 510 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal. The receiver chain 510 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal. The receiver chain 510 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data. [0118] A transmitter chain 512 may be configured to generate and transmit signals (e.g., control information, data, packets). The transmitter chain 512 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium. The at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM). The transmitter chain 512 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium. The transmitter chain 512 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
[0119] Figure 6 illustrates an example of a processor 600 in accordance with aspects of the present disclosure. The processor 600 may be an example of a processor configured to perform various operations in accordance with examples as described herein. The processor 600 may include a controller 602 configured to perform various operations in accordance with examples as described herein. The processor 600 may optionally include at least one memory 604, which may be, for example, an L1/L2/L3 cache. Additionally, or alternatively, the processor 600 may optionally include one or more arithmetic-logic units (ALUs) 606. One or more of these components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses).
[0120] The processor 600 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein. The processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 600)) or other memory (e.g., random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others). [0121] The controller 602 may be configured to manage and coordinate various operations (e.g., signaling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 600 to cause the processor 600 to support various operations in accordance with examples as described herein. For example, the controller 602 may operate as a control unit of the processor 600, generating control signals that manage the operation of various components of the processor 600. These control signals include enabling or disabling functional units, selecting data paths, initiating memory access, and coordinating timing of operations.
[0122] The controller 602 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 604 and determine subsequent instruction(s) to be executed to cause the processor 600 to support various operations in accordance with examples as described herein. The controller 602 may be configured to track the memory addresses of instructions associated with the memory 604. The controller 602 may be configured to decode instructions to determine the operation to be performed and the operands involved. For example, the controller 602 may be configured to interpret an instruction and determine control signals to be output to other components of the processor 600 to cause the processor 600 to support various operations in accordance with examples as described herein. Additionally, or alternatively, the controller 602 may be configured to manage the flow of data within the processor 600. The controller 602 may be configured to control the transfer of data between registers, ALUs, and other functional units of the processor 600.
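The fetch-decode-execute behaviour attributed to the controller above can be sketched as follows. This is a minimal illustration only, not part of the disclosure: the `Memory` and `Controller` classes, the three-field instruction format, and the `LOAD`/`ADD` opcodes are all assumed names chosen for the example.

```python
class Memory:
    """Toy instruction memory addressed by an integer instruction address."""
    def __init__(self, instructions):
        self.instructions = instructions

    def fetch(self, address):
        return self.instructions[address]


class Controller:
    """Tracks the instruction address, decodes instructions, and routes data."""
    def __init__(self, memory):
        self.memory = memory
        self.pc = 0        # program counter: tracks the memory address of the next instruction
        self.registers = {}

    def step(self):
        # Fetch: obtain the next instruction from memory.
        opcode, dest, operand = self.memory.fetch(self.pc)
        self.pc += 1
        # Decode and execute: determine the operation and move data accordingly.
        if opcode == "LOAD":
            self.registers[dest] = operand
        elif opcode == "ADD":
            self.registers[dest] = self.registers[dest] + self.registers[operand]
        return self.registers


program = [("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r0", "r1")]
ctrl = Controller(Memory(program))
for _ in program:
    state = ctrl.step()
print(state["r0"])  # 5
```

A real controller would additionally handle branching, stalls, and interrupts; the sketch shows only the fetch, decode, and data-routing steps named in the paragraph above.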
[0123] The memory 604 may include one or more caches (e.g., memory local to or included in the processor 600, or other memory such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc.). In some implementations, the memory 604 may reside within or on a processor chipset (e.g., local to the processor 600). In some other implementations, the memory 604 may reside external to the processor chipset (e.g., remote to the processor 600).
[0124] The memory 604 may store computer-readable, computer-executable code including instructions that, when executed by the processor 600, cause the processor 600 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. The controller 602 and/or the processor 600 may be configured to execute computer-readable instructions stored in the memory 604 to cause the processor 600 to perform various functions. For example, the processor 600 and/or the controller 602 may be coupled with or to the memory 604, and the processor 600, the controller 602, and the memory 604 may be configured to perform various functions described herein. In some examples, the processor 600 may include multiple processors and the memory 604 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions described herein.
[0125] The one or more ALUs 606 may be configured to support various operations in accordance with examples as described herein. In some implementations, the one or more ALUs 606 may reside within or on a processor chipset (e.g., the processor 600). In some other implementations, the one or more ALUs 606 may reside external to the processor chipset (e.g., the processor 600). The one or more ALUs 606 may perform one or more computations such as addition, subtraction, multiplication, and division on data. For example, the one or more ALUs 606 may receive input operands and an operation code, which determines the operation to be executed. The one or more ALUs 606 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 606 may support logical operations such as AND, OR, exclusive-OR (XOR), not-OR (NOR), and not-AND (NAND), enabling the one or more ALUs 606 to handle conditional operations, comparisons, and bitwise operations.
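The opcode-driven dispatch described above can be illustrated with a small sketch. This is an assumption-laden toy model, not the disclosed hardware: the function name `alu`, the string opcodes, and the 8-bit masking of the inverting operations are choices made only for the example.

```python
def alu(op, a, b):
    """Apply the arithmetic or logical operation selected by opcode `op`
    to input operands `a` and `b`, mimicking an ALU's operation-code dispatch."""
    ops = {
        "ADD": lambda x, y: x + y,
        "SUB": lambda x, y: x - y,
        "MUL": lambda x, y: x * y,
        "AND": lambda x, y: x & y,
        "OR":  lambda x, y: x | y,
        "XOR": lambda x, y: x ^ y,
        # Inverting operations are masked to 8 bits so results stay non-negative.
        "NAND": lambda x, y: ~(x & y) & 0xFF,
        "NOR":  lambda x, y: ~(x | y) & 0xFF,
    }
    return ops[op](a, b)


print(alu("ADD", 6, 7))            # 13
print(alu("XOR", 0b1100, 0b1010))  # 6 (0b0110)
```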
[0126] The processor 600 may support wireless communication in accordance with examples as disclosed herein. The processor 600 may be configured to or operable to support a means for commencing execution of a machine learning operation, detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation, and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
[0127] Figure 7 illustrates an example of a NE 700 in accordance with aspects of the present disclosure. The NE 700 may include a processor 702, a memory 704, a controller 706, and a transceiver 708. The processor 702, the memory 704, the controller 706, or the transceiver 708, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
[0128] The processor 702, the memory 704, the controller 706, or the transceiver 708, or various combinations or components thereof may be implemented in hardware (e.g., circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
[0129] The processor 702 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 702 may be configured to operate the memory 704. In some other implementations, the memory 704 may be integrated into the processor 702. The processor 702 may be configured to execute computer-readable instructions stored in the memory 704 to cause the NE 700 to perform various functions of the present disclosure.
[0130] The memory 704 may include volatile or non-volatile memory. The memory 704 may store computer-readable, computer-executable code including instructions that, when executed by the processor 702, cause the NE 700 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as the memory 704 or another type of memory. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
[0131] In some implementations, the processor 702 and the memory 704 coupled with the processor 702 may be configured to cause the NE 700 to perform one or more of the functions described herein (e.g., executing, by the processor 702, instructions stored in the memory 704). For example, the processor 702 may support wireless communication at the NE 700 in accordance with examples as disclosed herein. The NE 700 may be configured to support a means for receiving information indicating a trigger event caused by energy usage of an entity involved in execution of a machine learning operation, determining, based on the information, that a part of the machine learning operation is to be transferred to another entity, selecting a target entity for transferring the part of the machine learning operation based on energy status information of the target entity, and transmitting a task transfer notification indicating the target entity.
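The network-entity behaviour described above — receiving a trigger, then selecting a target entity based on energy status information — can be sketched as follows. This is an illustrative assumption only: the `Candidate` fields, the eligibility threshold, and the ranking rule are invented for the example and are not specified by the disclosure.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    entity_id: str
    battery_level: float   # fraction of battery remaining (assumed indicator)
    energy_budget: float   # remaining energy credits (assumed indicator)


def select_target(candidates, min_battery=0.3):
    """Rank candidate entities by energy status and return the best target,
    or None when no candidate is energy-eligible."""
    eligible = [c for c in candidates if c.battery_level >= min_battery]
    if not eligible:
        return None
    # Higher battery level and larger budget rank first; a real policy could
    # weight additional inputs such as traffic schedules or charging state.
    ranked = sorted(
        eligible,
        key=lambda c: (c.battery_level, c.energy_budget),
        reverse=True,
    )
    return ranked[0]


candidates = [
    Candidate("ue-1", battery_level=0.15, energy_budget=10.0),
    Candidate("edge-1", battery_level=0.90, energy_budget=50.0),
    Candidate("ue-2", battery_level=0.60, energy_budget=80.0),
]
target = select_target(candidates)
print(target.entity_id)  # edge-1
```

The returned identifier would then be carried in the task transfer notification described in the paragraph above.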
[0132] The controller 706 may manage input and output signals for the NE 700. The controller 706 may also manage peripherals not integrated into the NE 700. In some implementations, the controller 706 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems. In some implementations, the controller 706 may be implemented as part of the processor 702.
[0133] In some implementations, the NE 700 may include at least one transceiver 708. In some other implementations, the NE 700 may have more than one transceiver 708. The transceiver 708 may represent a wireless transceiver. The transceiver 708 may include one or more receiver chains 710, one or more transmitter chains 712, or a combination thereof.
[0134] A receiver chain 710 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium. For example, the receiver chain 710 may include one or more antennas for receiving the signal over the air or wireless medium. The receiver chain 710 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal. The receiver chain 710 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal. The receiver chain 710 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
[0135] A transmitter chain 712 may be configured to generate and transmit signals (e.g., control information, data, packets). The transmitter chain 712 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium. The at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM). The transmitter chain 712 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium. The transmitter chain 712 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
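The digital modulation step mentioned above can be illustrated with quadrature phase-shift keying, one PSK scheme. The sketch below is an illustration only — the disclosure names PSK and QAM generically, and the Gray-coded constellation and function name here are assumptions made for the example.

```python
def qpsk_modulate(bits):
    """Map an even-length bit sequence onto Gray-coded QPSK symbols,
    two bits per symbol, normalized to unit average symbol energy."""
    assert len(bits) % 2 == 0, "QPSK carries two bits per symbol"
    mapping = {
        (0, 0): complex(1, 1),
        (0, 1): complex(-1, 1),
        (1, 1): complex(-1, -1),
        (1, 0): complex(1, -1),
    }
    scale = 2 ** -0.5  # divide by sqrt(2) so |symbol| == 1
    return [
        mapping[(bits[i], bits[i + 1])] * scale
        for i in range(0, len(bits), 2)
    ]


symbols = qpsk_modulate([0, 0, 1, 1])
```

In a transmitter chain such as the one described above, symbols like these would then be upconverted onto a carrier and passed to the power amplifier.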
[0136] Figure 8 illustrates a flowchart of a method in accordance with aspects of the present disclosure. The operations of the method may be implemented by a UE as described herein. In some implementations, the UE may execute a set of instructions to control the function elements of the UE to perform the described functions.
[0137] At 802, the method may include commencing execution of a machine learning operation. The operations of 802 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 802 may be performed by a UE as described with reference to Figure 5.
[0138] At 804, the method may include detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation. The operations of 804 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 804 may be performed by a UE as described with reference to Figure 5.
[0139] At 806, the method may include transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event. The operations of 806 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 806 may be performed by a UE as described with reference to Figure 5.
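The three steps of Figure 8 can be sketched end to end. This is a hedged illustration under assumed conditions: the energy threshold, the per-task cost model, and the `transfer` callback stand in for the unspecified energy usage indicator and transfer mechanism of the disclosure.

```python
ENERGY_THRESHOLD = 100.0  # assumed energy budget for the whole operation


def run_ml_operation(tasks, energy_cost, transfer):
    """Execute tasks locally until energy usage would exceed the threshold
    (the trigger event), then transfer the remaining tasks to a target entity.
    Returns the number of tasks that were executed locally."""
    used = 0.0
    for i, task in enumerate(tasks):
        # Step 804: detect the trigger before the energy budget is exceeded.
        if used + energy_cost(task) > ENERGY_THRESHOLD:
            transfer(tasks[i:])   # Step 806: hand off the remaining part.
            return i
        used += energy_cost(task)  # Step 802: local execution continues.
    return len(tasks)


transferred = []
done_locally = run_ml_operation(
    tasks=["t0", "t1", "t2", "t3"],
    energy_cost=lambda t: 40.0,    # each task assumed to cost 40 energy units
    transfer=transferred.extend,
)
print(done_locally, transferred)  # 2 ['t2', 't3']
```

With a 100-unit budget and 40 units per task, the trigger fires before the third task, so two tasks run locally and the rest are transferred.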
[0140] It should be noted that the method described herein describes a possible implementation, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible.
[0141] It should be noted that the method described herein describes a possible implementation, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible.

[0142] The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.


CLAIMS

What is claimed is:
1. A user equipment (UE) for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the UE to: commence execution of a machine learning operation; detect a trigger event caused by energy usage of the UE in execution of the machine learning operation; and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
2. The UE of claim 1, wherein the trigger event indicates that an energy usage indicator has satisfied, is expected to satisfy, or is predicted to satisfy a predetermined threshold during execution of the machine learning operation.
3. The UE of claim 1 or 2, wherein the at least one processor is further configured to cause the UE to: transmit information indicating the trigger event to a network entity to initiate the transfer; and receive, from the network entity, a task transfer notification comprising an indication of the target entity.
4. The UE of claim 1 or 2, wherein the at least one processor is further configured to cause the UE to: obtain energy status information of one or more entities comprising the target entity; rank the one or more entities based on the energy status information; and select the target entity from the one or more entities based on the rank.
5. The UE of claim 4, wherein the at least one processor is further configured to cause the UE to: send, to the one or more entities, a query related to energy capability information for the one or more entities; and receive the energy status information from the one or more entities in response to the query.
6. The UE of claim 4 or 5, wherein the at least one processor is further configured to cause the UE to update a repository to indicate that the UE is unavailable to perform machine learning operations.
7. The UE of any of the preceding claims, wherein the at least one processor is further configured to cause the UE to monitor the energy usage indicator at the UE.
8. The UE of any of the preceding claims, wherein the energy usage indicator comprises at least one of an energy credit level, an energy budget, a battery status, an application traffic schedule, and a pattern from an application.
9. The UE of any of the preceding claims, wherein the at least one processor is further configured to cause the UE to detect the trigger event based on retrieving online energy status information from a charging function.
10. A processor for wireless communication, comprising: at least one controller coupled with at least one memory and configured to cause the processor to: commence execution of a machine learning operation; detect a trigger event caused by energy usage of a user equipment (UE) in execution of the machine learning operation; and transfer, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
11. The processor of claim 10, wherein the at least one controller is further configured to cause the processor to: provide information indicating the trigger event to a network entity to initiate the transfer; and obtain, from the network entity, a task transfer notification comprising an indication of the target entity.
12. The processor of claim 10, wherein the at least one controller is further configured to cause the processor to: obtain energy status information of one or more entities comprising the target entity; rank the one or more entities based on the energy status information; and select the target entity from the one or more entities based on the rank.
13. A method performed by a user equipment (UE), the method comprising: commencing execution of a machine learning operation; detecting a trigger event caused by energy usage of the UE in execution of the machine learning operation; and transferring, to a target entity, at least a part of the machine learning operation based on detection of the trigger event.
14. The method of claim 13, further comprising: transmitting information indicating the trigger event to a network entity to initiate the transfer; and receiving, from the network entity, a task transfer notification comprising an indication of the target entity.
15. The method of claim 13, further comprising: obtaining energy status information of one or more entities comprising the target entity; ranking the one or more entities based on the energy status information; and selecting the target entity from the one or more entities based on the rank.
16. A network entity for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the network entity to: receive information indicating a trigger event caused by energy usage of an entity involved in execution of a machine learning operation; determine, based on the information, that a part of the machine learning operation is to be transferred to another entity; select a target entity for transferring the part of the machine learning operation based on energy status information of the target entity; and transmit a task transfer notification indicating the target entity.
17. The network entity of claim 16, wherein the trigger event indicates that an energy usage indicator has satisfied, is expected to satisfy, or is predicted to satisfy a predetermined threshold during execution of the machine learning operation.
18. The network entity of claim 16 or 17, wherein the at least one processor is further configured to cause the network entity to select the target entity by retrieving and comparing energy status information of a plurality of candidate entities.
19. The network entity of any of claims 16 to 18, wherein the at least one processor is further configured to cause the network entity to retrieve the energy status information of at least one of the plurality of candidate entities from a repository.
20. The network entity of any of claims 16 to 19, wherein the at least one processor is further configured to cause the network entity to update a repository to indicate that the entity is unavailable to perform machine learning operations.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GR20250100010 2025-01-08
GR20250100010 2025-01-08

Publications (1)

Publication Number Publication Date
WO2025176384A1 true WO2025176384A1 (en) 2025-08-28

Family

ID=94383177

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2025/050956 Pending WO2025176384A1 (en) 2025-01-08 2025-01-15 Machine learning task transfer

Country Status (1)

Country Link
WO (1) WO2025176384A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023213857A1 (en) * 2022-05-06 2023-11-09 Telefonaktiebolaget Lm Ericsson (Publ) Splitting a machine learning inference process
US20240015203A1 (en) * 2016-12-28 2024-01-11 Intel Corporation Application computation offloading for mobile edge computing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Study on application layer support for AI/ML services; (Release 19)", no. V0.2.0, 24 November 2023 (2023-11-24), pages 1 - 34, XP052552969, Retrieved from the Internet <URL:https://ftp.3gpp.org/Specs/archive/23_series/23.700-82/23700-82-020.zip 23700-82-020-rm.docx> [retrieved on 20231124] *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25701145

Country of ref document: EP

Kind code of ref document: A1