
WO2024166779A1 - Communication control method - Google Patents


Info

Publication number
WO2024166779A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
model
csi
code
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2024/003200
Other languages
French (fr)
Japanese (ja)
Inventor
Mitsutaka Hata (光孝 秦)
Norihiro Takimoto (憲弘 滝本)
Masato Fujishiro (真人 藤代)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kyocera Corp
Original Assignee
Kyocera Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kyocera Corp filed Critical Kyocera Corp
Publication of WO2024166779A1 publication Critical patent/WO2024166779A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/10Scheduling measurement reports ; Arrangements for measurement reports
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. Transmission Power Control [TPC] or power classes
    • H04W52/02Power saving arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/04Wireless resource allocation
    • H04W72/044Wireless resource allocation based on the type of the allocated resource
    • H04W72/0446Resources in time domain, e.g. slots or frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/20Control channels or signalling for resource management
    • H04W72/23Control channels or signalling for resource management in the downlink direction of a wireless link, i.e. towards a terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W76/00Connection management
    • H04W76/20Manipulation of established connections
    • H04W76/28Discontinuous transmission [DTX]; Discontinuous reception [DRX]

Definitions

  • This disclosure relates to a communication control method.
  • a communication control method according to one aspect is a method used in a mobile communication system.
  • the communication control method includes a step in which a transmitting entity creates a trained model using predetermined data and a code representing the predetermined data as training data.
  • the communication control method also includes a step in which the transmitting entity transmits the trained model to a receiving entity.
  • the communication control method further includes a step in which the transmitting entity infers a code from the predetermined data using the trained model.
  • the communication control method further includes a step in which the transmitting entity transmits the code to the receiving entity.
  • the communication control method further includes a step in which the receiving entity obtains the predetermined data from the code using the trained model.
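The five steps above describe an autoencoder-like exchange: both sides hold the same trained model, the transmitter sends only a short code inferred from the data, and the receiver recovers the data from the code. As a hedged illustration (not the patent's actual model), a shared nearest-centroid codebook can stand in for the trained model:

```python
# Hypothetical sketch of the first aspect: a shared codebook (nearest-centroid
# quantizer) stands in for the trained model that maps data to a short code.

def train_codebook(samples):
    """'Model learning': one representative vector (centroid) per code."""
    return {code: vec for code, vec in enumerate(samples)}

def infer_code(codebook, data):
    """Transmitting side: infer the code whose centroid is closest to the data."""
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, data))
    return min(codebook, key=lambda c: dist(codebook[c]))

def recover_data(codebook, code):
    """Receiving side: obtain (an approximation of) the data from the code."""
    return codebook[code]

# Model created from training data by the transmitter and sent to the receiver.
codebook = train_codebook([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
code = infer_code(codebook, (0.9, 1.1))   # only this code is transmitted
restored = recover_data(codebook, code)   # receiver-side reconstruction
```

Transmitting the code instead of the data itself is what reduces the amount of information on the radio interface.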
  • a communication control method according to another aspect is a method used in a mobile communication system.
  • the communication control method includes a step in which a network node creates a trained model for inferring data transmission timing in intermittent reception.
  • the communication control method also includes a step in which the network node transmits the trained model to a user device.
  • the communication control method further includes a step in which the user device infers data transmission timing using the trained model.
  • the communication control method further includes a step in which the user device performs data reception processing at the data transmission timing.
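The second aspect can be sketched as follows; the mean-interval predictor and the wake-window helper are illustrative stand-ins for the network's trained model, not anything specified in this disclosure:

```python
# Hypothetical sketch of the second aspect: a model trained on past downlink
# arrival times lets the UE infer the next data transmission timing in
# intermittent (discontinuous) reception and wake only around that time.

def train_timing_model(arrival_times):
    """'Model learning' on collected arrival times: learn the mean interval."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return sum(gaps) / len(gaps)

def infer_next_arrival(mean_gap, last_arrival):
    """UE-side inference of the next data transmission timing."""
    return last_arrival + mean_gap

def should_wake(now, predicted, margin):
    """Perform reception processing only inside a window around the prediction."""
    return abs(now - predicted) <= margin

mean_gap = train_timing_model([0, 10, 20, 30])  # model created by the network
nxt = infer_next_arrival(mean_gap, 30)          # UE infers the next timing
```

Sleeping outside the predicted window is what yields the power saving.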
  • FIG. 1 is a diagram showing an example of the configuration of a mobile communication system according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of the configuration of a UE (user equipment) according to the first embodiment.
  • FIG. 3 is a diagram showing an example of the configuration of a gNB (base station) according to the first embodiment.
  • FIG. 4 is a diagram illustrating an example of the configuration of a protocol stack according to the first embodiment.
  • FIG. 5 is a diagram illustrating an example of the configuration of a protocol stack according to the first embodiment.
  • FIG. 6 is a diagram illustrating an example of a functional block configuration of the AI/ML technology according to the first embodiment.
  • FIG. 7 is a diagram illustrating an example of an operation in the AI/ML technique according to the first embodiment.
  • FIG. 8 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
  • FIG. 9 is a diagram illustrating an example of reducing CSI-RS according to the first embodiment.
  • FIG. 10 is a diagram illustrating an example of reducing CSI-RS according to the first embodiment.
  • FIG. 11 is a diagram illustrating an example of an operation according to the first embodiment.
  • FIG. 12 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
  • FIG. 13 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
  • FIG. 14 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
  • FIG. 15 is a diagram illustrating an example of an operation according to the first embodiment.
  • FIG. 16 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
  • FIG. 17 is a diagram illustrating an example of an operation according to the first embodiment.
  • FIG. 18 is a diagram illustrating an example of an operation according to the first embodiment.
  • FIG. 19 is a diagram illustrating an example of a setting message according to the first embodiment.
  • FIG. 20 is a diagram showing the correspondence between codes and CSI according to the first embodiment.
  • FIG. 21 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
  • FIG. 22 is a diagram illustrating an example of an operation according to the first embodiment.
  • FIG. 23 is a diagram illustrating another operation example according to the first embodiment.
  • FIG. 24 is a diagram showing the correspondence between codes and CSI according to the first embodiment.
  • FIG. 25 is a diagram illustrating another operation example according to the first embodiment.
  • FIG. 26 is a diagram illustrating an example of transmission timing and reception timing according to the second embodiment.
  • FIG. 27 is a diagram showing an example of an arrangement of functional blocks of the AI/ML technology according to the second embodiment.
  • FIG. 28 is a diagram illustrating an example of operation according to the second embodiment.
  • FIG. 29 is a diagram illustrating an example of a margin time according to the second embodiment.
  • the purpose of this disclosure is to provide a communication control method that can reduce the amount of information.
  • FIG. 1 is a diagram showing an example of the configuration of a mobile communication system 1 according to the first embodiment.
  • the mobile communication system 1 complies with the 5th generation system (5GS: 5th Generation System) of the 3GPP standard.
  • 5GS will be described as an example, but an LTE (Long Term Evolution) system may be applied at least partially to the mobile communication system.
  • a 6G (sixth generation) system may also be applied at least partially to the mobile communication system.
  • the mobile communication system 1 has a user equipment (UE) 100, a 5G radio access network (NG-RAN: Next Generation Radio Access Network) 10, and a 5G core network (5GC: 5G Core Network) 20.
  • the NG-RAN 10 may be simply referred to as the RAN 10.
  • the 5GC 20 may be simply referred to as the core network (CN) 20.
  • UE100 is a mobile wireless communication device.
  • UE100 may be any device that is used by a user.
  • the UE 100 is, for example, a mobile phone terminal (including a smartphone), a tablet terminal, a notebook PC, a communication module (including a communication card or chipset), a sensor or a device provided in a sensor, a vehicle or a device provided in a vehicle (Vehicle UE), or an aircraft or a device provided in an aircraft (Aerial UE).
  • the NG-RAN 10 includes base stations 200 (called "gNB" in the 5G system).
  • the gNBs 200 are connected to each other via the Xn interface, which is an interface between base stations.
  • gNB200 manages one or more cells.
  • gNB200 performs wireless communication with UE100 that has established a connection with its own cell.
  • gNB200 has a radio resource management (RRM) function, a routing function for user data (hereinafter simply referred to as “data”), a measurement control function for mobility control and scheduling, etc.
  • Cell is used as a term indicating the smallest unit of a wireless communication area.
  • Cell is also used as a term indicating a function or resource for performing wireless communication with UE100.
  • One cell belongs to one carrier frequency (hereinafter simply referred to as "frequency").
  • gNBs can also be connected to the Evolved Packet Core (EPC), which is the core network of LTE.
  • LTE base stations can also be connected to 5GC.
  • LTE base stations and gNBs can also be connected via a base station-to-base station interface.
  • the 5GC20 includes an AMF (Access and Mobility Management Function) and a UPF (User Plane Function) 300.
  • the AMF performs various mobility controls for the UE 100.
  • the AMF manages the mobility of the UE 100 by communicating with the UE 100 using NAS (Non-Access Stratum) signaling.
  • the UPF controls data forwarding.
  • the AMF and the UPF 300 are connected to the gNB 200 via an NG interface, which is an interface between a base station and a core network.
  • the AMF and the UPF 300 may be core network devices included in the CN 20.
  • FIG. 2 is a diagram showing an example of the configuration of a UE 100 (user equipment) according to the first embodiment.
  • the UE 100 includes a receiver 110, a transmitter 120, and a controller 130.
  • the receiver 110 and the transmitter 120 constitute a communication unit that performs wireless communication with the gNB 200.
  • the UE 100 is an example of a communication device.
  • the receiving unit 110 performs various types of reception under the control of the control unit 130.
  • the receiving unit 110 includes an antenna and a receiver.
  • the receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 130.
  • the transmitting unit 120 performs various transmissions under the control of the control unit 130.
  • the transmitting unit 120 includes an antenna and a transmitter.
  • the transmitter converts the baseband signal (transmission signal) output by the control unit 130 into a radio signal and transmits it from the antenna.
  • the control unit 130 performs various controls and processes in the UE 100. Such processes include the processes of each layer described below.
  • the control unit 130 includes at least one processor and at least one memory.
  • the memory stores programs executed by the processor and information used in the processes by the processor.
  • the processor may include a baseband processor and a CPU (Central Processing Unit).
  • the baseband processor performs modulation/demodulation and encoding/decoding of baseband signals.
  • the CPU executes programs stored in the memory to perform various processes. Note that the processes or operations performed in the UE 100 may be performed in the control unit 130.
  • FIG. 3 is a diagram showing an example of the configuration of a gNB 200 (base station) according to the first embodiment.
  • the gNB 200 includes a transmitter 210, a receiver 220, a controller 230, and a backhaul communication unit 250.
  • the transmitter 210 and the receiver 220 constitute a communication unit that performs wireless communication with the UE 100.
  • the backhaul communication unit 250 constitutes a network communication unit that performs communication with the CN 20.
  • the gNB 200 is another example of a communication device.
  • the transmitting unit 210 performs various transmissions under the control of the control unit 230.
  • the transmitting unit 210 includes an antenna and a transmitter.
  • the transmitter converts the baseband signal (transmission signal) output by the control unit 230 into a radio signal and transmits it from the antenna.
  • the receiving unit 220 performs various types of reception under the control of the control unit 230.
  • the receiving unit 220 includes an antenna and a receiver.
  • the receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 230.
  • the control unit 230 performs various controls and processes in the gNB 200. Such processes include the processes of each layer described below.
  • the control unit 230 includes at least one processor and at least one memory.
  • the memory stores programs executed by the processor and information used in the processes by the processor.
  • the processor may include a baseband processor and a CPU.
  • the baseband processor performs modulation/demodulation and encoding/decoding of baseband signals.
  • the CPU executes programs stored in the memory to perform various processes. Note that the processes or operations performed in the gNB 200 may be performed by the control unit 230.
  • the backhaul communication unit 250 is connected to adjacent base stations via an Xn interface, which is an interface between base stations.
  • the backhaul communication unit 250 is connected to the AMF/UPF 300 via an NG interface, which is an interface between a base station and a core network.
  • the gNB 200 may be composed of a central unit (CU) and a distributed unit (DU) (i.e., functionally divided), and the two units may be connected via an F1 interface, which is a fronthaul interface.
  • Figure 4 shows an example of the protocol stack configuration for the wireless interface of the user plane that handles data.
  • the user plane radio interface protocol has a physical (PHY) layer, a medium access control (MAC) layer, a radio link control (RLC) layer, a packet data convergence protocol (PDCP) layer, and a service data adaptation protocol (SDAP) layer.
  • the PHY layer performs encoding/decoding, modulation/demodulation, antenna mapping/demapping, and resource mapping/demapping. Data and control information are transmitted between the PHY layer of UE100 and the PHY layer of gNB200 via a physical channel.
  • the PHY layer of UE100 receives downlink control information (DCI) transmitted from gNB200 on a physical downlink control channel (PDCCH).
  • the DCI transmitted from the gNB 200 has CRC (Cyclic Redundancy Check) parity bits added that are scrambled by an RNTI (Radio Network Temporary Identifier).
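As a rough illustration of this RNTI-based addressing (NR actually attaches a 24-bit CRC; the 16-bit `zlib.crc32`-derived parity here is only a stand-in), a UE can test whether a DCI is addressed to it by descrambling the parity bits with its own RNTI:

```python
import zlib

# Illustrative sketch: the gNB scrambles the DCI's CRC parity with the target
# UE's RNTI; a UE accepts the DCI only if descrambling with its own RNTI
# yields a matching CRC.

def attach_scrambled_crc(dci: bytes, rnti: int) -> tuple:
    crc = zlib.crc32(dci) & 0xFFFF     # stand-in parity bits (not the NR CRC)
    return dci, crc ^ rnti             # scramble the parity with the RNTI

def addressed_to_me(dci: bytes, parity: int, my_rnti: int) -> bool:
    return (parity ^ my_rnti) == (zlib.crc32(dci) & 0xFFFF)

payload, parity = attach_scrambled_crc(b"grant", 0x1234)
```

A UE with a different RNTI descrambles to the wrong parity and discards the DCI.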
  • UE100 can use a bandwidth narrower than the system bandwidth (i.e., the cell bandwidth).
  • the gNB 200 configures a bandwidth part (BWP) consisting of consecutive PRBs (Physical Resource Blocks) for the UE 100.
  • UE100 transmits and receives data and control signals in the active BWP.
  • up to four BWPs may be configured for the UE 100.
  • Each BWP may have a different subcarrier spacing.
  • the BWPs may overlap each other in frequency.
  • the gNB 200 can specify which BWP to activate via downlink control signaling.
  • gNB200 dynamically adjusts the UE bandwidth according to the amount of data traffic of UE100, etc., thereby reducing UE power consumption.
  • the gNB200 can, for example, configure up to three control resource sets (CORESETs) for each of up to four BWPs on the serving cell.
  • the CORESET is a radio resource for control information to be received by the UE 100. Up to 12 (or more) CORESETs may be configured on the serving cell for the UE 100.
  • each CORESET may have an index of 0 to 11 (or more).
  • a CORESET may consist of a multiple of six resource blocks (PRBs) in the frequency domain and one, two, or three consecutive OFDM (Orthogonal Frequency Division Multiplexing) symbols in the time domain.
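Under the dimensions just described, the number of resource elements in a CORESET follows from simple arithmetic (an illustrative calculation assuming 12 subcarriers per PRB, not a spec-complete one):

```python
# Illustrative CORESET sizing: frequency size is a multiple of 6 PRBs
# (12 subcarriers each), time size is 1-3 consecutive OFDM symbols.

def coreset_resource_elements(num_prbs: int, num_symbols: int) -> int:
    assert num_prbs % 6 == 0 and num_symbols in (1, 2, 3)
    return num_prbs * 12 * num_symbols  # subcarriers x symbols
```

For example, 48 PRBs over 2 symbols span 48 x 12 x 2 resource elements.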
  • the MAC layer performs data priority control, retransmission processing using Hybrid Automatic Repeat reQuest (HARQ), and random access procedures. Data and control information are transmitted between the MAC layer of UE100 and the MAC layer of gNB200 via a transport channel.
  • the MAC layer of gNB200 includes a scheduler. The scheduler determines the uplink and downlink transport format (transport block size, modulation and coding scheme (MCS)) and the resource blocks to be assigned to UE100.
  • the RLC layer uses the functions of the MAC layer and PHY layer to transmit data to the RLC layer on the receiving side. Data and control information are transmitted between the RLC layer of UE100 and the RLC layer of gNB200 via logical channels.
  • the PDCP layer performs header compression/decompression, encryption/decryption, etc.
  • the SDAP layer maps IP flows, which are the units for which the core network controls QoS (Quality of Service), to radio bearers, which are the units for which the access stratum (AS) controls QoS. Note that if the RAN is connected to the EPC, SDAP is not necessary.
  • Figure 5 shows the configuration of the protocol stack for the wireless interface of the control plane that handles signaling (control signals).
  • the protocol stack of the radio interface of the control plane has a radio resource control (RRC) layer and a non-access stratum (NAS) instead of the SDAP layer shown in Figure 4.
  • RRC signaling for various settings is transmitted between the RRC layer of UE100 and the RRC layer of gNB200.
  • the RRC layer controls logical channels, transport channels, and physical channels in response to the establishment, re-establishment, and release of radio bearers.
  • when there is an RRC connection between the RRC of the UE 100 and the RRC of the gNB 200, the UE 100 is in the RRC connected state.
  • when there is no RRC connection between the RRC of the UE 100 and the RRC of the gNB 200, the UE 100 is in the RRC idle state or the RRC inactive state.
  • the NAS, which is located above the RRC layer, performs session management, mobility management, etc.
  • NAS signaling is transmitted between the NAS of the UE 100 and the NAS of the AMF 300.
  • UE100 also has an application layer, etc.
  • the layer below the NAS is called the AS (Access Stratum).
  • FIG. 6 is a diagram showing an example of the configuration of functional blocks of the AI/ML technology in the mobile communication system 1 according to the first embodiment.
  • the functional block configuration example shown in FIG. 6 includes a data collection unit A1, a model learning unit A2, a model inference unit A3, and a data processing unit A4.
  • the data collection unit A1 collects input data, specifically, learning data and inference data.
  • the data collection unit A1 outputs the learning data to the model learning unit A2.
  • the data collection unit A1 also outputs the inference data to the model inference unit A3.
  • the data collection unit A1 may acquire data in the device on which the data collection unit A1 is provided as input data.
  • the data collection unit A1 may acquire data in another device as input data.
  • machine learning includes supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is a method in which correct answer data is used for learning data. Unsupervised learning is a method in which correct answer data is not used for learning data.
  • in unsupervised learning, feature points are memorized from a large amount of learning data, and a correct answer is determined (a range is estimated).
  • Reinforcement learning is a method in which a score is assigned to an output result, and a method of maximizing the score is learned.
  • in the following, supervised learning will be described, but unsupervised learning or reinforcement learning may also be applied as the machine learning.
  • the model inference unit A3 may provide model performance feedback to the model learning unit A2.
  • the data processing unit A4 receives the inference result data and performs processing that utilizes the inference result data.
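The interaction of the four functional blocks A1-A4 can be sketched as follows; the class names and the least-squares stand-in model are assumptions for illustration, not part of the disclosure:

```python
# Minimal sketch of FIG. 6: data collection (A1) feeds learning data to model
# learning (A2) and inference data to model inference (A3); data processing
# (A4) consumes the inference result. A one-parameter least-squares line
# through the origin stands in for the trained model.

class DataCollection:                       # A1
    def __init__(self, learning, inference):
        self.learning, self.inference = learning, inference

class ModelLearning:                        # A2
    def fit(self, pairs):
        """Supervised learning: slope a of y = a*x from (x, y) pairs."""
        num = sum(x * y for x, y in pairs)
        den = sum(x * x for x, _ in pairs)
        return num / den                    # the 'trained model'

class ModelInference:                       # A3
    def __init__(self, model):
        self.model = model
    def infer(self, x):
        return self.model * x               # inference result data

class DataProcessing:                       # A4
    def use(self, result):
        return f"scheduling with inferred value {result}"

a1 = DataCollection(learning=[(1, 2), (2, 4)], inference=3)
model = ModelLearning().fit(a1.learning)
result = ModelInference(model).infer(a1.inference)
```

Model performance feedback from A3 to A2 would close the loop by triggering re-training when accuracy degrades.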
  • FIG. 7 shows an example of the operation of the AI/ML technology according to the first embodiment.
  • the transmitting entity TE is, for example, an entity where machine learning is performed.
  • the transmitting entity TE performs machine learning to derive a trained model.
  • the transmitting entity TE uses the trained model to generate inference result data as an inference result.
  • the transmitting entity TE transmits the inference result data to the receiving entity RE.
  • the receiving entity RE is, for example, an entity in which machine learning is not performed.
  • the receiving entity RE performs various processes using the inference result data received from the transmitting entity TE.
  • the entity may be, for example, a device.
  • the entity may be a function block included in the device.
  • the entity may be a hardware block included in the device.
  • the transmitting entity TE may be a UE 100
  • the receiving entity RE may be a gNB 200 or a core network device.
  • the transmitting entity TE may be a gNB 200 or a core network device
  • the receiving entity RE may be a UE 100.
  • the transmitting entity TE transmits control data related to AI/ML technology to the receiving entity RE and receives the control data from the receiving entity RE.
  • the control data may be an RRC message, which is signaling of the RRC layer (i.e., layer 3).
  • the control data may be a MAC Control Element (CE), which is signaling of the MAC layer (i.e., layer 2).
  • the control data may be Downlink Control Information (DCI), which is signaling of the PHY layer (i.e., layer 1).
  • the downlink signaling may be UE-specific signaling.
  • the downlink signaling may be broadcast signaling.
  • the control data may be a control message in a control layer (e.g., an AI/ML layer) specialized for artificial intelligence or machine learning.
  • "CSI feedback improvement" represents a use case in which machine learning technology is applied to the CSI fed back from the UE 100 to the gNB 200, for example.
  • CSI is information on the channel state in the downlink between UE100 and gNB200.
  • CSI includes at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and a rank indicator (RI).
  • based on the CSI feedback from the UE 100, the gNB 200 performs, for example, downlink scheduling.
  • Figure 8 is a diagram showing an example of the arrangement of each functional block in "CSI feedback improvement".
  • a data collection unit A1, a model learning unit A2, and a model inference unit A3 are included in the control unit 130 of the UE 100.
  • a data processing unit A4 is included in the control unit 230 of the gNB 200.
  • model learning and model inference are performed in the UE 100.
  • Figure 8 shows an example in which the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.
  • the gNB 200 transmits a reference signal for the UE 100 to estimate the downlink channel state.
  • the reference signal will be described using a CSI reference signal (CSI-RS) as an example, but the reference signal may also be a demodulation reference signal (DMRS).
  • UE100 receives a first reference signal from gNB200 using a first resource. Then, UE100 (model learning unit A2) derives a learned model for inferring CSI from the reference signal using learning data including the first reference signal and CSI. Such a first reference signal may be referred to as a full CSI-RS.
  • the CSI generation unit 131 performs channel estimation using the received signal (CSI-RS) received by the receiving unit 110, and generates CSI.
  • the transmitting unit 120 transmits the generated CSI to the gNB 200.
  • the model learning unit A2 performs model learning using a set of the received signal (CSI-RS) and CSI as learning data, and derives a learned model for inferring CSI from the received signal (CSI-RS).
  • the receiving unit 110 receives a second reference signal from the gNB 200 using a second resource that is less than the first resource. Then, the model inference unit A3 uses the learned model to infer the CSI as inference result data using the second reference signal as inference data.
  • a second reference signal may be referred to as a partial CSI-RS or a punctured CSI-RS.
  • the model inference unit A3 inputs the partial CSI-RS received by the receiving unit 110 into the trained model as inference data, and infers CSI from the CSI-RS.
  • the transmitting unit 120 transmits the inferred CSI to the gNB 200.
  • UE100 can feed back (or transmit) accurate (complete) CSI to gNB200 from the small amount of CSI-RS (partial CSI-RS) received from gNB200.
  • the gNB 200 can reduce (puncture) the CSI-RS when it intends to reduce overhead.
  • in addition, the UE 100 can cope with situations where radio conditions deteriorate and some of the CSI-RS cannot be received normally.
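The full/partial CSI-RS flow above can be sketched as follows; the per-port ratio model and the puncture pattern are illustrative stand-ins for the trained model, chosen only to make the learning-then-inference split concrete:

```python
# Hedged sketch of the CSI-feedback use case: in the learning mode the UE sees
# the full CSI-RS (all ports) and learns, per punctured port, its typical
# ratio to a reference port; in the inference mode it rebuilds the full
# measurement vector from the partial CSI-RS and derives CSI from it.

KEPT, PUNCTURED = [0, 1], [2, 3]            # puncture pattern (assumed)

def learn_ratios(full_samples):
    """Learning mode: average ratio of each punctured port to port 0."""
    ratios = {}
    for p in PUNCTURED:
        ratios[p] = sum(s[p] / s[0] for s in full_samples) / len(full_samples)
    return ratios

def infer_full(ratios, partial):
    """Inference mode: rebuild punctured ports from the partial CSI-RS."""
    full = dict(zip(KEPT, partial))         # measurements on the kept ports
    for p in PUNCTURED:
        full[p] = ratios[p] * full[0]
    return [full[i] for i in range(4)]

model = learn_ratios([[1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]])
csi_input = infer_full(model, [1.0, 2.0])   # fed to CSI generation
```

The point of the use case is that the reconstructed vector lets the UE report complete CSI although only half the ports were transmitted.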
  • FIGS. 9 and 10 are diagrams showing an example of reducing CSI-RS according to the first embodiment.
  • FIG. 9 shows an example of reducing CSI-RS by reducing the number of antenna ports that transmit CSI-RS.
  • gNB200 performs the following process. That is, when UE100 is in a mode in which model learning is performed (hereinafter, may be referred to as "learning mode"), gNB200 transmits CSI-RS from all antenna ports of the antenna panel. On the other hand, when UE100 is in a mode in which model inference is performed (hereinafter, may be referred to as "inference mode”), gNB200 reduces the number of antenna ports that transmit CSI-RS and transmits CSI-RS from half the antenna ports of the antenna panel. This reduces overhead, improves the utilization efficiency of antenna ports, and can reduce power consumption. Note that antenna ports are an example of resources.
  • FIG. 10 shows an example in which the gNB 200 reduces the radio resources used to transmit the CSI-RS, specifically, the time-frequency resources.
  • the gNB 200 performs the following process. That is, when the UE 100 is in the learning mode, the gNB 200 transmits the CSI-RS using a predetermined time-frequency resource. On the other hand, when the UE 100 is in the inference mode, the gNB 200 transmits the CSI-RS using a time-frequency resource that is less than the predetermined time-frequency resource. This reduces overhead, improves the utilization efficiency of the radio resources, and reduces power consumption.
  • gNB200 transmits full CSI-RS using a predetermined amount of first resources, and transmits partial CSI-RS using second resources that have a smaller amount of resources than the first resources.
  • FIG. 11 shows an example of the operation of "CSI feedback improvement" according to the first embodiment.
  • gNB200 may notify or set the transmission pattern (puncture pattern) of CSI-RS in inference mode to UE100 as control data. For example, gNB200 transmits to UE100 the antenna port and/or time-frequency resource that transmits or does not transmit CSI-RS in inference mode.
  • in step S102, the gNB 200 may send a switching notification to the UE 100 to start the learning mode.
  • in step S103, the UE 100 starts the learning mode.
  • in step S104, the gNB 200 transmits the full CSI-RS.
  • the receiver 110 of the UE 100 receives the full CSI-RS, and the CSI generator 131 generates (or estimates) CSI based on the full CSI-RS.
  • the data collection unit A1 collects the full CSI-RS and the CSI.
  • the model learning unit A2 creates a trained model using the full CSI-RS and the CSI as learning data.
  • in step S105, the UE 100 transmits the generated CSI to the gNB 200.
  • in step S106, when the model learning is completed, the UE 100 transmits a completion notification to the gNB 200 indicating that the model learning is completed.
  • the UE 100 may also transmit the completion notification when the creation of the trained model is completed.
  • in step S107, in response to receiving the completion notification, the gNB 200 transmits a switching notification to the UE 100 to cause the UE 100 to switch from the learning mode to the inference mode.
  • in step S108, in response to receiving the switching notification, the UE 100 switches from the learning mode to the inference mode.
  • in step S109, the gNB 200 transmits the partial CSI-RS.
  • the receiver 110 of the UE 100 receives the partial CSI-RS.
  • the data collection unit A1 collects the partial CSI-RS.
  • the model inference unit A3 inputs the partial CSI-RS as inference data into the trained model and obtains CSI as the inference result.
  • in step S110, the UE 100 feeds back (or transmits) the CSI, which is the inference result, to the gNB 200 as inference result data.
  • by repeating model learning during the learning mode, the UE 100 can generate a trained model with a predetermined accuracy or higher. It is expected that the inference result using the trained model thus generated will also have a predetermined accuracy or higher.
  • in step S111, if the UE 100 determines that model learning is necessary, it may transmit a notification indicating that model learning is necessary to the gNB 200 as control data.
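The mode switching in steps S102-S111 can be summarized as a small state machine on the UE side; the state and message names below are illustrative assumptions, not terms from the disclosure:

```python
# Illustrative UE-side state machine for FIG. 11: the UE enters the learning
# mode on the gNB's switching notification, reports completion, switches to
# the inference mode on the next notification, and can later request
# re-learning if it judges accuracy insufficient.

class UeModeMachine:
    def __init__(self):
        self.mode = "idle"
        self.outbox = []                    # control data sent to the gNB

    def on_switch_to_learning(self):        # S102/S103
        self.mode = "learning"

    def on_learning_complete(self):         # S106: completion notification
        self.outbox.append("completion notification")

    def on_switch_to_inference(self):       # S107/S108
        self.mode = "inference"

    def on_accuracy_degraded(self):         # S111: request re-learning
        self.outbox.append("model learning required")

ue = UeModeMachine()
ue.on_switch_to_learning()
ue.on_learning_complete()
ue.on_switch_to_inference()
```

Keeping the transitions explicit makes clear that the gNB drives every mode change while the UE only notifies.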
  • the training data is "(full) CSI-RS” and "CSI”
  • the inference data is "(partial) CSI-RS.”
  • the training data and/or the inference data may be referred to as a "dataset.”
  • At least one of the following data or information may be used as the data set:
  • (X1) Measurements such as RSRP (Reference Signal Received Power), RSRQ (Reference Signal Received Quality), SINR (Signal-to-Interference-plus-Noise Ratio), or the output waveform of an AD converter. These measurements may be based on the CSI-RS, or may be based on other received signals received from the gNB 200.
  • (X2) Bit Error Rate (BER) or Block Error Rate (BLER), which may be measured based on the CSI-RS with the total number of transmitted bits (or the total number of transmitted blocks) being known.
  • (X3) The moving speed of the UE 100 (which may be measured by a speed sensor in the UE 100).
  • What is used as the data set for machine learning may be configured. For example, the following processing may be performed. That is, the UE 100 transmits capability information indicating which types of input data the UE 100 can handle in machine learning to the gNB 200 as control data.
  • the capability information may represent, for example, any of the data or information shown in (X1) to (X3).
  • the capability information may be information in which the learning data and the inference data are separately specified.
  • the gNB 200 transmits the data type information used as the data set to the UE 100 as control data.
  • the data type information may represent, for example, any of the data or information shown in (X1) to (X3).
  • the data type information may be separately specified as the data type information used as the learning data and the data type information used as the inference data.
  • Beam management represents a use case in which, for example, machine learning technology is used to manage which beam is the optimal beam among the beams transmitted from gNB200.
  • gNB200 sequentially transmits beams with different directivities.
  • Each beam includes, for example, a reference signal.
  • UE100 measures the reception quality of each beam using the reference signal included in each beam.
  • UE100 determines, for example, the beam with the best reception quality as the optimal beam.
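The beam selection described above amounts to picking the beam whose reference signal has the best measured reception quality. A minimal sketch, with hypothetical beam names and quality values:

```python
# Hypothetical sketch of optimal-beam determination: the UE measures the
# reception quality (e.g. RSRP) of the reference signal in each beam and
# selects the beam with the best quality as the optimal beam.

def select_optimal_beam(quality_by_beam):
    """Return the beam whose measured reception quality is best (largest)."""
    return max(quality_by_beam, key=quality_by_beam.get)

# Assumed measurements in dBm for beams swept by the gNB:
measured = {"beam#1": -95.0, "beam#2": -88.5, "beam#3": -101.2}
assert select_optimal_beam(measured) == "beam#2"
```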
  • FIG. 12 is a diagram showing an example of the arrangement of each functional block in "beam management".
  • a data collection unit A1, a model learning unit A2, and a model inference unit A3 are included in the control unit 130 of the UE 100.
  • a data processing unit A4 is included in the control unit 230 of the gNB 200. That is, FIG. 12 shows an example in which model learning and model inference are performed in the UE 100.
  • FIG. 12 shows an example in which the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.
  • UE 100 has an optimal beam determination unit 132.
  • the optimal beam determination unit 132 determines the optimal beam based on, for example, the reception quality of the reference signal included in each beam. As with "CSI feedback," an example is described in which CSI-RS is used as the reference signal, but a demodulation reference signal (DMRS) may also be used as the reference signal.
  • the transmission unit 120 transmits information representing the determined optimal beam to gNB 200 as the "optimal beam.”
  • beam management operation can be implemented by replacing "CSI feedback" with “optimal beam” in Figure 11.
  • the gNB 200 sequentially transmits beams with different directivities to the UE 100 (step S104).
  • Each beam includes a full CSI-RS.
  • the data collection unit A1 of the UE 100 collects the full CSI-RS and (information representing) the optimal beam.
  • the model learning unit A2 creates a learned model using the CSI-RS and (information representing) the optimal beam as learning data.
  • the full CSI-RS is an example of a first reference signal
  • the partial CSI-RS is an example of a second reference signal.
  • the gNB 200 sequentially transmits beams with different directivities.
  • Each beam includes a partial CSI-RS.
  • the data collection unit A1 collects the partial CSI-RS.
  • the model inference unit A3 inputs the partial CSI-RS as inference data into the trained model, and obtains the optimal beam (information representing the optimal beam) as the inference result.
  • the UE 100 transmits the inference result (optimal beam) to the gNB 200 as inference result data.
  • For "beam management," in addition to "CSI-RS" and "optimal beam," at least one of the following data or information may be used in the data set.
  • The measurement target may be the CSI-RS, or may be other received signals received from gNB200.
  • BER (or BLER) may be measured based on the CSI-RS with the total number of transmitted bits (or total number of transmitted blocks) known.
  • the UE 100 may transmit capability information indicating which type of input data the UE 100 can handle in machine learning to the gNB 200 as control data.
  • the capability information may include any of the information or data shown in (Y1) to (Y6), and the information or data may be specified separately for the learning data and for the inference data.
  • the gNB 200 may also transmit data type information used as a data set to the UE 100 as control data.
  • the data type information may include, for example, any of the data or information shown in (Y1) to (Y6).
  • the data type information may include, for example, any of the information or data shown in (Y1) to (Y6), and the information or data may be specified separately for the learning data and for the inference data.
  • FIG. 13 shows an example of the arrangement of each functional block in "improving location accuracy".
  • the data collection unit A1, model learning unit A2, and model inference unit A3 are included in the control unit 130 of the UE 100.
  • the data processing unit A4 is included in the control unit 230 of the gNB 200. That is, FIG. 13 shows an example in which model learning and model inference are performed in the UE 100.
  • FIG. 13 shows an example in which the transmitting entity TE is the UE 100, and the receiving entity RE is the gNB 200.
  • UE 100 includes a location information generation unit 133.
  • UE 100 may include a Global Navigation Satellite System (GNSS) receiver 150.
  • the location information generation unit 133 generates location data for UE 100 based on a Positioning Reference Signal (PRS) (full PRS or partial PRS) received from gNB 200.
  • the location information generation unit 133 may receive a GNSS signal (full GNSS signal or partial GNSS signal) received by the GNSS receiver 150, and generate location data for UE 100 based on the GNSS signal.
  • gNB200 transmits the full PRS using a predetermined amount of first resources (e.g., all antenna ports as shown in FIG. 9, or a predetermined amount of time-frequency resources as shown in FIG. 10), in the same manner as the full CSI-RS. Also, gNB200 transmits the partial PRS using second resources having a smaller amount of resources than the first resources (e.g., half the antenna ports in an antenna panel as shown in FIG. 9, or half the predetermined amount of time-frequency resources as shown in FIG. 10), in the same manner as the partial CSI-RS.
  • the full GNSS signal may be a GNSS signal received by the GNSS receiver 150 continuously over time.
  • the partial GNSS signal may be a GNSS signal received by the GNSS receiver 150 intermittently. That is, a predetermined amount of first resources may be used for the full GNSS signal, and a second resource having a smaller amount than the first resources may be used for the partial GNSS signal.
  • An example of the operation for "improving location accuracy” can be implemented by replacing “full CSI-RS” with “full PRS,” “partial CSI-RS” with “partial PRS,” and “CSI feedback” with "location data” in FIG. 11.
  • In the learning mode (step S103), the location information generation unit 133 generates location data for the UE 100 based on the full PRS received from the gNB 200.
  • the location information generation unit 133 may receive a full GNSS signal received by the GNSS receiver 150 and generate location data for the UE 100 based on the full GNSS signal.
  • the transmission unit 120 feeds back (or transmits) the location data to the gNB 200.
  • the data collection unit A1 collects the full PRS (or full GNSS signal) and location data.
  • the model learning unit A2 creates a learned model using the full PRS (or full GNSS signal) and location data as learning data.
  • the data collection unit A1 collects the partial PRS received by the receiving unit 110 (or the partial GNSS signal received by the GNSS receiver 150).
  • the model inference unit A3 inputs the partial PRS (or the partial GNSS signal) as inference data into the trained model, and obtains location data as the inference result.
  • the UE 100 transmits the inference result (location data) to the gNB 200 as inference result data.
  • the data used in the data set may include, for example, at least one of the following data or information:
  • (Z1) RSRP, RSRQ, SINR (signal-to-interference-plus-noise ratio), or the output waveform of an AD converter. These measurements may be of the PRS, or of other received signals received from gNB200.
  • The moving speed of UE 100, which may be measured by the GNSS receiver 150 or by a speed sensor in UE 100.
  • the UE 100 may transmit capability information indicating which type of input data the UE 100 can handle in machine learning to the gNB 200 as control data.
  • the capability information may include any of the information or data shown in (Z1) to (Z7), and the information or data may be specified separately for the learning data and for the inference data.
  • the gNB 200 may also transmit data type information used as a data set to the UE 100 as control data.
  • the data type information may include, for example, any of the information or data shown in (Z1) to (Z7), and the information or data may be specified separately for the learning data and for the inference data.
  • FIG. 14 is a diagram showing another example of the arrangement of "CSI feedback improvement" according to the first embodiment.
  • FIG. 14 shows an example in which a data collection unit A1, a model learning unit A2, a model inference unit A3, and a data processing unit A4 are included in gNB200. That is, FIG. 14 shows an example in which model learning and model inference are performed in gNB200.
  • FIG. 14 shows an example in which a transmitting entity TE is a gNB200, and a receiving entity RE is a UE100.
  • Figure 14 shows an example in which AI/ML technology is introduced into CSI estimation performed by gNB200 based on SRS (Sounding Reference Signal). Therefore, gNB200 has a CSI generation unit 231 that generates CSI based on SRS.
  • the CSI is information indicating the channel state of the uplink between UE100 and gNB200.
  • gNB200 (e.g., the data processing unit A4) performs, for example, uplink scheduling based on the CSI generated from the SRS.
  • FIG. 15 shows an example of operation in another arrangement example according to the first embodiment.
  • the gNB 200 performs SRS transmission configuration for the UE 100.
  • the SRS transmission configuration may include type information of the reference signal transmitted by the UE 100.
  • step S202 gNB200 starts learning mode.
  • step S203 UE 100 transmits the full SRS to gNB 200 according to the SRS transmission setting (step S201).
  • the receiver 220 of gNB 200 receives the full SRS.
  • the CSI generator 231 generates (or estimates) CSI based on the full SRS.
  • the data collector A1 collects the full SRS and CSI.
  • the model learning unit A2 creates a learned model using the full SRS and CSI as learning data.
  • gNB200 identifies an SRS transmission pattern (puncture pattern) to be input to the learned model as inference data, and sets the identified SRS transmission pattern to UE100.
  • gNB200 may transmit an SRS transmission setting including the identified SRS transmission pattern to UE100.
  • step S205 gNB200 switches from learning mode to inference mode. gNB200 starts model inference using the trained model.
  • step S206 UE100 transmits a partial SRS according to the SRS transmission setting (step S204).
  • gNB200 inputs the SRS as inference data into the trained model to obtain a channel estimation result, and then uses the channel estimation result to perform uplink scheduling for UE100 (e.g., control of uplink transmission weights).
  • gNB200 may reconfigure UE100 to transmit a full SRS if the inference accuracy of the trained model deteriorates.
  • Federated learning is, for example, a machine learning technique in which machine learning is performed in a distributed state without consolidating data (or a data set).
  • each entity does not need to transmit data, so the security of each entity can be ensured.
  • federated learning can obtain learning results with the same accuracy as conventional centralized machine learning.
  • FIG. 16 is a diagram showing an example of a configuration in which federated learning according to the first embodiment is performed.
  • the example shown in FIG. 16 shows an example in which location estimation of UE100 is performed using federated learning.
  • FIG. 16 shows an example in which UE100 has a data collection unit A1, a model learning unit A2, and a model inference unit A3. That is, it shows an example in which model learning and model inference are performed in UE100.
  • FIG. 16 shows an example in which UE100 is the transmitting entity TE, and gNB200 and/or location server 400 are the receiving entity RE.
  • the federated learning shown in Figure 16 is carried out, for example, in the following steps.
  • the location server 400 transmits the model that serves as the basis for model learning to the UE 100.
  • UE100 performs model learning using data present in UE100.
  • the data present in UE100 is, for example, the PRS received from gNB200 and/or output data (GNSS signal) of GNSS receiver 150.
  • the data present in UE100 may include location data generated by location information generation unit 133 based on the reception result of PRS and/or output data of GNSS receiver 150.
  • UE100 applies the learned model, which is the result of learning, in model inference unit A3, and transmits variable parameters included in the learned model (hereinafter, sometimes referred to as "learned parameters") to location server 400.
  • For example, when the model is a linear model y = ax + b, the optimized a (slope) and b (intercept) correspond to the learned parameters.
  • the location server 400 collects learned parameters from multiple UEs 100 and integrates them.
  • the location server 400 may transmit the learned model obtained by the integration to the UE 100.
  • the location server 400 can estimate the location of the UE 100 based on the learned model and the measurement report from the UE 100.
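The federated-learning steps above can be sketched as follows, assuming (per the learned-parameter example) that each UE fits a linear model y = ax + b on its local data and the server integrates the reported parameters by averaging; the function names and the averaging rule are illustrative assumptions, not the embodiment's actual algorithm.

```python
# Hypothetical federated-learning sketch: raw data stays on each UE; only the
# learned parameters (a, b) are reported to the location server for integration.

def local_fit(points):
    """Least-squares fit of y = a*x + b on one UE's local (x, y) data."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b                      # learned parameters; raw data is not sent

def integrate(reports):
    """Server-side integration: average the learned parameters from each UE."""
    a = sum(r[0] for r in reports) / len(reports)
    b = sum(r[1] for r in reports) / len(reports)
    return a, b

ue1 = local_fit([(0, 1.0), (1, 3.1), (2, 4.9)])   # UE100-1's private data
ue2 = local_fit([(0, 0.9), (1, 3.0), (2, 5.1)])   # UE100-2's private data
global_a, global_b = integrate([ue1, ue2])        # integrated learned model
```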
  • FIG. 17 shows an example of operation in federated learning according to the first embodiment.
  • gNB200 may notify UE100 of a model that serves as a basis for learning.
  • Location server 400 may notify the model via gNB200.
  • gNB200 instructs UE100 to learn the model.
  • gNB200 may set the report timing (trigger condition) of the learned parameters.
  • the report timing may be periodic.
  • the report timing may be triggered by the learning progress satisfying a condition (i.e., an event trigger).
  • step S303 UE 100 starts a learning mode.
  • UE 100 performs model learning using the full PRS (or full GNSS signal) and the location data generated by location information generating unit 133 as learning data.
  • step S304 when the reporting timing condition is met, the UE 100 transmits the learned parameters at that time to the network (gNB 200 or location server 400).
  • step S305 the location server 400 integrates the learned parameters reported from multiple UEs 100.
  • the model to be transferred may be a trained model used in model inference.
  • the model may also be an untrained (or in-training) model used in model learning.
  • FIG. 18 is a diagram showing an example of an operation of the first operation pattern related to model forwarding according to the first embodiment.
  • the receiving entity RE is mainly described as the UE 100, but the receiving entity RE may be the gNB 200 or the AMF 300.
  • the transmitting entity TE is described as the gNB 200, but the transmitting entity TE may be the UE 100 or the AMF 300.
  • gNB200 transmits a capability inquiry message to UE100 to request transmission of a message including an information element (IE) indicating the execution capability for machine learning processing.
  • IE information element
  • UE100 receives the capability inquiry message.
  • gNB200 may transmit the capability inquiry message when executing machine learning processing (when it has determined that the execution will be performed).
  • UE100 transmits a message including an information element indicating execution capability for machine learning processing (or, from another perspective, execution environment for machine learning processing) to gNB200.
  • gNB200 receives the message.
  • the message may be an RRC message (e.g., a "UE Capability" message, or a newly defined message (e.g., a "UE AI Capability” message, etc.)).
  • the transmitting entity TE may be AMF300 and the message may be a NAS message.
  • the message may be a message of the new layer.
  • the information element indicating the execution capability for machine learning processing may be an information element indicating the capability of a processor for executing machine learning processing and/or an information element indicating the capability of a memory for executing machine learning processing.
  • the information element indicating the processor capability may be an information element indicating the product number (or model number) of the AI processor.
  • the information element indicating the memory capability may be an information element indicating the memory capacity.
  • the information element indicating the execution capability regarding machine learning processing may be an information element indicating the execution capability of inference processing (model inference).
  • the information element indicating the execution capability of inference processing may be an information element indicating whether or not a deep neural network model is supported.
  • the information element may be an information element indicating the time (or response time) required to execute the inference processing.
  • the information element indicating the execution capability related to machine learning processing may be an information element indicating the execution capability of learning processing (model learning).
  • the information element indicating the execution capability of learning processing may be an information element indicating the number of learning processing operations being executed simultaneously.
  • the information element may be an information element indicating the processing capacity of the learning processing.
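As an illustration only, the information elements above could be grouped into a capability message such as the following; the field names and values are assumptions for the sketch, not the 3GPP IE definitions.

```python
# Hypothetical "UE AI Capability" payload grouping the information elements
# described above (processor, memory, inference capability, learning capability).
from dataclasses import dataclass

@dataclass
class AiCapability:
    ai_processor_model: str        # product (model) number of the AI processor
    memory_capacity_mb: int        # memory available for machine learning processing
    supports_dnn: bool             # whether deep neural network models are supported
    inference_time_ms: int         # time (response time) required to execute inference
    max_parallel_trainings: int    # learning processes executable simultaneously

# Example payload the UE might report in step S402 (values are assumed):
msg = AiCapability("npu-x1", 512, True, 10, 2)
```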
  • step S403 gNB200 determines the model to be configured (or deployed) in UE100 based on the information elements contained in the message received in step S402.
  • step S404 gNB200 transmits a message including the model determined in step S403 to UE100.
  • UE100 receives the message and performs machine learning processing (i.e., model learning processing and/or model inference processing) using the model included in the message.
  • machine learning processing i.e., model learning processing and/or model inference processing
  • FIG. 19 is a diagram showing an example of a configuration message including a model and additional information according to the first embodiment.
  • the configuration message may be an RRC message (e.g., an "RRC Reconfiguration” message, or a newly defined message (e.g., an "AI Deployment” message or an "AI Reconfiguration” message, etc.)) transmitted from the gNB 200 to the UE 100.
  • the configuration message may be a NAS message transmitted from the AMF 300A to the UE 100.
  • the message may be a message of the new layer.
  • the configuration message includes three models (Model #1 to #3). Each model is included as a container in the configuration message. However, the configuration message may include only one model.
  • the configuration message further includes, as additional information, three pieces of individual additional information (Info #1 to #3) provided individually for each of the three models (Model #1 to #3), and common additional information (Meta-Info) associated commonly with the three models (Model #1 to #3). Each piece of individual additional information (Info #1 to #3) includes information unique to the corresponding model.
  • the common additional information (Meta-Info) includes information common to all models in the configuration message.
  • the individual additional information may be a model index that indicates an index (index number) assigned to each model.
  • the individual additional information may be a model execution condition that indicates the performance (e.g., processing delay) required to apply (execute) the model.
  • the individual additional information or the common additional information may be a model application that specifies a function to which a model is to be applied (e.g., "CSI feedback,” "beam management,” “positioning,” etc.).
  • the individual additional information or the common additional information may be a model selection criterion that applies (executes) a corresponding model when a specified criterion (e.g., moving speed) is satisfied.
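The message layout described above might be sketched as a nested structure like the following; the field names, byte containers, and selection rule are illustrative assumptions, not the actual message encoding.

```python
# Hypothetical sketch of the FIG. 19 configuration message: each model is an
# opaque container with per-model additional information (Info #1..#3), plus
# common additional information (Meta-Info) shared by all models.
config_message = {
    "models": [
        {"container": b"<model-1-bytes>", "info": {"model_index": 1,
            "exec_condition_ms": 5, "selection_criterion": "speed<30km/h"}},
        {"container": b"<model-2-bytes>", "info": {"model_index": 2,
            "exec_condition_ms": 10, "selection_criterion": "speed>=30km/h"}},
        {"container": b"<model-3-bytes>", "info": {"model_index": 3,
            "exec_condition_ms": 20, "selection_criterion": "fallback"}},
    ],
    "meta_info": {"model_application": "CSI feedback"},  # common to all models
}

def select_model(message, criterion):
    """Pick the model whose selection criterion matches (hypothetical rule)."""
    for entry in message["models"]:
        if entry["info"]["selection_criterion"] == criterion:
            return entry["info"]["model_index"]
    return None
```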
  • UE100 groups the CQI value, the PMI value, and the RI value into one set (this set may be referred to as (CQI, PMI, RI)) and feeds back (or transmits) the three values as one CSI to gNB200. For example, UE100 feeds back (CQI#1, PMI#1, RI#1) as CSI to gNB200 at a certain timing, and feeds back (CQI#2, PMI#2, RI#2) as CSI to gNB200 at another timing.
  • Since UE100 can transmit the set of three values (hereinafter sometimes referred to as "CSI") as a CSI status report using one code, the amount of information transmitted by UE100 in one CSI transmission can be reduced compared to when the CSI itself is transmitted.
  • That is, coding makes it possible to compress the information compared to when coding is not performed.
  • FIG. 20 is a diagram showing an example of a table according to the first embodiment.
  • the table shown in FIG. 20 shows the correspondence between codes and CSI.
  • (CQI#1, PMI#1, RI#1) corresponds to code “1”
  • (CQI#1, PMI#1, RI#2) corresponds to code "2”
  • (CQI#1, PMI#1, RI#3) corresponds to code "3".
  • UE100 can feed back (or transmit) a code to gNB200, and gNB200 can obtain each value of the CSI status report from the code.
  • the first embodiment aims to reduce the amount of information compared to when a table is used.
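The table of FIG. 20 can be expressed as a lookup in both directions: the UE maps its (CQI, PMI, RI) set to a short code, and the gNB recovers the set from the received code. The entries shown follow the figure; the remaining rows are analogous.

```python
# Table-based code <-> CSI mapping per FIG. 20: each (CQI, PMI, RI) set maps
# one-to-one to a short code, so the UE can report the code instead of the
# three values.
CODE_TABLE = {
    ("CQI#1", "PMI#1", "RI#1"): 1,
    ("CQI#1", "PMI#1", "RI#2"): 2,
    ("CQI#1", "PMI#1", "RI#3"): 3,
}
# Reverse direction, used by the gNB to recover the CSI from a received code.
CSI_TABLE = {code: csi for csi, code in CODE_TABLE.items()}

code = CODE_TABLE[("CQI#1", "PMI#1", "RI#2")]          # UE side: CSI -> code
assert CSI_TABLE[code] == ("CQI#1", "PMI#1", "RI#2")   # gNB side: code -> CSI
```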
  • a learned model based on machine learning technology is used. Specifically, first, a transmitting entity (e.g., UE 100) creates a learned model using predetermined data (e.g., CSI) and a code representing the predetermined data as learning data. Second, the transmitting entity transmits the learned model to a receiving entity (e.g., gNB 200). Third, the transmitting entity infers a code from the predetermined data using the learned model. Fourth, the transmitting entity transmits the code to the receiving entity. Fifth, the receiving entity acquires the predetermined data from the code using the learned model.
  • UE 100 uses a learned model to acquire (or infer) a code, which can reduce the amount of information compared to when a table is used (i.e., when machine learning technology is not used).
  • That is, since UE 100 does not transmit the CSI itself but transmits a code representing the CSI, the amount of transmitted information can be reduced.
  • FIG. 21 is a diagram showing an example of arrangement of each functional block according to the first embodiment.
  • the UE 100 has a data collection unit A1, a model learning unit A2, and a model inference unit A3. That is, the example shown in FIG. 21 is an example in which model learning and model inference are performed in the UE 100.
  • the example shows that the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.
  • UE 100 has a code generation unit 135.
  • the code generation unit 135 generates a code representing CSI.
  • the code has a one-to-one correspondence with each CSI. However, the code has a smaller number of bits than the CSI.
  • the correspondence between the code and the CSI may be as shown in FIG. 20.
  • the code generation unit 135 may generate a code each time the CSI generation unit 131 generates CSI.
  • the data collection unit A1 collects the CSI and the code.
  • the model learning unit A2 performs model learning using the CSI and the code as learning data, and creates a learned model for inferring the code from the CSI.
  • the model learning unit A2 creates a learned model for each region.
  • the region may be a region represented by one or more cells.
  • the region may be a region represented by one or more Tracking Areas (TAs).
  • the region may be a region represented by one or more Registration Areas (RAs).
  • the region may be a region represented by one or more Public Land Mobile Networks (PLMNs).
  • a TA includes one or more cells and indicates an area within which a UE 100 in an RRC idle state can move without sending a location (tracking area) update to the core network (e.g., the MME).
  • An RA includes one or more cells and is defined as a collection of TAs.
  • a PLMN indicates the range in which a telecommunications carrier can provide services.
  • the region in which the trained model should be created may be set by gNB200 using control data, for example.
  • the control data may specify the region in which the trained model should be created by an identifier representing the region (e.g., a cell ID, a Tracking Area Identity (TAI), an identifier representing each RA, a PLMN ID, etc.).
  • an identifier representing the region e.g., a cell ID, a Tracking Area Identity (TAI), an identifier representing each RA, a PLMN ID, etc.
  • the amount of information (or size) of the trained model can be reduced compared to creating a trained model without taking the region into consideration. Also, by creating a trained model for each region in UE100, the overhead when UE100 transmits to gNB200 can be reduced and communication efficiency can be improved compared to creating a trained model without taking the region into consideration.
  • the transmitting unit 120 transmits the trained model to the gNB 200.
  • the transmitting unit 120 may include the trained model in an RRC message and transmit it.
  • the transmitting unit 120 may include the trained model in a new message and transmit it.
  • the transmitting unit 120 may include the trained model in a NAS message and transmit it to the AMF 300.
  • the transmitting unit 120 may further add common additional information and/or individual additional information to the message to be transmitted.
  • the data collection unit A1 collects CSI.
  • the model inference unit A3 uses the learned model to infer a code from the CSI, and outputs the code as inference result data.
  • the transmission unit 120 transmits the code to the gNB 200.
  • the transmission unit 120 transmits the code together with regional identification information that identifies the region when the learned model was created.
  • the regional identification information may be represented by an identifier that represents each region.
  • the gNB 200 (control unit 230) receives the learned model, and uses the learned model to obtain CSI from the code received from the UE 100.
  • FIG. 22 shows an example of operation according to the first embodiment.
  • gNB200 may notify or set the transmission pattern (puncture pattern) of CSI-RS in inference mode to UE100 as control data.
  • gNB200 may notify or set data type information (here, data type information representing a data set of CSI and code) indicating the type of data used as learning data to UE100 as control data.
  • gNB200 may transmit a switching notification to UE100 as control data to start the learning mode.
  • step S502 UE100 starts the learning mode.
  • step S503 gNB200 transmits full CSI-RS.
  • UE 100 creates CSI based on the full CSI-RS.
  • UE 100 creates a set of CQI, PMI, and RI (i.e., CSI) as a CSI status report based on the full CSI-RS.
  • step S505 UE100 transmits CSI to gNB200.
  • step S506 UE100 performs model learning using the CSI and the code as learning data to create a learned model. At this time, UE100 creates a learned model for each region. UE100 may transmit a completion notification indicating that the learned model has been created to gNB200 as control data.
  • step S507 UE100 switches from the learning mode to the inference mode.
  • UE100 may switch to the inference mode in accordance with the switching notification received from gNB200.
  • UE100 transmits the learned model created in step S506 to gNB200.
  • UE100 may transmit an RRC message (or a newly defined message) including the learned model to gNB200.
  • the RRC message may include regional identification information.
  • gNB200 receives the learned model.
  • gNB200 transmits partial CSI-RS (or full CSI-RS). gNB200 may transmit partial CSI-RS in response to a completion notification received from UE100.
  • step S510 UE 100 infers a code using the learned model created in step S506.
  • CSI generation unit 131 generates CSI based on partial CSI-RS
  • model inference unit A3 infers a code by inputting the CSI as inference data into the learned model.
  • step S511 UE100 transmits the code and regional identification information to gNB200.
  • gNB200 acquires CSI using a learned model corresponding to the regional identification information.
  • the learned model is a model that inputs CSI and outputs (or infers) a code.
  • the learned model may be a model that inputs a code and outputs (or infers) CSI.
  • gNB200 can input the code received in step S511 to the learned model and obtain the CSI that UE100 intends to report.
  • the learned model may be capable of outputting a code from CSI, of outputting CSI from a code, or of processing in both directions.
  • gNB200 may input several CSIs into the learned model until the learned model outputs the same code as the code received in step S511, and may treat the CSI at the time when the same code as the code received in step S511 is output as the CSI reported by UE100 (i.e., CSI status report).
  • UE100 transmits at least a portion of the data set used to create the learned model to gNB200 together with the learned model. Then, gNB200 may use the data set to input CSI into the learned model, and may treat the CSI at the time when the same code as the code received in step S511 is output as the CSI reported by UE100.
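The search described above, for the case where the model only maps CSI to a code, can be sketched as follows; the stand-in model and candidate list are hypothetical.

```python
# Hypothetical sketch: when the learned model only outputs a code from CSI,
# the gNB feeds candidate CSIs (e.g. from a data set shared by the UE) into
# the model until it reproduces the received code, and treats the matching
# CSI as the reported CSI status.

def find_reported_csi(model, candidate_csis, received_code):
    """Return the first candidate CSI for which the model outputs the code."""
    for csi in candidate_csis:
        if model(csi) == received_code:
            return csi
    return None                       # no candidate reproduced the code

# Stand-in CSI -> code model (in practice, the trained model from the UE):
model = lambda csi: {"CSI#1": 1, "CSI#2": 2, "CSI#3": 3}[csi]
csi = find_reported_csi(model, ["CSI#1", "CSI#2", "CSI#3"], received_code=2)
assert csi == "CSI#2"
```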
  • gNB200 may use the acquired CSI to perform scheduling control or beam control.
  • FIG. 23 is a diagram showing another example of operation according to the first embodiment.
  • FIG. 23 shows an example in which the learning model used in the first embodiment is used. For example, the following case is assumed.
  • UE100-1 performs model learning using code #1 and CSI #1 as learning data to create model #1.
  • Model #1 is, for example, a model being learned by model learning.
  • UE100-2 performs model learning using model #1 in order to further improve the accuracy of the model.
  • UE100-2 performs model learning using the same code #1 used by UE100-1 and CSI #2 (or code #2) that is different from CSI #1 used by UE100-1.
  • the learning data used by UE100-1 may be overwritten by the learning data used by UE100-2 through model learning in UE100-2, and a learned model that does not reflect the learning results of UE100-1 may be created.
  • UE100-1 performs model learning using code #1 and CSI #1 as learning data
  • UE100-2 performs model learning using code #1 and CSI #2 as learning data, so that even if CSI #1 is input, code #1 is not output as an inference result, and a learned model is created in which code #1 is output as an inference result only when CSI #2 is input.
  • two solutions are used to create an appropriate trained model.
  • the first solution is an example in which identification information of each UE 100 is added to the code.
  • the code includes user equipment identification information (e.g., identification information of each UE) that identifies the user equipment (e.g., UE 100).
  • Figure 24 is a diagram showing the correspondence between codes and CSI when a UEID is added to the code as identification information for each UE 100.
  • UEID#1 is added to the code as the UEID of UE 100-1. Therefore, in UE 100-1, model learning is performed using a code including its own UEID (e.g., UEID#1) and CSI as learning data.
  • in UE 100-2, model learning is performed using a code including its own UEID (e.g., UEID#2) and CSI as learning data.
  • UE 100-1 uses learning data in which its own UEID is added to code #1, and UE 100-2 uses learning data in which its own UEID is added to code #1. Therefore, learned models using different codes are created. For example, in the example of FIG. 24, a model is created in UE 100-1 using "1_UEID#1" and "CSI#1" as learning data, and a model is created in UE 100-2 using "1_UEID#2" and "CSI#2" as learning data, so the learning data is not overwritten. Therefore, an appropriate learned model can be created.
  • the UE 100-1 can transmit the code instead of the CSI, so that, as in the first embodiment, it is possible to reduce the amount of information compared to when a table is used.
  • the UE identification information added to (or included in) the code may be (temporarily) assigned by the network.
  • the identification information may be identification information pre-installed in each UE.
  • the identification information may be IMSI (International Mobile Subscriber Identity), SUCI (Subscription Concealed Identifier), GUTI (Globally Unique Temporary UE Identity), TMSI (Temporary Mobile Subscriber Identity), RNTI (Radio Network Temporary Identifier), etc.
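As a sketch of this first solution, training pairs can be keyed by codes that embed the UE's identifier, following the "1_UEID#1" / "1_UEID#2" style of FIG. 24. The function name and the sample values below are illustrative assumptions, not part of the specification.

```python
def make_training_pairs(ue_id, code_csi_samples):
    """Attach the UE's identifier to each code so that training pairs
    from different UEs never share a key (and thus never overwrite
    each other when merged into one data set)."""
    return {f"{code}_{ue_id}": csi for code, csi in code_csi_samples}

pairs_ue1 = make_training_pairs("UEID#1", [(1, "CSI#1")])
pairs_ue2 = make_training_pairs("UEID#2", [(1, "CSI#2")])

merged = {**pairs_ue1, **pairs_ue2}
# Both entries survive despite sharing base code 1:
# {'1_UEID#1': 'CSI#1', '1_UEID#2': 'CSI#2'}
```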
  • the second solution is an example in which the range of codes used for training data is made different between UEs 100.
  • a base station (e.g., gNB 200) sets the range of codes to be used in a user device (e.g., UE 100).
  • the user device creates a trained model using predetermined data (e.g., CSI) and codes within the set range as training data.
  • the range of codes used by UE 100-1 when performing model learning is, e.g., code "1" to code "10".
  • the range of codes used by UE 100-2 when performing model learning is, e.g., code "11" to code "20".
  • the codes used as learning data differ between UE 100-1 and UE 100-2, and therefore the learning data created by UE 100-1 is not overwritten by UE 100-2. Therefore, an appropriate learned model can be created.
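The second solution (disjoint code ranges per UE) can be sketched as a simple partition of the code space. The helper name and the range sizes are illustrative assumptions.

```python
def assign_code_ranges(ue_ids, codes_per_ue, first_code=1):
    """Give each UE its own contiguous, non-overlapping range of codes
    to use as training data (e.g., UE100-1: codes 1-10,
    UE100-2: codes 11-20)."""
    ranges = {}
    start = first_code
    for ue in ue_ids:
        ranges[ue] = range(start, start + codes_per_ue)
        start += codes_per_ue
    return ranges

ranges = assign_code_ranges(["UE100-1", "UE100-2"], codes_per_ue=10)
# ranges["UE100-1"] covers codes 1..10, ranges["UE100-2"] covers 11..20,
# so the two UEs' learning data can never collide.
```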
  • FIG. 25 shows another example of operation according to the first embodiment.
  • step S601 the gNB 200 transmits control data including information indicating the code range to the UE 100.
  • the gNB 200 may notify or set the CSI-RS transmission pattern in the inference mode, the type of data used as learning data, and switching to the learning mode.
  • step S602 UE100 starts the learning mode.
  • step S603 gNB200 transmits full CSI-RS.
  • step S604 UE100 creates CSI based on the full CSI-RS.
  • step S605 UE100 transmits CSI to gNB200.
  • step S606 UE100 performs model learning by adding a UEID to the code. If information indicating the code range is set by gNB200, UE100 does not need to add a UEID to the code. In this case, UE100 performs model learning using a code within the range set by gNB200. UE100 may perform model learning for each region to create a learned model, as in the first embodiment.
  • the UE 100 may create a learned model for each time period or at a certain timing.
  • the UE 100 may create a learned model according to the moving speed of the UE 100.
  • the UE 100 may transmit time information, timing information, or moving speed information of the UE 100 to the gNB 200 instead of the region identification information (step S511).
  • the CSI-RS in the downlink is measured, and the measurement result (channel state) is discretized and coded for each index such as CQI, PMI, and RI according to a predetermined codebook. While the amount of data can be reduced by feeding back the code (digital value), the problem is that it contains an error with respect to the actual channel state (analog amount). This problem can be solved by using a machine learning model. For example, in FIG. 8, the receiving unit 110 receives the CSI-RS and transfers the measurement result of the channel state to the data collecting unit A1.
  • the model inference unit A3 takes the measurement result as input data and outputs, as the inference result (output data), reproducible information for making the channel state reproducible in the gNB 200.
  • the transmitting unit 120 transmits (feeds back) the reproducible information to the receiving unit 220.
  • the gNB 200 inputs the reproducible information to its own model inference unit (e.g., data processing unit A4), and the model inference unit estimates (reproduces) the channel state. This allows the channel state (channel estimation) measured by the UE 100 to be reproduced in analog quantities (or a resolution close to this) in the gNB 200.
  • the gNB 200 can appropriately (with reduced error) determine the MCS, beam/antenna weighting, MIMO rank, etc. based on the channel state. Note that this method may be implemented not only for the reproduction of the channel state described here, but also for each of the indicators (e.g., PMI or beam/antenna weighting) (individually and independently).
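The feedback scheme above behaves like an encoder/decoder pair: the UE compresses the measured channel into compact "reproducible information", and the gNB reconstructs the channel state from it. The linear-projection model below is a minimal stand-in for the learned encoder/decoder, under the illustrative assumption that the channel is compressible (i.e., lies in a low-dimensional subspace); a real AI/ML model would learn this compression from data.

```python
import numpy as np

def encode(h, basis):
    """UE side: project the measured channel vector onto a compact basis;
    the coefficients are the 'reproducible information' fed back."""
    return basis.T @ h

def decode(z, basis):
    """gNB side: reproduce the channel state from the fed-back coefficients."""
    return basis @ z

rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # orthonormal 8x3 basis
h = basis @ rng.standard_normal(3)  # a channel vector in that subspace
z = encode(h, basis)                # 3 values fed back instead of 8
h_hat = decode(z, basis)            # reconstruction at the gNB
# h_hat matches h: no codebook quantization error for in-subspace channels
```

The point of the sketch is the contrast with codebook feedback: only three coefficients cross the air interface, yet the gNB recovers the channel at (near-)analog resolution for channels in the assumed subspace.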
  • it is assumed that the gNB 200 detects a case in which the accuracy of the CSI obtained from the inference result of the trained model is poor by comparing it with the CSI reported from the UE 100 or the most recent CSI; that is, a case in which the gNB 200 detects a deviation between the trained model and actual operation. In such a case, the gNB 200 may, as a relief measure, stop the use of the trained model for the UE 100. Alternatively, as a relief measure, the gNB 200 may start transmitting a trained model other than the one being used, in order to switch to that other trained model.
  • in the first embodiment, CSI is used as an example of data corresponding to a code.
  • in the second embodiment, data transmission timing is used as an example of data corresponding to a code.
  • DRX: Discontinuous Reception
  • when DRX is set for UE100, UE100 goes into wake-up mode during the On-duration period of the DRX cycle to monitor the PDCCH from the network, and goes into sleep mode outside the On-duration period, turning off some of the functions of UE100 so that it does not need to attempt to receive data from the network.
  • the periodic repetition of sleep mode and wake-up mode is called DRX, for example.
  • DRX can reduce the power consumption of UE100 compared to UE100 that always operates in wake-up mode.
  • DRX includes connected mode DRX (C-DRX) in which UE100 performs DRX operation in an RRC connected state, and idle mode DRX (I-DRX) in which UE100 performs DRX operation in an RRC idle state or an RRC inactive state.
  • C-DRX: connected mode DRX
  • I-DRX: idle mode DRX
  • the above-mentioned operation is the operation in C-DRX.
  • UE100 and gNB200 use the UE100 identifier (IMSI: International Mobile Subscriber Identity) to calculate a paging occasion (PO), which is a subframe in which a paging message is transmitted, and a paging frame (PF), which is a radio frame containing the PO.
  • IMSI: International Mobile Subscriber Identity
  • PO: paging occasion
  • PF: paging frame
  • the gNB 200 transmits a paging message in a periodic PF, and the UE 100 receives the paging message, thereby performing discontinuous reception.
  • the DRX setting (drx-Config) is set in the UE 100 from the gNB 200 using an RRC message (such as an RRCConnectionReconfiguration message or an RRCConnectionSetup message).
  • I-DRX parameters used in the calculation are notified using SIB.
  • the UE 100 can calculate the PO and PF using the notified parameters.
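A simplified sketch of this calculation, in the style of the PF/PO formulas of 3GPP TS 38.304, is shown below. The identifier derivation, offset handling, and parameter names here are simplifying assumptions; the actual parameters are notified using SIB as described above.

```python
def paging_frame_and_occasion(ue_id, T, N, Ns):
    """Simplified PF/PO derivation.
    T  : DRX (paging) cycle length in radio frames
    N  : number of paging frames per cycle
    Ns : number of paging occasions per paging frame"""
    pf = (T // N) * (ue_id % N) % T   # SFN (mod T) of the paging frame
    i_s = (ue_id // N) % Ns           # index of the PO within that PF
    return pf, i_s

# Example: DRX cycle of 128 frames, 16 PFs per cycle, 1 PO per PF.
pf, i_s = paging_frame_and_occasion(ue_id=210, T=128, N=16, Ns=1)
# pf == 16, i_s == 0
```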
  • FIG. 26 is a diagram showing an example of the transmission timing and reception timing of DL data according to the second embodiment.
  • the gNB 200 transmits DL data at a certain timing during the DRX On Duration period (i.e., the reception timing of the UE in FIG. 26), and the UE 100 receives the DL data at the same timing during the On Duration period.
  • the table stores whether or not DL data is to be transmitted at each timing, which represents the time of a day in subframe units starting from midnight.
  • UE 100 can accurately determine the timing of transmitting DL data by using the table.
  • the timing of receiving DL data is determined using AI/ML technology.
  • a base station (e.g., gNB200) creates a trained model for inferring the timing of transmitting data.
  • the base station transmits the trained model to a user device (e.g., UE100).
  • the user device infers the timing of transmitting data using the trained model.
  • the user device performs a data reception process at the timing of transmitting data.
  • since the UE 100 does not perform the reception process using a table, it is possible to reduce the amount of information compared to the case where a table is used.
  • FIG. 27 is a diagram showing an example of the arrangement of an AI/ML model according to the second embodiment.
  • the gNB 200 performs model learning and model inference, so the gNB 200 becomes the transmitting entity TE, and the UE 100 becomes the receiving entity RE.
  • the gNB 200 has a user data generation unit 240 and a timing generation unit 236.
  • the user data generation unit 240 generates user data (DL data) addressed to the UE 100.
  • the transmission unit 210 transmits the user data to the UE 100.
  • the timing generation unit 236 generates timing (or code) that indicates the elapsed time from a reference time.
  • the reference time may be January 1st of every year, the 1st of every month, or 0:00 every day.
  • the elapsed time may be expressed in a predetermined time unit. Specifically, the elapsed time may be expressed in subframe units. The elapsed time may be expressed in slot units. The elapsed time may be expressed in radio frame units or seconds.
  • when the timing generation unit 236 receives user data from the user data generation unit 240, it outputs information indicating the execution of transmission of the user data to the data collection unit A1 at the timing of receipt. Furthermore, the timing generation unit 236 outputs the user data transmission timing at which the transmission of the user data is executed to the data collection unit A1.
  • the transmission timing may be represented by a code.
  • the timing generation unit 236 may output information indicating that the transmission of the user data will not be executed at any timing other than the timing at which the user data is received to the data collection unit A1.
  • the learning data used by the model learning unit A2 is information (or predetermined data) indicating the execution of transmission of user data, and the transmission timing (or code) of the user data.
  • when the model learning unit A2 receives information indicating the execution of transmission of user data, it creates a learned model that infers the transmission timing of the user data.
  • when the model learning unit A2 receives information indicating a timing (or code), it creates a learned model that infers whether that timing is a transmission timing. For example, when the learned model receives "10:40:30" (a timing), it can infer "transmission timing" or "non-transmission timing" (whether it is the transmission timing).
  • the learned model may be a model for inferring the transmission timing of user data.
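Functionally, such a learned model behaves like a classifier over timings. The memorizing stand-in below only illustrates the input/output contract ("10:40:30" → "transmission timing" / "non-transmission timing"); it is an assumption for illustration, not the actual learning algorithm.

```python
class TimingModel:
    """Stand-in for the learned model: it records the timings at which
    user-data transmission was observed (learning mode) and then
    classifies any queried timing (inference mode)."""

    def __init__(self):
        self.tx_timings = set()

    def learn(self, timing):
        # Learning data: a timing at which transmission was executed.
        self.tx_timings.add(timing)

    def infer(self, timing):
        if timing in self.tx_timings:
            return "transmission timing"
        return "non-transmission timing"

model = TimingModel()
model.learn("10:40:30")   # transmission of user data observed at this timing
result_hit = model.infer("10:40:30")    # "transmission timing"
result_miss = model.infer("10:40:31")   # "non-transmission timing"
```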
  • the transmission unit 210 transmits the learned model to the UE 100.
  • FIG. 28 shows an example of operation according to the second embodiment.
  • step S701 gNB200 performs DRX configuration for UE100.
  • the DRX configuration is configured using an RRC message.
  • gNB200 may notify UE100 of information indicating that it will create a trained model.
  • step S702 gNB200 starts learning mode.
  • step S703 gNB200 creates a trained model.
  • gNB200 creates a trained model in which information representing the execution of transmission of user data is used as inference data and the transmission timing (or code) of the user data is the inference result.
  • gNB200 creates a trained model that infers whether it is a transmission timing when information representing a timing (or code) is input. For example, with this trained model, when "10:40:30" (timing) is input, it can infer "transmission timing" or "non-transmission timing" (whether it is a transmission timing).
  • step S704 gNB200 switches from learning mode to inference mode.
  • step S705 gNB200 transmits the trained model created in step S703 to UE100.
  • step S706 the gNB 200 uses control data to transmit, to the UE 100, information indicating the position of the current time relative to the reference timing.
  • the information may be information indicating the current time.
  • step S707 discontinuous reception is started according to the DRX setting from gNB200 (step S701).
  • UE100 infers the transmission timing of user data using the learned model received in step S705.
  • UE100 may set the transmission timing as the reception timing.
  • UE100 determines the reception timing (or the transmission timing) based on the information indicating the position of the current time relative to the reference timing (step S706).
  • UE100 may set the reception timing to a time with a margin time secured for the transmission timing.
  • Figure 29 shows an example in which a margin time is secured for the reception timing with respect to the transmission timing.
  • the margin time may be set to UE100 by gNB200 using control data in step S701.
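The margin simply pulls the reception timing earlier than the inferred transmission timing (cf. FIG. 29). A one-line sketch, with illustrative millisecond values (the specification does not fix the unit):

```python
def reception_timing(tx_timing_ms, margin_ms):
    """Open the reception window a configured margin ahead of the
    inferred transmission timing (never before time zero)."""
    return max(0, tx_timing_ms - margin_ms)

rx = reception_timing(tx_timing_ms=1000, margin_ms=2)  # wake up at 998 ms
```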
  • step S709 UE 100 receives DL data at the transmission timing (or reception timing) inferred in step S708.
  • the gNB 200 creates a trained model that infers the transmission timing of DL data.
  • the UE 100 may create a trained model that infers the transmission timing of UL data.
  • the UE 100 performs model learning using information representing the execution of transmission of user data and the transmission timing (or code) of the user data as learning data.
  • the UE 100 creates a trained model in which the transmission timing of the user data is the inference result, using information representing the execution of transmission of user data (UL data) as inference data, by model learning.
  • the UE 100 transmits the trained model to the gNB 200, and the gNB 200 infers the transmission timing of the UL data using the trained model.
  • the gNB 200 receives the UL data from the UE 100 at the inferred transmission timing (or reception timing).
  • it is assumed that the gNB 200 detects a deviation between the actual uplink data timing and the trained model. It is also assumed that a case is detected in which downlink data cannot be transmitted from the gNB 200 to the UE 100 due to the deviation, and the transmission buffer accumulates. In other words, it is assumed that a deviation is detected between the trained model and the actual operation. In such a case, as a relief measure, the gNB 200 may stop the use of the trained model for the UE 100 from the next reception timing of the UE 100. Alternatively, as a relief measure, the gNB 200 may start transmitting another trained model in order to switch to it from the currently used trained model.
  • UE100 may notify gNB200 of the situation and request the suspension of use of the trained model or the transmission of another trained model.
  • the notification and the request may be transmitted using an RRC message (or a newly defined message) or the like.
  • gNB200 may specify in advance the implementation conditions of the notification, such as notifying gNB200 of the absence of downlink data at a predetermined number of reception timings (e.g., five times) for UE100.
  • the notification and the specification of the implementation conditions may also be transmitted using an RRC message (or a newly defined message) or the like.
  • in the first and second embodiments, supervised learning has been mainly described, but the present invention is not limited thereto.
  • the first to third embodiments may be applied to unsupervised learning or reinforcement learning.
  • Each of the above-mentioned operation flows can be implemented not only separately but also by combining two or more operation flows. For example, some steps of one operation flow can be added to another operation flow, or some steps of one operation flow can be replaced with some steps of another operation flow. In each flow, it is not necessary to execute all steps, and only some of the steps can be executed.
  • although an example in which the base station is an NR base station (gNB) has been described, the base station may be an LTE base station (eNB) or a 6G base station.
  • the base station may also be a relay node such as an IAB (Integrated Access and Backhaul) node.
  • the base station may be a DU of an IAB node.
  • the UE 100 may also be an MT (Mobile Termination) of an IAB node.
  • the term "network node" primarily refers to a base station, but may also refer to a core network device or part of a base station (CU, DU, or RU).
  • a network node may also be composed of a combination of at least part of a core network device and at least part of a base station.
  • a program e.g., an information processing program that causes a computer to execute each process or each function according to the above-mentioned embodiment may be provided.
  • a program e.g., a mobile communication program that causes the mobile communication system 1 to execute each process or each function according to the above-mentioned embodiment may be provided.
  • the program may be recorded in a computer-readable medium. Using a computer-readable medium, it is possible to install the program in a computer.
  • the computer-readable medium on which the program is recorded may be a non-transitory recording medium.
  • the non-transitory recording medium is not particularly limited, and may be, for example, a recording medium such as a CD-ROM or a DVD-ROM.
  • Such a recording medium may be a memory included in the UE 100 and the gNB 200.
  • a circuit that executes each process performed by the UE 100 or the gNB 200 may be integrated, and at least a part of the UE 100 or the gNB 200 may be configured as a semiconductor integrated circuit (chip set, SoC: System on a chip).
  • a processor includes transistors and other circuits and is considered to be circuitry or processing circuitry.
  • a processor may be a programmed processor that executes a program stored in a memory.
  • the terms "circuitry", "unit", and "means" refer to hardware that is programmed to realize the described functions or hardware that executes those functions.
  • the hardware may be any hardware disclosed herein or any hardware known to be programmed or capable of performing the described functions. If the hardware is a processor considered to be a type of circuitry, the circuitry, means, or unit is a combination of hardware and software used to configure the hardware and/or processor.
  • the terms “based on” and “depending on/in response to” do not mean “based only on” or “only in response to” unless otherwise specified.
  • the term “based on” means both “based only on” and “based at least in part on”.
  • the term "in response to" means both "only in response to" and "at least partially in response to".
  • the terms "include", "comprise", and variations thereof do not mean including only the recited items; they may include only the recited items, or may include additional items in addition to the recited items.
  • the term “or” as used in this disclosure is not intended to mean an exclusive or.
  • a communication control method in a mobile communication system, comprising: a transmitting entity creating a trained model using predetermined data and a code representing the predetermined data as training data; the transmitting entity transmitting the trained model to a receiving entity; the transmitting entity inferring the code from the predetermined data using the trained model; the transmitting entity transmitting the code to the receiving entity; and the receiving entity obtaining the predetermined data from the code using the trained model.
  • the transmitting entity is a user equipment and the receiving entity is a network node;
  • CQI: Channel Quality Indicator
  • PMI: Precoding Matrix Indicator
  • RI: Rank Indicator
  • the transmitting entity is a user equipment and the receiving entity is a network node;
  • the communication control method according to any one of Supplementary Note 1 to Supplementary Note 4, wherein the creating step includes a step in which the user device creates the trained model using the code including user device identification information.
  • the transmitting entity is a user equipment and the receiving entity is a network node;
  • the method further comprises the step of: the network node configuring the user equipment with a range to be used for the code;
  • the communication control method according to any one of Supplementary Note 1 to Supplementary Note 5, wherein the creating step includes a step in which the user device creates the trained model using the specified data and the code within the set range as the training data.
  • a communication control method in a mobile communication system, comprising: a network node creating a trained model for inferring data transmission timing in discontinuous reception; the network node transmitting the trained model to a user equipment; the user equipment inferring a transmission timing of the data using the trained model; and the user equipment performing a reception process of the data at the timing to transmit the data.
  • the transmission timing represents an elapsed time from a reference timing
  • 1: Mobile communication system, 20: 5GC (CN), 100: UE, 110: Receiving unit, 120: Transmitting unit, 130: Control unit, 131: CSI generating unit, 132: Optimal beam determining unit, 133: Position information generating unit, 135: Code generating unit, 150: GNSS receiver, 200: gNB, 210: Transmitter, 220: Receiver, 230: Controller, 231: CSI generator, 236: Timing generator, 240: User data generator, A1: Data collector, A2: Model learning unit, A3: Model inference unit, A4: Data processor, TE: Transmitting entity, RE: Receiving entity


Abstract

Provided is a communication control method in a mobile communication system. The communication control method has a step in which a transmission entity creates a trained model by using prescribed data and a code indicating the prescribed data as training data. Additionally, the communication control method has a step in which the transmission entity transmits the trained model to a reception entity. Further, the communication control method has a step in which the transmission entity infers a code from the prescribed data by using the trained model. Furthermore, the communication control method has a step in which the transmission entity transmits the code to the reception entity. Moreover, the communication control method has a step in which the reception entity acquires the prescribed data from the code by using the trained model.

Description

Communication Control Method

This disclosure relates to a communication control method.

In recent years, the Third Generation Partnership Project (3GPP) (registered trademark), a standardization project for mobile communications systems, has been considering applying artificial intelligence (AI) technology, and in particular machine learning (ML) technology, to the wireless communications (air interface) of mobile communications systems.

3GPP contribution: RP-213599, "New SI: Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface"

A communication control method according to one embodiment is a communication control method in a mobile communication system. The communication control method includes a step in which a transmitting entity creates a trained model using predetermined data and a code representing the predetermined data as training data. The communication control method also includes a step in which the transmitting entity transmits the trained model to a receiving entity. The communication control method further includes a step in which the transmitting entity infers a code from the predetermined data using the trained model. The communication control method further includes a step in which the transmitting entity transmits the code to the receiving entity. The communication control method further includes a step in which the receiving entity obtains the predetermined data from the code using the trained model.

A communication control method according to one embodiment is a communication control method in a mobile communication system. The communication control method includes a step in which a network node creates a trained model for inferring data transmission timing in discontinuous reception. The communication control method also includes a step in which the network node transmits the trained model to a user device. The communication control method further includes a step in which the user device infers the data transmission timing using the trained model. The communication control method further includes a step in which the user device performs data reception processing at the data transmission timing.

FIG. 1 is a diagram showing an example of the configuration of a mobile communication system according to the first embodiment. FIG. 2 is a diagram illustrating an example of the configuration of a UE (user equipment) according to the first embodiment. FIG. 3 is a diagram showing an example configuration of a gNB (base station) according to the first embodiment. FIG. 4 is a diagram illustrating an example of the configuration of a protocol stack according to the first embodiment. FIG. 5 is a diagram illustrating an example of the configuration of a protocol stack according to the first embodiment. FIG. 6 is a diagram illustrating an example of a functional block configuration of the AI/ML technology according to the first embodiment. FIG. 7 is a diagram illustrating an example of an operation in the AI/ML technique according to the first embodiment. FIG. 8 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment. FIG. 9 is a diagram illustrating an example of reducing CSI-RS according to the first embodiment. FIG. 10 is a diagram illustrating an example of reducing CSI-RS according to the first embodiment. FIG. 11 is a diagram illustrating an example of an operation according to the first embodiment. FIG. 12 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment. FIG. 13 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment. FIG. 14 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment. FIG. 15 is a diagram illustrating an example of an operation according to the first embodiment. FIG. 16 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment. FIG. 17 is a diagram illustrating an example of an operation according to the first embodiment. FIG. 18 is a diagram illustrating an example of an operation according to the first embodiment. FIG. 19 is a diagram illustrating an example of a setting message according to the first embodiment. FIG. 20 is a diagram showing the correspondence between codes and CSI according to the first embodiment. FIG. 21 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment. FIG. 22 is a diagram illustrating an example of an operation according to the first embodiment. FIG. 23 is a diagram illustrating another operation example according to the first embodiment. FIG. 24 is a diagram showing the correspondence between codes and CSI according to the first embodiment. FIG. 25 is a diagram illustrating another operation example according to the first embodiment. FIG. 26 is a diagram illustrating an example of transmission timing and reception timing according to the second embodiment. FIG. 27 is a diagram showing an example of an arrangement of functional blocks of the AI/ML technology according to the second embodiment. FIG. 28 is a diagram illustrating an example of operation according to the second embodiment. FIG. 29 is a diagram illustrating an example of a margin time according to the second embodiment.

The purpose of this disclosure is to provide a communication control method that can reduce the amount of information.

[First embodiment]
The mobile communication system according to the first embodiment will be described with reference to the drawings. In the description of the drawings, the same or similar parts are denoted by the same or similar reference numerals.

 (移動通信システムの構成)
 第1実施形態に係る移動通信システムの構成について説明する。図1は、第1実施形態に係る移動通信システム1の構成例を示す図である。移動通信システム1は、3GPP規格の第5世代システム(5GS:5th Generation System)に準拠する。以下において、5GSを例に挙げて説明するが、移動通信システムには、LTE(Long Term Evolution)システムが少なくとも部分的に適用されてもよい。移動通信システムには、第6世代(6G)システム以降のシステムが少なくとも部分的に適用されてもよい。
(Configuration of a mobile communication system)
A configuration of a mobile communication system according to the first embodiment will be described. FIG. 1 is a diagram showing an example of the configuration of a mobile communication system 1 according to the first embodiment. The mobile communication system 1 complies with the 5th generation system (5GS: 5th Generation System) of the 3GPP standard. In the following, 5GS will be described as an example, but an LTE (Long Term Evolution) system may be applied at least partially to the mobile communication system. A sixth generation (6G) system or later system may be applied at least partially to the mobile communication system.

 移動通信システム1は、ユーザ装置(UE:User Equipment)100と、5Gの無線アクセスネットワーク(NG-RAN:Next Generation Radio Access Network)10と、5Gのコアネットワーク(5GC:5G Core Network)20とを有する。以下において、NG-RAN10を単にRAN10と呼ぶことがある。また、5GC20を単にコアネットワーク(CN)20と呼ぶことがある。 The mobile communication system 1 has a user equipment (UE) 100, a 5G radio access network (NG-RAN: Next Generation Radio Access Network) 10, and a 5G core network (5GC: 5G Core Network) 20. In the following, the NG-RAN 10 may be simply referred to as the RAN 10. Also, the 5GC 20 may be simply referred to as the core network (CN) 20.

 UE100は、移動可能な無線通信装置である。UE100は、ユーザにより利用される装置であればどのような装置であっても構わない。例えば、UE100は、携帯電話端末(スマートフォンを含む)及び/又はタブレット端末、ノートPC、通信モジュール(通信カード又はチップセットを含む)、センサ若しくはセンサに設けられる装置、車両若しくは車両に設けられる装置(Vehicle UE)、飛行体若しくは飛行体に設けられる装置(Aerial UE)である。 UE100 is a mobile wireless communication device. UE100 may be any device that is used by a user. For example, UE100 is a mobile phone terminal (including a smartphone) and/or a tablet terminal, a notebook PC, a communication module (including a communication card or chipset), a sensor or a device provided in a sensor, a vehicle or a device provided in a vehicle (Vehicle UE), or an aircraft or a device provided in an aircraft (Aerial UE).

 NG-RAN10は、基地局(5Gシステムにおいて「gNB」と呼ばれる)200を含む。gNB200は、基地局間インターフェイスであるXnインターフェイスを介して相互に接続される。gNB200は、1又は複数のセルを管理する。gNB200は、自セルとの接続を確立したUE100との無線通信を行う。gNB200は、無線リソース管理(RRM)機能、ユーザデータ(以下、単に「データ」という)のルーティング機能、モビリティ制御・スケジューリングのための測定制御機能等を有する。「セル」は、無線通信エリアの最小単位を示す用語として用いられる。「セル」は、UE100との無線通信を行う機能又はリソースを示す用語としても用いられる。1つのセルは1つのキャリア周波数(以下、単に「周波数」と呼ぶ)に属する。 NG-RAN10 includes base station (called "gNB" in 5G system) 200. gNB200 are connected to each other via Xn interface, which is an interface between base stations. gNB200 manages one or more cells. gNB200 performs wireless communication with UE100 that has established a connection with its own cell. gNB200 has a radio resource management (RRM) function, a routing function for user data (hereinafter simply referred to as "data"), a measurement control function for mobility control and scheduling, etc. "Cell" is used as a term indicating the smallest unit of a wireless communication area. "Cell" is also used as a term indicating a function or resource for performing wireless communication with UE100. One cell belongs to one carrier frequency (hereinafter simply referred to as "frequency").

 なお、gNBがLTEのコアネットワークであるEPC(Evolved Packet Core)に接続することもできる。LTEの基地局が5GCに接続することもできる。LTEの基地局とgNBとが基地局間インターフェイスを介して接続されることもできる。 In addition, gNBs can also be connected to the Evolved Packet Core (EPC), which is the core network of LTE. LTE base stations can also be connected to 5GC. LTE base stations and gNBs can also be connected via a base station-to-base station interface.

 5GC20は、AMF(Access and Mobility Management Function)及びUPF(User Plane Function)300を含む。AMFは、UE100に対する各種モビリティ制御等を行う。AMFは、NAS(Non-Access Stratum)シグナリングを用いてUE100と通信することにより、UE100のモビリティを管理する。UPFは、データの転送制御を行う。AMF及びUPF300は、基地局-コアネットワーク間インターフェイスであるNGインターフェイスを介してgNB200と接続される。AMF及びUPF300は、CN20に含まれるコアネットワーク装置であってもよい。 5GC20 includes an AMF (Access and Mobility Management Function) and a UPF (User Plane Function) 300. The AMF performs various mobility controls for the UE 100. The AMF manages the mobility of the UE 100 by communicating with the UE 100 using NAS (Non-Access Stratum) signaling. The UPF controls data forwarding. The AMF and the UPF 300 are connected to the gNB 200 via an NG interface, which is an interface between a base station and a core network. The AMF and the UPF 300 may be core network devices included in the CN 20.

 図2は、第1実施形態に係るUE100(ユーザ装置)の構成例を示す図である。UE100は、受信部110、送信部120、及び制御部130を備える。受信部110及び送信部120は、gNB200との無線通信を行う通信部を構成する。UE100は、通信装置の一例である。 FIG. 2 is a diagram showing an example of the configuration of a UE 100 (user equipment) according to the first embodiment. The UE 100 includes a receiver 110, a transmitter 120, and a controller 130. The receiver 110 and the transmitter 120 constitute a communication unit that performs wireless communication with the gNB 200. The UE 100 is an example of a communication device.

 受信部110は、制御部130の制御下で各種の受信を行う。受信部110は、アンテナ及び受信機を含む。受信機は、アンテナが受信する無線信号をベースバンド信号(受信信号)に変換して制御部130に出力する。 The receiving unit 110 performs various types of reception under the control of the control unit 130. The receiving unit 110 includes an antenna and a receiver. The receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 130.

 送信部120は、制御部130の制御下で各種の送信を行う。送信部120は、アンテナ及び送信機を含む。送信機は、制御部130が出力するベースバンド信号(送信信号)を無線信号に変換してアンテナから送信する。 The transmitting unit 120 performs various transmissions under the control of the control unit 130. The transmitting unit 120 includes an antenna and a transmitter. The transmitter converts the baseband signal (transmission signal) output by the control unit 130 into a radio signal and transmits it from the antenna.

 制御部130は、UE100における各種の制御及び処理を行う。このような処理は、後述の各レイヤの処理を含む。制御部130は、少なくとも1つのプロセッサ及び少なくとも1つのメモリを含む。メモリは、プロセッサにより実行されるプログラム、及びプロセッサによる処理に用いられる情報を記憶する。プロセッサは、ベースバンドプロセッサと、CPU(Central Processing Unit)とを含んでもよい。ベースバンドプロセッサは、ベースバンド信号の変調・復調及び符号化・復号等を行う。CPUは、メモリに記憶されるプログラムを実行して各種の処理を行う。なお、UE100で行われる処理又は動作は、制御部130において行われてもよい。 The control unit 130 performs various controls and processes in the UE 100. Such processes include the processes of each layer described below. The control unit 130 includes at least one processor and at least one memory. The memory stores programs executed by the processor and information used in the processes by the processor. The processor may include a baseband processor and a CPU (Central Processing Unit). The baseband processor performs modulation/demodulation and encoding/decoding of baseband signals. The CPU executes programs stored in the memory to perform various processes. Note that the processes or operations performed in the UE 100 may be performed in the control unit 130.

 図3は、第1実施形態に係るgNB200(基地局)の構成例を示す図である。gNB200は、送信部210、受信部220、制御部230、及びバックホール通信部250を備える。送信部210及び受信部220は、UE100との無線通信を行う通信部を構成する。バックホール通信部250は、CN20との通信を行うネットワーク通信部を構成する。gNB200は、通信装置の他の例である。 FIG. 3 is a diagram showing an example of the configuration of a gNB 200 (base station) according to the first embodiment. The gNB 200 includes a transmitter 210, a receiver 220, a controller 230, and a backhaul communication unit 250. The transmitter 210 and the receiver 220 constitute a communication unit that performs wireless communication with the UE 100. The backhaul communication unit 250 constitutes a network communication unit that performs communication with the CN 20. The gNB 200 is another example of a communication device.

 送信部210は、制御部230の制御下で各種の送信を行う。送信部210は、アンテナ及び送信機を含む。送信機は、制御部230が出力するベースバンド信号(送信信号)を無線信号に変換してアンテナから送信する。 The transmitting unit 210 performs various transmissions under the control of the control unit 230. The transmitting unit 210 includes an antenna and a transmitter. The transmitter converts the baseband signal (transmission signal) output by the control unit 230 into a radio signal and transmits it from the antenna.

 受信部220は、制御部230の制御下で各種の受信を行う。受信部220は、アンテナ及び受信機を含む。受信機は、アンテナが受信する無線信号をベースバンド信号(受信信号)に変換して制御部230に出力する。 The receiving unit 220 performs various types of reception under the control of the control unit 230. The receiving unit 220 includes an antenna and a receiver. The receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 230.

 制御部230は、gNB200における各種の制御及び処理を行う。このような処理は、後述の各レイヤの処理を含む。制御部230は、少なくとも1つのプロセッサ及び少なくとも1つのメモリを含む。メモリは、プロセッサにより実行されるプログラム、及びプロセッサによる処理に用いられる情報を記憶する。プロセッサは、ベースバンドプロセッサと、CPUとを含んでもよい。ベースバンドプロセッサは、ベースバンド信号の変調・復調及び符号化・復号等を行う。CPUは、メモリに記憶されるプログラムを実行して各種の処理を行う。なお、gNB200で行われる処理又は動作は、制御部230で行われてもよい。 The control unit 230 performs various controls and processes in the gNB 200. Such processes include the processes of each layer described below. The control unit 230 includes at least one processor and at least one memory. The memory stores programs executed by the processor and information used in the processes by the processor. The processor may include a baseband processor and a CPU. The baseband processor performs modulation/demodulation and encoding/decoding of baseband signals. The CPU executes programs stored in the memory to perform various processes. Note that the processes or operations performed in the gNB 200 may be performed by the control unit 230.

 バックホール通信部250は、基地局間インターフェイスであるXnインターフェイスを介して隣接基地局と接続される。バックホール通信部250は、基地局-コアネットワーク間インターフェイスであるNGインターフェイスを介してAMF/UPF300と接続される。なお、gNB200は、セントラルユニット(CU)と分散ユニット(DU)とで構成され(すなわち、機能分割され)、両ユニット間がフロントホールインターフェイスであるF1インターフェイスで接続されてもよい。 The backhaul communication unit 250 is connected to adjacent base stations via an Xn interface, which is an interface between base stations. The backhaul communication unit 250 is connected to the AMF/UPF 300 via an NG interface, which is an interface between a base station and a core network. Note that the gNB 200 may be composed of a central unit (CU) and a distributed unit (DU) (i.e., functionally divided), and the two units may be connected via an F1 interface, which is a fronthaul interface.

 図4は、データを取り扱うユーザプレーンの無線インターフェイスのプロトコルスタックの構成例を示す図である。 Figure 4 shows an example of the protocol stack configuration for the wireless interface of the user plane that handles data.

 ユーザプレーンの無線インターフェイスプロトコルは、物理(PHY)レイヤと、媒体アクセス制御(MAC)レイヤと、無線リンク制御(RLC)レイヤと、パケットデータコンバージェンスプロトコル(PDCP)レイヤと、サービスデータアダプテーションプロトコル(SDAP)レイヤとを有する。 The user plane radio interface protocol has a physical (PHY) layer, a medium access control (MAC) layer, a radio link control (RLC) layer, a packet data convergence protocol (PDCP) layer, and a service data adaptation protocol (SDAP) layer.

 PHYレイヤは、符号化・復号、変調・復調、アンテナマッピング・デマッピング、及びリソースマッピング・デマッピングを行う。UE100のPHYレイヤとgNB200のPHYレイヤとの間では、物理チャネルを介してデータ及び制御情報が伝送される。なお、UE100のPHYレイヤは、gNB200から物理下りリンク制御チャネル(PDCCH)上で送信される下りリンク制御情報(DCI)を受信する。具体的には、UE100は、無線ネットワーク一時識別子(RNTI)を用いてPDCCHのブラインド復号を行い、復号に成功したDCIを自UE宛てのDCIとして取得する。gNB200から送信されるDCIには、RNTIによってスクランブルされたCRC(Cyclic Redundancy Code)パリティビットが付加されている。 The PHY layer performs encoding/decoding, modulation/demodulation, antenna mapping/demapping, and resource mapping/demapping. Data and control information are transmitted between the PHY layer of UE100 and the PHY layer of gNB200 via a physical channel. The PHY layer of UE100 receives downlink control information (DCI) transmitted from gNB200 on a physical downlink control channel (PDCCH). Specifically, UE100 performs blind decoding of PDCCH using a radio network temporary identifier (RNTI) and acquires successfully decoded DCI as DCI addressed to the UE. The DCI transmitted from gNB200 has CRC (Cyclic Redundancy Code) parity bits scrambled by the RNTI added.
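The RNTI-based blind decoding described above can be sketched as follows. This is an illustrative simplification, not the 3GPP procedure: a CRC-32 from Python's standard library stands in for the 24-bit CRC actually attached to the PDCCH, and the 16-bit RNTI is XORed directly into that CRC.

```python
import zlib

def attach_scrambled_crc(dci_payload: bytes, rnti: int) -> bytes:
    # gNB side: compute a CRC over the DCI payload and scramble it with
    # the 16-bit RNTI (simplified: CRC-32 stands in for the 24-bit CRC
    # used on the actual PDCCH).
    crc = zlib.crc32(dci_payload) ^ rnti
    return dci_payload + crc.to_bytes(4, "big")

def try_decode(candidate: bytes, my_rnti: int):
    # UE side: descramble the received CRC with its own RNTI and check it.
    # A match means the DCI is addressed to this UE; otherwise the
    # candidate is discarded and the next one is tried (blind decoding).
    payload, crc = candidate[:-4], int.from_bytes(candidate[-4:], "big")
    if zlib.crc32(payload) ^ my_rnti == crc:
        return payload
    return None

dci = attach_scrambled_crc(b"\x5a\x01\x7f", rnti=0x4601)
matched = try_decode(dci, 0x4601)    # CRC check succeeds: DCI for this UE
unmatched = try_decode(dci, 0x1234)  # CRC check fails: candidate discarded
```

In the real procedure the UE repeats this check over a set of PDCCH candidates and over each RNTI it has been assigned; only candidates whose descrambled CRC passes are treated as DCI addressed to the UE.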

 NRでは、UE100は、システム帯域幅(すなわち、セルの帯域幅)よりも狭い帯域幅を使用できる。gNB200は、連続するPRB(Physical Resource Block)からなる帯域幅部分(BWP)をUE100に設定する。UE100は、アクティブなBWPにおいてデータ及び制御信号を送受信する。UE100には、例えば、最大4つのBWPが設定可能であってもよい。各BWPは、異なるサブキャリア間隔を有していてもよい。当該各BWPは、周波数が相互に重複していてもよい。UE100に対して複数のBWPが設定されている場合、gNB200は、下りリンクにおける制御によって、どのBWPを適用するかを指定できる。これにより、gNB200は、UE100のデータトラフィックの量等に応じてUE帯域幅を動的に調整し、UE電力消費を減少させる。 In NR, UE100 can use a bandwidth narrower than the system bandwidth (i.e., the cell bandwidth). gNB200 sets a bandwidth portion (BWP) consisting of consecutive PRBs (Physical Resource Blocks) to UE100. UE100 transmits and receives data and control signals in the active BWP. For example, up to four BWPs may be set to UE100. Each BWP may have a different subcarrier spacing. The BWPs may overlap each other in frequency. When multiple BWPs are set to UE100, gNB200 can specify which BWP to apply by controlling the downlink. As a result, gNB200 dynamically adjusts the UE bandwidth according to the amount of data traffic of UE100, etc., thereby reducing UE power consumption.

 gNB200は、例えば、サービングセル上の最大4つのBWPのそれぞれに最大3つの制御リソースセット(CORESET:control resource set)を設定できる。CORESETは、UE100が受信すべき制御情報のための無線リソースである。UE100には、サービングセル上で最大12個又はそれ以上のCORESETが設定されてもよい。各CORESETは、0乃至11又はそれ以上のインデックスを有してもよい。CORESETは、6つのリソースブロック(PRB)と、時間領域内の1つ、2つ、又は3つの連続するOFDM(Orthogonal Frequency Division Multiplex)シンボルとにより構成されてもよい。 The gNB200 can, for example, configure up to three control resource sets (CORESETs) for each of up to four BWPs on the serving cell. The CORESET is a radio resource for control information to be received by the UE100. Up to 12 or more CORESETs may be configured on the serving cell for the UE100. Each CORESET may have an index of 0 to 11 or more. The CORESET may consist of six resource blocks (PRBs) and one, two, or three consecutive OFDM (Orthogonal Frequency Division Multiplex) symbols in the time domain.

 MACレイヤは、データの優先制御、ハイブリッドARQ(HARQ:Hybrid Automatic Repeat reQuest)による再送処理、及びランダムアクセスプロシージャ等を行う。UE100のMACレイヤとgNB200のMACレイヤとの間では、トランスポートチャネルを介してデータ及び制御情報が伝送される。gNB200のMACレイヤはスケジューラを含む。スケジューラは、上下リンクのトランスポートフォーマット(トランスポートブロックサイズ、変調・符号化方式(MCS:Modulation and Coding Scheme))及びUE100への割当リソースブロックを決定する。 The MAC layer performs data priority control, retransmission processing using Hybrid Automatic Repeat reQuest (HARQ), and random access procedures. Data and control information are transmitted between the MAC layer of UE100 and the MAC layer of gNB200 via a transport channel. The MAC layer of gNB200 includes a scheduler. The scheduler determines the uplink and downlink transport format (transport block size, modulation and coding scheme (MCS)) and the resource blocks to be assigned to UE100.

 RLCレイヤは、MACレイヤ及びPHYレイヤの機能を利用してデータを受信側のRLCレイヤに伝送する。UE100のRLCレイヤとgNB200のRLCレイヤとの間では、論理チャネルを介してデータ及び制御情報が伝送される。 The RLC layer uses the functions of the MAC layer and PHY layer to transmit data to the RLC layer on the receiving side. Data and control information are transmitted between the RLC layer of UE100 and the RLC layer of gNB200 via logical channels.

 PDCPレイヤは、ヘッダ圧縮・伸張、及び暗号化・復号化等を行う。 The PDCP layer performs header compression/decompression, encryption/decryption, etc.

 SDAPレイヤは、コアネットワークがQoS(Quality of Service)制御を行う単位であるIPフローとアクセス層(AS:Access Stratum)がQoS制御を行う単位である無線ベアラとのマッピングを行う。なお、RANがEPCに接続される場合は、SDAPが無くてもよい。 The SDAP layer maps IP flows, which are the units for which the core network controls QoS (Quality of Service), to radio bearers, which are the units for which the access stratum (AS) controls QoS. Note that if the RAN is connected to the EPC, SDAP is not necessary.

 図5は、シグナリング(制御信号)を取り扱う制御プレーンの無線インターフェイスのプロトコルスタックの構成を示す図である。 Figure 5 shows the configuration of the protocol stack for the wireless interface of the control plane that handles signaling (control signals).

 制御プレーンの無線インターフェイスのプロトコルスタックは、図4に示したSDAPレイヤに代えて、無線リソース制御(RRC)レイヤ及び非アクセス層(NAS:Non-Access Stratum)を有する。 The protocol stack of the radio interface of the control plane has a radio resource control (RRC) layer and a non-access stratum (NAS) instead of the SDAP layer shown in Figure 4.

 UE100のRRCレイヤとgNB200のRRCレイヤとの間では、各種設定のためのRRCシグナリングが伝送される。RRCレイヤは、無線ベアラの確立、再確立及び解放に応じて、論理チャネル、トランスポートチャネル、及び物理チャネルを制御する。UE100のRRCとgNB200のRRCとの間にコネクション(RRCコネクション)がある場合、UE100はRRCコネクティッド状態にある。UE100のRRCとgNB200のRRCとの間にコネクション(RRCコネクション)がない場合、UE100はRRCアイドル状態にある。UE100のRRCとgNB200のRRCとの間のコネクションがサスペンドされている場合、UE100はRRCインアクティブ状態にある。 RRC signaling for various settings is transmitted between the RRC layer of UE100 and the RRC layer of gNB200. The RRC layer controls logical channels, transport channels, and physical channels in response to the establishment, re-establishment, and release of radio bearers. When there is a connection (RRC connection) between the RRC of UE100 and the RRC of gNB200, UE100 is in an RRC connected state. When there is no connection (RRC connection) between the RRC of UE100 and the RRC of gNB200, UE100 is in an RRC idle state. When the connection between the RRC of UE100 and the RRC of gNB200 is suspended, UE100 is in an RRC inactive state.

 RRCレイヤよりも上位に位置するNASは、セッション管理及びモビリティ管理等を行う。UE100のNASとAMF300のNASとの間では、NASシグナリングが伝送される。なお、UE100は、無線インターフェイスのプロトコル以外にアプリケーションレイヤ等を有する。また、NASよりも下位のレイヤをAS(Access Stratum)と呼ぶ。 The NAS, which is located above the RRC layer, performs session management, mobility management, etc. NAS signaling is transmitted between the NAS of UE100 and the NAS of AMF300. In addition to the radio interface protocol, UE100 also has an application layer, etc. The layer below the NAS is called the AS (Access Stratum).

 (AI/ML技術)
 次に、実施形態に係るAI/ML技術について説明する。図6は、第1実施形態に係る移動通信システム1におけるAI/ML技術の機能ブロックの構成例を示す図である。
(AI/ML technology)
Next, the AI/ML technology according to the embodiment will be described. Fig. 6 is a diagram showing an example of the configuration of functional blocks of the AI/ML technology in the mobile communication system 1 according to the first embodiment.

 図6に示す機能のブロック構成例は、データ収集部A1と、モデル学習部A2と、モデル推論部A3と、データ処理部A4とを有する。 The functional block configuration example shown in FIG. 6 includes a data collection unit A1, a model learning unit A2, a model inference unit A3, and a data processing unit A4.

 データ収集部A1は、入力データ、具体的には、学習用データ及び推論用データを収集する。データ収集部A1は、学習用データをモデル学習部A2へ出力する。また、データ収集部A1は、推論用データをモデル推論部A3へ出力する。データ収集部A1は、データ収集部A1が設けられる自装置におけるデータを入力データとして取得してもよい。データ収集部A1は、別の装置におけるデータを入力データとして取得してもよい。 The data collection unit A1 collects input data, specifically, learning data and inference data. The data collection unit A1 outputs the learning data to the model learning unit A2. The data collection unit A1 also outputs the inference data to the model inference unit A3. The data collection unit A1 may acquire data in the device on which the data collection unit A1 is provided as input data. The data collection unit A1 may acquire data in another device as input data.

 モデル学習部A2は、モデル学習を行う。具体的には、モデル学習部A2は、学習用データを用いた機械学習により学習モデルのパラメータを最適化し、学習済みモデルを導出(又は生成、又は更新)する。モデル学習部A2は、導出した学習済みモデルをモデル推論部A3に出力する。例えば、
 y=ax+b
で考えると、a(傾き)及びb(切片)がパラメータであって、これらを最適化していくことが機械学習に相当する。一般的に、機械学習には、教師あり学習(supervised learning)、教師なし学習(unsupervised learning)、及び強化学習(reinforcement learning)がある。教師あり学習は、学習用データに正解データを用いる方法である。教師なし学習は、学習用データに正解データを用いない方法である。例えば、教師なし学習では、大量の学習用データから特徴点を覚え、正解の判断(範囲の推定)を行う。強化学習は、出力結果にスコアを付けて、スコアを最大化する方法を学習する方法である。以下では、教師あり学習について説明するが、機械学習としては、教師なし学習が適用されてもよいし、強化学習が適用されてもよい。
The model learning unit A2 performs model learning. Specifically, the model learning unit A2 optimizes parameters of a learning model by machine learning using learning data, and derives (or generates, or updates) a learned model. The model learning unit A2 outputs the derived learned model to the model inference unit A3. For example,
y = ax + b
In this case, a (slope) and b (intercept) are parameters, and optimizing these corresponds to machine learning. Generally, machine learning includes supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is a method in which correct answer data is used for learning data. Unsupervised learning is a method in which correct answer data is not used for learning data. For example, in unsupervised learning, feature points are memorized from a large amount of learning data, and a correct answer is determined (range is estimated). Reinforcement learning is a method in which a score is assigned to an output result, and a method of maximizing the score is learned. In the following, supervised learning will be described, but as machine learning, unsupervised learning or reinforcement learning may be applied.
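The parameter optimization described above can be illustrated with a minimal sketch (not part of the embodiment): fitting the slope a and intercept b of y = ax + b by gradient descent on the squared error, using training data that pairs each input with its correct answer, i.e. supervised learning.

```python
def fit_linear(xs, ys, lr=0.01, epochs=5000):
    # Optimize the parameters a (slope) and b (intercept) of y = a*x + b
    # by gradient descent on the mean squared error; this optimization
    # loop corresponds to the model learning performed by model learning
    # unit A2.
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Supervised learning: the training data pairs each input x with its
# correct answer y, here generated from y = 5x + 3.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [5.0 * x + 3.0 for x in xs]
a, b = fit_linear(xs, ys)  # converges toward a = 5, b = 3
```

Unsupervised learning would instead work on the xs alone (e.g. clustering them), and reinforcement learning would adjust a and b to maximize a reward signal rather than to minimize an error against labeled answers.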

 モデル推論部A3は、モデル推論を行う。具体的には、モデル推論部A3は、学習済みモデルを用いて推論用データから出力を推論し、推論結果データをデータ処理部A4に出力する。例えば、
 y=ax+b
で考えると、xが推論用データであって、yが推論結果データに相当する。なお、「y=ax+b」はモデルである。傾き及び切片が最適化されたモデル、例えば「y=5x+3」は学習済みモデルである。ここで、モデルの手法(approach)は様々であり、線形回帰分析、ニューラルネットワーク、決定木分析などがある。上記の「y=ax+b」は線形回帰分析の一種と考えることができる。モデル推論部A3は、モデル学習部A2に対してモデル性能フィードバックを行ってもよい。
The model inference unit A3 performs model inference. Specifically, the model inference unit A3 infers an output from inference data using a trained model, and outputs inference result data to the data processing unit A4. For example,
y = ax + b
In this case, x corresponds to inference data and y corresponds to inference result data. Note that "y = ax + b" is a model. A model with optimized slope and intercept, for example, "y = 5x + 3", is a trained model. There are various approaches to the model, including linear regression analysis, neural network, and decision tree analysis. The above "y = ax + b" can be considered as a type of linear regression analysis. The model inference unit A3 may provide model performance feedback to the model learning unit A2.
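Continuing the same sketch, model inference simply applies the already-optimized (trained) model, here y = 5x + 3 from the example above, to inference data x to obtain inference result data y. The error check at the end is one illustrative form the model performance feedback to model learning unit A2 could take (an assumption; the embodiment does not specify the metric).

```python
def trained_model(x):
    # Trained model: the slope and intercept have already been optimized
    # (here y = 5x + 3, the trained model given in the text).
    return 5.0 * x + 3.0

inference_data = 2.0
inference_result = trained_model(inference_data)  # y = 13.0

# Model performance feedback (illustrative): compare inference results
# against known correct answers and report a mean absolute error.
checks = [(0.0, 3.0), (1.0, 8.0)]
mean_abs_error = sum(abs(trained_model(x) - y) for x, y in checks) / len(checks)
```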

 データ処理部A4は、推論結果データを受け取り、推論結果データを利用する処理を行う。 The data processing unit A4 receives the inference result data and performs processing that utilizes the inference result data.

 図7は、第1実施形態に係るAI/ML技術における動作例を表す図である。 FIG. 7 shows an example of the operation of the AI/ML technology according to the first embodiment.

 送信エンティティTEは、例えば、機械学習が行われるエンティティである。送信エンティティTEでは、機械学習を行って学習済モデルを導出する。そして、送信エンティティTEでは、学習済モデルを用いて推論結果として、推論結果データを生成する。送信エンティティTEでは、当該推論結果データを受信エンティティREへ送信する。 The transmitting entity TE is, for example, an entity where machine learning is performed. The transmitting entity TE performs machine learning to derive a trained model. The transmitting entity TE then uses the trained model to generate inference result data as an inference result. The transmitting entity TE transmits the inference result data to the receiving entity RE.

 一方、受信エンティティREは、例えば、機械学習が行われないエンティティである。受信エンティティREでは、送信エンティティTEから受信した推論結果データを用いて種々の処理を行う。 On the other hand, the receiving entity RE is, for example, an entity in which machine learning is not performed. The receiving entity RE performs various processes using the inference result data received from the transmitting entity TE.


 なお、エンティティとは、例えば、装置であってもよい。当該エンティティとは、装置に含まれる機能ブロックであってもよい。当該エンティティとは、装置に含まれるハードウェアブロックであってもよい。 Note that the entity may be, for example, a device. The entity may be a function block included in the device. The entity may be a hardware block included in the device.

 例えば、送信エンティティTEはUE100であり、受信エンティティREはgNB200又はコアネットワーク装置であってもよい。或いは、送信エンティティTEはgNB200又はコアネットワーク装置であり、受信エンティティREはUE100でもよい。 For example, the transmitting entity TE may be a UE 100, and the receiving entity RE may be a gNB 200 or a core network device. Alternatively, the transmitting entity TE may be a gNB 200 or a core network device, and the receiving entity RE may be a UE 100.

 図7に示すように、ステップS1において、送信エンティティTEは、AI/ML技術に関する制御データを受信エンティティREへ送信したり、当該制御データを受信エンティティREから受信したりする。制御データは、RRCレイヤ(すなわち、レイヤ3)のシグナリングであるRRCメッセージであってもよい。当該制御データは、MACレイヤ(すなわち、レイヤ2)のシグナリングであるMAC CE(Control Element)であってもよい。当該制御データは、PHYレイヤ(すなわち、レイヤ1)のシグナリングである下りリンク制御情報(DCI:Downlink Control Information)であってもよい。下りリンクシグナリングは、UE個別シグナリングであってもよい。当該下りリンクシグナリングは、ブロードキャストシグナリングであってもよい。制御データは、人工知能又は機械学習に特化した制御層(例えばAI/MLレイヤ)における制御メッセージであってもよい。 As shown in FIG. 7, in step S1, the transmitting entity TE transmits control data related to AI/ML technology to the receiving entity RE and receives the control data from the receiving entity RE. The control data may be an RRC message, which is signaling of the RRC layer (i.e., layer 3). The control data may be a MAC Control Element (CE), which is signaling of the MAC layer (i.e., layer 2). The control data may be Downlink Control Information (DCI), which is signaling of the PHY layer (i.e., layer 1). The downlink signaling may be UE-specific signaling. The downlink signaling may be broadcast signaling. The control data may be a control message in a control layer (e.g., an AI/ML layer) specialized for artificial intelligence or machine learning.

 (配置例とユースケース)
 次に、図6に示す各機能ブロックが移動通信システム1においてどのように配置されるかについて説明する。以下では、各機能ブロックの配置例を具体的なユースケースに沿って説明する。
(Deployment examples and use cases)
Next, a description will be given of how the functional blocks shown in Fig. 6 are arranged in the mobile communication system 1. Below, an example of the arrangement of the functional blocks will be described along with a specific use case.

 AI/ML技術で適用されるユースケースとして、例えば、以下の3つがある。 For example, there are three use cases where AI/ML technology can be applied:

 (1.1)「CSI(Channel State Information)フィードバック向上(CSI feedback enhancement)」 (1.1) "CSI (Channel State Information) Feedback Enhancement"

 (1.2)「ビーム管理(Beam management)」 (1.2) "Beam management"

 (1.3)「位置精度向上(Positioning accuracy enhancement)」
 以下、ユースケース毎に機能ブロックの配置例について説明する。
(1.3) “Positioning accuracy enhancement”
Below, an example of the arrangement of functional blocks will be explained for each use case.

 (1.1)「CSIフィードバック向上」における機能ブロックの配置例
 「CSIフィードバック向上」は、例えば、UE100からgNB200へフィードバックされるCSIに機械学習技術を適用した場合のユースケースを表している。CSIは、UE100とgNB200との間の下りリンクにおけるチャネル状態に関する情報である。CSIには、チャネル品質インジケータ(CQI:Channel Quality Indicator)、プリコーディング行列インジケータ(PMI:Precoding Matrix Indicator)、及びランクインジケータ(RI:Rank Indicator)のうち少なくとも1つが含まれる。gNB200は、UE100からCSIフィードバックに基づいて、例えば、下りリンクのスケジューリングを行う。
(1.1) Example of functional block arrangement in "CSI feedback improvement""CSI feedback improvement" represents a use case in which machine learning technology is applied to CSI fed back from UE100 to gNB200, for example. CSI is information on the channel state in the downlink between UE100 and gNB200. CSI includes at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and a rank indicator (RI). Based on the CSI feedback from UE100, gNB200 performs, for example, downlink scheduling.

 図8は、「CSIフィードバック向上」における各機能ブロックの配置例を表す図である。図8に示す「CSIフィードバック向上」の例では、データ収集部A1とモデル学習部A2とモデル推論部A3とが、UE100の制御部130に含まれる。一方、データ処理部A4は、gNB200の制御部230に含まれる。すなわち、UE100においてモデル学習とモデル推論とが行われる。図8は、送信エンティティTEがUE100であり、受信エンティティREがgNB200である例を表している。 Figure 8 is a diagram showing an example of the arrangement of each functional block in "CSI feedback improvement". In the example of "CSI feedback improvement" shown in Figure 8, a data collection unit A1, a model learning unit A2, and a model inference unit A3 are included in the control unit 130 of the UE 100. On the other hand, a data processing unit A4 is included in the control unit 230 of the gNB 200. In other words, model learning and model inference are performed in the UE 100. Figure 8 shows an example in which the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.

 「CSIフィードバック向上」において、gNB200は、下りリンクのチャネル状態をUE100が推定するための参照信号を送信する。参照信号として、以下では、CSI参照信号(CSI-RS)を例にして説明するが、参照信号は復調参照信号(DMRS)であってもよい。 In "improved CSI feedback," the gNB 200 transmits a reference signal for the UE 100 to estimate the downlink channel state. In the following, the reference signal will be described using a CSI reference signal (CSI-RS) as an example, but the reference signal may also be a demodulation reference signal (DMRS).

 第1に、モデル学習において、UE100(受信部110)は、第1リソースを用いてgNB200からの第1参照信号を受信する。そして、UE100(モデル学習部A2)は、第1参照信号とCSIとを含む学習用データを用いて、参照信号からCSIを推論するための学習済みモデルを導出する。このような第1参照信号をフルCSI-RSと称することがある。 First, in model learning, UE100 (receiving unit 110) receives a first reference signal from gNB200 using a first resource. Then, UE100 (model learning unit A2) derives a learned model for inferring CSI from the reference signal using learning data including the first reference signal and CSI. Such a first reference signal may be referred to as a full CSI-RS.

 例えば、CSI生成部131は、受信部110が受信した受信信号(CSI-RS)を用いてチャネル推定を行い、CSIを生成する。送信部120は、生成されたCSIをgNB200に送信する。モデル学習部A2は、受信信号(CSI-RS)とCSIとのセットを学習用データとしてモデル学習を行い、受信信号(CSI-RS)からCSIを推論するための学習済みモデルを導出する。 For example, the CSI generation unit 131 performs channel estimation using the received signal (CSI-RS) received by the receiving unit 110, and generates CSI. The transmitting unit 120 transmits the generated CSI to the gNB 200. The model learning unit A2 performs model learning using a set of the received signal (CSI-RS) and CSI as learning data, and derives a learned model for inferring CSI from the received signal (CSI-RS).

 第2に、モデル推論において、受信部110は、第1リソースよりも少ない第2リソースを用いてgNB200からの第2参照信号を受信する。そして、モデル推論部A3は、学習済みモデルを用いて、第2参照信号を推論用データとして、推論結果データとしてCSIを推論する。以下では、このような第2参照信号を部分的なCSI-RS又はパンクチャされたCSI-RSと称することがある。 Secondly, in model inference, the receiving unit 110 receives a second reference signal from the gNB 200 using a second resource that is less than the first resource. Then, the model inference unit A3 uses the learned model to infer the CSI as inference result data using the second reference signal as inference data. Hereinafter, such a second reference signal may be referred to as a partial CSI-RS or a punctured CSI-RS.

 例えば、モデル推論部A3は、受信部110が受信した部分的なCSI-RSを推論用データとして学習済みモデルに入力させ、当該CSI-RSからCSIを推論する。送信部120は、推論されたCSIをgNB200に送信する。 For example, the model inference unit A3 inputs the partial CSI-RS received by the receiving unit 110 into the trained model as inference data, and infers CSI from the CSI-RS. The transmitting unit 120 transmits the inferred CSI to the gNB 200.
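The learning-mode and inference-mode flow above can be sketched end to end. All quantities below are toy values for illustration only: the "full CSI-RS" is modeled as 8 per-port measurements, the CSI as a single quality value computed from the full measurement, the "partial CSI-RS" as the even-numbered ports, and the learned model as a simple linear fit; none of these specifics come from the embodiment.

```python
import random

def fit_linear(xs, ys, lr=0.1, epochs=3000):
    # Least-squares fit of y = a*x + b by gradient descent (model learning).
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

random.seed(0)

# Learning mode: the full CSI-RS (8 ports) is received, the "true" CSI is
# derived from it, and the model input is the punctured subset (4 ports).
samples = []
for _ in range(50):
    full = [random.uniform(0.5, 1.5) for _ in range(8)]  # full CSI-RS
    true_csi = sum(full) / 8.0                           # CSI from full CSI-RS
    partial = full[::2]                                  # punctured CSI-RS
    samples.append((sum(partial) / 4.0, true_csi))
a, b = fit_linear([x for x, _ in samples], [y for _, y in samples])

# Inference mode: only the partial CSI-RS is available; the trained model
# infers the CSI that would have been computed from the full CSI-RS, and
# this inferred CSI is what the UE feeds back to the gNB.
def infer_csi(partial_ports):
    return a * (sum(partial_ports) / len(partial_ports)) + b
```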

 これにより、UE100は、gNB200から受信した少ないCSI-RS(部分的なCSI-RS)から、正確な(完全な)CSIをgNB200にフィードバック(又は送信)することが可能になる。例えば、gNB200は、オーバーヘッド削減のために意図的にCSI-RSを削減(パンクチャ)可能になる。また、無線状況が悪化し、一部のCSI-RSが正常に受信できない状況にUE100が対応可能になる。 This enables UE100 to feed back (or transmit) accurate (complete) CSI to gNB200 from the small amount of CSI-RS (partial CSI-RS) received from gNB200. For example, gNB200 can intentionally reduce (puncture) the CSI-RS in order to reduce overhead. In addition, UE100 can cope with situations where the radio conditions deteriorate and some of the CSI-RS cannot be received normally.

 図9及び図10は、第1実施形態に係るCSI-RSを削減する例を表す図である。 FIGS. 9 and 10 are diagrams showing an example of reducing CSI-RS according to the first embodiment.

 図9では、CSI-RSを送信するアンテナポート数を削減することでCSI-RSを削減する例を表している。例えば、gNB200は、以下のような処理を行う。すなわち、gNB200は、UE100がモデル学習を行うモード(以下では、「学習モード」と称する場合がある。)のとき、アンテナパネルの全アンテナポートからCSI-RSを送信する。一方、gNB200は、UE100がモデル推論を行うモード(以下では、「推論モード」と称する場合がある。)のとき、CSI-RSを送信するアンテナポート数を削減し、アンテナパネルの半分のアンテナポートからCSI-RSを送信する。これにより、オーバーヘッドを削減し、アンテナポートの利用効率を改善するとともに、消費電力の削減効果を得ることができる。なお、アンテナポートはリソースの一例である。 FIG. 9 shows an example of reducing CSI-RS by reducing the number of antenna ports that transmit CSI-RS. For example, gNB200 performs the following process. That is, when UE100 is in a mode in which model learning is performed (hereinafter, may be referred to as "learning mode"), gNB200 transmits CSI-RS from all antenna ports of the antenna panel. On the other hand, when UE100 is in a mode in which model inference is performed (hereinafter, may be referred to as "inference mode"), gNB200 reduces the number of antenna ports that transmit CSI-RS and transmits CSI-RS from half the antenna ports of the antenna panel. This reduces overhead, improves the utilization efficiency of antenna ports, and can reduce power consumption. Note that antenna ports are an example of resources.
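The port reduction of the FIG. 9 example can be expressed as a simple puncturing pattern. This is an illustrative sketch: keeping every other port index is an assumption for concreteness, since the embodiment states only that half of the antenna ports are used in the inference mode.

```python
def puncture_ports(per_port_symbols, keep_every=2):
    # Inference mode: CSI-RS is transmitted from only a subset of the
    # antenna ports; here every keep_every-th port index is kept, which
    # halves the ports for keep_every=2.
    return [s for i, s in enumerate(per_port_symbols) if i % keep_every == 0]

full_csi_rs = ["p0", "p1", "p2", "p3", "p4", "p5", "p6", "p7"]  # learning mode: all 8 ports
partial_csi_rs = puncture_ports(full_csi_rs)                    # inference mode: 4 of 8 ports
```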

 一方、図10は、gNB200がCSI-RSの送信に利用する無線リソース、具体的には時間周波数リソースを削減する例を表している。例えば、gNB200は、以下のような処理を行う。すなわち、gNB200は、UE100が学習モードのとき、所定の時間周波数リソースを用いてCSI-RSを送信する。一方、gNB200は、UE100が推論モードのとき、所定の時間周波数リソースより少ない時間周波数リソースを用いてCSI-RSを送信する。これにより、オーバーヘッドを削減し、無線リソースの利用効率を改善するとともに、消費電力の削減効果を得ることができる。 On the other hand, FIG. 10 shows an example in which the gNB 200 reduces the radio resources used to transmit the CSI-RS, specifically, the time-frequency resources. For example, the gNB 200 performs the following process. That is, when the UE 100 is in the learning mode, the gNB 200 transmits the CSI-RS using a predetermined time-frequency resource. On the other hand, when the UE 100 is in the inference mode, the gNB 200 transmits the CSI-RS using a time-frequency resource that is less than the predetermined time-frequency resource. This reduces overhead, improves the utilization efficiency of the radio resources, and reduces power consumption.

 図9及び図10に示すように、gNB200は、所定量の第1リソースを用いてフルCSI-RSを送信し、第1リソースよりもリソース量が少ない第2リソースを用いて部分的なCSI-RSを送信する。 As shown in Figures 9 and 10, gNB200 transmits full CSI-RS using a predetermined amount of first resources, and transmits partial CSI-RS using second resources that have a smaller amount of resources than the first resources.
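The full/partial resource split described above can be pictured with the following minimal sketch. The panel size of eight antenna ports and the function names are assumptions for illustration only, not values taken from the embodiment.

```python
# Sketch of the full/partial CSI-RS resource split (hypothetical port count).
# In learning mode the gNB transmits CSI-RS on all antenna ports (the first
# resource); in inference mode it punctures to half the ports (the second
# resource), reducing overhead and power consumption.

NUM_PORTS = 8  # assumed antenna-panel size; not specified in the text

def csi_rs_ports(mode: str) -> list[int]:
    """Return the antenna-port indices that carry CSI-RS in the given mode."""
    all_ports = list(range(NUM_PORTS))
    if mode == "learning":
        return all_ports                    # full CSI-RS: every port
    if mode == "inference":
        return all_ports[: NUM_PORTS // 2]  # partial CSI-RS: half the ports
    raise ValueError(f"unknown mode: {mode}")
```

The same idea applies to FIG. 10 by replacing port indices with time-frequency resource indices.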

 図11は、第1実施形態に係る「CSIフィードバック向上」における動作例を表す図である。 FIG. 11 shows an example of the operation of "CSI feedback improvement" according to the first embodiment.

 図11に示すように、ステップS101において、gNB200は、推論モード時のCSI-RSの送信パターン(パンクチャパターン)を、制御データとしてUE100へ通知又は設定してもよい。例えば、gNB200は、推論モード時にCSI-RSを送信する又は送信しないアンテナポート及び/又は時間周波数リソースをUE100へ送信する。 As shown in FIG. 11, in step S101, gNB200 may notify or set the transmission pattern (puncture pattern) of CSI-RS in inference mode to UE100 as control data. For example, gNB200 transmits to UE100 the antenna port and/or time-frequency resource that transmits or does not transmit CSI-RS in inference mode.

 ステップS102において、gNB200は、UE100に対して学習モードを開始させるための切り替え通知を送信してもよい。 In step S102, gNB200 may send a switching notification to UE100 to start the learning mode.

 ステップS103において、UE100は、学習モードを開始する。 In step S103, UE100 starts the learning mode.

 ステップS104において、gNB200は、フルCSI-RSを送信する。UE100の受信部110は、フルCSI-RSを受信し、CSI生成部131は、当該フルCSI-RSに基づいてCSIを生成(又は推定)する。学習モードにおいて、データ収集部A1では、フルCSI-RSとCSIとを収集する。モデル学習部A2では、当該フルCSI-RSと当該CSIとを学習用データとして、学習済モデルを作成する。 In step S104, gNB200 transmits the full CSI-RS. The receiver 110 of UE100 receives the full CSI-RS, and the CSI generator 131 generates (or estimates) CSI based on the full CSI-RS. In the learning mode, the data collector A1 collects the full CSI-RS and CSI. The model learning unit A2 creates a learned model using the full CSI-RS and the CSI as learning data.

 ステップS105において、UE100は、生成したCSIをgNB200へ送信する。 In step S105, UE100 transmits the generated CSI to gNB200.

 その後、ステップS106において、UE100は、モデル学習が完了した際に、モデル学習が完了した旨の完了通知をgNB200へ送信する。UE100は、学習済モデルの作成が完了したときに完了通知を送信してもよい。 Then, in step S106, when the model learning is completed, the UE 100 transmits a completion notification to the gNB 200 indicating that the model learning is completed. The UE 100 may also transmit a completion notification when the creation of the trained model is completed.

 ステップS107において、gNB200は、完了通知を受信したことに応じて、学習モードから推論モードへ切り替えるための切り替え通知をUE100へ送信する。 In step S107, in response to receiving the completion notification, the gNB 200 transmits to the UE 100 a switching notification for switching from the learning mode to the inference mode.

 ステップS108において、UE100は、切り替え通知を受信したことに応じて、学習モードから推論モードへ切り替える。 In step S108, in response to receiving the switching notification, UE 100 switches from the learning mode to the inference mode.

 ステップS109において、gNB200は、部分的なCSI-RSを送信する。UE100の受信部110は、部分的なCSI-RSを受信する。推論モードにおいて、データ収集部A1では、部分的なCSI-RSを収集する。モデル推論部A3では、部分的なCSI-RSを推論用データとして、学習済モデルに入力させ、推論結果としてCSIを得る。 In step S109, gNB200 transmits partial CSI-RS. Receiver 110 of UE100 receives the partial CSI-RS. In the inference mode, data collection unit A1 collects partial CSI-RS. Model inference unit A3 inputs the partial CSI-RS as inference data into the trained model, and obtains CSI as the inference result.

 ステップS110において、UE100は、推論結果であるCSIを推論結果データとして、gNB200へフィードバック(又は送信)する。UE100では、学習モードの際にモデル学習を繰り返すことで、所定精度以上の学習済みモデルを生成することができる。そのように生成した学習済みモデルを用いた推論結果も所定精度以上になることが予想される。 In step S110, UE100 feeds back (or transmits) the CSI, which is the inference result, to gNB200 as inference result data. In UE100, by repeating model learning during the learning mode, a trained model with a predetermined accuracy or higher can be generated. It is expected that the inference result using the trained model thus generated will also have a predetermined accuracy or higher.

 なお、ステップS111において、UE100は、モデル学習が必要であると自身で判断した場合、モデル学習が必要である旨を表す通知を制御データとしてgNB200へ送信してもよい。 In addition, in step S111, if UE100 determines that model learning is necessary, it may transmit a notification indicating that model learning is necessary to gNB200 as control data.

 図11に示す例において、学習用データは「(フル)CSI-RS」及び「CSI」であり、推論用データは「(部分的な)CSI-RS」である例について説明した。以下では、学習用データ及び/又は推論用データを、「データセット」と称する場合がある。 In the example shown in FIG. 11, the training data is "(full) CSI-RS" and "CSI," and the inference data is "(partial) CSI-RS." Hereinafter, the training data and/or the inference data may be referred to as a "dataset."
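As an illustration of the learning-mode/inference-mode flow of FIG. 11, the following toy sketch memorizes (full CSI-RS, CSI) pairs during learning and then infers CSI from a punctured CSI-RS. The nearest-neighbour "model", the puncture pattern, and all values are hypothetical stand-ins for whatever AI/ML model the UE 100 actually uses.

```python
# Toy stand-in for the UE-side model: memorize (full CSI-RS, CSI) pairs in
# learning mode, then in inference mode return the CSI whose stored RS view
# is closest to the received partial CSI-RS. Purely illustrative.

PARTIAL_IDX = [0, 2]  # assumed puncture pattern: which RS elements survive

class ToyCsiModel:
    def __init__(self):
        self.samples = []  # list of (partial view of full RS, CSI)

    def learn(self, full_rs, csi):
        # Keep only the RS elements that will also be present at inference.
        self.samples.append(([full_rs[i] for i in PARTIAL_IDX], csi))

    def infer(self, partial_rs):
        # Nearest stored sample under squared distance.
        def dist(stored):
            return sum((a - b) ** 2 for a, b in zip(stored[0], partial_rs))
        return min(self.samples, key=dist)[1]

model = ToyCsiModel()
model.learn([1.0, 0.5, 0.9, 0.4], csi="CQI=12")  # learning mode (full CSI-RS)
model.learn([0.2, 0.1, 0.3, 0.1], csi="CQI=3")
print(model.infer([0.95, 0.85]))  # inference mode: close to the first sample
```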

 「CSIフィードバック向上」においては、データセットとして、「CSI-RS」及び「CSI」以外にも、例えば、以下に示すデータ又は情報の少なくともいずれかが用いられてもよい。 In "improving CSI feedback," in addition to "CSI-RS" and "CSI," at least one of the following data or information may be used as a data set:

 (X1)RSRP(Reference Signals Received Power)、RSRQ(Reference Signal Received Quality)、SINR(Signal-to-interference-plus-noise ratio)、又はADコンバータの出力波形(これらの測定対象は、CSI-RSでもよい。これらの測定対象は、gNB200から受信した他の受信信号でもよい。) (X1) RSRP (Reference Signals Received Power), RSRQ (Reference Signal Received Quality), SINR (Signal-to-interference-plus-noise ratio), or output waveform of an AD converter (These measurements may be CSI-RS. The measurements may be other received signals received from gNB200.)

 (X2)ビット誤り率(BER:Bit Error Rate)、又はブロック誤り率(BLER:Block Error Rate)(全送信ビット数(又は全送信ブロック数)を既知として、CSI-RSに基づいて、BER(又はBLER)が測定されてもよい。) (X2) Bit Error Rate (BER) or Block Error Rate (BLER) (BER (or BLER) may be measured based on the CSI-RS with the total number of transmitted bits (or total number of transmitted blocks) being known.)

 (X3)UE100の移動速度(UE100内の速度センサにより測定されてもよい。) (X3) Moving speed of UE 100 (may be measured by a speed sensor in UE 100)

 機械学習に用いられるデータセットとして何が用いられるのかが設定されてもよい。例えば、以下のような処理が行われてもよい。すなわち、UE100は、どの種別の入力データを自身において機械学習において取り扱い可能かを示す能力情報を制御データとしてgNB200へ送信する。能力情報は、例えば、(X1)から(X3)に示すデータ又は情報のいずれかを表してもよい。能力情報は、学習用データと推論用データとが別々に指定された情報となっていてもよい。そして、gNB200は、データセットとして用いられるデータ種別情報を制御データとしてUE100へ送信する。データ種別情報は、例えば、(X1)から(X3)に示すデータ又は情報のいずれを表してもよい。また、学習用データとして用いられるデータ種別情報と、推論用データとして用いられるデータ種別情報とが別々に指定されてもよい。 Which data are used as the dataset for machine learning may be configured. For example, the following processing may be performed. That is, the UE 100 transmits, as control data, capability information indicating which types of input data the UE 100 can handle in machine learning to the gNB 200. The capability information may represent, for example, any of the data or information shown in (X1) to (X3). The capability information may specify the learning data and the inference data separately. Then, the gNB 200 transmits, as control data, data type information indicating the data used as the dataset to the UE 100. The data type information may represent, for example, any of the data or information shown in (X1) to (X3). In addition, the data type information used as learning data and the data type information used as inference data may be specified separately.

 (1.2)「ビーム管理」における機能ブロックの配置例 (1.2) Example of functional block arrangement in "beam management"

 次に、「ビーム管理」における機能ブロックの配置例について説明する。「ビーム管理」は、例えば、gNB200から送信されるビームの中で最適なビームはどのビームかを機械学習技術を用いて管理するユースケースを表している。 Next, an example of functional block arrangement in "beam management" will be described. "Beam management" represents a use case in which, for example, machine learning technology is used to manage which beam is the optimal beam among the beams transmitted from the gNB 200.

 「ビーム管理」においては、gNB200が、指向性の異なるビームを順次送信する。各ビームには、例えば、参照信号が含まれる。UE100は、各ビームに含まれる参照信号を利用して各ビームの受信品質を測定する。UE100は、例えば、受信品質の最も良いビームを最適ビームに決定する。 In "beam management", gNB200 sequentially transmits beams with different directivities. Each beam includes, for example, a reference signal. UE100 measures the reception quality of each beam using the reference signal included in each beam. UE100 determines, for example, the beam with the best reception quality as the optimal beam.
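The baseline beam decision described here, before any AI/ML assistance, amounts to an argmax over per-beam measurements. The RSRP values below are hypothetical:

```python
# Baseline optimal-beam decision: pick the beam whose reference signal was
# received with the best quality. The measurement values are hypothetical
# RSRP readings in dBm, keyed by beam ID.

def select_optimal_beam(rsrp_per_beam: dict[int, float]) -> int:
    """Return the beam ID with the highest measured reception quality."""
    return max(rsrp_per_beam, key=rsrp_per_beam.get)

measurements = {0: -95.0, 1: -88.5, 2: -91.2, 3: -102.3}
print(select_optimal_beam(measurements))  # beam 1 has the best RSRP
```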

 図12は、「ビーム管理」における各機能ブロックの配置例を表す図である。図12に示す「ビーム管理」の例では、データ収集部A1とモデル学習部A2とモデル推論部A3とが、UE100の制御部130に含まれる。一方、データ処理部A4は、gNB200の制御部230に含まれる。すなわち、図12は、UE100においてモデル学習とモデル推論とが行われる例を表している。図12では、送信エンティティTEがUE100であり、受信エンティティREがgNB200である例が示されている。 FIG. 12 is a diagram showing an example of the arrangement of each functional block in "beam management". In the example of "beam management" shown in FIG. 12, a data collection unit A1, a model learning unit A2, and a model inference unit A3 are included in the control unit 130 of the UE 100. On the other hand, a data processing unit A4 is included in the control unit 230 of the gNB 200. That is, FIG. 12 shows an example in which model learning and model inference are performed in the UE 100. FIG. 12 shows an example in which the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.

 図12に示すように、UE100は、最適ビーム決定部132を有する。最適ビーム決定部132は、例えば、各ビームに含まれる参照信号に対する受信品質に基づいて最適ビームを決定する。参照信号としては、「CSIフィードバック」の場合と同様に、CSI-RSが用いられる例について説明するが、参照信号として復調参照信号(DMRS)が用いられてもよい。送信部120は、決定された最適ビームを表す情報を「最適ビーム」としてgNB200へ送信する。 As shown in FIG. 12, the UE 100 has an optimal beam determination unit 132. The optimal beam determination unit 132 determines the optimal beam based on, for example, the reception quality of the reference signal included in each beam. As with "CSI feedback", an example in which CSI-RS is used as the reference signal will be described, but a demodulation reference signal (DMRS) may also be used as the reference signal. The transmission unit 120 transmits information representing the determined optimal beam to the gNB 200 as the "optimal beam".

 「ビーム管理」における動作例は、図11において、「CSIフィードバック」を「最適ビーム」に置き換えることで実施可能である。 An example of "beam management" operation can be implemented by replacing "CSI feedback" with "optimal beam" in Figure 11.

 学習モード(ステップS103)において、gNB200は、指向性の異なるビームを順次、UE100へ送信する(ステップS104)。各ビームには、フルCSI-RSが含まれる。学習モードにおいて、UE100のデータ収集部A1では、フルCSI-RSと最適ビーム(を表す情報)とを収集する。モデル学習部A2では、CSI-RSと最適ビーム(を表す情報)とを学習用データとして、学習済モデルを作成する。フルCSI-RSは第1参照信号の一例であり、部分的なCSI-RSは第2参照信号の一例となっている。 In the learning mode (step S103), the gNB 200 sequentially transmits beams with different directivities to the UE 100 (step S104). Each beam includes a full CSI-RS. In the learning mode, the data collection unit A1 of the UE 100 collects the full CSI-RS and (information representing) the optimal beam. The model learning unit A2 creates a learned model using the CSI-RS and (information representing) the optimal beam as learning data. The full CSI-RS is an example of a first reference signal, and the partial CSI-RS is an example of a second reference signal.

 推論モード(ステップS108)において、gNB200は、指向性の異なるビームを順次送信する。各ビームには、部分的なCSI-RSが含まれる。推論モードにおいて、データ収集部A1では、部分的なCSI-RSを収集する。モデル推論部A3では、部分的なCSI-RSを推論用データとして、学習済みモデルに入力させ、推論結果として、最適ビーム(を表す情報)を得る。UE100は、推論結果(最適ビーム)を推論結果データとして、gNB200へ送信する。 In the inference mode (step S108), the gNB 200 sequentially transmits beams with different directivities. Each beam includes a partial CSI-RS. In the inference mode, the data collection unit A1 collects the partial CSI-RS. The model inference unit A3 inputs the partial CSI-RS as inference data into the trained model, and obtains the optimal beam (information representing the optimal beam) as the inference result. The UE 100 transmits the inference result (optimal beam) to the gNB 200 as inference result data.

 「ビーム管理」においては、データセットに用いられるデータとして、「CSI-RS」及び「最適ビーム」以外にも、例えば、以下に示すデータ又は情報の少なくともいずれかが用いられてもよい。 In "beam management", in addition to "CSI-RS" and "optimum beam", at least one of the following data or information may be used as data in the data set.

 (Y1)gNB200から受信したSSB(Synchronization Signal Block) (Y1) SSB (Synchronization Signal Block) received from gNB200

 (Y2)RSRP、RSRQ、SINR、又はADコンバータの出力波形(これらの測定対象は、CSI-RSでもよい。これらの測定対象は、gNB200から受信した他の受信信号でもよい) (Y2) RSRP, RSRQ, SINR, or output waveform of the AD converter (the measurement target may be CSI-RS. The measurement target may be other received signals received from gNB200)

 (Y3)BER、又はBLER(全送信ビット数(又は全送信ブロック数)を既知として、CSI-RSに基づいて、BER(又はBLER)が測定されてもよい) (Y3) BER or BLER (BER (or BLER) may be measured based on CSI-RS with the total number of transmitted bits (or total number of transmitted blocks) known)

 (Y4)ビーム数、又はビームパターン (Y4) Number of beams or beam pattern

 (Y5)ビームの測定値(複数含む) (Y5) Beam measurement value (including multiple values)

 (Y6)UE100の移動速度(UE100内の速度センサにより測定されてもよい) (Y6) Moving speed of UE 100 (may be measured by a speed sensor in UE 100)

 UE100は、どの種別の入力データを自身において機械学習において取り扱い可能かを示す能力情報を制御データとしてgNB200へ送信してもよい。能力情報として、(Y1)から(Y6)のいずれかの情報又はデータが含まれてもよいし、学習用データと推論用データとを別にして(Y1)から(Y6)のいずれかの情報又はデータが含まれてもよい。また、gNB200は、データセットとして用いられるデータ種別情報を制御データとしてUE100へ送信してもよい。データ種別情報には、例えば、(Y1)から(Y6)に示すデータ又は情報のいずれが含まれてもよいし、学習用データと推論用データとを別にして(Y1)から(Y6)のいずれかの情報又はデータが含まれてもよい。 The UE 100 may transmit capability information indicating which types of input data the UE 100 can handle in machine learning to the gNB 200 as control data. The capability information may include any of the information or data shown in (Y1) to (Y6), and may include such information or data specified separately for the learning data and the inference data. The gNB 200 may also transmit data type information used as a data set to the UE 100 as control data. The data type information may include, for example, any of the data or information shown in (Y1) to (Y6), and may include such information or data specified separately for the learning data and the inference data.

 (1.3)「位置精度向上」における機能ブロックの配置例 (1.3) Example of Arrangement of Functional Blocks in "Improvement of Location Accuracy"

 次に、「位置精度向上」における機能ブロックの配置例について説明する。「位置精度向上」は、例えば、UE100で測定される位置情報を、機械学習技術を利用してその精度を向上させるようにしたユースケースを表している。 Next, an example of the arrangement of functional blocks in "improvement of location accuracy" will be described. "Improvement of location accuracy" represents a use case in which, for example, the accuracy of location information measured by the UE 100 is improved by using machine learning technology.

 図13は、「位置精度向上」における各機能ブロックの配置例を表す図である。図13に示す「位置精度向上」の例では、データ収集部A1とモデル学習部A2とモデル推論部A3とが、UE100の制御部130に含まれる。一方、データ処理部A4は、gNB200の制御部230に含まれる。すなわち、図13は、UE100においてモデル学習とモデル推論とが行われる例を表している。図13では、送信エンティティTEがUE100であり、受信エンティティREがgNB200である例を表している。 FIG. 13 shows an example of the arrangement of each functional block in "improving location accuracy". In the example of "improving location accuracy" shown in FIG. 13, the data collection unit A1, the model learning unit A2, and the model inference unit A3 are included in the control unit 130 of the UE 100. On the other hand, the data processing unit A4 is included in the control unit 230 of the gNB 200. That is, FIG. 13 shows an example in which model learning and model inference are performed in the UE 100. FIG. 13 shows an example in which the transmitting entity TE is the UE 100, and the receiving entity RE is the gNB 200.

 図13に示すように、UE100は、位置情報生成部133を含む。UE100は、GNSS(Global Navigation Satellite System)受信機150を含んでもよい。位置情報生成部133は、gNB200から受信した位置参照信号(PRS:Positioning Reference Signal)(フルPRS又は部分的なPRS)に基づいて、UE100の位置データを生成する。位置情報生成部133は、GNSS受信機150が受信したGNSS信号(フルGNSS信号又は部分的なGNSS信号)を受け取り、当該GNSS信号に基づいて、UE100の位置データを生成してもよい。 As shown in FIG. 13, UE 100 includes a location information generation unit 133. UE 100 may include a Global Navigation Satellite System (GNSS) receiver 150. The location information generation unit 133 generates location data for UE 100 based on a Positioning Reference Signal (PRS) (full PRS or partial PRS) received from gNB 200. The location information generation unit 133 may receive a GNSS signal (full GNSS signal or partial GNSS signal) received by the GNSS receiver 150, and generate location data for UE 100 based on the GNSS signal.

 なお、gNB200は、フルCSI-RSと同様に、所定量の第1リソース(例えば、図9に示すように全アンテナポート、又は、図10に示すように所定量の時間周波数リソース)を用いてフルPRSを送信する。また、gNB200は、部分的なCSI-RSと同様に、第1リソースよりリソース量が少ない第2リソース(例えば、図9に示すようにアンテナパネルにおける半分のアンテナポート、又は、図10に示すように所定量の半分の時間周波数リソース)を用いて部分的なPRSを送信する。 In addition, the gNB 200 transmits the full PRS using a predetermined amount of first resources (e.g., all antenna ports as shown in FIG. 9, or a predetermined amount of time-frequency resources as shown in FIG. 10), in the same manner as the full CSI-RS. Also, the gNB 200 transmits the partial PRS using second resources having a smaller amount of resources than the first resources (e.g., half the antenna ports of the antenna panel as shown in FIG. 9, or half the predetermined amount of time-frequency resources as shown in FIG. 10), in the same manner as the partial CSI-RS.

 また、フルGNSS信号は、GNSS受信機150が時間的に連続して受信したGNSS信号であってもよい。更に、部分的なGNSS信号は、GNSS受信機150が間欠的に受信したGNSS信号であってもよい。すなわち、フルGNSS信号は所定量の第1リソースが用いられ、部分的なGNSS信号は第1リソースよりもリソース量が少ない第2リソースが用いられればよい。 The full GNSS signal may be a GNSS signal received by the GNSS receiver 150 continuously over time. Furthermore, the partial GNSS signal may be a GNSS signal received by the GNSS receiver 150 intermittently. That is, a predetermined amount of first resources may be used for the full GNSS signal, and a second resource having a smaller amount than the first resources may be used for the partial GNSS signal.
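The continuous/intermittent distinction between the full and partial GNSS signals can be pictured as simple downsampling of the received sample stream. The 1-in-2 duty cycle is an assumption for illustration:

```python
# Sketch of full vs. partial GNSS reception: the full GNSS signal keeps
# every sample received over time (first resource), while the partial GNSS
# signal models intermittent reception by keeping every `period`-th sample
# (second, smaller resource). The period of 2 is illustrative.

def gnss_samples(signal: list[float], mode: str, period: int = 2) -> list[float]:
    """Return the samples available to the UE in the given reception mode."""
    if mode == "full":
        return list(signal)      # continuous reception
    if mode == "partial":
        return signal[::period]  # intermittent reception
    raise ValueError(f"unknown mode: {mode}")
```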

 「位置精度向上」における動作例は、図11において、「フルCSI-RS」を「フルPRS」、「部分的なCSI-RS」を「部分的なPRS」、「CSIフィードバック」を「位置データ」に夫々置き換えることで実施可能である。 An example of the operation for "improving location accuracy" can be implemented by replacing "full CSI-RS" with "full PRS," "partial CSI-RS" with "partial PRS," and "CSI feedback" with "location data" in FIG. 11.

 学習モード(ステップS103)において、位置情報生成部133は、gNB200から受信したフルPRSに基づいて、UE100の位置データを生成する。位置情報生成部133は、GNSS受信機150が受信したフルGNSS信号を受け取り、当該フルGNSS信号に基づいて、UE100の位置データを生成してもよい。送信部120は、位置データをgNB200へフィードバック(又は送信)する。データ収集部A1では、フルPRS(又はフルGNSS信号)と位置データとを収集する。モデル学習部A2では、フルPRS(又はフルGNSS信号)と位置データとを学習用データとして、学習済みモデルを作成する。 In the learning mode (step S103), the location information generation unit 133 generates location data for the UE 100 based on the full PRS received from the gNB 200. The location information generation unit 133 may receive a full GNSS signal received by the GNSS receiver 150 and generate location data for the UE 100 based on the full GNSS signal. The transmission unit 120 feeds back (or transmits) the location data to the gNB 200. The data collection unit A1 collects the full PRS (or full GNSS signal) and location data. The model learning unit A2 creates a learned model using the full PRS (or full GNSS signal) and location data as learning data.

 推論モード(ステップS108)において、データ収集部A1では、受信部110が受信した部分的なPRS(又はGNSS受信機150が受信した部分的なGNSS信号)を収集する。モデル推論部A3では、部分的なPRS(又は部分的なGNSS信号)を推論用データとして、学習済みモデルに入力させ、推論結果として、位置データを得る。UE100は、推論結果(位置データ)を推論結果データとして、gNB200へ送信する。 In the inference mode (step S108), the data collection unit A1 collects the partial PRS received by the receiving unit 110 (or the partial GNSS signal received by the GNSS receiver 150). The model inference unit A3 inputs the partial PRS (or the partial GNSS signal) as inference data into the trained model, and obtains location data as the inference result. The UE 100 transmits the inference result (location data) to the gNB 200 as inference result data.

 「位置精度向上」において、データセットに用いられるデータとして、「PRS」(又は「GNSS信号」)、及び「位置データ」以外にも、例えば、以下に示すデータ又は情報の少なくともいずれかが用いられてもよい。 In "improving position accuracy," in addition to "PRS" (or "GNSS signal") and "position data," the data used in the data set may include, for example, at least one of the following data or information:

 (Z1)RSRP、RSRQ、SINR(Signal-to-interference-plus-noise ratio)、又はADコンバータの出力波形(これらの測定対象は、PRSでもよい。これらの測定対象は、gNB200から受信した他の受信信号でもよい。) (Z1) RSRP, RSRQ, SINR (signal-to-interference-plus-noise ratio), or output waveform of an AD converter (these measurements may be PRS. The measurements may be other received signals received from gNB200.)

 (Z2)LOS(Line Of Sight)又はNLOS(Non Line Of Sight) (Z2) LOS (Line of Sight) or NLOS (Non Line of Sight)

 (Z3)測定タイミング、確度、尤度 (Z3) Measurement timing, accuracy, likelihood

 (Z4)RFフィンガープリント(RF fingerprint)(セルIDと、当該セルIDのセルにおける受信品質) (Z4) RF fingerprint (cell ID and reception quality in the cell with that cell ID)

 (Z5)受信信号の到来角(AOA:Angle of Arrival)、アンテナ毎の受信レベル、アンテナ毎の受信位相、アンテナ毎の受信時間差(OTDOA:Observed Time Difference Of Arrival) (Z5) Angle of arrival of received signal (AOA: Angle of Arrival), reception level for each antenna, reception phase for each antenna, reception time difference for each antenna (OTDOA: Observed Time Difference Of Arrival)

 (Z6)Wi-Fi(登録商標)などの無線LAN(Local Area Network)、又はブルートゥース(登録商標)などの近距離無線通信で用いられるビーコンの受信情報 (Z6) Received information from beacons used in wireless LANs (Local Area Networks) such as Wi-Fi (registered trademark) or short-range wireless communications such as Bluetooth (registered trademark)

 (Z7)UE100の移動速度(当該移動速度は、GNSS受信機150により測定されてもよい。当該移動速度は、UE100内の速度センサにより測定されてもよい。) (Z7) Moving speed of UE 100 (The moving speed may be measured by the GNSS receiver 150. The moving speed may be measured by a speed sensor in the UE 100.)

 UE100は、どの種別の入力データを自身において機械学習において取り扱い可能かを示す能力情報を制御データとしてgNB200へ送信してもよい。能力情報として、(Z1)から(Z7)のいずれかの情報又はデータが含まれてもよいし、学習用データと推論用データとを別にして(Z1)から(Z7)のいずれかの情報又はデータが含まれてもよい。また、gNB200は、データセットとして用いられるデータ種別情報を制御データとしてUE100へ送信してもよい。データ種別情報には、例えば、(Z1)から(Z7)に示すデータ又は情報のいずれが含まれてもよいし、学習用データと推論用データとを別にして(Z1)から(Z7)のいずれかの情報又はデータが含まれてもよい。 The UE 100 may transmit capability information indicating which types of input data the UE 100 can handle in machine learning to the gNB 200 as control data. The capability information may include any of the information or data shown in (Z1) to (Z7), and may include such information or data specified separately for the learning data and the inference data. The gNB 200 may also transmit data type information used as a data set to the UE 100 as control data. The data type information may include, for example, any of the data or information shown in (Z1) to (Z7), and may include such information or data specified separately for the learning data and the inference data.

 (1.4)他の配置例 (1.4) Other Arrangement Examples

 次に、他の配置例について説明する。 Next, other arrangement examples will be described.

 図14は、第1実施形態に係る「CSIフィードバック向上」の他の配置例を表す図である。図14では、データ収集部A1と、モデル学習部A2と、モデル推論部A3と、データ処理部A4とがgNB200に含まれる例を表している。すなわち、図14は、モデル学習及びモデル推論がgNB200で行われる例である。図14では、送信エンティティTEがgNB200であり、受信エンティティREがUE100である例を表している。 FIG. 14 is a diagram showing another arrangement example of "CSI feedback improvement" according to the first embodiment. FIG. 14 shows an example in which the data collection unit A1, the model learning unit A2, the model inference unit A3, and the data processing unit A4 are included in the gNB 200. That is, FIG. 14 shows an example in which model learning and model inference are performed in the gNB 200. FIG. 14 shows an example in which the transmitting entity TE is the gNB 200, and the receiving entity RE is the UE 100.

 図14では、gNB200がSRS(Sounding Reference Signal)に基づいて行うCSI推定にAI/ML技術が導入された例を表している。そのため、gNB200は、SRSに基づいてCSIを生成するCSI生成部231を有する。当該CSIは、UE100とgNB200との間の上りリンクのチャネル状態を示す情報である。gNB200(例えば、データ処理部A4)は、SRSに基づいて生成したCSIに基づいて例えば上りリンクスケジューリングを行う。 Figure 14 shows an example in which AI/ML technology is introduced into CSI estimation performed by gNB200 based on SRS (Sounding Reference Signal). Therefore, gNB200 has a CSI generation unit 231 that generates CSI based on SRS. The CSI is information indicating the channel state of the uplink between UE100 and gNB200. gNB200 (e.g., data processing unit A4) performs, for example, uplink scheduling based on the CSI generated based on SRS.

 図15は、第1実施形態に係る他の配置例における動作例を表す図である。 FIG. 15 shows an example of operation in another arrangement example according to the first embodiment.

 図15に示すように、ステップS201において、gNB200は、UE100に対してSRS送信設定を行う。SRS送信設定には、UE100が送信する参照信号の種別情報が含まれてもよい。 As shown in FIG. 15, in step S201, the gNB 200 performs SRS transmission configuration for the UE 100. The SRS transmission configuration may include type information of the reference signal transmitted by the UE 100.

 ステップS202において、gNB200は学習モードを開始する。 In step S202, gNB200 starts learning mode.

 ステップS203において、UE100は、SRS送信設定(ステップS201)に従って、フルSRSをgNB200へ送信する。gNB200の受信部220は、フルSRSを受信する。学習モードにおいて、CSI生成部231は、フルSRSに基づいてCSIを生成(又は推定)する。データ収集部A1は、フルSRSとCSIとを収集する。モデル学習部A2は、フルSRSとCSIとを学習用データとして、学習済モデルを作成する。 In step S203, UE 100 transmits the full SRS to gNB 200 according to the SRS transmission setting (step S201). The receiver 220 of gNB 200 receives the full SRS. In the learning mode, the CSI generator 231 generates (or estimates) CSI based on the full SRS. The data collector A1 collects the full SRS and CSI. The model learning unit A2 creates a learned model using the full SRS and CSI as learning data.

 ステップS204において、gNB200は、学習済モデルに推論用データとして入力するSRSの送信パターン(パンクチャパターン)を特定し、特定したSRS送信パターンをUE100に設定する。gNB200は、特定したSRS送信パターンを含むSRS送信設定をUE100へ送信してもよい。 In step S204, the gNB 200 identifies an SRS transmission pattern (puncture pattern) to be input to the trained model as inference data, and sets the identified SRS transmission pattern in the UE 100. The gNB 200 may transmit an SRS transmission setting including the identified SRS transmission pattern to the UE 100.

 ステップS205において、gNB200は、学習モードから推論モードへ切り替える。gNB200は、学習済モデルを用いたモデル推論を開始する。 In step S205, gNB200 switches from learning mode to inference mode. gNB200 starts model inference using the trained model.

 ステップS206において、UE100は、SRS送信設定(ステップS204)に従い、部分的なSRSを送信する。gNB200は、当該SRSを推論用データとして学習済みモデルに入力してチャネル推定結果を得ると、当該チャネル推定結果を用いてUE100の上りリンクスケジューリング(例えば、上りリンク送信ウェイト等の制御)を行う。なお、gNB200は、学習済みモデルによる推論精度が悪化した場合、フルSRSを送信するようUE100に再設定してもよい。 In step S206, UE100 transmits a partial SRS according to the SRS transmission setting (step S204). gNB200 inputs the SRS as inference data into the trained model to obtain a channel estimation result, and then uses the channel estimation result to perform uplink scheduling for UE100 (e.g., control of uplink transmission weight, etc.). Note that gNB200 may reconfigure UE100 to transmit a full SRS if the inference accuracy of the trained model deteriorates.
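The fallback mentioned at the end of step S206, reconfiguring full SRS when inference accuracy degrades, can be sketched as a simple threshold check. The error metric and the threshold value are assumptions for illustration:

```python
# Sketch of the gNB-side fallback: monitor an error metric for the
# SRS-based channel inference and revert to full SRS transmission when
# it degrades. The normalized error metric and the 0.1 threshold are
# illustrative, not values from the embodiment.

ERROR_THRESHOLD = 0.1  # assumed acceptable average normalized error

def next_srs_config(recent_errors: list[float]) -> str:
    """Return which SRS pattern to configure for the UE next."""
    avg_error = sum(recent_errors) / len(recent_errors)
    return "full" if avg_error > ERROR_THRESHOLD else "partial"
```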

 (1.5)連合学習が行われる場合の配置例 (1.5) Example of Arrangement When Federated Learning is Performed

 次に、連合学習(Federated learning)が行われる場合の各機能ブロックの配置例について説明する。連合学習とは、例えば、データ(又はデータセット)を集約せず分散した状態で機械学習が行われる機械学習の一手法である。連合学習では、各エンティティがデータを送信しなくてもよいため、各エンティティのセキュリティを確保することができる。また、連合学習は、従来の集中型の機械学習と同等の精度で学習結果を得ることができる、とされる。 Next, an example of the arrangement of each functional block when federated learning is performed will be described. Federated learning is a machine learning technique in which, for example, machine learning is performed in a distributed state without consolidating data (or data sets). In federated learning, each entity does not need to transmit its data, so the security of each entity can be ensured. In addition, federated learning is said to obtain learning results with accuracy comparable to conventional centralized machine learning.

 図16は、第1実施形態に係る連合学習が行われる場合の配置例を表す図である。図16に示す例は、連合学習を用いて、UE100の位置推定が行われる場合の例を表している。図16では、UE100がデータ収集部A1とモデル学習部A2とモデル推論部A3とを有する例である。すなわち、モデル学習とモデル推論とがUE100で行われる例を表している。図16では、UE100が送信エンティティTEとなり、gNB200及び/又は位置サーバ400が受信エンティティREとなる例を表している。 FIG. 16 is a diagram showing an example of a configuration in which federated learning according to the first embodiment is performed. The example shown in FIG. 16 shows an example in which location estimation of UE100 is performed using federated learning. FIG. 16 shows an example in which UE100 has a data collection unit A1, a model learning unit A2, and a model inference unit A3. That is, it shows an example in which model learning and model inference are performed in UE100. FIG. 16 shows an example in which UE100 is the transmitting entity TE, and gNB200 and/or location server 400 are the receiving entity RE.

 図16に示す連合学習は、例えば、以下の手順で行われる。 The federated learning shown in Figure 16 is carried out, for example, in the following steps.

 第1に、位置サーバ400は、モデル学習のベースとなるモデルをUE100へ送信する。 First, the location server 400 transmits the model that serves as the basis for model learning to the UE 100.

 第2に、UE100(モデル学習部A2)は、UE100に存在するデータを用いてモデル学習を行う。UE100に存在するデータは、例えば、gNB200から受信したPRS及び/又はGNSS受信機150の出力データ(GNSS信号)である。UE100に存在するデータは、PRSの受信結果及び/又はGNSS受信機150の出力データに基づいて位置情報生成部133が生成する位置データを含んでもよい。 Secondly, UE100 (model learning unit A2) performs model learning using data present in UE100. The data present in UE100 is, for example, the PRS received from gNB200 and/or output data (GNSS signal) of GNSS receiver 150. The data present in UE100 may include location data generated by location information generation unit 133 based on the reception result of PRS and/or output data of GNSS receiver 150.

 第3に、UE100は、学習結果である学習済モデルをモデル推論部A3で適用するとともに、学習済モデルに含まれる変数パラメータ(以下では、「学習済パラメータ」と称する場合がある。)を位置サーバ400へ送信する。上述した例では、最適化されたa(傾き)及びb(切片)が学習済パラメータに相当する。 Thirdly, UE100 applies the learned model, which is the result of learning, in model inference unit A3, and transmits variable parameters included in the learned model (hereinafter, sometimes referred to as "learned parameters") to location server 400. In the above example, the optimized a (slope) and b (intercept) correspond to the learned parameters.

 第4に、位置サーバ400(連合学習部A5)は、複数のUE100からの学習済パラメータを収集し、これらを統合する。位置サーバ400は、統合により得られた学習済モデルをUE100へ送信してもよい。位置サーバ400は、当該学習済モデルと、UE100からの測定報告とに基づいて、UE100の位置を推定できる。 Fourth, the location server 400 (federated learning unit A5) collects the learned parameters from multiple UEs 100 and integrates them. The location server 400 may transmit the learned model obtained by the integration to the UEs 100. The location server 400 can estimate the location of a UE 100 based on the learned model and a measurement report from the UE 100.
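The integration performed in this fourth step can be illustrated with federated averaging of the per-UE learned parameters, here the slope a and intercept b mentioned above. Weighting by each UE's local sample count is a common choice in federated learning, not something the embodiment prescribes:

```python
# Federated-averaging sketch: the location server combines the learned
# (a, b) parameters reported by multiple UEs into one global model.
# Each report is weighted by the UE's local sample count (FedAvg style).

def federated_average(reports: list[tuple[float, float, int]]) -> tuple[float, float]:
    """Each report is (a, b, n_samples); return the weighted-average model."""
    total = sum(n for _, _, n in reports)
    a = sum(a_i * n for a_i, _, n in reports) / total
    b = sum(b_i * n for _, b_i, n in reports) / total
    return a, b

# Three UEs report slope/intercept parameters learned from local data:
print(federated_average([(2.0, 1.0, 100), (2.2, 0.8, 100), (1.8, 1.2, 200)]))
# approximately (1.95, 1.05)
```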

 図17は、第1実施形態に係る連合学習における動作例を表す図である。 FIG. 17 shows an example of operation in federated learning according to the first embodiment.

 図17に示すように、ステップS301において、gNB200は、UE100が学習するベースとなるモデルを通知してもよい。位置サーバ400が、gNB200を介して、当該モデルを通知してもよい。 As shown in FIG. 17, in step S301, gNB200 may notify UE100 of a model that serves as a basis for learning. Location server 400 may notify the model via gNB200.

 ステップS302において、gNB200は、UE100に対してモデル学習を指示する。gNB200は、学習済パラメータの報告タイミング(トリガ条件)を設定してもよい。報告タイミングは、周期的なタイミングでもよい。当該報告タイミングは、学習の習熟度が条件を満たしたことをトリガ(すなわち、イベントトリガ)とするタイミングでもよい。 In step S302, gNB200 instructs UE100 to learn the model. gNB200 may set the report timing (trigger condition) of the learned parameters. The report timing may be periodic. The report timing may be triggered by the learning proficiency satisfying a condition (i.e., an event trigger).
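The periodic and event-triggered report timings set in step S302 can be sketched as follows. Using training loss as the "learning proficiency" metric, as well as the period and threshold values, are assumptions for illustration:

```python
# Sketch of the two reporting-trigger styles the gNB may set in step S302:
# periodic (report every `period` learning rounds) or event-triggered
# (report when training loss, standing in for learning proficiency,
# falls below a threshold). All values are illustrative.

def should_report(round_idx: int, loss: float, *,
                  mode: str, period: int = 10,
                  loss_threshold: float = 0.05) -> bool:
    """Decide whether the UE reports its learned parameters this round."""
    if mode == "periodic":
        return round_idx % period == 0
    if mode == "event":
        return loss < loss_threshold
    raise ValueError(f"unknown mode: {mode}")
```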

 ステップS303において、UE100は学習モードを開始する。UE100は、フルPRS(又はフルGNSS信号)と、位置情報生成部133で生成された位置データとを、学習用データとして、モデル学習を行う。 In step S303, UE 100 starts a learning mode. UE 100 performs model learning using the full PRS (or full GNSS signal) and the location data generated by location information generating unit 133 as learning data.

 ステップS304において、UE100は、報告タイミングの条件が満たされたとき、その時点での学習済パラメータをネットワーク(gNB200又は位置サーバ400)へ送信する。 In step S304, when the reporting timing condition is met, the UE 100 transmits the learned parameters at that time to the network (gNB 200 or location server 400).

 ステップS305において、位置サーバ400は、複数のUE100から報告された学習済パラメータを統合する。 In step S305, the location server 400 integrates the learned parameters reported from multiple UEs 100.

 (1.6)モデル転送例 (1.6) Model transfer example

 (1.1)から(1.5)において、AI/ML技術の各機能ブロックの配置例について説明した。以下では、モデルの転送例について説明する。転送対象となるモデルは、モデル推論で用いられる学習済モデルでもよい。当該モデルは、モデル学習で用いられる未学習(又は学習中)のモデルであってもよい。 In (1.1) to (1.5), examples of the arrangement of each functional block of the AI/ML technology have been described. Below, an example of model transfer will be described. The model to be transferred may be a trained model used in model inference. The model may also be an untrained (or in-training) model used in model learning.

 (1.6.1)モデル転送に関する第1動作パターン (1.6.1) First operation pattern related to model transfer

 図18は、第1実施形態に係るモデル転送に関する第1動作パターンの動作例を表す図である。図18に示す例では、受信エンティティREが主としてUE100であるものとして説明するが、受信エンティティREはgNB200又はAMF300であってもよい。また、図18に示す例では、送信エンティティTEがgNB200であるものとして説明するが、送信エンティティTEはUE100又はAMF300であってもよい。 FIG. 18 is a diagram showing an operation example of the first operation pattern related to model transfer according to the first embodiment. In the example shown in FIG. 18, the receiving entity RE is described as mainly being the UE 100, but the receiving entity RE may be the gNB 200 or the AMF 300. Also, in the example shown in FIG. 18, the transmitting entity TE is described as being the gNB 200, but the transmitting entity TE may be the UE 100 or the AMF 300.

 図18に示すように、ステップS401において、gNB200は、機械学習処理に関する実行能力を示す情報要素(IE)を含むメッセージの送信を要求するための能力問合せメッセージをUE100に送信する。UE100は、当該能力問合せメッセージを受信する。但し、gNB200は、機械学習処理の実行を行う場合(実行を行うと判断した場合)に、当該能力問い合わせメッセージを送信してもよい。 As shown in FIG. 18, in step S401, gNB200 transmits a capability inquiry message to UE100 to request transmission of a message including an information element (IE) indicating the execution capability for machine learning processing. UE100 receives the capability inquiry message. However, gNB200 may transmit the capability inquiry message when executing machine learning processing (when it has determined that the execution will be performed).

 ステップS402において、UE100は、機械学習処理に関する実行能力(別の観点では、機械学習処理に関する実行環境)を示す情報要素を含むメッセージをgNB200に送信する。gNB200は、当該メッセージを受信する。当該メッセージは、RRCメッセージ(例えば、「UE Capability」メッセージ、又は新たに規定されるメッセージ(例えば、「UE AI Capability」メッセージ等))であってもよい。或いは、送信エンティティTEがAMF300であって、当該メッセージがNASメッセージであってもよい。或いは、機械学習処理(AI/ML処理)を実行又は制御するための新たなレイヤが規定される場合、当該メッセージは、当該新たなレイヤのメッセージであってもよい。 In step S402, UE100 transmits a message including an information element indicating execution capability for machine learning processing (or, from another perspective, execution environment for machine learning processing) to gNB200. gNB200 receives the message. The message may be an RRC message (e.g., a "UE Capability" message, or a newly defined message (e.g., a "UE AI Capability" message, etc.)). Alternatively, the transmitting entity TE may be AMF300 and the message may be a NAS message. Alternatively, if a new layer for performing or controlling machine learning processing (AI/ML processing) is defined, the message may be a message of the new layer.

 機械学習処理に関する実行能力を示す情報要素は、機械学習処理を実行するためのプロセッサの能力を示す情報要素及び/又は機械学習処理を実行するためのメモリの能力を示す情報要素であってもよい。プロセッサの能力を示す情報要素として、具体的には、AIプロセッサの品番(又は型番)を表す情報要素であってもよい。また、メモリの能力を示す情報要素として、具体的には、メモリ容量を示す情報要素であってもよい。 The information element indicating the execution capability for machine learning processing may be an information element indicating the capability of a processor for executing machine learning processing and/or an information element indicating the capability of a memory for executing machine learning processing. Specifically, the information element indicating the processor capability may be an information element indicating the product number (or model number) of the AI processor. Also, specifically, the information element indicating the memory capability may be an information element indicating the memory capacity.

 或いは、機械学習処理に関する実行能力を示す情報要素は、推論処理(モデル推論)の実行能力を示す情報要素であってもよい。推論処理の実行能力を示す情報要素は、具体的には、ディープニューラルネットワークモデルのサポート可否を示す情報要素でもよい。当該情報要素は、推論処理の実行に要する時間(又は応答時間)を示す情報要素でもよい。 Alternatively, the information element indicating the execution capability regarding machine learning processing may be an information element indicating the execution capability of inference processing (model inference). Specifically, the information element indicating the execution capability of inference processing may be an information element indicating whether or not a deep neural network model is supported. The information element may be an information element indicating the time (or response time) required to execute the inference processing.

 或いは、機械学習処理に関する実行能力を示す情報要素は、学習処理(モデル学習)の実行能力を示す情報要素であってもよい。学習処理の実行能力を示す情報要素は、具体的には、学習処理の同時実行数を示す情報要素でもよい。当該情報要素は、学習処理の処理容量を示す情報要素でもよい。 Alternatively, the information element indicating the execution capability related to machine learning processing may be an information element indicating the execution capability of learning processing (model learning). Specifically, the information element indicating the execution capability of learning processing may be an information element indicating the number of learning processing operations being executed simultaneously. The information element may be an information element indicating the processing capacity of the learning processing.

 ステップS403において、gNB200は、ステップS402で受信したメッセージに含まれる情報要素に基づいて、UE100に設定(又は配備)するモデルを決定する。 In step S403, gNB200 determines the model to be configured (or deployed) in UE100 based on the information elements contained in the message received in step S402.

 ステップS404において、gNB200は、ステップS403で決定したモデルを含むメッセージをUE100へ送信する。UE100は、当該メッセージを受信し、当該メッセージに含まれるモデルを用いて機械学習処理(すなわち、モデル学習処理及び/又はモデル推論処理)を行う。ステップS404の具体例は、次の第2動作パターンで説明する。 In step S404, gNB200 transmits a message including the model determined in step S403 to UE100. UE100 receives the message and performs machine learning processing (i.e., model learning processing and/or model inference processing) using the model included in the message. A specific example of step S404 will be described in the following second operation pattern.

 (1.6.2)モデル転送に関する第2動作パターン
 図19は、第1実施形態に係るモデル及び付加情報を含む設定メッセージの一例を表す図である。設定メッセージは、gNB200からUE100に送信されるRRCメッセージ(例えば、「RRC Reconfiguration」メッセージ、又は新たに規定されるメッセージ(例えば、「AI Deployment」メッセージ又は「AI Reconfiguration」メッセージ等))であってもよい。或いは、設定メッセージは、AMF300AからUE100に送信されるNASメッセージであってもよい。或いは、機械学習処理(AI/ML処理)を実行又は制御するための新たなレイヤが規定される場合、当該メッセージは、当該新たなレイヤのメッセージであってもよい。
(1.6.2) Second operation pattern regarding model transfer FIG. 19 is a diagram showing an example of a configuration message including a model and additional information according to the first embodiment. The configuration message may be an RRC message (e.g., an "RRC Reconfiguration" message, or a newly defined message (e.g., an "AI Deployment" message or an "AI Reconfiguration" message, etc.)) transmitted from the gNB 200 to the UE 100. Alternatively, the configuration message may be a NAS message transmitted from the AMF 300A to the UE 100. Alternatively, when a new layer for performing or controlling machine learning processing (AI / ML processing) is defined, the message may be a message of the new layer.

 図19の例では、設定メッセージは、3つのモデル(Model#1乃至#3)を含む。各モデルは、設定メッセージのコンテナとして含まれている。但し、設定メッセージは、1つのモデルのみを含んでもよい。設定メッセージは、付加情報として、3つのモデル(Model#1乃至#3)のそれぞれに対応して個別に設けられた3つの個別付加情報(Info#1乃至#3)と、3つのモデル(Model#1乃至#3)に共通に対応付けられた共通付加情報(Meta-Info)と、を更に含む。個別付加情報(Info#1乃至#3)のそれぞれは、対応するモデルに固有の情報を含む。共通付加情報(Meta-Info)は、設定メッセージ内のすべてのモデルに共通の情報を含む。 In the example of FIG. 19, the setting message includes three models (Model #1 to #3). Each model is included as a container in the setting message. However, the setting message may include only one model. The setting message further includes, as additional information, three individual additional information (Info #1 to #3) that is provided individually corresponding to each of the three models (Model #1 to #3), and common additional information (Meta-Info) that is commonly associated with the three models (Model #1 to #3). Each of the individual additional information (Info #1 to #3) includes information unique to the corresponding model. The common additional information (Meta-Info) includes information common to all models in the setting message.
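 図19の設定メッセージの構造は、例えば次のように表すことができる(フィールド名やデータ型は説明用の仮定である)。 The structure of the configuration message of FIG. 19 can be sketched, for example, as follows (field names and data types are hypothetical, for illustration only).

```python
# 図19の設定メッセージのスケッチ: 各モデルはコンテナとして格納され、
# 個別付加情報(Info#n)と共通付加情報(Meta-Info)が付随する。
# Sketch of the FIG. 19 configuration message: each model is carried as a
# container, with per-model info (Info#n) and common info (Meta-Info).
from dataclasses import dataclass


@dataclass
class ModelEntry:
    model: bytes   # コンテナとして含まれるモデル本体 / model carried as a container
    info: dict     # 個別付加情報(Info#n) / per-model additional information


@dataclass
class ConfigMessage:
    entries: list  # Model#1乃至#3とそれぞれの個別付加情報
    meta_info: dict  # 共通付加情報(Meta-Info) / common additional information


msg = ConfigMessage(
    entries=[
        ModelEntry(model=b"model#1", info={"model_index": 1}),
        ModelEntry(model=b"model#2", info={"model_index": 2}),
        ModelEntry(model=b"model#3", info={"model_index": 3}),
    ],
    meta_info={"model_use": "CSI feedback"},  # すべてのモデルに共通の情報
)
```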

 個別付加情報は、各モデルに付されるインデックス(インデックス番号)を表すモデルインデックスでもよい。当該個別付加情報は、モデルを適用(実行)するために必要な性能(例えば処理遅延)を示すモデル実行条件でもよい。 The individual additional information may be a model index that indicates an index (index number) assigned to each model. The individual additional information may be a model execution condition that indicates the performance (e.g., processing delay) required to apply (execute) the model.

 個別付加情報又は共通付加情報は、モデルを適用する機能(例えば、「CSIフィードバック」、「ビーム管理」、「位置測位」など)を指定するモデル用途であってもよい。当該個別付加情報又は当該共通付加情報は、指定された基準(例えば移動速度)が満たされたことに応じて対応するモデルを適用(実行)するモデル選択基準であってもよい。 The individual additional information or the common additional information may be a model application that specifies a function to which a model is to be applied (e.g., "CSI feedback," "beam management," "positioning," etc.). The individual additional information or the common additional information may be a model selection criterion that applies (executes) a corresponding model when a specified criterion (e.g., moving speed) is satisfied.

 (2)第1実施形態に係る通信制御方法
 次に、第1実施形態に係る通信制御方法について説明する。
(2) Communication Control Method According to First Embodiment Next, a communication control method according to the first embodiment will be described.

 上述した「CSIフィードバック」でも説明したように、UE100は、CQIについての値と、PMIについての値と、RIについての値とを、1つの組(当該組を、(CQI,PMI,RI)と称する場合がある。)にして、3つの値を1つのCSIとして、gNB200へフィードバック(又は送信)する。例えば、UE100は、あるタイミングで、(CQI#1,PMI#1,RI#1)をCSIとしてgNB200へフィードバックし、別のタイミングで、(CQI#2,PMI#2,RI#2)をCSIとしてgNB200へフィードバックする。 As explained in the above "CSI Feedback", UE100 groups the CQI value, the PMI value, and the RI value into one set (this set may be referred to as (CQI, PMI, RI)) and feeds back (or transmits) the three values as one CSI to gNB200. For example, UE100 feeds back (CQI#1, PMI#1, RI#1) as CSI to gNB200 at a certain timing, and feeds back (CQI#2, PMI#2, RI#2) as CSI to gNB200 at another timing.

 この際、UE100において、3つの値を1つのコードで表す場合を考える。例えば、(CQI#1,PMI#1,RI#1)をコード「1」として表し、(CQI#2,PMI#2,RI#2)をコード「2」として表す、などである。 In this case, consider the case where three values are represented by one code in UE 100. For example, (CQI#1, PMI#1, RI#1) is represented as code "1", (CQI#2, PMI#2, RI#2) is represented as code "2", etc.

 UE100は、CSI状態報告として、3つの値の組(以下では、「CSI」と称する場合がある。)を1つのコードで送信することができれば、CSIを送信する場合と比較して、UE100が1回のCSIで送信する情報量を少なくすることができる。すなわち、コード化により、コード化しない場合と比較して、情報の圧縮が可能となる。 If UE100 can transmit a set of three values (hereinafter, sometimes referred to as "CSI") as a CSI status report using one code, the amount of information transmitted by UE100 in one CSI transmission can be reduced compared to when transmitting CSI. In other words, coding makes it possible to compress information compared to when coding is not performed.

 図20は、第1実施形態に係るテーブル例を表す図である。図20に示すテーブルは、コードと、CSIとの対応関係を表す。例えば、(CQI#1,PMI#1,RI#1)がコード「1」に対応し、(CQI#1,PMI#1,RI#2)がコード「2」に対応し、(CQI#1,PMI#1,RI#3)がコード「3」に対応する。 FIG. 20 is a diagram showing an example of a table according to the first embodiment. The table shown in FIG. 20 shows the correspondence between codes and CSI. For example, (CQI#1, PMI#1, RI#1) corresponds to code "1", (CQI#1, PMI#1, RI#2) corresponds to code "2", and (CQI#1, PMI#1, RI#3) corresponds to code "3".
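 図20に示すコードとCSIとの対応関係は、例えば次のような双方向のテーブルとして表現できる(各値は説明用の仮定である)。 The correspondence between codes and CSI shown in FIG. 20 can be sketched, for example, as the following bidirectional table (all values are hypothetical, for illustration only).

```python
# 図20のテーブル例: コードとCSI(CQI, PMI, RI)の一対一対応
# Illustrative table of FIG. 20: one-to-one mapping between a code and a
# CSI tuple (CQI, PMI, RI).
CODE_TO_CSI = {
    1: ("CQI#1", "PMI#1", "RI#1"),
    2: ("CQI#1", "PMI#1", "RI#2"),
    3: ("CQI#1", "PMI#1", "RI#3"),
}
# 逆引きテーブル: CSIからコードを得る / reverse lookup: CSI -> code
CSI_TO_CODE = {csi: code for code, csi in CODE_TO_CSI.items()}


def encode_csi(csi):
    """CSI組をコードに変換する / map a CSI tuple to its code."""
    return CSI_TO_CODE[csi]


def decode_code(code):
    """コードからCSI組を復元する / recover the CSI tuple from a code."""
    return CODE_TO_CSI[code]
```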

 このようなテーブルをUE100とgNB200とで共有することができれば、UE100は、コードをgNB200へフィードバック(又は送信)し、gNB200は当該コードからCSI状態報告の各値を取得することが可能である。 If such a table can be shared between UE100 and gNB200, UE100 can feed back (or transmit) a code to gNB200, and gNB200 can obtain each value of the CSI status report from the code.

 しかし、CSIが取り得る値は膨大である。CSIが取り得る値の全てに対して、テーブルを用いてコードを割り当てるのは現実的ではない。テーブルに含まれる情報量が、CSIが取り得る値の範囲に応じて、膨大になるからである。 However, the number of possible values for CSI is enormous. It is not realistic to use a table to assign codes to all possible values for CSI. This is because the amount of information contained in the table becomes enormous depending on the range of possible values for CSI.

 そこで、第1実施形態では、テーブルを用いる場合と比較して、情報量の削減を図ることを目的とする。 The first embodiment aims to reduce the amount of information compared to when a table is used.

 そのため、第1実施形態では、テーブルに代えて、機械学習技術による学習済モデルが用いられる。具体的には、第1に、送信エンティティ(例えばUE100)が、所定データ(例えばCSI)と当該所定データを表すコードとを学習用データとして学習済モデルを作成する。第2に、送信エンティティが、学習済モデルを受信エンティティ(例えばgNB200)へ送信する。第3に、送信エンティティが、学習済モデルを用いて所定データからコードを推論する。第4に、送信エンティティが、コードを受信エンティティへ送信する。第5に、受信エンティティが、学習済モデルを用いてコードから所定データを取得する。 Therefore, in the first embodiment, instead of a table, a learned model based on machine learning technology is used. Specifically, first, a transmitting entity (e.g., UE 100) creates a learned model using predetermined data (e.g., CSI) and a code representing the predetermined data as learning data. Second, the transmitting entity transmits the learned model to a receiving entity (e.g., gNB 200). Third, the transmitting entity infers a code from the predetermined data using the learned model. Fourth, the transmitting entity transmits the code to the receiving entity. Fifth, the receiving entity acquires the predetermined data from the code using the learned model.
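 上記の第1から第5の手順は、例えば次のように表すことができる。実際の学習済モデルはニューラルネットワーク等であるが、ここでは説明のため辞書ベースの仮のモデルで代用している。 The first to fifth steps above can be sketched as follows; a real trained model would be e.g. a neural network, and a hypothetical dictionary-backed model stands in for it here, for illustration only.

```python
# 第1実施形態の5ステップの流れの最小スケッチ(仮のモデルによる)
# Minimal sketch of the five-step flow of the first embodiment (toy model).
class ToyModel:
    def __init__(self):
        self.csi_to_code = {}
        self.code_to_csi = {}

    def train(self, samples):
        # (1) 所定データ(CSI)とコードとを学習用データとしてモデルを作成
        for csi, code in samples:
            self.csi_to_code[csi] = code
            self.code_to_csi[code] = csi

    def infer_code(self, csi):
        # (3) 学習済モデルを用いて所定データからコードを推論
        return self.csi_to_code[csi]

    def recover_csi(self, code):
        # (5) 学習済モデルを用いてコードから所定データを取得
        return self.code_to_csi[code]


# (1) 送信エンティティがモデルを作成 / TE trains the model
te_model = ToyModel()
te_model.train([(("CQI#1", "PMI#1", "RI#1"), 1),
                (("CQI#2", "PMI#2", "RI#2"), 2)])
# (2) モデル転送(ここでは同一オブジェクトの共有で代用) / model transfer
re_model = te_model
# (3)(4) コードを推論して送信 / infer and "transmit" the code
code = te_model.infer_code(("CQI#2", "PMI#2", "RI#2"))
# (5) 受信エンティティがコードからCSIを取得 / RE recovers the CSI
csi = re_model.recover_csi(code)
```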

 これにより、例えば、UE100は、学習済モデルを利用してコードを取得(又は推論)するようにしており、テーブルを用いる場合(すなわち、機械学習技術を用いない場合)と比較して、情報量の削減を図ることができる。また、UE100は、CSIを送信するのではなく、CSIを表すコードを送信するため、送信する情報量の削減を図ることができる。 As a result, for example, UE 100 uses a learned model to acquire (or infer) a code, which can reduce the amount of information compared to when a table is used (i.e., when machine learning technology is not used). In addition, since UE 100 does not transmit CSI but transmits a code representing the CSI, the amount of information transmitted can be reduced.

 (2.1)第1実施形態における各機能ブロックの配置例
 図21は、第1実施形態に係る各機能ブロックの配置例を表す図である。図21に示すように、UE100がデータ収集部A1と、モデル学習部A2と、モデル推論部A3とを有する。すなわち、図21に示す例は、UE100においてモデル学習及びモデル推論が行われる例である。送信エンティティTEはUE100であり、受信エンティティREはgNB200である例を表している。
(2.1) Example of Arrangement of Each Functional Block in the First Embodiment FIG. 21 is a diagram showing an example of arrangement of each functional block according to the first embodiment. As shown in FIG. 21, the UE 100 has a data collection unit A1, a model learning unit A2, and a model inference unit A3. That is, the example shown in FIG. 21 is an example in which model learning and model inference are performed in the UE 100. The example shows that the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.

 図21に示すように、UE100は、コード生成部135を有する。コード生成部135は、CSIを表すコードを生成する。コードは、各CSIに一対一に対応する。但し、コードは、CSIよりもビット数が少ない。コードとCSIとの対応関係は、図20で示されるものであってもよい。コード生成部135は、CSI生成部131がCSIを生成する毎にコードを生成してもよい。 As shown in FIG. 21, UE 100 has a code generation unit 135. The code generation unit 135 generates a code representing CSI. The code has a one-to-one correspondence with each CSI. However, the code has a smaller number of bits than the CSI. The correspondence between the code and the CSI may be as shown in FIG. 20. The code generation unit 135 may generate a code each time the CSI generation unit 131 generates CSI.

 第1に、学習モードのとき、データ収集部A1は、当該CSIと当該コードとを収集する。モデル学習部A2は、当該CSIと当該コードとを学習用データとしてモデル学習を行い、当該CSIから当該コードを推論するための学習済モデルを作成する。 First, in the learning mode, the data collection unit A1 collects the CSI and the code. The model learning unit A2 performs model learning using the CSI and the code as learning data, and creates a learned model for inferring the code from the CSI.

 モデル学習部A2では、地域毎に、学習済モデルを作成する。地域は、1又は複数のセルで表された地域でもよい。当該地域は、1又は複数のTA(Tracking Area)で表された地域でもよい。或いは、地域は、1又は複数のRA(Registration Area)で表された地域でもよい。当該地域は、1又は複数のPLMN(Public Land Mobile Network)で表された地域でもよい。なお、TAは、1又は複数のセルを含み、RRCアイドル状態のUE100がMMEを更新することなく移動可能なエリアを示す。また、RAは、1又は複数のセルを含み、TAの集合として規定される。更に、PLMNは、通信事業者がサービスを提供することが可能な範囲を示す。 The model learning unit A2 creates a learned model for each region. The region may be a region represented by one or more cells. The region may be a region represented by one or more Tracking Areas (TAs). Alternatively, the region may be a region represented by one or more Registration Areas (RAs). The region may be a region represented by one or more Public Land Mobile Networks (PLMNs). A TA includes one or more cells and indicates an area in which a UE 100 in an RRC idle state can move without updating the MME. An RA includes one or more cells and is defined as a collection of TAs. Furthermore, a PLMN indicates the range in which a telecommunications carrier can provide services.

 どの地域で学習済モデルを作成すべきかについては、例えば、制御データを利用して、gNB200から設定されてもよい。当該制御データには、地域を表す識別子(例えばセルID、TAI(Tracking Area Identity)、各RAを表す識別子、PLMN IDなど)により、学習済モデルが作成されるべき地域が指定されてもよい。 The region in which the trained model should be created may be set by gNB200 using control data, for example. The control data may specify the region in which the trained model should be created by an identifier representing the region (e.g., a cell ID, a Tracking Area Identity (TAI), an identifier representing each RA, a PLMN ID, etc.).

 このように、UE100において、地域毎に学習済モデルを作成することで、地域を考慮することなく学習済モデルを作成する場合と比較して、学習済モデルの情報量(又は規模)を少なくすることができる。また、UE100において、地域毎に学習済モデルを作成することで、地域を考慮することなく学習済モデルを作成する場合と比較して、UE100がgNB200へ送信する際のオーバーヘッドを少なくし、通信効率を向上させることが可能となる。 In this way, by creating a trained model for each region in UE100, the amount of information (or size) of the trained model can be reduced compared to creating a trained model without taking the region into consideration. Also, by creating a trained model for each region in UE100, the overhead when UE100 transmits to gNB200 can be reduced and communication efficiency can be improved compared to creating a trained model without taking the region into consideration.

 第2に、推論モードのとき、送信部120は、学習済モデルをgNB200へ送信する。送信部120は、学習モードのときに学習済モデルをgNB200へ送信してもよい。上述したように、送信部120は、RRCメッセージに当該学習済モデルを含めて送信してもよい。送信部120は、新規メッセージに当該学習済モデルを含めて送信してもよい。また、送信部120は、NASメッセージに当該学習済モデルを含めてAMF300へ送信してもよい。送信部120は、上述したように、更に、共通付加情報及び/又は個別付加情報を送信するメッセージに付加してもよい。 Secondly, in the inference mode, the transmitting unit 120 transmits the trained model to the gNB 200. In the learning mode, the transmitting unit 120 may transmit the trained model to the gNB 200. As described above, the transmitting unit 120 may include the trained model in an RRC message and transmit it. The transmitting unit 120 may include the trained model in a new message and transmit it. Furthermore, the transmitting unit 120 may include the trained model in a NAS message and transmit it to the AMF 300. As described above, the transmitting unit 120 may further add common additional information and/or individual additional information to the message to be transmitted.

 そして、推論モードのとき、データ収集部A1は、CSIを収集する。モデル推論部A3は、学習済モデルを用いて、当該CSIからコードを推論し、推論結果データとして、コードを出力する。送信部120は、当該コードをgNB200へ送信する。このとき、送信部120は、学習済モデルを作成した際の地域を識別する地域識別情報とともに、当該コードを送信する。地域識別情報は、各地域を表す識別子により表されてもよい。一方、gNB200(制御部230)は、学習済モデルを受信し、当該学習済モデルを利用して、UE100から受信したコードからCSIを取得する。 In the inference mode, the data collection unit A1 collects CSI. The model inference unit A3 uses the learned model to infer a code from the CSI, and outputs the code as inference result data. The transmission unit 120 transmits the code to the gNB 200. At this time, the transmission unit 120 transmits the code together with regional identification information that identifies the region when the learned model was created. The regional identification information may be represented by an identifier that represents each region. Meanwhile, the gNB 200 (control unit 230) receives the learned model, and uses the learned model to obtain CSI from the code received from the UE 100.

 (2.2)第1実施形態における動作例
 次に、第1実施形態に係る動作例を説明する。
(2.2) Example of Operation in First Embodiment Next, an example of operation in the first embodiment will be described.

 図22は、第1実施形態に係る動作例を表す図である。 FIG. 22 shows an example of operation according to the first embodiment.

 図22に示すように、ステップS501において、gNB200は、推論モード時のCSI-RSの送信パターン(パンクチャパターン)を、制御データとしてUE100へ通知又は設定してもよい。また、gNB200は、学習用データとして用いられるデータの種別を表すデータ種別情報(ここでは、CSIとコードとのデータセットを表すデータ種別情報)を、制御データとしてUE100へ通知又は設定してもよい。更に、gNB200は、UE100に対して学習モードを開始させるための切り替え通知を制御データとして送信してもよい。 As shown in FIG. 22, in step S501, gNB200 may notify or set the transmission pattern (puncture pattern) of CSI-RS in inference mode to UE100 as control data. In addition, gNB200 may notify or set data type information (here, data type information representing a data set of CSI and code) indicating the type of data used as learning data to UE100 as control data. Furthermore, gNB200 may transmit a switching notification to UE100 as control data to start the learning mode.

 ステップS502において、UE100は、学習モードを開始する。 In step S502, UE100 starts the learning mode.

 ステップS503において、gNB200は、フルCSI-RSを送信する。 In step S503, gNB200 transmits full CSI-RS.

 ステップS504において、UE100は、フルCSI-RSに基づいて、CSIを作成する。UE100は、フルCSI-RSに基づいて、CSI状態報告として、CQIとPMIとRIとの組(すなわち、CSI)を作成する。 In step S504, UE 100 creates CSI based on the full CSI-RS. UE 100 creates a set of CQI, PMI, and RI (i.e., CSI) as a CSI status report based on the full CSI-RS.

 ステップS505において、UE100は、CSIをgNB200へ送信する。 In step S505, UE100 transmits CSI to gNB200.

 ステップS506において、UE100は、CSIとコードとを学習用データとしてモデル学習を行って、学習済モデルを作成する。このとき、UE100は、地域毎に、学習済モデルを作成する。UE100は、学習済モデルを作成したことを表す完了通知を制御データとしてgNB200へ送信してもよい。 In step S506, UE100 performs model learning using the CSI and the code as learning data to create a learned model. At this time, UE100 creates a learned model for each region. UE100 may transmit a completion notification indicating that the learned model has been created to gNB200 as control data.

 ステップS507において、UE100は、学習モードから推論モードへ切り替える。UE100は、gNB200から受信した切り替え通知に従って、推論モードへの切り替えを行ってもよい。 In step S507, UE100 switches from the learning mode to the inference mode. UE100 may switch to the inference mode in accordance with the switching notification received from gNB200.

 ステップS508において、UE100は、ステップS506で作成した学習済モデルをgNB200へ送信する。上述したように、UE100は、学習済モデルを含むRRCメッセージ(又は新たに規定されたメッセージ)をgNB200へ送信してもよい。当該RRCメッセージには地域識別情報が含まれてもよい。gNB200は、当該学習済モデルを受信する。 In step S508, UE100 transmits the learned model created in step S506 to gNB200. As described above, UE100 may transmit an RRC message (or a newly defined message) including the learned model to gNB200. The RRC message may include regional identification information. gNB200 receives the learned model.

 ステップS509において、gNB200は、部分的なCSI-RS(又はフルCSI-RS)を送信する。gNB200は、UE100から受信した完了通知に応じて、部分的なCSI-RSを送信してもよい。 In step S509, gNB200 transmits partial CSI-RS (or full CSI-RS). gNB200 may transmit partial CSI-RS in response to a completion notification received from UE100.

 ステップS510において、UE100は、ステップS506で作成した学習済モデルからコードを推論する。例えば、CSI生成部131は、部分的なCSI-RSに基づいてCSIを生成し、モデル推論部A3は、当該CSIを、推論用データとして、学習済モデルに入力させることで、コードを推論する。 In step S510, UE 100 infers a code from the learned model created in step S506. For example, CSI generation unit 131 generates CSI based on partial CSI-RS, and model inference unit A3 infers a code by inputting the CSI as inference data into the learned model.

 ステップS511において、UE100は、コード及び地域識別情報をgNB200へ送信する。 In step S511, UE100 transmits the code and regional identification information to gNB200.

 ステップS512において、gNB200は、地域識別情報に対応する学習済モデルを利用して、CSIを取得する。学習済モデルは、CSIを入力し、コードを出力(又は推論)するモデルである。或いは、当該学習済モデルは、コードを入力し、CSIを出力(又は推論)するモデルとしてもよい。これにより、gNB200は、学習済モデルに対して、ステップS511で受信したコードを入力させ、UE100が報告を意図したCSIを得ることができる。一般に、学習済モデルは、CSIからコードを出力したり、コードからCSIを出力したり、双方向からの処理を行うことが可能である。 In step S512, gNB200 acquires CSI using a learned model corresponding to the regional identification information. The learned model is a model that inputs CSI and outputs (or infers) a code. Alternatively, the learned model may be a model that inputs a code and outputs (or infers) CSI. In this way, gNB200 can input the code received in step S511 to the learned model and obtain the CSI that UE100 intends to report. In general, a learned model is capable of outputting a code from CSI, outputting CSI from a code, and performing processing in both directions.

 或いは、gNB200は、ステップS511で受信したコードと同一のコードが出力されるまで、学習済モデルにいくつかのCSIを入力させ、ステップS511で受信したコードと同一のコードが出力される時点でのCSIを、UE100が報告したCSI(すなわち、CSI状態報告)としてもよい。或いは、ステップS508において、UE100は、学習済モデルの作成に用いたデータセットの少なくとも一部を、学習済モデルとともにgNB200へ送信する。そして、gNB200は当該データセットを利用して、学習済モデルにCSIを入力させ、ステップS511で受信したコードと同一のコードが出力される時点でのCSIを、UE100が報告したCSIとしてもよい。 Alternatively, gNB200 may input several CSIs into the learned model until the learned model outputs the same code as the code received in step S511, and may treat the CSI at the time when the same code as the code received in step S511 is output as the CSI reported by UE100 (i.e., CSI status report). Alternatively, in step S508, UE100 transmits at least a portion of the data set used to create the learned model to gNB200 together with the learned model. Then, gNB200 may use the data set to input CSI into the learned model, and may treat the CSI at the time when the same code as the code received in step S511 is output as the CSI reported by UE100.
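 上記のgNB200側の探索処理は、例えば次のように表すことができる(モデル及びデータセットは説明用の仮定であり、辞書による仮の関数で代用している)。 The gNB200-side search described above can be sketched as follows (the model and data set are hypothetical; a dictionary-backed function stands in for the learned model, for illustration only).

```python
# CSI→コード方向の推論のみを提供する学習済モデルに対し、候補CSIを順に
# 入力し、受信コードと同一のコードを出力するCSIを探索する処理のスケッチ。
# Sketch of the search: candidate CSIs are fed into the model until it
# outputs the received code.
def find_csi_for_code(model_infer, candidate_csis, received_code):
    """received_codeと同一のコードを出力するCSIを返す(なければNone)。"""
    for csi in candidate_csis:
        if model_infer(csi) == received_code:
            return csi
    return None  # 一致するCSIなし / no matching CSI


# 仮のモデルと、学習済モデルとともに送信されたデータセットの例(仮定)
toy_model = {("CQI#1", "PMI#1", "RI#1"): 1, ("CQI#2", "PMI#2", "RI#2"): 2}
dataset = list(toy_model.keys())
reported_csi = find_csi_for_code(toy_model.get, dataset, 2)
```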

 その後、gNB200は、取得したCSIを利用して、スケジューリング制御を行ったり、ビーム制御を行ったりしてもよい。 Then, gNB200 may use the acquired CSI to perform scheduling control or beam control.

 (2.3.1)第1実施形態に係る他の動作例1
 次に、第1実施形態に係る他の動作例について説明する。
(2.3.1) Another Operation Example 1 According to the First Embodiment
Next, another operation example according to the first embodiment will be described.

 複数のUE100が協力して学習済モデルを作成する場合がある。例えば、上述した連合学習もこのようなケースの一例である。 There are cases where multiple UEs 100 cooperate to create a trained model. For example, the above-mentioned federated learning is one such case.

 図23は、第1実施形態に係る他の動作例を表す図である。図23では、第1実施形態で用いた学習モデルが利用される場合の例を表している。例えば、以下のようなケースを想定する。 FIG. 23 is a diagram showing another example of operation according to the first embodiment. FIG. 23 shows an example in which the learning model used in the first embodiment is used. For example, the following case is assumed.

 すなわち、UE100-1が、コード#1とCSI#1とを学習用データとして用いてモデル学習を行い、モデル#1を作成する。モデル#1は、例えば、モデル学習により学習中のモデルである。その後、UE100-2が、モデルの精度を更に高めるため、モデル#1を用いてモデル学習を行う。この際、UE100-2が、UE100-1で用いた同一のコード#1と、UE100-1で用いたCSI#1とは異なるCSI#2(或いはコード#2)とを用いて、モデル学習を行う。 That is, UE100-1 performs model learning using code #1 and CSI #1 as learning data to create model #1. Model #1 is, for example, a model being learned by model learning. Thereafter, UE100-2 performs model learning using model #1 in order to further improve the accuracy of the model. At this time, UE100-2 performs model learning using the same code #1 used by UE100-1 and CSI #2 (or code #2) that is different from CSI #1 used by UE100-1.

 このような場合、UE100-1で用いた学習用データが、UE100-2でのモデル学習により、UE100-2で用いた学習用データに上書きされ、UE100-1の学習結果が反映されない学習済モデルが作成される場合がある。例えば、UE100-1において、コード#1とCSI#1とを学習用データに用いてモデル学習を行っても、UE100-2において、コード#1とCSI#2とを学習用データに用いてモデル学習を行うため、CSI#1を入力してもコード#1が推論結果として出力されずに、CSI#2を入力して初めてコード#1が推論結果として出力される学習済モデルが作成される。或いは、コード#1を入力してもCSI#1が推論結果として出力されずに、CSI#2が推論結果として出力される学習済モデルが生成される。このように、学習用データの上書きによって、CSI#1からコード#1を推論する学習済モデルが作成されず、UE100-1の学習結果が反映されない学習済モデルが作成される。 In such a case, the learning data used in UE100-1 may be overwritten by the learning data used in UE100-2 through model learning in UE100-2, and a learned model that does not reflect the learning results of UE100-1 may be created. For example, even if UE100-1 performs model learning using code #1 and CSI #1 as learning data, UE100-2 performs model learning using code #1 and CSI #2 as learning data, so that even if CSI #1 is input, code #1 is not output as an inference result, and a learned model is created in which code #1 is output as an inference result only after CSI #2 is input. Alternatively, even if code #1 is input, CSI #1 is not output as an inference result, and a learned model is generated in which CSI #2 is output as an inference result. In this way, by overwriting the learning data, a learned model that infers code #1 from CSI #1 is not created, and a learned model that does not reflect the learning results of UE100-1 is created.

 そこで、第1実施形態に係る他の例では、2つの解決策により、適切に学習済モデルを作成するようにしている。 In another example of the first embodiment, two solutions are used to create an appropriate trained model.

 第1の解決策は、コードに各UE100の識別情報を付加する例である。具体的には、コードは、ユーザ装置(例えばUE100)を識別するユーザ装置識別情報(例えば各UEの識別情報)を含む。 The first solution is an example in which identification information of each UE 100 is added to the code. Specifically, the code includes user equipment identification information (e.g., identification information of each UE) that identifies the user equipment (e.g., UE 100).

 図24は、各UE100の識別情報としてUEIDがコードに付加された場合のコードとCSIとの対応関係を表す図である。図24に示すように、UE100-1において、モデル学習を行うときに、UE100-1のUEIDとして、UEID#1がコードに付加される。このため、UE100-1では、自身のUEID(例えばUEID#1)を含むコードと、CSIとを学習用データとしてモデル学習を行う。一方、UE100-2では、自身のUEID(例えばUEID#2)を含むコードと、CSIとを学習用データとしてモデル学習を行う。 Figure 24 is a diagram showing the correspondence between codes and CSI when a UEID is added to the code as identification information for each UE 100. As shown in Figure 24, when model learning is performed in UE 100-1, UEID#1 is added to the code as the UEID of UE 100-1. Therefore, in UE 100-1, model learning is performed using a code including its own UEID (e.g., UEID#1) and CSI as learning data. On the other hand, in UE 100-2, model learning is performed using a code including its own UEID (e.g., UEID#2) and CSI as learning data.

 これにより、例えば、図23において、UE100-1ではコード#1に自身のUEIDを付加した学習用データを用い、UE100-2ではコード#1に自身のUEIDを付加した学習用データを用いることになる。そのため、異なるコードを用いた学習済モデルが作成される。例えば、図23の例では、UE100-1において「1_UEID#1」と「CSI#1」とを学習用データとしたモデルが作成され、UE100-2においては、「1_UEID#2」と「CSI#2」とを学習用データとしたモデルが作成されることになり、学習用データが上書きされることがなくなる。従って、適切な学習済モデルが作成可能となる。 As a result, for example, in FIG. 23, UE 100-1 uses learning data in which its own UEID is added to code #1, and UE 100-2 uses learning data in which its own UEID is added to code #1. Therefore, a learned model using different codes is created. For example, in the example of FIG. 23, a model is created in UE 100-1 using "1_UEID#1" and "CSI#1" as learning data, and a model is created in UE 100-2 using "1_UEID#2" and "CSI#2" as learning data, and the learning data is not overwritten. Therefore, an appropriate learned model can be created.
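 第1の解決策は、例えば次のように表すことができる(UEIDやコードの値は説明用の仮定である)。 The first solution can be sketched, for example, as follows (the UEID and code values are hypothetical, for illustration only).

```python
# 第1の解決策のスケッチ: コードに各UEの識別情報(UEID)を付加して
# 学習用データを構成する。
# Sketch of the first solution: a UEID is appended to the code used as
# training data.
def make_training_pair(base_code, ueid, csi):
    """コード"<base>_<UEID>"とCSIの組を返す / return a (code, CSI) pair."""
    return (f"{base_code}_{ueid}", csi)


pair1 = make_training_pair(1, "UEID#1", ("CQI#1", "PMI#1", "RI#1"))  # UE100-1
pair2 = make_training_pair(1, "UEID#2", ("CQI#2", "PMI#2", "RI#2"))  # UE100-2
# 同じベースコード「1」でも、UEID付加によりキーが衝突せず、UE100-1の
# 学習用データがUE100-2のモデル学習で上書きされない。
training_data = dict([pair1, pair2])
```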

 その後、UE100-1では、CSIに代えてコードを送信することができるため、第1実施形態と同様に、テーブルを用いた場合と比較して、情報量の削減を図ることが可能となる。 Then, the UE 100-1 can transmit the code instead of the CSI, so that, as in the first embodiment, it is possible to reduce the amount of information compared to when a table is used.

 なお、コードに付加される(又はコードに含まれる)UEの識別情報は、ネットワークから(一時的に)割り当てられるものであってもよい。当該識別情報は、各UEに予め内蔵されている識別情報であってもよい。具体的には、識別情報は、IMSI(International Mobile Subscriber Identity)、SUCI(Subscription Concealed Identifier)、GUTI(Globally Unique Temporary UE Identity)、TMSI(Temporary Mobile Subscriber Identity)、RNTI(Radio Network Temporary Identifier)などであってもよい。 The UE identification information added to (or included in) the code may be (temporarily) assigned by the network. The identification information may be identification information pre-installed in each UE. Specifically, the identification information may be IMSI (International Mobile Subscriber Identity), SUCI (Subscription Concealed Identifier), GUTI (Globally Unique Temporary UE Identity), TMSI (Temporary Mobile Subscriber Identity), RNTI (Radio Network Temporary Identifier), etc.

 第2の解決策は、学習用データに用いられるコードの範囲をUE100間で異なるようにする例である。具体的には、第1に、基地局(例えばgNB200)が、コードに使用する範囲をユーザ装置(例えばUE100)に設定する。第2に、ユーザ装置が、所定データ(例えばCSI)と、設定された範囲内におけるコードとを学習用データとして学習済モデルを作成する。 The second solution is an example in which the range of codes used for training data is made different between UEs 100. Specifically, first, a base station (e.g., gNB 200) sets the range to be used for codes in a user device (e.g., UE 100). Second, the user device creates a trained model using predetermined data (e.g., CSI) and codes within the set range as training data.

 これにより、例えば、図23において、UE100-1がモデル学習を行う際に用いるコードの範囲(例えばコード「1」からコード「10」)と、UE100-2がモデル学習を行う際に用いるコードの範囲(例えばコード「11」からコード「20」)とが異なる範囲となる。そのため、学習用データとして用いられるコードがUE100-1とUE100-2とで異なるものとなるため、UE100-1で作成された学習用データがUE100-2において上書きされることはなくなる。よって、適切な学習済モデルが作成可能となる。 As a result, for example, in FIG. 23, the range of codes used by UE 100-1 when performing model learning (e.g., code "1" to code "10") is different from the range of codes used by UE 100-2 when performing model learning (e.g., code "11" to code "20"). Therefore, the codes used as learning data differ between UE 100-1 and UE 100-2, and therefore the learning data created by UE 100-1 is not overwritten by UE 100-2. Therefore, an appropriate learned model can be created.
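 第2の解決策は、例えば次のように表すことができる(UE識別子や範囲の大きさは説明用の仮定である)。 The second solution can be sketched, for example, as follows (the UE identifiers and range sizes are hypothetical, for illustration only).

```python
# 第2の解決策のスケッチ: 基地局がUE毎に重複しないコード範囲を割り当てる。
# Sketch of the second solution: the base station assigns non-overlapping
# code ranges to the UEs.
def assign_code_ranges(ue_ids, codes_per_ue):
    """各UEに連続した重複しないコード範囲を割り当てて返す。"""
    ranges = {}
    start = 1
    for ue in ue_ids:
        ranges[ue] = range(start, start + codes_per_ue)
        start += codes_per_ue
    return ranges


# 例: UE100-1はコード「1」〜「10」、UE100-2はコード「11」〜「20」を学習に使用
ranges = assign_code_ranges(["UE100-1", "UE100-2"], 10)
```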

 次に、第1実施形態に係る他の動作例の具体的な動作例について説明する。 Next, specific examples of other operation examples according to the first embodiment will be described.

 図25は、第1実施形態に係る他の動作例を表す図である。 FIG. 25 shows another example of operation according to the first embodiment.

 ステップS601において、gNB200は、コードの範囲を示す情報を含む制御データをUE100へ送信する。gNB200は、第1実施形態のステップS501と同様に、推論モード時のCSI-RSの送信パターン、学習用データとして用いられるデータの種別、及び学習モードへの切り替えを通知又は設定してもよい。 In step S601, the gNB 200 transmits control data including information indicating the code range to the UE 100. As in step S501 of the first embodiment, the gNB 200 may notify or set the CSI-RS transmission pattern in the inference mode, the type of data used as learning data, and switching to the learning mode.

 ステップS602において、UE100は、学習モードを開始する。 In step S602, UE100 starts the learning mode.

 ステップS603において、gNB200は、フルCSI-RSを送信する。 In step S603, gNB200 transmits full CSI-RS.

 ステップS604において、UE100は、フルCSI-RSに基づいてCSIを作成する。 In step S604, UE100 creates CSI based on the full CSI-RS.

 ステップS605において、UE100は、CSIをgNB200へ送信する。 In step S605, UE100 transmits CSI to gNB200.

 ステップS606において、UE100は、コードにUEIDを付加してモデル学習を行う。UE100は、コード範囲を示す情報がgNB200から設定された場合は、コードにUEIDを付加しなくてもよい。この場合、UE100は、gNB200から設定された範囲内のコードを用いて、モデル学習を行う。UE100は、第1実施形態と同様に、地域毎に、モデル学習を行って、学習済モデルを作成してもよい。 In step S606, UE100 performs model learning by adding a UEID to the code. If information indicating the code range is set by gNB200, UE100 does not need to add a UEID to the code. In this case, UE100 performs model learning using a code within the range set by gNB200. UE100 may perform model learning for each region to create a learned model, as in the first embodiment.

 以降は、第1実施形態と同様に動作する。  From then on, it operates in the same way as in the first embodiment.

 (2.3.2)第1実施形態に係る他の動作例2
 第1実施形態では、UE100が、地域毎に学習済モデルを作成する例について説明したがこれに限定されない。例えば、UE100は、時間毎に学習済モデルを作成したり、あるタイミングで学習済モデルを作成したりしてもよい。或いは、UE100は、UE100の移動速度に応じて学習済モデルを作成してもよい。このような場合、UE100は、地域識別情報(ステップS511)に代えて、時刻情報、タイミング情報、又はUE100の移動速度情報を、gNB200へ送信してもよい。
(2.3.2) Another Operation Example 2 According to the First Embodiment
In the first embodiment, an example in which the UE 100 creates a learned model for each region has been described, but the present invention is not limited thereto. For example, the UE 100 may create a learned model for each time period or at a certain timing. Alternatively, the UE 100 may create a learned model according to the moving speed of the UE 100. In such a case, the UE 100 may transmit time information, timing information, or moving speed information of the UE 100 to the gNB 200 instead of the region identification information (step S511).

 (2.3.3)第1実施形態に係る他の動作例3
 現在無線通信システムで採用されているコードブックを用いたCSIフィードバックでは、ダウンリンクにおけるCSI-RSを測定し、その測定結果(チャネル状態)が、CQI、PMI、RIなどの指標毎に、予め決められたコードブックに従って、離散化され、符号化される。当該符号(デジタル値)をフィードバックすることによりデータ量を削減することができる一方、実際のチャネル状態(アナログ量)に対して、誤差を含むことが問題点である。当該問題は、機械学習モデルを用いることで解決できる。例えば、図8において、受信部110は、CSI-RSを受信し、当該チャネル状態の測定結果をデータ収集部A1へ転送する。モデル推論部A3は、当該測定結果を推論データ(入力データ)とし、推論結果(出力データ)として、gNB200において前記チャネル状態を再生可能とするための再生可能情報を出力する。送信部120は当該再生可能情報を受信部220へ送信(フィードバック)する。gNB200は、当該再生可能情報を自身のモデル推論部(例えばデータ処理部A4)へ入力し、当該モデル推論部は前記チャネル状態を推定(再生)する。これにより、UE100が測定したチャネル状態(チャネル推定)を、gNB200においてアナログ量(もしくはこれに近い解像度)で再現することができる。gNB200は当該チャネル状態を基に、MCS、ビーム/アンテナ重み付け、MIMOランク等を適切に(誤差を抑えて)決定できる。なお、当該手法は、ここで記述したチャネル状態の再生だけではなく、前記指標(例えばPMIもしくはビーム/アンテナ重み付け)のそれぞれに対して(個々に独立して)実施してもよい。
(2.3.3) Other Operation Example 3 According to First Embodiment
In CSI feedback using a codebook, as currently adopted in wireless communication systems, the downlink CSI-RS is measured, and the measurement result (the channel state) is discretized and encoded for each indicator such as CQI, PMI, and RI according to a predetermined codebook. Feeding back the resulting code (a digital value) reduces the amount of data, but the problem is that the code contains an error with respect to the actual channel state (an analog quantity). This problem can be solved by using a machine learning model. For example, in FIG. 8, the receiving unit 110 receives the CSI-RS and transfers the measurement result of the channel state to the data collection unit A1. The model inference unit A3 takes the measurement result as inference data (input data) and outputs, as the inference result (output data), reproducible information that enables the gNB 200 to reproduce the channel state. The transmitting unit 120 transmits (feeds back) the reproducible information to the receiving unit 220. The gNB 200 inputs the reproducible information to its own model inference unit (e.g., the data processing unit A4), and this model inference unit estimates (reproduces) the channel state. This allows the channel state (channel estimate) measured by the UE 100 to be reproduced at the gNB 200 as an analog quantity (or at a resolution close to it). Based on this channel state, the gNB 200 can appropriately (with reduced error) determine the MCS, beam/antenna weighting, MIMO rank, and so on. Note that this method may be applied not only to the reproduction of the channel state described here, but also to each of the indicators (e.g., PMI or beam/antenna weighting) individually and independently.
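The encoder/decoder split described above can be sketched as follows. This is a deliberately minimal, hand-built linear example (not the patent's actual model, and all names are hypothetical): the UE-side "encoder" compresses the measured channel vector into a single feedback value, and the gNB-side "decoder" reconstructs a near-analog channel state from it, avoiding codebook rounding error when the channel varies along the assumed direction:

```python
# Minimal sketch of a model split across UE and gNB: the UE encodes the
# channel measurement into a compact feedback value (the "reproducible
# information"), and the gNB decodes it back into a channel state.
BASIS = (0.6, 0.8)  # assumed unit-length direction the channel varies along

def ue_encode(h):
    """UE side: project the measurement onto the basis -> scalar feedback."""
    return h[0] * BASIS[0] + h[1] * BASIS[1]

def gnb_decode(code):
    """gNB side: reconstruct a near-analog channel state from the feedback."""
    return (code * BASIS[0], code * BASIS[1])

h_measured = (1.2, 1.6)  # lies along BASIS (2.0 * BASIS)
h_rebuilt = gnb_decode(ue_encode(h_measured))
err = max(abs(a - b) for a, b in zip(h_measured, h_rebuilt))
```

In a real system both sides would be learned neural networks with higher-dimensional inputs; the point illustrated is only that the gNB can recover the channel state from compressed feedback rather than from a discrete codebook index.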

 (2.3.4)第1実施形態に係る他の動作例4
 第1実施形態において、gNB200は、学習済みモデルの推論結果から得られたCSIの精度が、UE100から報告されたCSI又は直近のCSIとの比較により、精度が悪いことを検出するケースが想定される。つまり、gNB200が学習済みモデルと現実の動作に乖離を検出するケースが想定される。このようなケースにおいて、gNB200では、救済措置として、UE100に対し当該学習済みモデルの使用を停止してもよい。或いは、gNB200は、救済措置として、使用中の学習済モデルとは別の学習済みモデルの利用を開始するために、当該別の学習済みモデルの送信を開始してもよい。
(2.3.4) Other Operation Example 4 According to First Embodiment
In the first embodiment, a case is assumed in which the gNB 200 detects, by comparison with the CSI reported from the UE 100 or the most recent CSI, that the accuracy of the CSI obtained from the inference result of the trained model is poor. That is, a case is assumed in which the gNB 200 detects a deviation between the trained model and actual operation. In such a case, as a relief measure, the gNB 200 may cause the UE 100 to stop using the trained model. Alternatively, as a relief measure, the gNB 200 may start transmitting a trained model different from the one in use, in order to start using that different trained model.

 [第2実施形態]
 次に、第2実施形態について説明する。第2実施形態は、第1実施形態との相違点を中心に説明する。
[Second embodiment]
Next, a second embodiment will be described, focusing on the differences from the first embodiment.

 第1実施形態では、コードに対応するデータとしてCSIを例にして説明した。第2実施形態では、コードに対応するデータとして、データの送信タイミングを例にして説明する。 In the first embodiment, CSI is used as an example of data corresponding to a code. In the second embodiment, data transmission timing is used as an example of data corresponding to a code.

 無線通信ではDRX(Discontinuous Reception:間欠受信)が用いられる場合がある。UE100に対してDRXが設定されると、UE100は、DRXサイクルのOn-duration期間において、ウェイクアップモードとなってネットワークからのPDCCHを監視し、On-duration期間以外の期間では、スリープモードとなってUE100の一部の機能をオフにしてネットワークからのデータの受信を試みる必要がなくなる。スリープモードとウェイクアップモードとが周期的に繰り返されることを、例えば、DRXと呼ぶ。DRXにより、常にウェイクアップモードとして動作するUE100と比較して、UE100の消費電力削減を図ることができる。 DRX (Discontinuous Reception) may be used in wireless communication. When DRX is set for UE100, UE100 goes into wake-up mode during the On-duration period of the DRX cycle to monitor the PDCCH from the network, and goes into sleep mode during periods other than the On-duration period to turn off some of the functions of UE100 so that there is no need to attempt to receive data from the network. The periodic repetition of sleep mode and wake-up mode is called DRX, for example. DRX can reduce the power consumption of UE100 compared to UE100 that always operates in wake-up mode.

 なお、DRXでは、UE100がRRCコネクティッド状態でDRX動作を行うコネクティッドモードDRX(C-DRX)と、UE100がRRCアイドル状態又はRRCインアクティブ状態でDRX動作を行うアイドルモードDRX(I-DRX)とがある。上述した動作はC-DRXの際の動作である。I-DRXの場合、UE100とgNB200とは、UE100の識別子(IMSI:International Mobile Subscriber Identity)を用いて、ページングメッセージが送信されるサブフレームであるページング機会(Paging Occasion:PO)と、POを含む無線フレームであるページングフレーム(Paging Frame:PF)とを計算する。I-DRXでは、gNB200が周期的なPFにおいてページングメッセージを送信し、UE100はページングメッセージを受信することで、間欠受信を行うようにしている。 Note that DRX includes connected mode DRX (C-DRX) in which UE100 performs DRX operation in an RRC connected state, and idle mode DRX (I-DRX) in which UE100 performs DRX operation in an RRC idle state or an RRC inactive state. The above-mentioned operation is the operation in C-DRX. In the case of I-DRX, UE100 and gNB200 use the UE100 identifier (IMSI: International Mobile Subscriber Identity) to calculate a paging occasion (PO), which is a subframe in which a paging message is transmitted, and a paging frame (PF), which is a radio frame containing the PO. In I-DRX, the gNB 200 transmits a paging message in a periodic PF, and the UE 100 receives the paging message, thereby performing discontinuous reception.
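The PF/PO derivation from the UE identifier mentioned above can be sketched with the LTE-style formula (the formula follows 3GPP TS 36.304; the NR variant differs in details, and the parameter values below are arbitrary examples):

```python
# Illustrative sketch of the LTE-style paging frame (PF) / paging
# occasion (PO) calculation from the IMSI (per TS 36.304, simplified).
# T: DRX cycle in radio frames; nB: broadcast paging density parameter.
def paging_frame_and_occasion(imsi, T, nB):
    ue_id = imsi % 1024                # UE_ID derived from the IMSI
    N = min(T, nB)                     # number of paging frames per cycle
    Ns = max(1, nB // T)               # paging occasions per paging frame
    pf_sfn = (T // N) * (ue_id % N)    # PF: radio frames with SFN mod T == this
    i_s = (ue_id // N) % Ns            # index selecting the PO within the PF
    return pf_sfn, i_s

pf, i_s = paging_frame_and_occasion(imsi=123456789, T=128, nB=128)
```

Because both the UE 100 and the gNB 200 evaluate the same formula from broadcast parameters, they agree on the periodic frames in which a paging message may appear, which is what makes I-DRX discontinuous reception possible.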

 C-DRXでは、DRX設定(drx-Config)は、RRCメッセージ(RRC接続再設定(RRCConnectionReconfiguration)メッセージ、又はRRC接続セットアップ(RRCConnectionSetup)など)を利用して、gNB200からUE100に設定される。一方、I-DRXでは、計算で用いられるパラメータなどはSIBを用いて報知される。UE100は、報知されたパラメータなどを用いて、POとPFとを計算することができる。 In C-DRX, the DRX setting (drx-Config) is set in the UE 100 from the gNB 200 using an RRC message (such as an RRCConnectionReconfiguration message or an RRCConnectionSetup message). On the other hand, in I-DRX, parameters used in the calculation are notified using SIB. The UE 100 can calculate the PO and PF using the notified parameters.

 なお、以下の説明では、C-DRXの例で説明するが、I-DRXに適用してもよい。 Note that the following explanation uses the example of C-DRX, but it can also be applied to I-DRX.

 図26は、第2実施形態に係るDLデータの送信タイミングと受信タイミングとの例を表す図である。例えば、gNB200は、DRXのOn Duration期間のあるタイミング(つまり、図26におけるUEの受信タイミング)において、DLデータの送信を行い、UE100は、当該On Duration期間における同一のタイミングにおいて、当該DLデータの受信を行う。 FIG. 26 is a diagram showing an example of the transmission timing and reception timing of DL data according to the second embodiment. For example, the gNB 200 transmits DL data at a certain timing during the DRX On Duration period (i.e., the reception timing of the UE in FIG. 26), and the UE 100 receives the DL data at the same timing during the On Duration period.

 ここで、UE100は、テーブルを用いてDLデータの受信タイミングを決定するケースを想定する。例えば、テーブルには、午前0時から、一日分の時間をサブフレーム単位に表した各タイミングにおいて、DLデータの送信有無が記憶されている。UE100は、当該テーブルを用いることで、DLデータの送信タイミングを正確に把握することができる。 Here, we consider a case in which UE 100 uses a table to determine the timing of receiving DL data. For example, the table stores whether or not DL data is to be transmitted at each timing, which represents the time of a day in subframe units starting from midnight. UE 100 can accurately determine the timing of transmitting DL data by using the table.

 しかし、一日分の時間をサブフレーム単位に表したテーブルを用いることは、必ずしも現実的とは言えない。テーブルに記憶される情報量が膨大になるからである。 However, using a table that shows the time of a day in subframe units is not necessarily realistic, as the amount of information that would be stored in the table would be enormous.
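The rough arithmetic behind this point (assuming the 1 ms subframe duration used in LTE/NR) shows why a per-subframe table for a full day is impractical:

```python
# One subframe is 1 ms, so a per-subframe table covering a day needs one
# entry per millisecond of the day.
SUBFRAME_MS = 1
entries_per_day = 24 * 60 * 60 * 1000 // SUBFRAME_MS  # 86.4 million entries

# Even at a single bit per entry (send / no-send), the table is large.
bytes_per_day = entries_per_day // 8
```

Roughly 86.4 million entries, i.e. about 10.8 MB per day even at one bit per entry, before any per-cell or per-UE multiplication.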

 そこで、第2実施形態では、AI/ML技術を用いて、DLデータの受信タイミングを決定する。具体的には、第1に、基地局(例えばgNB200)が、間欠受信におけるデータの送信タイミングを推論するための学習済モデルを作成する。第2に、基地局が、学習済モデルをユーザ装置(例えばUE100)へ送信する。第3に、ユーザ装置が、学習済モデルを用いてデータの送信タイミングを推論する。第4に、ユーザ装置が、データの送信タイミングで、データの受信処理を行う。 In the second embodiment, therefore, the timing of receiving DL data is determined using AI/ML technology. Specifically, first, a base station (e.g., gNB200) creates a trained model for inferring the timing of transmitting data in discontinuous reception. Second, the base station transmits the trained model to a user device (e.g., UE100). Third, the user device infers the timing of transmitting data using the trained model. Fourth, the user device performs a data reception process at the timing of transmitting data.

 これにより、例えば、UE100では、テーブルを用いて受信処理を行うことがないため、テーブルを用いる場合と比較して、情報量を削減させることが可能となる。また、UE100では、DLデータの送信タイミングを推論することも可能となり、gNB200の送信タイミングと同期してDLデータを受信することが可能となる。 As a result, for example, in the UE 100, since the reception process is not performed using a table, it is possible to reduce the amount of information compared to the case where a table is used. In addition, in the UE 100, it is also possible to infer the transmission timing of the DL data, and it is possible to receive the DL data in synchronization with the transmission timing of the gNB 200.

 図27は、第2実施形態に係るAI/MLモデルの配置例を表す図である。図27では、gNB200がモデル学習及びモデル推論を行うため、gNB200が送信エンティティTEとなり、UE100が受信エンティティREとなる。 FIG. 27 is a diagram showing an example of the arrangement of an AI/ML model according to the second embodiment. In FIG. 27, the gNB 200 performs model learning and model inference, so the gNB 200 becomes the transmitting entity TE, and the UE 100 becomes the receiving entity RE.

 図27に示すように、gNB200は、ユーザデータ生成部240とタイミング生成部236とを有する。 As shown in FIG. 27, the gNB 200 has a user data generation unit 240 and a timing generation unit 236.

 ユーザデータ生成部240は、UE100宛てのユーザデータ(DLデータ)を生成する。送信部210は、当該ユーザデータをUE100へ送信する。 The user data generation unit 240 generates user data (DL data) addressed to the UE 100. The transmission unit 210 transmits the user data to the UE 100.

 タイミング生成部236は、基準時間からの経過時間を表すタイミング(又はコード)を生成する。基準時間は、毎年の1月1日、毎月の1日、又は毎日の0:00でもよい。経過時間は、所定の時間単位で表されてもよい。具体的には、経過時間は、サブフレーム単位で表されてもよい。当該経過時間は、スロット単位で表されてもよい。当該経過時間は、無線フレーム単位又は秒単位で表されてもよい。 The timing generation unit 236 generates timing (or code) that indicates the elapsed time from a reference time. The reference time may be January 1st of every year, the 1st of every month, or 0:00 every day. The elapsed time may be expressed in a predetermined time unit. Specifically, the elapsed time may be expressed in subframe units. The elapsed time may be expressed in slot units. The elapsed time may be expressed in radio frame units or seconds.

 また、タイミング生成部236は、ユーザデータ生成部240からユーザデータを受け取ると、受け取ったタイミングで、ユーザデータの送信実行を表す情報をデータ収集部A1へ出力する。更に、タイミング生成部236は、ユーザデータの送信実行が行われるユーザデータの送信タイミングをデータ収集部A1へ出力する。送信タイミングは、コードにより表されてもよい。タイミング生成部236は、ユーザデータを受け取ったタイミング以外のタイミングでは、ユーザデータの送信が実行されないことを表す情報をデータ収集部A1へ出力してもよい。 Furthermore, when the timing generation unit 236 receives user data from the user data generation unit 240, it outputs information indicating the execution of transmission of the user data to the data collection unit A1 at the timing of receipt. Furthermore, the timing generation unit 236 outputs the user data transmission timing at which the transmission of the user data is executed to the data collection unit A1. The transmission timing may be represented by a code. The timing generation unit 236 may output information indicating that the transmission of the user data will not be executed at any timing other than the timing at which the user data is received to the data collection unit A1.

 モデル学習部A2で用いられる学習用データは、ユーザデータの送信実行を表す情報(又は所定データ)と、当該ユーザデータの送信タイミング(又はコード)とである。モデル学習部A2では、ユーザデータの送信実行を表す情報を入力すると、当該ユーザデータの送信タイミングを推論する学習済モデルを作成する。或いは、モデル学習部A2では、タイミング(又はコード)を表す情報を入力すると、送信タイミングかどうかを推論する学習済モデルを作成する。例えば、当該学習済モデルでは、「10時40分30秒」(タイミング)を入力すると、「送信タイミング」又は「非送信タイミング」(送信タイミングかどうか)を推論することができる。学習済モデルは、ユーザデータの送信タイミングを推論するためのモデルであってもよい。送信部210は、学習済モデルをUE100へ送信する。 The learning data used by the model learning unit A2 is information (or predetermined data) indicating the execution of transmission of user data, and the transmission timing (or code) of the user data. When the model learning unit A2 receives information indicating the execution of transmission of user data, it creates a learned model that infers the transmission timing of the user data. Alternatively, when the model learning unit A2 receives information indicating a timing (or code), it creates a learned model that infers whether it is the transmission timing. For example, when the learned model receives "10:40:30" (timing), it can infer "transmission timing" or "non-transmission timing" (whether it is the transmission timing). The learned model may be a model for inferring the transmission timing of user data. The transmission unit 210 transmits the learned model to the UE 100.
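The shape of the training data described above can be sketched as follows (a toy stand-in: a real deployment would fit an ML model to these pairs, and the timing values are invented examples):

```python
# Hypothetical sketch of the learning data: each sample pairs a timing
# (or code) with whether user data was actually transmitted at that
# timing. A dict stands in for the learned timing classifier.
samples = [
    ("10:40:30", "transmission"),      # user data handed over at this timing
    ("10:40:31", "non-transmission"),  # no user data at this timing
    ("10:40:32", "non-transmission"),
]

model = dict(samples)  # stand-in for the trained model of model learning unit A2

def infer(timing):
    """Return 'transmission' or 'non-transmission' for a timing input."""
    return model.get(timing, "non-transmission")
```

This mirrors the example in the text: given the input "10:40:30", the model infers "transmission timing".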

 (第2実施形態に係る動作例)
 次に、第2実施形態に係る動作例を説明する。
(Operation example according to the second embodiment)
Next, an operation example according to the second embodiment will be described.

 図28は、第2実施形態に係る動作例を表す図である。 FIG. 28 shows an example of operation according to the second embodiment.

 図28に示すように、ステップS701において、gNB200は、UE100に対してDRX設定を行う。DRX設定はRRCメッセージを用いて設定される。gNB200は、自身が学習済モデルの作成を行うことを示す情報をUE100へ通知してもよい。 As shown in FIG. 28, in step S701, gNB200 performs DRX configuration for UE100. The DRX configuration is configured using an RRC message. gNB200 may notify UE100 of information indicating that it will create a trained model.

 ステップS702において、gNB200は学習モードを開始する。 In step S702, gNB200 starts learning mode.

 ステップS703において、gNB200は、学習済モデルを作成する。gNB200は、上述したように、ユーザデータの送信実行を表す情報を推論用データとし、当該ユーザデータの送信タイミング(又はコード)を推論結果とする学習済モデルを作成する。或いは、gNB200は、タイミング(又はコード)を表す情報を入力すると、送信タイミングかどうかを推論する学習済モデルを作成する。例えば、当該学習済モデルでは、「10時40分30秒」(タイミング)を入力すると、「送信タイミング」又は「非送信タイミング」(送信タイミングかどうか)を推論することができる。 In step S703, gNB200 creates a trained model. As described above, gNB200 creates a trained model in which information representing the execution of transmission of user data is used as inference data and the transmission timing (or code) of the user data is the inference result. Alternatively, gNB200 creates a trained model that infers whether it is a transmission timing when information representing a timing (or code) is input. For example, with this trained model, when "10:40:30" (timing) is input, it can infer "transmission timing" or "non-transmission timing" (whether it is a transmission timing).

 ステップS704において、gNB200は、学習モードから推論モードへの切り替えを行う。 In step S704, gNB200 switches from learning mode to inference mode.

 ステップS705において、gNB200は、ステップS703で作成した学習済モデルをUE100へ送信する。 In step S705, gNB200 transmits the trained model created in step S703 to UE100.

 ステップS706において、gNB200は、制御データを用いて、現在時刻が基準タイミングからのどの位置にあるのかを示す情報をUE100へ送信する。当該情報は、現在時刻を表す情報であってもよい。 In step S706, the gNB 200 uses control data to transmit information indicating where the current time is from the reference timing to the UE 100. The information may be information indicating the current time.

 ステップS707において、UE100は、gNB200からのDRX設定(ステップS701)に従って、間欠受信を開始する。 In step S707, UE100 starts discontinuous reception according to the DRX configuration from gNB200 (step S701).

 ステップS708において、UE100は、ステップS705で受信した学習済モデルを利用して、ユーザデータの送信タイミングを推論する。UE100は、当該送信タイミングを受信タイミングとしてもよい。UE100は、現在が基準タイミングからのどの位置にあるのかを示す情報(ステップS706)に基づいて、当該受信タイミング(又は当該送信タイミング)を決定する。UE100は、送信タイミングに対してマージン時間を確保した時間を受信タイミングとしてもよい。図29は、送信タイミングに対して受信タイミングにマージン時間が確保された例を表している。マージン時間は、ステップS701において、制御データを利用して、gNB200がUE100へ設定してもよい。 In step S708, UE100 infers the transmission timing of user data using the learned model received in step S705. UE100 may set the transmission timing as the reception timing. UE100 determines the reception timing (or the transmission timing) based on information indicating the current position from the reference timing (step S706). UE100 may set the reception timing to a time with a margin time secured for the transmission timing. Figure 29 shows an example in which a margin time is secured for the reception timing with respect to the transmission timing. The margin time may be set to UE100 by gNB200 using control data in step S701.
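The margin handling of FIG. 29 amounts to waking the receiver slightly before the inferred transmission timing. A minimal sketch (the function name and default margin value are hypothetical):

```python
# Sketch of margin handling: the UE starts its reception processing
# `margin_ms` before the inferred transmission timing, so the receiver
# is ready when the gNB actually transmits.
def reception_start_ms(inferred_tx_ms, margin_ms=2):
    """Start receiving `margin_ms` before the inferred transmission time."""
    return max(0, inferred_tx_ms - margin_ms)

start = reception_start_ms(inferred_tx_ms=500)
```

As the text notes, the margin value itself may be configured in the UE 100 by the gNB 200 via control data in step S701.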

 図28に戻り、ステップS709において、UE100は、ステップS708で推論した送信タイミング(又は受信タイミング)で、DLデータを受信する。 Returning to FIG. 28, in step S709, UE 100 receives DL data at the transmission timing (or reception timing) inferred in step S708.

 (第2実施形態に係る他の例1)
 第2実施形態では、gNB200がDLデータの送信タイミングを推論する学習済モデルを作成する例を説明した。例えば、UE100がULデータの送信タイミングを推論する学習済モデルを作成してもよい。この場合、UE100が、ユーザデータの送信実行を表す情報と、当該ユーザデータの送信タイミング(又はコード)を学習用データとして用いて、モデル学習を行う。UE100は、モデル学習により、ユーザデータ(ULデータ)の送信実行を表す情報を推論用データとして、当該ユーザデータの送信タイミングを推論結果とする学習済モデルを作成する。UE100は、当該学習済モデルをgNB200へ送信し、gNB200では、当該学習済モデルを用いて、ULデータの送信タイミングを推論する。gNB200は、推論した送信タイミング(又は受信タイミング)においてUE100からのULデータを受信する。
(Another Example 1 According to the Second Embodiment)
In the second embodiment, an example was described in which the gNB 200 creates a trained model that infers the transmission timing of DL data. For example, the UE 100 may create a trained model that infers the transmission timing of UL data. In this case, the UE 100 performs model learning using information representing the execution of transmission of user data and the transmission timing (or code) of the user data as learning data. The UE 100 creates a trained model in which the transmission timing of the user data is the inference result, using information representing the execution of transmission of user data (UL data) as inference data, by model learning. The UE 100 transmits the trained model to the gNB 200, and the gNB 200 infers the transmission timing of the UL data using the trained model. The gNB 200 receives the UL data from the UE 100 at the inferred transmission timing (or reception timing).

 (第2実施形態に係る他の例2)
 第2実施形態において、gNB200は、実際の上りデータと学習済みモデルに相違があることを検出するケースが想定される。また、相違があることにより、gNB200からUE100への下りデータが送信出来ず送信用のバッファが溜まっていくことが検出されるケースも想定される。つまり、学習済みモデルと現実の動作に乖離が検出されることが想定される。このようなケースにおいて、gNB200は、救済措置として、UE100に対して次のUE100の受信タイミングにおいて、当該学習済みモデルの使用を停止してもよい。或いは、gNB200は、救済措置として、現在使用中の学習済モデルとは別の学習済みモデルの利用を開始するために、当該別の学習済みモデルの送信を開始してもよい。
(Another Example 2 According to the Second Embodiment)
In the second embodiment, it is assumed that the gNB 200 detects a difference between the actual uplink data and the trained model. It is also assumed that a case is detected in which downlink data cannot be transmitted from the gNB 200 to the UE 100 due to the difference, and the transmission buffer accumulates. In other words, it is assumed that a deviation is detected between the trained model and the actual operation. In such a case, as a relief measure, the gNB 200 may cause the UE 100 to stop using the trained model from the next reception timing of the UE 100. Alternatively, as a relief measure, the gNB 200 may start transmitting another trained model, in order to start using a trained model different from the one currently in use.

 また、第2実施形態に係る他の例1において、UE100が自身の受信タイミングに下りデータを受信しないことが続くことを検出するケースも想定される。このようなケースにおいても、救済措置として、UE100は、gNB200にその旨を通知し、学習済みモデルの使用停止、あるいは別の学習済みモデルの送信の要求を行ってもよい。当該通知及び当該要求は、RRCメッセージ(又は新規に規定されたメッセージ)などを利用して送信されてもよい。また、gNB200は、UE100に対して、所定回数(例えば5回)の受信タイミングで下りデータが無い場合はその旨をgNB200へ通知するなど、予め当該通知の実施条件を指定しておいてもよい。当該通知及び当該実施条件の指定も、RRCメッセージ(又は新規に規定されたメッセージ)などを利用して送信されてもよい。 In addition, in another example 1 according to the second embodiment, a case is also assumed in which UE100 detects that it continues not to receive downlink data at its own reception timing. Even in such a case, as a relief measure, UE100 may notify gNB200 of the situation and request the suspension of use of the trained model or the transmission of another trained model. The notification and the request may be transmitted using an RRC message (or a newly defined message) or the like. In addition, gNB200 may specify in advance the implementation conditions of the notification, such as notifying gNB200 of the absence of downlink data at a predetermined number of reception timings (e.g., five times) for UE100. The notification and the specification of the implementation conditions may also be transmitted using an RRC message (or a newly defined message) or the like.
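The reporting condition in the passage above can be sketched as a simple consecutive-miss counter (the function name is hypothetical, and the threshold of 5 is the example value from the text):

```python
# Sketch of the notification condition: notify the gNB after a
# configured number of consecutive reception timings with no downlink
# data (True = DL data received at that timing, False = none).
def should_notify(history, threshold=5):
    """Return True when the last `threshold` timings all carried no data."""
    if len(history) < threshold:
        return False
    return not any(history[-threshold:])

trigger = should_notify([False, False, False, False, False])
```

When the condition fires, the UE would send the notification (e.g., via an RRC message or a newly defined message) as described above.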

 [その他の実施形態]
 上述した第1実施形態及び第2実施形態では、主に、教師あり学習について説明したが、これに限定されない。例えば、第1実施形態及び第2実施形態は、教師なし学習又は強化学習に適用されてもよい。
[Other embodiments]
In the above-described first and second embodiments, supervised learning has been mainly described, but the present invention is not limited thereto. For example, the first and second embodiments may be applied to unsupervised learning or reinforcement learning.

 上述の各動作フローは、別個独立に実施する場合に限らず、2以上の動作フローを組み合わせて実施可能である。例えば、1つの動作フローの一部のステップを他の動作フローに追加してもよいし、1つの動作フローの一部のステップを他の動作フローの一部のステップと置換してもよい。各フローにおいて、必ずしもすべてのステップを実行する必要は無く、一部のステップのみを実行してもよい。 Each of the above-mentioned operation flows can be implemented not only separately but also by combining two or more operation flows. For example, some steps of one operation flow can be added to another operation flow, or some steps of one operation flow can be replaced with some steps of another operation flow. In each flow, it is not necessary to execute all steps, and only some of the steps can be executed.

 上述の実施形態及び実施例において、基地局がNR基地局(gNB)である一例について説明したが基地局がLTE基地局(eNB)又は6G基地局であってもよい。また、基地局は、IAB(Integrated Access and Backhaul)ノード等の中継ノードであってもよい。基地局は、IABノードのDUであってもよい。また、UE100は、IABノードのMT(Mobile Termination)であってもよい。 In the above-mentioned embodiment and example, an example in which the base station is an NR base station (gNB) has been described, but the base station may be an LTE base station (eNB) or a 6G base station. The base station may also be a relay node such as an IAB (Integrated Access and Backhaul) node. The base station may be a DU of an IAB node. The UE 100 may also be an MT (Mobile Termination) of an IAB node.

 また、用語「ネットワークノード」は、主として基地局を意味するが、コアネットワークの装置又は基地局の一部(CU、DU、又はRU)を意味してもよい。また、ネットワークノードは、コアネットワークの装置の少なくとも一部と基地局の少なくとも一部との組み合わせにより構成されてもよい。 The term "network node" primarily refers to a base station, but may also refer to a core network device or part of a base station (CU, DU, or RU). A network node may also be composed of a combination of at least part of a core network device and at least part of a base station.

 また、上述した実施形態に係る各処理又は各機能をコンピュータに実行させるプログラム(例えば情報処理プログラム)が提供されてもよい。又は、上述した実施形態に係る各処理又は各機能を移動通信システム1に実行させるプログラム(例えば移動通信プログラム)が提供されてもよい。プログラムは、コンピュータ読取り可能媒体に記録されていてもよい。コンピュータ読取り可能媒体を用いれば、コンピュータにプログラムをインストールすることが可能である。ここで、プログラムが記録されたコンピュータ読取り可能媒体は、非一過性の記録媒体であってもよい。非一過性の記録媒体は、特に限定されるものではないが、例えば、CD-ROM又はDVD-ROM等の記録媒体であってもよい。このような記録媒体は、UE100及びgNB200に含まれるメモリであってもよい。また、UE100又はgNB200が行う各処理を実行する回路を集積化し、UE100又はgNB200の少なくとも一部を半導体集積回路(チップセット、SoC:System on a chip)として構成してもよい。 Also, a program (e.g., an information processing program) that causes a computer to execute each process or each function according to the above-mentioned embodiment may be provided. Or, a program (e.g., a mobile communication program) that causes the mobile communication system 1 to execute each process or each function according to the above-mentioned embodiment may be provided. The program may be recorded in a computer-readable medium. Using a computer-readable medium, it is possible to install the program in a computer. Here, the computer-readable medium on which the program is recorded may be a non-transient recording medium. The non-transient recording medium is not particularly limited, and may be, for example, a recording medium such as a CD-ROM or a DVD-ROM. Such a recording medium may be a memory included in the UE 100 and the gNB 200. Also, a circuit that executes each process performed by the UE 100 or the gNB 200 may be integrated, and at least a part of the UE 100 or the gNB 200 may be configured as a semiconductor integrated circuit (chip set, SoC: System on a chip).

 UE100又はgNB200(ネットワークノード)により実現される機能は、当該記載された機能を実現するようにプログラムされた、汎用プロセッサ、特定用途プロセッサ、集積回路、ASICs(Application Specific Integrated Circuits)、CPU(a Central Processing Unit)、従来型の回路、及び/又はそれらの組合せを含む、circuitry又はprocessing circuitryにおいて実装されてもよい。プロセッサは、トランジスタやその他の回路を含み、circuitry又はprocessing circuitryとみなされる。プロセッサは、メモリに格納されたプログラムを実行する、programmed processorであってもよい。本明細書において、circuitry、ユニット、手段は、記載された機能を実現するようにプログラムされたハードウェア、又は実行するハードウェアである。当該ハードウェアは、本明細書に開示されているあらゆるハードウェア、又は、当該記載された機能を実現するようにプログラムされた、又は、実行するものとして知られているあらゆるハードウェアであってもよい。当該ハードウェアがcircuitryのタイプであるとみなされるプロセッサである場合、当該circuitry、手段、又はユニットは、ハードウェアと、当該ハードウェア及び又はプロセッサを構成する為に用いられるソフトウェアの組合せである。 The functions realized by UE100 or gNB200 (network node) may be implemented in circuitry or processing circuitry, including general-purpose processors, application-specific processors, integrated circuits, ASICs (Application Specific Integrated Circuits), CPUs (Central Processing Units), conventional circuits, and/or combinations thereof, programmed to realize the described functions. A processor includes transistors and other circuits and is considered to be circuitry or processing circuitry. A processor may be a programmed processor that executes a program stored in a memory. In this specification, circuitry, unit, and means are hardware that is programmed to realize the described functions or hardware that executes them. The hardware may be any hardware disclosed herein or any hardware known to be programmed or capable of performing the described functions. If the hardware is a processor considered to be a type of circuitry, the circuitry, means, or unit is a combination of hardware and software used to configure the hardware and/or processor.

 本開示で使用されている「に基づいて(based on)」、「に応じて(depending on/in response to)」という記載は、別段に明記されていない限り、「のみに基づいて」、「のみに応じて」を意味しない。「に基づいて」という記載は、「のみに基づいて」及び「に少なくとも部分的に基づいて」の両方を意味する。同様に、「に応じて」という記載は、「のみに応じて」及び「に少なくとも部分的に応じて」の両方を意味する。「含む(include)」、「備える(comprise)」、及びそれらの変形の用語は、列挙する項目のみを含むことを意味せず、列挙する項目のみを含んでもよいし、列挙する項目に加えてさらなる項目を含んでもよいことを意味する。また、本開示において使用されている用語「又は(or)」は、排他的論理和ではないことが意図される。さらに、本開示で使用されている「第1」、「第2」等の呼称を使用した要素へのいかなる参照も、それらの要素の量又は順序を全般的に限定するものではない。これらの呼称は、2つ以上の要素間を区別する便利な方法として本明細書で使用され得る。したがって、第1及び第2の要素への参照は、2つの要素のみがそこで採用され得ること、又は何らかの形で第1の要素が第2の要素に先行しなければならないことを意味しない。本開示において、例えば、英語でのa,an,及びtheのように、翻訳により冠詞が追加された場合、これらの冠詞は、文脈から明らかにそうではないことが示されていなければ、複数のものを含むものとする。 As used in this disclosure, the terms "based on" and "depending on/in response to" do not mean "based only on" or "only in response to" unless otherwise specified. The term "based on" means both "based only on" and "based at least in part on". Similarly, the term "in response to" means both "only in response to" and "at least in part on". The terms "include", "comprise", and variations thereof do not mean including only the recited items, but may include only the recited items or may include additional items in addition to the recited items. In addition, the term "or" as used in this disclosure is not intended to mean an exclusive or. Furthermore, any reference to elements using designations such as "first", "second", etc. as used in this disclosure is not intended to generally limit the quantity or order of those elements. These designations may be used herein as a convenient way to distinguish between two or more elements. Thus, a reference to a first and second element does not imply that only two elements may be employed therein, or that the first element must precede the second element in some manner. In this disclosure, where articles are added by translation, such as, for example, a, an, and the in English, these articles are intended to include the plural unless the context clearly indicates otherwise.

 以上、図面を参照して実施形態について詳しく説明したが、具体的な構成は上述のものに限られることはなく、要旨を逸脱しない範囲内において様々な設計変更等をすることが可能である。また、矛盾しない範囲で、各実施形態、各動作例、又は各処理などを組み合わせることも可能である。 The above describes the embodiments in detail with reference to the drawings, but the specific configuration is not limited to the above, and various design changes can be made without departing from the gist of the invention. Furthermore, it is also possible to combine the various embodiments, operation examples, or processes, etc., as long as they are not inconsistent.

 本願は、日本国特許出願第2023-016449号(2023年2月6日出願)の優先権を主張し、その内容の全てが本願明細書に組み込まれている。 This application claims priority from Japanese Patent Application No. 2023-016449 (filed February 6, 2023), the entire contents of which are incorporated herein by reference.

 (付記)
 (付記1)
 移動通信システムにおける通信制御方法であって、
 送信エンティティが、所定データと当該所定データを表すコードとを学習用データとして学習済モデルを作成するステップと、
 前記送信エンティティが、前記学習済モデルを受信エンティティへ送信するステップと、
 前記送信エンティティが、前記学習済モデルを用いて前記所定データから前記コードを推論するステップと、
 前記送信エンティティが、前記コードを前記受信エンティティへ送信するステップと、
 前記受信エンティティが、前記学習済モデルを用いて前記コードから前記所定データを取得するステップと、を有する
 通信制御方法。
(Additional Note)
(Appendix 1)
A communication control method in a mobile communication system, comprising:
A transmitting entity creates a trained model using predetermined data and a code representing the predetermined data as training data;
the transmitting entity transmitting the trained model to a receiving entity;
the transmitting entity inferring the code from the given data using the trained model;
the transmitting entity transmitting the code to the receiving entity;
The receiving entity obtains the predetermined data from the code using the learned model.

 (付記2)
 前記送信エンティティはユーザ装置であって、前記受信エンティティはネットワークノードであり、
 前記所定データは、CSI(Channel State Information)状態報告で用いられるCQI(Channel Quality Indicator)とPMI(Precoding Matrix Indicator)とRI(Rank Indicator)との組である
 付記1記載の通信制御方法。
(Appendix 2)
the transmitting entity is a user equipment and the receiving entity is a network node;
The communication control method according to claim 1, wherein the predetermined data is a set of a Channel Quality Indicator (CQI), a Precoding Matrix Indicator (PMI), and a Rank Indicator (RI) used in a Channel State Information (CSI) status report.

 (付記3)
 前記作成するステップは、前記ユーザ装置が、地域毎に前記学習済モデルを作成するステップを含む
 付記1又は付記2に記載の通信制御方法。
(Appendix 3)
The communication control method according to claim 1 or 2, wherein the creating step includes a step in which the user device creates the trained model for each region.

 (付記4)
 前記コードを送信するステップは、前記ユーザ装置が、前記コードと、前記地域を識別する地域識別情報とを前記ネットワークノードへ送信するステップを含む
 付記1乃至付記3のいずれかに記載の通信制御方法。
(Appendix 4)
The communication control method according to any one of Supplementary Note 1 to Supplementary Note 3, wherein the step of transmitting the code includes a step of the user equipment transmitting the code and area identification information identifying the area to the network node.

 (付記5)
 前記送信エンティティはユーザ装置であって、前記受信エンティティはネットワークノードであり、
 前記作成するステップは、前記ユーザ装置が、ユーザ装置識別情報を含む前記コードを用いて前記学習済モデルを作成するステップを含む
 付記1乃至付記4のいずれかに記載の通信制御方法。
(Appendix 5)
the transmitting entity is a user equipment and the receiving entity is a network node;
The communication control method according to any one of Supplementary Note 1 to Supplementary Note 4, wherein the creating step includes a step in which the user device creates the trained model using the code including user device identification information.

 (付記6)
 前記送信エンティティはユーザ装置であって、前記受信エンティティはネットワークノードであり、
 前記ネットワークノードが、前記コードに使用する範囲を前記ユーザ装置に設定するステップを更に有し、
 前記作成するステップは、前記ユーザ装置が、前記所定データと、前記設定された範囲内における前記コードとを前記学習用データとして前記学習済モデルを作成するステップを含む
 付記1乃至付記5のいずれかに記載の通信制御方法。
(Appendix 6)
the transmitting entity is a user equipment and the receiving entity is a network node;
The method further comprises the step of: the network node configuring the user equipment with a range to be used for the code;
The communication control method according to any one of Supplementary Note 1 to Supplementary Note 5, wherein the creating step includes a step in which the user device creates the trained model using the specified data and the code within the set range as the training data.

 (Supplementary Note 7)
 A communication control method in a mobile communication system, comprising:
 a step in which a network node creates a trained model for inferring a transmission timing of data in discontinuous reception;
 a step in which the network node transmits the trained model to a user equipment;
 a step in which the user equipment infers the transmission timing of the data using the trained model; and
 a step in which the user equipment performs reception processing of the data at the transmission timing of the data.

 (Supplementary Note 8)
 The communication control method according to Supplementary Note 7, wherein the transmission timing represents an elapsed time from a reference timing, the method further comprising a step in which the network node transmits, to the user equipment, information indicating where the current time is positioned relative to the reference timing, and wherein the inferring step includes a step in which the user equipment determines the transmission timing based on the information.
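A minimal sketch of how the position information of Supplementary Note 8 could be combined with a model-predicted offset, assuming the reference timing repeats with a periodic discontinuous reception (DRX) cycle. The function name, parameters, and the modular-arithmetic scheme are assumptions for illustration, not taken from the application:

```python
# The UE combines a model-predicted transmission offset with the
# network-reported position of the current time relative to the
# reference timing to decide when to start its receive window.

def next_reception_delay(predicted_offset_ms, elapsed_since_reference_ms, cycle_ms):
    """Return how long the UE should wait before its next receive window.

    predicted_offset_ms: model-inferred transmission time after the reference.
    elapsed_since_reference_ms: network-reported current position in the cycle.
    cycle_ms: assumed length of the cycle over which the reference repeats.
    """
    return (predicted_offset_ms - elapsed_since_reference_ms) % cycle_ms

# Model predicts data 30 ms after the reference; we are 10 ms into the cycle.
assert next_reception_delay(30, 10, 1280) == 20
# If the predicted instant has already passed, wait until the next cycle.
assert next_reception_delay(30, 50, 1280) == 1260
```

Sleeping until the computed delay elapses, rather than waking at every cycle, is what lets the UE save power while still catching the inferred transmission.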

1: Mobile communication system
20: 5GC (CN)
100: UE
110: Receiving unit
120: Transmitting unit
130: Control unit
131: CSI generation unit
132: Optimal beam determination unit
133: Position information generation unit
135: Code generation unit
150: GNSS receiver
200: gNB
210: Transmitting unit
220: Receiving unit
230: Control unit
231: CSI generation unit
236: Timing generation unit
240: User data generation unit
A1: Data collection unit
A2: Model learning unit
A3: Model inference unit
A4: Data processing unit
TE: Transmitting entity
RE: Receiving entity

Claims (8)

1. A communication control method in a mobile communication system, comprising:
 a transmitting entity creating a trained model using, as training data, predetermined data and a code representing the predetermined data;
 the transmitting entity transmitting the trained model to a receiving entity;
 the transmitting entity inferring the code from the predetermined data using the trained model;
 the transmitting entity transmitting the code to the receiving entity; and
 the receiving entity obtaining the predetermined data from the code using the trained model.
2. The communication control method according to claim 1, wherein the transmitting entity is a user equipment, the receiving entity is a network node, and the predetermined data is a set of a Channel Quality Indicator (CQI), a Precoding Matrix Indicator (PMI), and a Rank Indicator (RI) used in a Channel State Information (CSI) report.
3. The communication control method according to claim 2, wherein the creating includes the user equipment creating the trained model for each region.
4. The communication control method according to claim 3, wherein the transmitting of the code includes the user equipment transmitting the code and region identification information identifying the region to the network node.
5. The communication control method according to claim 1, wherein the transmitting entity is a user equipment, the receiving entity is a network node, and the creating includes the user equipment creating the trained model using the code including user equipment identification information.
6. The communication control method according to claim 1, wherein the transmitting entity is a user equipment and the receiving entity is a network node, the method further comprising the network node configuring, for the user equipment, a range of values to be used for the code, and wherein the creating includes the user equipment creating the trained model using, as the training data, the predetermined data and the code within the configured range.
7. A communication control method in a mobile communication system, comprising:
 a network node creating a trained model for inferring a transmission timing of data in discontinuous reception;
 the network node transmitting the trained model to a user equipment;
 the user equipment inferring the transmission timing of the data using the trained model; and
 the user equipment performing reception processing of the data at the transmission timing of the data.
8. The communication control method according to claim 7, wherein the transmission timing represents an elapsed time from a reference timing, the method further comprising the network node transmitting, to the user equipment, information indicating where the current time is positioned relative to the reference timing, and wherein the inferring includes the user equipment determining the transmission timing based on the information.
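The overall flow of claim 1 (train a model, share it, send only short codes, decode at the receiver) can be illustrated with a toy bidirectional codebook standing in for the trained model. This is a simplified sketch under assumed names (`SharedModel`, the string-valued CQI/PMI/RI tuples), not the claimed implementation:

```python
# End-to-end sketch of the claimed flow: the transmitting entity trains a
# model, shares it with the receiving entity, then transmits only codes.

class SharedModel:
    """Toy stand-in for the trained model: a bidirectional codebook."""
    def __init__(self, training_pairs):
        # training_pairs: iterable of (predetermined_data, code) pairs.
        self.enc = dict(training_pairs)
        self.dec = {c: d for d, c in training_pairs}

    def infer_code(self, data):   # sender side: data -> code
        return self.enc[data]

    def recover(self, code):      # receiver side: code -> data
        return self.dec[code]

# Step 1: sender builds the model from (data, code) training pairs.
pairs = [(("CQI=12", "PMI=3", "RI=2"), 0), (("CQI=7", "PMI=1", "RI=1"), 1)]
sender_model = SharedModel(pairs)

# Step 2: sender transmits the model; receiver now holds an identical copy.
receiver_model = SharedModel(pairs)

# Steps 3-5: sender infers a code, transmits only the code, receiver decodes.
code = sender_model.infer_code(("CQI=12", "PMI=3", "RI=2"))
recovered = receiver_model.recover(code)
assert recovered == ("CQI=12", "PMI=3", "RI=2")
```

Because the code is much smaller than the data it represents, transmitting codes instead of the data itself reduces the reporting overhead on the radio link.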
PCT/JP2024/003200 2023-02-06 2024-02-01 Communication control method Ceased WO2024166779A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-016449 2023-02-06
JP2023016449 2023-02-06

Publications (1)

Publication Number Publication Date
WO2024166779A1 true WO2024166779A1 (en) 2024-08-15

Family

ID=92262503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/003200 Ceased WO2024166779A1 (en) 2023-02-06 2024-02-01 Communication control method

Country Status (1)

Country Link
WO (1) WO2024166779A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022133022A * 2021-03-01 2022-09-13 Shizuoka University (National University Corporation) Information processing system, teacher data generation method, trained model generation method, and information processing program
WO2022260105A1 * 2021-06-10 2022-12-15 DENSO Corporation Communication device, base station, and communication method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KEETH JAYASINGHE, NOKIA, NOKIA SHANGHAI BELL: "Other aspects on ML for CSI feedback enhancement", 3GPP TSG RAN WG1 #111 R1-2212328, 7 November 2022 (2022-11-07), XP052222886 *
WEI ZENG, APPLE: "Discussion on other aspects of AI/ML for CSI enhancement", 3GPP TSG RAN WG 1 #111 R1-2211806, 7 November 2022 (2022-11-07), XP052222371 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 24753222
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 24753222
    Country of ref document: EP
    Kind code of ref document: A1