
WO2025131654A1 - IntraTMP LIC extended template and probing - Google Patents

IntraTMP LIC extended template and probing

Info

Publication number
WO2025131654A1
Authority
WO
WIPO (PCT)
Prior art keywords
template
lic
intratmp
samples
current block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/084362
Other languages
French (fr)
Inventor
Fabrice Le Leannec
Karam NASER
Thierry DUMAS
Ya CHEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InterDigital CE Patent Holdings SAS
Original Assignee
InterDigital CE Patent Holdings SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by InterDigital CE Patent Holdings SAS filed Critical InterDigital CE Patent Holdings SAS
Publication of WO2025131654A1 publication Critical patent/WO2025131654A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • Video coding systems may be used to compress digital video signals, e.g., to reduce the storage and/or transmission bandwidth needed for such signals.
  • Video coding systems may include, for example, block-based, wavelet-based, and/or object-based systems.
  • a video decoding/encoding device may include a processor.
  • the device may be configured to obtain an intra template matching prediction (IntraTMP) template associated with a current block.
  • the device may determine local illumination compensation (LIC) model parameters based on a block vector of a set of samples of the IntraTMP template. The at least one sample of the set of samples may be non-adjacent to the current block.
  • the device may decode/encode the current block based on the determined LIC model parameters.
  • the device may include one or more features. For example, the device may determine a template matching cost based on applying the LIC model parameters to a probing area.
  • the probing area may include a subset of samples of the IntraTMP template.
  • the device may, based on a condition that the template matching cost satisfies a threshold, determine to decode the current block based on the LIC model parameters.
  • the device may determine a template matching cost based on applying the LIC model parameters to a probing area.
  • the probing area may include a subset of samples of the IntraTMP template.
  • the device may, based on a condition that the template matching cost satisfies a threshold, determine to encode the current block based on the LIC model parameters.
  • the at least one sample of the set of samples of the IntraTMP template may be different than the subset of samples of the IntraTMP template.
  • the set of samples may include the IntraTMP template.
  • the device may receive, in video data, an indication of a template area type.
  • the set of samples of the IntraTMP template may be obtained based on the template area type.
  • the device may determine a relevance metric for using the LIC model parameters to decode the current block based on the template matching cost.
  • the device may obtain an LIC prediction indication in video data.
  • the device may, based on a condition that the relevance metric and the LIC prediction indication satisfy a threshold, determine to apply the LIC model parameters to the current block.
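  • As an illustration of the derivation described above (a minimal sketch, not the normative process; the least-squares formulation and all function and variable names are assumptions), the linear LIC model pred' = alpha * ref + beta may be fit over corresponding template samples and block-vector-displaced reference samples:

        import numpy as np

        def derive_lic_parameters(template_samples, reference_samples):
            """Fit the linear LIC model y = alpha * x + beta by least squares.

            template_samples:  reconstructed samples of the (possibly extended,
                               non-adjacent) IntraTMP template of the current block.
            reference_samples: the corresponding samples in the reference area,
                               displaced by the IntraTMP block vector.
            """
            x = np.asarray(reference_samples, dtype=np.float64)
            y = np.asarray(template_samples, dtype=np.float64)
            n = x.size
            sx, sy = x.sum(), y.sum()
            sxx, sxy = (x * x).sum(), (x * y).sum()
            denom = n * sxx - sx * sx
            if denom == 0:  # flat reference area: fall back to an offset-only model
                return 1.0, (sy - sx) / n
            alpha = (n * sxy - sx * sy) / denom
            beta = (sy - alpha * sx) / n
            return alpha, beta

        def apply_lic(prediction, alpha, beta, bit_depth=10):
            """Apply the LIC model to a prediction block and clip to sample range."""
            out = alpha * np.asarray(prediction, dtype=np.float64) + beta
            return np.clip(np.rint(out), 0, (1 << bit_depth) - 1).astype(np.int32)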
  • a video decoding device may obtain an IntraTMP template associated with a current block.
  • the video decoding device may determine LIC model parameter(s) based on a block vector of a set of samples of the IntraTMP template.
  • One or more samples of the set of samples may be non-adjacent to the current block.
  • the set of samples may comprise the full IntraTMP template.
  • the video decoding device may derive an LIC model based on the determined LIC model parameters.
  • Decoding the current block may be further based on the LIC model.
  • the video decoding device may decode the current block based on the determined LIC model parameters and/or the LIC model.
  • the video decoding device may determine a template matching cost based on applying the LIC model to a probing area, wherein the probing area comprises a subset of samples of the IntraTMP template.
  • the video decoding device may determine whether to apply the LIC model for the current block based on the template matching cost.
  • the video decoding device may determine a relevance metric of using the LIC model to decode the current block based on the template matching cost.
  • the relevance metric of using the LIC model to decode the current block may satisfy a LIC usage condition.
  • the video decoding device may obtain an LIC prediction indication in video data and may determine whether to apply the LIC model for the current block based on the relevance metric and/or the LIC prediction indication.
  • the video decoding device may obtain an indication of an area type associated with the probing area in video data and/or an indication of an area type associated with the set of samples of the IntraTMP template; the set of samples for LIC model derivation may be determined based on the indication(s).
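  • The probing mechanism above can be sketched as follows (the SAD-style cost, the threshold semantics, and the names are illustrative assumptions, not the normative rule): the derived LIC model is applied to the reference samples of a probing area (a subset of the IntraTMP template), and the resulting template matching cost decides whether LIC is applied to the current block.

        import numpy as np

        def probing_cost(probing_ref, probing_rec, alpha, beta):
            """SAD between LIC-compensated reference samples of the probing area
            and the reconstructed samples of that area (a subset of the template)."""
            compensated = alpha * np.asarray(probing_ref, dtype=np.float64) + beta
            return np.abs(compensated - np.asarray(probing_rec, dtype=np.float64)).sum()

        def use_lic_for_block(probing_ref, probing_rec, alpha, beta, threshold):
            """Relevance check: apply the LIC model only if the probing cost
            satisfies the (codec-defined) threshold."""
            return probing_cost(probing_ref, probing_rec, alpha, beta) <= threshold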
  • a video encoding device may obtain an IntraTMP template associated with a current block.
  • the video encoding device may determine LIC model parameters based on a block vector of a set of samples of the IntraTMP template. One or more samples of the set of samples may be non-adjacent to the current block.
  • the video encoding device may encode the current block based on the determined LIC model parameters.
  • the video encoding device may send an indication of an area type associated with the probing area in video data and/or an indication of an area type associated with the set of samples of the IntraTMP template; the set of samples for LIC model derivation may be determined based on the indication(s).
  • Systems, methods, and instrumentalities described herein may involve a decoder.
  • the systems, methods, and instrumentalities described herein may involve an encoder.
  • the systems, methods, and instrumentalities described herein may involve a signal (e.g., from an encoder and/or received by a decoder).
  • a computer-readable medium may include instructions for causing one or more processors to perform methods described herein.
  • a computer program product may include instructions which, when the program is executed by one or more processors, may cause the one or more processors to carry out the methods described herein.
  • FIG. 1A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
  • FIG. 2 illustrates an example video encoder
  • FIG. 3 illustrates an example video decoder.
  • FIG. 4 illustrates an example of a system in which various aspects and examples may be implemented.
  • FIG. 5 illustrates an example reference region of Intra block copy (IBC) mode.
  • FIG. 8 illustrates an example intra template matching search area.
  • FIG. 10 depicts an example reference area used to derive filter coefficients.
  • FIG. 11 depicts an example template and probing line for Intra template matching prediction (IntraTMP).
  • FIG. 12 depicts example neighboring samples used to compute a local illumination compensation (LIC) model.
  • FIG. 13 depicts an example extended set of neighboring samples used to compute an LIC model.
  • FIG. 14 depicts an example extended set of neighboring samples used to compute an LIC model.
  • FIG. 19 depicts an example CU decoding and reconstruction process.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • FIG. 1B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)) do not occur simultaneously.
  • FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the CN 106 may facilitate communications with other networks.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • Although the WTRU is described in FIGS. 1A-1D as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • the other network 112 may be a WLAN.
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic.
  • the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an "ad-hoc” mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
  • the STAs (e.g., every STA, including the AP) may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
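  • As a rough illustration of the sense-and-back-off behavior described above (a toy model only; real 802.11 DCF contention windows, slot timing, and retransmission rules are omitted, and sense_busy is a hypothetical carrier-sense predicate):

        import random

        def csma_ca_attempt(sense_busy, backoff_window=15, max_tries=8):
            """Sense the primary channel; transmit only when it is idle,
            otherwise back off a random number of slots and sense again."""
            slots_waited = 0
            for _ in range(max_tries):
                if not sense_busy():
                    return slots_waited  # channel idle: this STA may transmit now
                slots_waited += random.randint(1, backoff_window)
            return None  # still busy after max_tries senses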
  • High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
  • VHT STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels.
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two noncontiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams.
  • Inverse Fast Fourier Transform (IFFT) processing, and time domain processing may be done on each stream separately.
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
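  • A toy sketch of the 80+80 data path just described, assuming BPSK mapping and an alternate-sample segment parser (both simplifications; the actual 802.11ac segment parser and OFDM framing are more involved):

        import numpy as np

        def transmit_80_plus_80(encoded_bits):
            """Segment-parse channel-coded bits into two streams, then IFFT each
            stream separately as a stand-in for per-segment time-domain processing."""
            bits = np.asarray(encoded_bits)
            stream_a, stream_b = bits[0::2], bits[1::2]  # illustrative parsing rule
            sym_a = np.fft.ifft(2.0 * stream_a - 1.0)    # BPSK then IFFT, segment A
            sym_b = np.fft.ifft(2.0 * stream_b - 1.0)    # BPSK then IFFT, segment B
            return sym_a, sym_b  # one time-domain signal per 80 MHz channel

        def receive_80_plus_80(sym_a, sym_b):
            """Reverse the operation and recombine the streams for the MAC."""
            bits_a = (np.fft.fft(sym_a).real > 0).astype(int)
            bits_b = (np.fft.fft(sym_b).real > 0).astype(int)
            out = np.empty(bits_a.size + bits_b.size, dtype=int)
            out[0::2], out[1::2] = bits_a, bits_b
            return out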
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • the channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum.
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum.
  • 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area.
  • MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths.
  • the MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
  • WLAN systems which may support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel.
  • the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
  • the bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, which supports the smallest bandwidth operating mode.
  • the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP, and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes.
  • Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency band may be considered busy even though a majority of the frequency band remains idle and available.
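  • The "largest common operating bandwidth" rule a few bullets above reduces to taking the minimum over the STAs' maximum supported bandwidths, as in this sketch (names illustrative):

        def primary_channel_bandwidth_mhz(sta_supported_bandwidths):
            """Each entry lists one STA's supported bandwidths in MHz; the primary
            channel width is limited by the most constrained STA in the BSS
            (e.g., 1 MHz if a 1 MHz-only MTC device operates in the BSS)."""
            return min(max(bandwidths) for bandwidths in sta_supported_bandwidths)

        # e.g., primary_channel_bandwidth_mhz([[1], [1, 2, 4, 8, 16], [2, 4]]) == 1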
  • In the United States, the available frequency bands which may be used by 802.11ah are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
  • FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment.
  • the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 113 may also be in communication with the CN 115.
  • the RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment.
  • the gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the gNBs 180a, 180b, 180c may implement MIMO technology.
  • gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c.
  • the gNB 180a may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • the gNBs 180a, 180b, 180c may implement carrier aggregation technology.
  • the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum.
  • the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology.
  • WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
  • the gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c).
  • WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band.
  • WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c.
  • WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously.
  • eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
  • Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b, and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
  • the CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements is depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node.
  • the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like.
  • Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c.
  • different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like.
  • a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
  • the UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • FIGS. 5-21 described herein may provide some examples, but other examples are contemplated.
  • the discussion of FIGS. 5-21 does not limit the breadth of the implementations.
  • At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded.
  • These and other aspects may be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
  • Various methods and other aspects described herein may be used to modify modules (for example, decoding modules) of a video encoder 200 and decoder 300 as shown in FIG. 2 and FIG. 3.
  • the subject matter disclosed herein may be applied, for example, to any type, format or version of video coding, whether described in a standard or a recommendation, whether pre-existing or future- developed, and extensions of any such standards and recommendations. Unless indicated otherwise, or technically precluded, the aspects described in this application may be used individually or in combination.
  • a picture is encoded by the encoder elements as described below.
  • the picture to be encoded is partitioned (202) and processed in units of, for example, coding units (CUs).
  • Each unit is encoded using, for example, either an intra or inter mode.
  • When a unit is encoded in an intra mode, intra prediction (260) is performed.
  • In an inter mode, motion estimation (275) and compensation (270) are performed.
  • the encoder decides (205) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag.
  • Prediction residuals are calculated, for example, by subtracting (210) the predicted block from the original image block.
  • the encoder decodes an encoded block to provide a reference for further predictions.
  • the quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode prediction residuals.
  • In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset) filtering to reduce encoding artifacts.
  • the filtered image is stored at a reference picture buffer (280).
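  • A condensed, illustrative view of the FIG. 2 encoding loop described above (the callables stand in for the numbered modules and are assumptions, not the codec's actual interfaces):

        import numpy as np

        def encode_block(original, intra_pred, inter_pred,
                         transform, quantize, dequantize, inverse_transform):
            """Choose intra or inter prediction, code the residual, then decode it
            again so the encoder's reference matches what the decoder will see."""
            # mode decision (205): here, simply the prediction with less residual energy
            mode, pred = min({"intra": intra_pred, "inter": inter_pred}.items(),
                             key=lambda kv: np.sum((original - kv[1]) ** 2))
            residual = original - pred                            # subtraction (210)
            coeffs = quantize(transform(residual))                # transform and quantize
            recon = pred + inverse_transform(dequantize(coeffs))  # de-quantize (240), inverse transform (250)
            return mode, coeffs, recon  # recon is then in-loop filtered (265) and stored (280)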
  • FIG. 3 is a diagram showing an example of a video decoder.
  • a bitstream is decoded by the decoder elements as described below.
  • Video decoder 300 generally performs a decoding pass reciprocal to the encoding pass as described in FIG. 2.
  • the encoder 200 also generally performs video decoding as part of encoding video data.
  • the input of the decoder includes a video bitstream, which may be generated by video encoder 200.
  • the bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information.
  • the picture partition information indicates how the picture is partitioned.
  • the decoder may therefore divide (335) the picture according to the decoded picture partitioning information.
  • the transform coefficients are de-quantized (340) and inverse transformed (350) to decode the prediction residuals.
  • Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed.
  • the predicted block may be obtained (370) from intra prediction (360) or motion-compensated prediction (i.e., inter prediction) (375).
  • In-loop filters (365) are applied to the reconstructed image.
  • the filtered image is stored at a reference picture buffer (380).
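  • The reciprocal decoder path per FIG. 3 can be sketched the same way (entropy decoding (330) and picture partitioning (335) are assumed already done; the callables are hypothetical stand-ins):

        def decode_block(coeffs, predicted_block, dequantize, inverse_transform):
            """De-quantize (340) and inverse transform (350) the coefficients,
            then combine (355) the residual with the intra (360) or
            motion-compensated (375) prediction."""
            residual = inverse_transform(dequantize(coeffs))
            return predicted_block + residual  # then in-loop filtered (365), stored (380)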
  • FIG. 4 is a diagram showing an example of a system in which various aspects and examples described herein may be implemented.
  • System 400 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers.
  • Elements of system 400, singly or in combination may be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components.
  • the processing and encoder/decoder elements of system 400 are distributed across multiple ICs and/or discrete components.
  • system 400 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports.
  • system 400 is configured to implement one or more of the aspects described in this document.
  • the system 400 includes at least one processor 410 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document.
  • Processor 410 can include embedded memory, input output interface, and various other circuitries as known in the art.
  • the system 400 includes at least one memory 420 (e.g., a volatile memory device, and/or a non-volatile memory device).
  • System 400 includes a storage device 440, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive.
  • the storage device 440 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
  • System 400 includes an encoder/decoder module 430 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 430 can include its own processor and memory.
  • the encoder/decoder module 430 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 430 may be implemented as a separate element of system 400 or may be incorporated within processor 410 as a combination of hardware and software as known to those skilled in the art.
  • Program code to be loaded onto processor 410 or encoder/decoder 430 to perform the various aspects described in this document may be stored in storage device 440 and subsequently loaded onto memory 420 for execution by processor 410.
  • processor 410, memory 420, storage device 440, and encoder/decoder module 430 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
  • memory inside of the processor 410 and/or the encoder/decoder module 430 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
  • a memory external to the processing device (for example, the processing device may be either the processor 410 or the encoder/decoder module 430) is used for one or more of these functions.
  • the external memory may be the memory 420 and/or the storage device 440, for example, a dynamic volatile memory and/or a non-volatile flash memory.
  • an external non-volatile flash memory is used to store the operating system of, for example, a television.
  • a fast external dynamic volatile memory such as a RAM is used as working memory for video encoding and decoding operations.
  • the input to the elements of system 400 may be provided through various input devices as indicated in block 445.
  • Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal.
  • RF radio frequency
  • COMP Component
  • USB Universal Serial Bus
  • HDMI High Definition Multimedia Interface
  • the input devices of block 445 have associated respective input processing elements as known in the art.
  • the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain examples, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and/or (vi) demultiplexing to select the desired stream of data packets.
  • the RF portion of various examples includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers.
  • the RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband.
  • the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band.
  • Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter.
  • the RF portion includes an antenna.
  • the USB and/or HDMI terminals can include respective interface processors for connecting system 400 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 410 as necessary. Similarly, aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 410 as necessary.
  • the demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 410, and encoder/decoder 430 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
  • Various elements of system 400 may be interconnected using a suitable connection arrangement 425, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
  • the system 400 includes communication interface 450 that enables communication with other devices via communication channel 460.
  • the communication interface 450 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 460.
  • the communication interface 450 can include, but is not limited to, a modem or network card and the communication channel 460 may be implemented, for example, within a wired and/or a wireless medium.
  • the Wi-Fi signal of these examples is received over the communications channel 460 and the communications interface 450 which are adapted for Wi-Fi communications.
  • the communications channel 460 of these examples is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications.
  • Other examples provide streamed data to the system 400 using a set-top box that delivers the data over the HDMI connection of the input block 445.
  • Still other examples provide streamed data to the system 400 using the RF connection of the input block 445.
  • various examples provide data in a non-streaming manner.
  • various examples use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth® network.
  • the system 400 can provide an output signal to various output devices, including a display 475, speakers 485, and other peripheral devices 495.
  • the display 475 of various examples includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display.
  • the display 475 may be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device.
  • the display 475 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
  • the other peripheral devices 495 include, in various examples, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system.
  • Various examples use one or more peripheral devices 495 that provide a function based on the output of the system 400. For example, a disk player performs the function of playing the output of the system 400.
  • control signals are communicated between the system 400 and the display 475, speakers 485, or other peripheral devices 495 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention.
  • the output devices may be communicatively coupled to system 400 via dedicated connections through respective interfaces 470, 480, and 490. Alternatively, the output devices may be connected to system 400 using the communications channel 460 via the communications interface 450.
  • the display 475 and speakers 485 may be integrated in a single unit with the other components of system 400 in an electronic device such as, for example, a television.
  • the display interface 470 includes a display driver, such as, for example, a timing controller (T-Con) chip.
  • the display 475 and speakers 485 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 445 is part of a separate set-top box.
  • the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
  • the examples may be carried out by computer software implemented by the processor 410 or by hardware, or by a combination of hardware and software. As a non-limiting example, the examples may be implemented by one or more integrated circuits.
  • the memory 420 may be of any type appropriate to the technical environment and may be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples.
  • the processor 410 may be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
  • Decoding can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display.
  • processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding.
  • such processes also, or alternatively, include processes performed by a decoder of various implementations described in this application, for example, obtaining an IntraTMP template associated with a current block, determining LIC model parameter(s) based on a block vector of a set of samples of the IntraTMP template, decoding the current block based on the determined LIC model parameters, etc.
  • decoding refers only to entropy decoding
  • decoding refers only to differential decoding
  • decoding refers to a combination of entropy decoding and differential decoding.
  • encoding can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream.
  • processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding.
  • such processes also, or alternatively, include processes performed by an encoder of various implementations described in this application, for example, obtaining an IntraTMP template associated with a current block, determining LIC model parameter(s) based on a block vector of a set of samples of the IntraTMP template, encoding the current block based on the determined LIC model parameters, etc.
  • encoding refers only to entropy encoding
  • encoding refers only to differential encoding
  • encoding refers to a combination of differential encoding and entropy encoding.
  • syntax elements as used herein (for example, coding syntax indicating which weight derivation method may be used) are descriptive terms. As such, they do not preclude the use of other syntax element names.
  • the implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
  • references to "one example” or “an example” or “one implementation” or “an implementation”, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the example is included in at least one example.
  • the appearances of the phrase “in one example” or “in an example” or “in one implementation” or “in an implementation”, as well any other variations, appearing in various places throughout this application are not necessarily all referring to the same example.
  • this application may refer to "determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory. Obtaining may include receiving, retrieving, constructing, generating, and/or determining.
  • Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • this application may refer to "receiving” various pieces of information.
  • Receiving is, as with “accessing”, intended to be a broad term.
  • Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory).
  • “receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
  • This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
  • signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various examples. While the preceding relates to the verb form of the word "signal”, the word “signal” can also be used herein as a noun.
  • As will be evident to one of ordinary skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information can include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry the bitstream of a described example.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.
  • the signal may be stored on, or accessed or received from, a processor- readable medium.
  • features described herein may be implemented in a bitstream or signal that includes information generated as described herein. The information may allow a decoder to decode a bitstream, with the encoder, bitstream, and/or decoder operating according to any of the embodiments described.
  • features described herein may be implemented by creating and/or transmitting and/or receiving and/or decoding a bitstream or signal.
  • features described herein may be implemented as a method, process, apparatus, medium storing instructions, medium storing data, or signal.
  • An IBC-coded CU may be treated as the third prediction mode (e.g., other than intra or inter prediction modes).
  • the IBC mode may be applicable to the CUs.
  • IBC mode may be applicable to CUs with width and/or height smaller than or equal to 64 luma samples.
  • IBC mode may be signaled with a flag, and it may be signaled as IBC AMVP mode or IBC skip/merge mode.
  • a merge candidate index may be used to indicate which of the block vectors in the list from neighboring candidate IBC coded blocks may be used to predict the current block.
  • the merge list may include spatial, HMVP, and pairwise candidates.
  • a block vector difference may be coded (e.g., as a motion vector difference may be coded).
  • the block vector prediction method may use two candidates as predictors, for example, one from the left neighbor and one from the above neighbor (e.g., if IBC coded).
  • If a neighbor (e.g., either neighbor) is not available, a default block vector may be used as a predictor.
  • a flag may be signaled to indicate the block vector predictor index.
  • the current block may refer to the reference samples in the bottom-right 64x64 block of the left CTU (e.g., in addition to the already reconstructed samples in the current CTU), for example using CPR mode.
  • the current block may refer (e.g., also refer) to the reference samples in the bottom-left 64x64 block of the left CTU and the reference samples in the top-right 64x64 block of the left CTU, for example using CPR mode.
  • the current block may refer to the reference samples in the bottom-left 64x64 block and bottom-right 64x64 block of the left CTU (e.g., in addition to the already reconstructed samples in the current CTU), for example using CPR mode; otherwise, the current block may refer (e.g., also refer) to reference samples in bottom-right 64x64 block of the left CTU.
  • If a current block falls into the bottom-right 64x64 block of the current CTU, it may refer (e.g., only refer) to the already reconstructed samples in the current CTU, for example using CPR mode.
  • This restriction may allow the IBC mode to be implemented using local on-chip memory for hardware implementations.
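  • The reference-area restrictions above amount to checking that a block vector keeps the displaced block inside the allowed, already-reconstructed region. A generic sketch (the exact per-64x64 left-CTU rules are abstracted into a hypothetical ref_area predicate):

        def bv_is_valid(x, y, w, h, bv_x, bv_y, ref_area):
            """True if the block displaced by (bv_x, bv_y) lies entirely inside
            the permitted IBC reference area (current CTU plus the allowed
            64x64 blocks of the left CTU)."""
            x0, y0 = x + bv_x, y + bv_y
            corners = [(x0, y0), (x0 + w - 1, y0),
                       (x0, y0 + h - 1), (x0 + w - 1, y0 + h - 1)]
            # corner test is a simplification of the normative per-region check
            return all(ref_area(cx, cy) for cx, cy in corners)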
  • IBC merge/AMVP list construction may be performed.
  • the IBC merge/AMVP list construction may be modified, for example by one or more of the following. If an IBC merge/AMVP candidate is valid (e.g., only if the IBC merge/AMVP candidate is valid), the IBC merge/AMVP candidate may be inserted into the IBC merge/AMVP candidate list. Above-right, bottom-left, and/or above-left spatial candidates and one pairwise average candidate may be added into the IBC merge/AMVP candidate list. Template based adaptive reordering (ARMC-TM) may be applied to the IBC merge list.
  • the HMVP table size for IBC may be increased to 25 entries. After IBC merge candidates (e.g., up to 20 IBC merge candidates) are derived with full pruning, they may be reordered together. After reordering, a number of first candidates (e.g., the first 6 candidates) with the lowest template matching costs may be selected as the final candidates in the IBC merge list.
  • the zero-vector candidates used to pad the IBC merge/AMVP list may be replaced with a set of BVP candidates located in the IBC reference region.
  • a zero vector may be invalid as a block vector in IBC merge mode, so, for example, it may be discarded as BVP in the IBC candidate list.
  • FIG. 6 depicts example padding candidates for the replacement of the zero-vector in the IBC list.
  • Three candidates may be located on the nearest corners of the reference region, and three additional candidates may be determined in the middle of the three sub-regions (A, B, and C), whose coordinates may be determined by the width and height of the current block and/or the ΔX and ΔY parameters, as shown in FIG. 6.
  • the reference for IBC may be extended to two CTU rows above the CTU being processed by the encoder or the decoder.
  • FIG. 7 illustrates an example extended reference area for coding CTU (m,n) (e.g., for IBC).
  • the reference area may include CTUs with index (m-2, n-2)...(W, n-2), (0, n-1)...(W, n-1), (0, n)...(m, n), where W denotes the maximum horizontal index within the current tile, slice, or picture.
  • the per-sample block vector search (e.g., which may be referred to as local search) range may be limited to [-(C << 1), C >> 2] horizontally and/or [-C, C >> 2] vertically, for example to adapt to the reference area extension, where C denotes the CTU size (a worked example of these shift expressions follows).
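As an illustration of the shift notation above, with a hypothetical CTU size of C = 128 luma samples, the ranges evaluate as follows (a sketch with an assumed C, not normative values):

```python
C = 128  # hypothetical CTU size in luma samples

horizontal_range = (-(C << 1), C >> 2)  # (-256, 32)
vertical_range = (-C, C >> 2)           # (-128, 32)

print(horizontal_range, vertical_range)
```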
  • IBC with template matching may be performed.
  • template matching based motion search and refinement may be applied to the case of IBC.
  • An IBC-TM merge mode may be used.
  • the IBC-TM merge mode may involve a merge candidate list for block vector (BV) prediction (e.g., a different merge candidate list than the one used by regular IBC merge mode).
  • the candidates may be selected according to a pruning method with a motion distance between the candidates as in the regular TM merge mode.
  • the zero motion candidates may be replaced by (-W, 0), (0, -H), (-W, -H) MVs.
  • the selected candidates may be refined with the template matching method.
  • the TM-merge flag may be signaled to indicate the template matching merge IBC mode.
  • a number of candidates (e.g., up to 3 candidates) may be selected.
  • Each of those candidates may be refined according to the usual template matching method and/or may be sorted according to their resulting TM cost.
  • TM refinement may be performed at integer pel position (e.g., when used for IBC).
  • in IBC-TM AMVP mode, TM refinement may be performed at integer or 4-pel precision (e.g., depending on the AMVR value). The refinement may be done within the existing IBC reference area.
  • IBC may interact with other inter coding tools.
  • the interaction between IBC mode and other inter coding tools such as pairwise merge candidate, history-based motion vector predictor (HMVP), combined intra/inter prediction mode (CIIP), merge mode with motion vector difference (MMVD), and/or geometric partitioning mode (GPM) may include one or more of the following.
  • IBC may be used with pairwise merge candidate and/or HMVP.
  • a new pairwise IBC merge candidate may be generated by averaging two IBC merge candidates.
  • IBC motion may be inserted into history buffer for future referencing.
  • IBC may be used in combination with CIIP, MMVD, and GPM.
  • IBC may not be used in combination with affine motion.
  • IBC may share the same process as in regular MV merge (e.g., including with pairwise merge candidate and history-based motion predictor) but may disallow TMVP and zero vector (e.g., because TMVP and zero vector may be invalid for IBC mode).
  • separate HMVP buffers (e.g., 5 candidates each) may be used.
  • Block vector constraints may be implemented.
  • block vector constraints may be implemented in the form of a bitstream conformance constraint: the encoder may (e.g., may need to) ensure that no invalid vectors are present in the bitstream, and merge may not be used if the merge candidate is invalid (e.g., out of range or 0).
  • bitstream conformance constraint may be expressed in terms of a virtual buffer (e.g., as described herein).
  • IBC may be handled as inter mode.
  • AMVR may be signaled to indicate (e.g., only indicate) whether the MV is integer-pel or 4-pel (e.g., AMVR may not use quarter-pel).
  • the number of IBC merge candidates may be signaled in the slice header separately from the numbers of regular, subblock, and/or geometric merge candidates.
  • IBC and local illumination compensation may be used jointly.
  • LIC is an inter prediction enhancement tool.
  • LIC is an inter prediction technique that may be used to model local illumination variation between a current block and its prediction block, for example as a function of local illumination variation between a current block template and a reference block template.
  • the parameters of the function may be denoted by a scale α and an offset β, which may form a linear equation, that is, α*p[x]+β, to compensate for illumination changes, where p[x] may be a reference sample pointed to by an MV at a location x in the reference picture.
  • the MV may be clipped with the wrap-around offset taken into consideration. Since α and β may be derived based on a current block template and a reference block template, no signaling overhead is required for them, except that an LIC flag may be signaled for AMVP mode to indicate the use of LIC (a least-squares sketch of this derivation is given below).
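The derivation of α and β is not spelled out above; the following is a minimal least-squares sketch of fitting them from collocated template samples, assuming floating-point arithmetic (real codecs typically use integer approximations). The function names and the NumPy formulation are illustrative, not from the source.

```python
import numpy as np

def derive_lic_params(ref_template: np.ndarray, cur_template: np.ndarray):
    """Fit cur ~= alpha * ref + beta over collocated template samples."""
    x = ref_template.astype(np.float64).ravel()
    y = cur_template.astype(np.float64).ravel()
    n = x.size
    sx, sy = x.sum(), y.sum()
    sxx, sxy = (x * x).sum(), (x * y).sum()
    denom = n * sxx - sx * sx
    if denom == 0:  # flat reference template: fall back to an offset-only model
        return 1.0, (sy - sx) / n
    alpha = (n * sxy - sx * sy) / denom
    beta = (sy - alpha * sx) / n
    return alpha, beta

def apply_lic(pred: np.ndarray, alpha: float, beta: float, bitdepth: int = 10):
    """Apply alpha * p[x] + beta and clip to the valid sample range."""
    return np.clip(alpha * pred + beta, 0, (1 << bitdepth) - 1)
```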
  • LIC may be used for uni-prediction inter CUs with the following modifications.
  • Intra neighbor samples may be used in LIC parameter derivation.
  • LIC may be disabled for blocks with less than 32 luma samples.
  • LIC parameter derivation may be performed based on the template block samples corresponding to the current CU (e.g., instead of partial template block samples corresponding to first top-left 16x16 unit).
  • Samples of the reference block template may be generated by using MC with the block MV without rounding it to integer-pel precision.
  • Intra block copy with local illumination compensation may aim at compensating the local illumination variation within a picture between the CU coded with IBC and its prediction block with a linear equation.
  • the parameters of the linear equation may be derived in the same way as for LIC in inter prediction, except that the reference template may be generated using a block vector in IBC-LIC.
  • IBC-LIC may be applied to IBC AMVP mode and IBC merge mode.
  • in IBC AMVP mode, an IBC-LIC flag may be signaled to indicate the use of IBC-LIC.
  • in IBC merge mode, the IBC-LIC flag may be inferred from the merge candidate.
  • Intra Template Matching Prediction (IntraTMP or ITMP) prediction mode may be performed.
  • Intra TMP may be a special intra prediction mode that may copy the best prediction block, whose L-shaped template matches the current template, from the reconstructed part of the current frame.
  • the encoder may search for the most similar template to the current template in a reconstructed part of the current frame and may use the corresponding block as a prediction block.
  • the encoder may (e.g., then) signal the usage of this mode, and the same prediction operation may be performed at the decoder side.
  • Sum of absolute differences may be used as a cost function.
  • a given search order of the 6 regions may be utilized, i.e., R4, R5, R6, R1, R2, and R3.
  • the decoder may construct a candidate list of template matching block vectors (e.g., up to 19 template matching block vectors) that may be ranked (e.g., in ascending order) according to the template cost (SAD); a ranking sketch is given below.
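A minimal sketch of the SAD-based template cost and candidate ranking described above; the helper names and the list-size parameter are illustrative assumptions.

```python
import numpy as np

def template_sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences between two templates."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def rank_block_vector_candidates(cur_template, candidate_templates, max_list=19):
    """Rank candidate block vectors by ascending template SAD cost."""
    costs = sorted(
        (template_sad(cur_template, t), i)
        for i, t in enumerate(candidate_templates)
    )
    return [i for _, i in costs[:max_list]]
```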
  • One or more of the following modes may be supported: single predictor, fusion of multiple predictors, sub-pel precision, or linear filter model.
  • a single predictor may be selected from the candidate list.
  • multiple predictors may be blended to derive the final prediction block.
  • the blending weights may be either computed from the template matching cost of each predictor or derived with a Wiener-filter-based weight derivation method (an illustrative cost-based weighting is sketched below).
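One plausible cost-based weighting consistent with the bullet above is inverse-cost normalization; the exact derivation used by the codec may differ, so treat this purely as a sketch (the Wiener-filter variant is omitted).

```python
import numpy as np

def fusion_weights_from_costs(tm_costs, eps: float = 1.0):
    """Illustrative inverse-cost blending weights, normalized to sum to 1."""
    inv = np.asarray([1.0 / (c + eps) for c in tm_costs])
    return inv / inv.sum()

def fuse_predictors(predictors, weights):
    """Weighted blend of candidate prediction blocks."""
    return sum(w * p.astype(np.float64) for w, p in zip(weights, predictors))
```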
  • sub-pel precision may be used with 1/2-pel precision, 1/4-pel precision, and/or 3/4-pel precision, each with 8 possible directions.
  • a linear filter may be learned between the reference template and the current template, and the linear model may be applied to the reference block.
  • a linear filter model mode may be used for a single predictor when sub-pel precision is not used.
  • ‘a’ may be a constant that controls the gain/complexity trade-off. In examples, ‘a’ may be equal to 5.
  • the Intra template matching tool may be enabled for CUs (e.g., for CUs with size less than or equal to 64 in width and height).
  • the maximum CU size for Intra template matching may be configurable.
  • the Intra template matching prediction mode may be signaled at CU level, for example through a dedicated flag when DIMD is not used for current CU.
  • the bias term B may represent a scalar offset between the input and output and may be set to middle luma value (e.g., 512 for 10-bit content).
  • FIG. 10 depicts an example reference area used to derive filter coefficients.
  • the filter coefficients ci may be calculated by minimizing the MSE between the reference template and the current template, as shown in FIG. 10 (a least-squares sketch follows).
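A sketch of deriving plus-shaped filter taps plus a bias weight by MSE minimization over collocated reference/current template areas. The tap layout, array shapes, and use of np.linalg.lstsq are assumptions for illustration.

```python
import numpy as np

def derive_lfm_coeffs(ref_area: np.ndarray, cur_area: np.ndarray, mid: int = 512):
    """Fit c0..c4 (plus-shaped taps) and a weight for the bias term B = mid."""
    h, w = ref_area.shape
    rows, targets = [], []
    for y in range(1, h - 1):          # interior samples only, so the
        for x in range(1, w - 1):      # plus-shaped neighborhood exists
            rows.append([ref_area[y, x],
                         ref_area[y - 1, x], ref_area[y + 1, x],
                         ref_area[y, x - 1], ref_area[y, x + 1],
                         mid])
            targets.append(cur_area[y, x])
    A = np.asarray(rows, dtype=np.float64)
    b = np.asarray(targets, dtype=np.float64)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # five spatial taps followed by the bias weight
```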
  • Template size and shapes may be same as in Intra TMP.
  • the template size used for training may be 4 lines above and to the left of the current block depending on their availability.
  • the extensions to the area shown in blue may support (e.g., be needed to support) the side samples of the plus-shaped spatial filter and/or may be padded (e.g., when in unavailable areas).
  • Usage of the IntraTMP-LFM mode may be signaled with a coded CU-level flag.
  • IntraTMP-LFM may be considered a sub-mode of IntraTMP. That is, an IntraTMP-LFM flag may only be signaled if the IntraTMP flag is true.
  • the above filtering method may be used to apply the linear filter model to IBC predicted blocks.
  • This filtered mode may be used as an additional mode for non-merge IBC blocks.
  • this mode may not be applied together with IBC-LIC, IBC-CIIP, or RR-IBC.
  • this filtering mode may be inherited when a merge mode list is constructed (e.g., so there is no extra signaling).
  • the adaptive usage of LIC or of the linear filter model for an IBC-predicted block, together with block-level signaling of the prediction mode used, may lead to increased compression performance.
  • the combined use of IntraTMP with the linear filter model may (e.g., only) be considered.
  • Adaptive templates for LIC linear model computation may be used, as well as multiple linear models, for example to further increase the compression efficiency (e.g., for screen content coding).
  • a probing method may be performed to infer IntraTMP fusion mode.
  • in IntraTMP fusion mode, multiple IntraTMP candidates may be linearly combined according to the fusion weights derived from the template matching costs or derived by an MSE minimization method.
  • a flag, intra_tmp_fusion_weight_type, may be signaled to indicate which weight derivation method may be used.
  • An index, intra_tmp_fusion_idx, may be signaled to indicate which candidate set may be selected for IntraTMP fusion.
  • An IntraTMP fusion probing method may be used to select a fusion candidate from a fusion candidate list (e.g., with minimum probing cost).
  • FIG. 11 depicts an example template and probing line for IntraTMP.
  • a flag indicating whether this fusion probing mode is enabled may be signaled.
  • a fusion candidate with minimum probing cost may be selected from a fusion candidate list without signaling the intra_tmp_fusion_weight_type and/or intra_tmp_fusion_idx syntax elements.
  • the probing cost may be derived as the SAD between the pixels in the probing line of the fused template and the current block's template (a candidate selection sketch is given below).
  • the fusion weights derivation may exclude the pixels in the probing line.
  • the template matching cost of the probing line may be weighted (e.g., by 2) in the IntraTMP candidate search process.
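A sketch of the probing-based fusion candidate selection described above: each fused template is compared against the current template on the probing line only, and the candidate with minimum probing SAD wins. The boolean-mask indexing is an illustrative convention.

```python
import numpy as np

def select_fusion_candidate(fused_templates, cur_template, probing_mask):
    """Return the index of the fusion candidate with minimum probing cost."""
    costs = [
        int(np.abs(ft[probing_mask].astype(np.int32)
                   - cur_template[probing_mask].astype(np.int32)).sum())
        for ft in fused_templates
    ]
    return int(np.argmin(costs))
```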
  • Feature(s) associated with improving blocks coded in IntraTMP mode and employing LIC-based prediction enhancement are provided herein.
  • different templates may be used to estimate the LIC linear parameters.
  • Intra template matching may employ a template area of size 4 around a current block (or coding unit (CU)) being processed.
  • a template area different from the closest above and left lines of samples next to the CU may (e.g., also) be employed for LIC parameters computation.
  • a part of the IntraTMP template area may be employed to compute a LIC linear model, and another part of the IntraTMP template may be employed for probing based LIC usage derivation.
  • LIC parameters may be computed over an area inside the IntraTMP template area (e.g., which may be different from the closest above and/or left lines of neighboring samples around the considered CU).
  • the first 2, 3, or 4 lines of samples on the left and top of the current CU may be considered to compute the LIC linear model parameters.
  • the set of samples around the current CU may be used to compute a LIC model.
  • the set of samples may include the left, top, and top-left neighboring samples of the current CU.
  • a template matching cost may be estimated on the probing sub-part (e.g., included in the IntraTMP template area).
  • LIC usage at CU-level may be inferred based on a template-matching (TM) cost comparison on the probing zone between LIC usage and no-LIC usage.
  • the CU-level LIC usage may be conditionally signaled based on the TM cost comparison between the LIC-on and LIC-off cases.
  • the signaled flag may indicate if the probing based LIC usage prediction provides the correct result or not.
  • the LIC parameters may be estimated based on the above and left lines of samples neighboring the current CU.
  • the probing zone may include the left and above lines next to current CU's boundaries.
  • the LIC parameters may be estimated based on above and left neighboring samples of the current CU.
  • the probing zone may include a left and an above line of neighboring samples (e.g., 1 pel away from current CU's boundaries).
  • a LIC mode syntax element may be explicitly signaled to indicate which part of the template area is used to derive LIC parameters.
  • a line index within the template may be signaled to indicate the line of samples used to compute the LIC linear model parameters.
  • a LIC linear model may be computed over different sub-areas of the Intra- TMP template.
  • the LIC parameter for an IntraTMP block may be estimated over a set of samples around the current CU, which may be different from the left and/or above lines of samples neighboring the considered CU.
  • a template size (e.g., a template size of four) may be considered to compute the block vector of an IntraTMP coding unit.
  • the same template area may be used for LIC model estimation (e.g., without significantly increasing complexity and/or without any additional need for memory access than that which may be used in the standard IntraTMP coding/decoding process).
  • FIG. 12 depicts example neighboring samples used to compute an LIC model.
  • FIG. 12 shows the overall template for IntraTMP predictor searches and the template used for LIC model estimation.
  • FIG. 13 depicts an example extended set of neighboring samples used to compute (e.g., estimate) an LIC model.
  • template size for LIC model computation may be extended (e.g., to 4 lines above and on the left of current block being processed).
  • FIG. 14 depicts an example extended set of neighboring samples used to compute an LIC model.
  • the LIC model may be estimated over the full IntraTMP template (e.g., the same set of reconstructed surrounding samples as those used to search for IntraTMP predictors).
  • Extending templates for LIC model computation may increase the accuracy of the LIC model (e.g., due to a richer set of samples), which may lead to increased video compression performance.
  • LIC usage (e.g., whether to apply LIC to a block) may be determined based on a probing condition (e.g., criteria).
  • FIG. 15 depicts an example approach for IntraTMP-LIC usage probing.
  • the IntraTMP template area may go beyond the LIC computation template lines, for example to perform LIC usage probing.
  • the LIC model may be estimated on a subset of samples inside the overall IntraTMP template area, as shown by FIG. 15.
  • the estimated LIC model may be used to compute a template matching cost (e.g., a distortion like the sum of absolute differences (SAD)), over a probing area inside the overall IntraTMP template, which may be different from the area used to compute the LIC model.
  • the relevance of using LIC for the considered block may be estimated according to the template matching cost reduction obtained by applying LIC, calculated on the probing template area. If the cost reduction is sufficiently high (e.g., exceeds a threshold value), it may indicate LIC usage is appropriate to code/decode the considered block efficiently. Otherwise, LIC for coding/decoding the block may be determined to be skipped or bypassed (a decision sketch is given below).
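A sketch of the probing decision described above: the template matching cost over the probing area is computed with and without the estimated LIC model, and LIC is used when the cost reduction clears a threshold. The 'margin' factor is a hypothetical tuning constant, not from the source.

```python
import numpy as np

def infer_lic_usage(ref_probe, cur_probe, alpha, beta,
                    bitdepth: int = 10, margin: float = 0.95) -> bool:
    """True if applying LIC reduces the probing-area SAD sufficiently."""
    def sad(a, b):
        return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).sum())
    cost_off = sad(ref_probe, cur_probe)
    lic_probe = np.clip(alpha * ref_probe + beta, 0, (1 << bitdepth) - 1)
    cost_on = sad(lic_probe, cur_probe)
    return cost_on < margin * cost_off
```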
  • FIG. 15 depicts an example LIC model estimation template area and LIC probing area inside an IntraTMP template.
  • the second line and column of samples away from the block boundaries may be used for LIC model estimation, and the first line and column of samples around the block may be used as LIC probing lines.
  • FIGS. 16A-D depict example areas for LIC model estimation and LIC probing.
  • the LIC model may be computed by means of the sample lines closest to the block boundaries, while the probing lines of samples may be further from the block.
  • FIG. 17 depicts an example decoding process of a CU in IntraTMP mode.
  • the probing technique may be used to infer the usage of LIC for a given CU coded in IntraTMP mode (e.g., at the decoder side and the encoder side).
  • the input to the process may be a CU to decode in IntraTMP mode.
  • the predictor of a current CU may be determined based on a template matching based search.
  • FIG. 19 depicts an example CU decoding and reconstruction process.
  • the LIC usage determination may represent a difference from the example decoding process shown in FIG. 17.
  • the probing cost may be computed in the same way as in the previous method described herein.
  • the probing cost based LIC usage criteria may be used to compute the LIC flag predictor (e.g., instead of the LIC usage itself as in the decoding process shown in FIG. 17).
  • the LIC usage flag of a considered CU may be determined based on this probing-based LIC flag predictor and/or the parsed LIC prediction flag (e.g., as issued from the process of FIG. 18), for example as sketched below:
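The combination rule is not written out above; a plausible reading, consistent with the parsed flag indicating whether the probing-based prediction is correct, is the following sketch (the flag semantics are an assumption):

```python
def resolve_lic_usage(lic_flag_predictor: bool,
                      prediction_correct_flag: bool) -> bool:
    """Keep the probing-based predictor if the parsed flag confirms it,
    otherwise invert it (equivalent to predictor XOR NOT flag)."""
    return lic_flag_predictor if prediction_correct_flag else not lic_flag_predictor
```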
  • FIG. 20 depicts an example encoder search process (e.g., an encoder side choice between various intra TMP prediction modes) associated with the example decoding process shown in FIG. 19.
  • the encoding process for a CU in IntraTMP may begin by a template matching search of a set of IntraTMP prediction candidates. For each candidate, its associated LIC model may be computed, together with the associated probing cost and the LIC flag predictor based on the LIC probing cost.
  • the best IntraTMP predictor, together with its optimal LIC flag, may be searched based on a rate distortion cost minimization process.
  • the best IntraTMP coding mode for the current CU may be chosen as the IntraTMP coding mode with minimum rate distortion cost among all IntraTMP prediction candidates, each combined with LIC usage, LFM usage, or neither LIC nor LFM usage (see the encoder-loop sketch below).
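A sketch of the encoder-side search loop implied by the bullets above; evaluate_rd_cost is a hypothetical callback standing in for the full rate distortion evaluation.

```python
def choose_intratmp_mode(candidates, evaluate_rd_cost):
    """Try each IntraTMP candidate with LIC, with LFM, and with neither;
    return the (cost, candidate, enhancement) triple with minimum RD cost."""
    best = None
    for cand in candidates:
        for enhancement in ("none", "lic", "lfm"):
            cost = evaluate_rd_cost(cand, enhancement)
            if best is None or cost < best[0]:
                best = (cost, cand, enhancement)
    return best
```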
  • the optimal IntraTMP mode for the considered CU may be put in rate distortion competition with other intra coding mode(s) (not shown in FIG. 20).
  • the IntraTMP template part used for LIC model computation, and optionally for LIC usage probing, may be explicitly signaled.
  • the type of template area used for LIC model determination for a given CU in IntraTMP mode, and optionally the template area used for probing based LIC usage prediction may be explicitly signaled.
  • FIG. 21 depicts an example decoder parsing process (e.g., with modified signaling).
  • Modifying signaling as shown in FIG. 21 may bring some diversity into the set of possible LIC models that may be employed for the coding of a CU in IntraTMP-LIC mode. This increased diversity may lead to more variety of choices in the rate distortion search of the encoder, which may increase compression efficiency.

Abstract

Systems, methods, and instrumentalities are disclosed for performing various aspects associated with Intra template matching prediction (IntraTMP), local illumination compensation (LIC) extended templates and/or probing. An encoding/decoding device may include a processor. The device may obtain an intra template matching prediction (IntraTMP) template associated with a current block. The device may determine local illumination compensation (LIC) model parameters based on a block vector of a set of samples of the IntraTMP template. At least one sample of the set of samples may be non-adjacent to the current block. The device may encode/decode the current block based on the determined LIC model parameters.

Description

INTRATMP LIC EXTENDED TEMPLATE AND PROBING
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of European Provisional Patent Application No. 23307344.4, filed December 22, 2023, the contents of which are hereby incorporated by reference herein.
BACKGROUND
[0002] Video coding systems may be used to compress digital video signals, e.g., to reduce the storage and/or transmission bandwidth needed for such signals. Video coding systems may include, for example, block-based, wavelet-based, and/or object-based systems.
SUMMARY
[0003] Systems, methods, and instrumentalities are disclosed for performing Intra template matching prediction (IntraTMP)-local illumination compensation (LIC) extended template and probing. For example, templates may be extended to estimate LIC linear parameters. For example, the LIC linear model may be computed based on an extended template area. A template matching cost may be determined for LIC usage. [0004] A video decoding/encoding device may include a processor. The device may be configured to obtain an intra template matching prediction (IntraTMP) template associated with a current block. The device may determine local illumination compensation (LIC) model parameters based on a block vector of a set of samples of the IntraTMP template. At least one sample of the set of samples may be non-adjacent to the current block. The device may decode/encode the current block based on the determined LIC model parameters.
[0005] The device may include one or more features. For example, the device may determine a template matching cost based on applying the LIC model parameters to a probing area. The probing area may include a subset of samples of the IntraTMP template. The device may, based on a condition that the template matching cost satisfies a threshold, determine to decode the current block based on the LIC model parameters. The device may determine a template matching cost based on applying the LIC model parameters to a probing area. The probing area may include a subset of samples of the IntraTMP template. The device may, based on a condition that the template matching cost satisfies a threshold, determine to encode the current block based on the LIC model parameters. The at least one sample of the set of samples of the IntraTMP template may be different than the subset of samples of the IntraTMP template. [0006] The set of samples may include the IntraTMP template. The device may receive, in video data, an indication of a template area type. The set of samples of the IntraTMP template may be obtained based on the template area type. The device may determine a relevance metric for using the LIC model parameters to decode the current block based on the template matching cost. The device may obtain an LIC prediction indication in video data. The device may, based on a condition that the relevance metric and the LIC prediction indication satisfy a threshold, determine to apply the LIC model parameters to the current block.
[0007] A video decoding device may obtain an IntraTMP template associated with a current block. The video decoding device may determine LIC model parameter(s) based on a block vector of a set of samples of the IntraTMP template. One or more samples of the set of samples may be non-adjacent to the current block. For example, the set of samples may comprise the full IntraTMP template. The video decoding device may derive an LIC model based on the determined LIC model parameters. Decoding the current block may be further based on the LIC model. The video decoding device may decode the current block based on the determined LIC model parameters and/or the LIC model.
[0008] The video decoding device may determine a template matching cost based on applying the LIC model to a probing area, wherein the probing area comprises a subset of samples of the IntraTMP template. The video decoding device may determine whether to apply the LIC model for the current block based on the template matching cost.
[0009] The video decoding device may determine a relevance metric of using the LIC model to decode the current block based on the template matching cost. The relevance metric of using the LIC model to decode the current block may satisfy an LIC usage condition. The video decoding device may obtain an LIC prediction indication in video data and may determine whether to apply the LIC model for the current block based on the relevance metric and/or the LIC prediction indication.
[0010] The video decoding device may obtain an indication of an area type associated with the probing area in video data and/or an indication of an area type associated with the set of samples of the IntraTMP template; the set of samples for LIC model derivation may be determined based on the indication(s).
[0011] A video encoding device may obtain an IntraTMP template associated with a current block. The video encoding device may determine LIC model parameters based on a block vector of a set of samples of the IntraTMP template. One or more samples of the set of samples may be non-adjacent to the current block. The video encoding device may encode the current block based on the determined LIC model parameters. [0012] The video encoding device may send an indication of an area type associated with the probing area in video data and/or an indication of an area type associated with the set of samples of the IntraTMP template; the set of samples for LIC model derivation may be determined based on the indication(s).
[0013] Systems, methods, and instrumentalities described herein may involve a decoder. In some examples, the systems, methods, and instrumentalities described herein may involve an encoder. In some examples, the systems, methods, and instrumentalities described herein may involve a signal (e.g., from an encoder and/or received by a decoder). A computer-readable medium may include instructions for causing one or more processors to perform methods described herein. A computer program product may include instructions which, when the program is executed by one or more processors, may cause the one or more processors to carry out the methods described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 A is a system diagram illustrating an example communications system in which one or more disclosed embodiments may be implemented.
[0015] FIG. 1 B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
[0016] FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A according to an embodiment.
[0017] FIG. 1 D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1 A according to an embodiment.
[0018] FIG. 2 illustrates an example video encoder.
[0019] FIG. 3 illustrates an example video decoder.
[0020] FIG. 4 illustrates an example of a system in which various aspects and examples may be implemented.
[0021] FIG. 5 illustrates an example reference region of Intra block copy (IBC) mode.
[0022] FIG. 6 depicts example padding candidates for the replacement of a zero-vector in an IBC list.
[0023] FIG. 7 illustrates an example extended reference area for coding a coding tree unit (CTU).
[0024] FIG. 8 illustrates an example intra template matching search area used.
[0025] FIG. 9 depicts an example spatial part of a filter.
[0026] FIG. 10 depicts an example reference area used to derive filter coefficients.
[0027] FIG. 11 depicts an example template and probing line for Intra template matching prediction (IntraTMP). [0028] FIG. 12 depicts example neighboring samples used to compute a local illumination compensation (LIC) model.
[0029] FIG. 13 depicts an example extended set of neighboring samples used to compute an LIC model.
[0030] FIG. 14 depicts an example extended set of neighboring samples used to compute an LIC model.
[0031] FIG. 15 depicts an example LIC model estimation template area and LIC probing area inside an IntraTMP template.
[0032] FIGS. 16A-D depict example areas for LIC model estimation and LIC probing.
[0033] FIG. 17 depicts an example decoding process of a coding unit (CU) in IntraTMP mode.
[0034] FIG. 18 depicts an example decoder side parsing process.
[0035] FIG. 19 depicts an example CU decoding and reconstruction process.
[0036] FIG. 20 depicts an example encoder side choice between various IntraTMP prediction modes.
[0037] FIG. 21 depicts an example decoder parsing process.
DETAILED DESCRIPTION
[0038] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings.
[0039] FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0040] As shown in FIG. 1A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station” and/or a "STA”, may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d may be interchangeably referred to as a UE.
[0041] The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements. [0042] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0043] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0044] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
[0045] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0046] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR). [0047] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., a eNB and a gNB).
[0048] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0049] The base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell. As shown in FIG. 1 A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106/115.
[0050] The RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT. For example, in addition to being connected to the RAN 104/113, which may be utilizing a NR radio technology, the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0051] The CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
[0052] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0053] FIG. 1 B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1 B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0054] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1 B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0055] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0056] Although the transmit/receive element 122 is depicted in FIG. 1 B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0057] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11 , for example.
[0058] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0059] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[0060] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0061] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors, the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0062] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception)) may occur.
[0063] FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the CN 106.
[0064] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
[0065] Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 1 C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
[0066] The CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0067] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
[0068] The SGW 164 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like. [0069] The SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
[0070] The CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers. [0071] Although the WTRU is described in FIGS. 1A-1 D as a wireless terminal, it is contemplated that in certain representative embodiments that such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
[0072] In representative embodiments, the other network 112 may be a WLAN.
[0073] A WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP. The AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic into and/or out of the BSS. Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs. Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations. Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA. The traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic. The peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS). In certain representative embodiments, the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS). A WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other. The IBSS mode of communication may sometimes be referred to herein as an "ad-hoc” mode of communication.
[0074] When using the 802.11ac infrastructure mode of operation or a similar mode of operation, the AP may transmit a beacon on a fixed channel, such as a primary channel. The primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling. The primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP. In certain representative embodiments, Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example in 802.11 systems. For CSMA/CA, the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off. One STA (e.g., only one station) may transmit at any given time in a given BSS.
[0075] High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
[0076] Very High Throughput (VHT) STAs may support 20MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels. The 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels. A 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two noncontiguous 80 MHz channels, which may be referred to as an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams. Inverse Fast Fourier Transform (IFFT) processing, and time domain processing, may be done on each stream separately. The streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA. At the receiver of the receiving STA, the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
[0077] Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah. The channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac. 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum, and 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. According to a representative embodiment, 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area. MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths. The MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
[0078] WLAN systems, which may support multiple channels, and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel. The primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, which supports the smallest bandwidth operating mode. In the example of 802.11ah, the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes. Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency bands may be considered busy even though a majority of the frequency bands remain idle and may be available.
[0079] In the United States, the available frequency bands, which may be used by 802.11ah, are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
[0080] FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment. As noted above, the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 113 may also be in communication with the CN 115.
[0081] The RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment. The gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the gNBs 180a, 180b, 180c may implement MIMO technology. For example, gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c. Thus, the gNB 180a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a. In an embodiment, the gNBs 180a, 180b, 180c may implement carrier aggregation technology. For example, the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum. In an embodiment, the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology. For example, WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
[0082] The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum. The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
[0083] The gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c). In the standalone configuration, WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration, WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c. For example, WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously. In the non-standalone configuration, eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
[0084] Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b and the like. As shown in FIG. 1D, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
[0085] The CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements is depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0086] The AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node. For example, the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like. Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by WTRUs 102a, 102b, 102c. For example, different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like. The AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
[0087] The SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface. The SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface. The SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b. The SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like. A PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
[0088] The UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
[0089] The CN 115 may facilitate communications with other networks. For example, the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108. In addition, the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers. In one embodiment, the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
[0090] In view of Figures 1A-1D, and the corresponding description of Figures 1A-1D, one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0091] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0092] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0093] This application describes a variety of aspects, including tools, features, examples, models, approaches, etc. Many of these aspects are described with specificity and, at least to show the individual characteristics, are often described in a manner that may sound limiting. However, this is for purposes of clarity in description, and does not limit the application or scope of those aspects. Indeed, all of the different aspects may be combined and interchanged to provide further aspects. Moreover, the aspects may be combined and interchanged with aspects described in earlier filings as well.
[0094] The aspects described and contemplated in this application may be implemented in many different forms. FIGS. 5-21 described herein may provide some examples, but other examples are contemplated. The discussion of FIGS. 5-21 does not limit the breadth of the implementations. At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded. These and other aspects may be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
[0095] In the present application, the terms "reconstructed” and "decoded” may be used interchangeably; the terms "pixel” and "sample” may be used interchangeably; and the terms "image,” "picture” and "frame” may be used interchangeably.
[0096] Various methods are described herein, and each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as "first”, "second”, etc. may be used in various examples to modify an element, component, step, operation, etc., such as, for example, a "first decoding” and a "second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
[0097] Various methods and other aspects described in this application may be used to modify modules, for example, decoding modules, of a video encoder 200 and decoder 300 as shown in FIG. 2 and FIG. 3. Moreover, the subject matter disclosed herein may be applied, for example, to any type, format or version of video coding, whether described in a standard or a recommendation, whether pre-existing or future- developed, and extensions of any such standards and recommendations. Unless indicated otherwise, or technically precluded, the aspects described in this application may be used individually or in combination.
[0098] Various numeric values are used in examples described in the present application, such as a template of size four being used to compute a block vector of an IntraTMP CU, etc. These and other specific values are for purposes of describing examples and the aspects described are not limited to these specific values.
[0099] FIG. 2 is a diagram showing an example video encoder. Variations of example encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
[0100] Before being encoded, the video sequence may go through pre-encoding processing (201), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components). Metadata may be associated with the pre-processing, and attached to the bitstream.
[0101] In the encoder 200, a picture is encoded by the encoder elements as described below. The picture to be encoded is partitioned (202) and processed in units of, for example, coding units (CUs). Each unit is encoded using, for example, either an intra or inter mode. When a unit is encoded in an intra mode, it performs intra prediction (260). In an inter mode, motion estimation (275) and compensation (270) are performed. The encoder decides (205) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag. Prediction residuals are calculated, for example, by subtracting (210) the predicted block from the original image block.
[0102] The prediction residuals are then transformed (225) and quantized (230). The quantized transform coefficients, as well as motion vectors and other syntax elements, are entropy coded (245) to output a bitstream. The encoder can skip the transform and apply quantization directly to the non-transformed residual signal. The encoder can bypass both transform and quantization, i.e., the residual is coded directly without the application of the transform or quantization processes.
[0103] The encoder decodes an encoded block to provide a reference for further predictions. The quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode prediction residuals. Combining (255) the decoded prediction residuals and the predicted block, an image block is reconstructed. In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset) filtering to reduce encoding artifacts. The filtered image is stored at a reference picture buffer (280).
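For illustration only, a minimal Python sketch of the residual round trip described above is given below (hypothetical names; uniform scalar quantization is assumed, and the transform (225) and inverse transform (250) are omitted for brevity). The point is that the encoder reconstructs exactly what a decoder would, so subsequent predictions reference identical samples.

import numpy as np

def residual_round_trip(orig_block, pred_block, qstep=8):
    # Prediction residual (210): original minus predicted samples
    residual = orig_block.astype(np.int32) - pred_block.astype(np.int32)
    # Quantization (230) and de-quantization (240); transform omitted
    quantized = np.round(residual / qstep).astype(np.int32)
    dequantized = quantized * qstep
    # Reconstruction (255): decoded residual added back to the prediction
    return pred_block.astype(np.int32) + dequantized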
[0104] FIG. 3 is a diagram showing an example of a video decoder. In example decoder 300, a bitstream is decoded by the decoder elements as described below. Video decoder 300 generally performs a decoding pass reciprocal to the encoding pass as described in FIG. 2. The encoder 200 also generally performs video decoding as part of encoding video data.
[0105] In particular, the input of the decoder includes a video bitstream, which may be generated by video encoder 200. The bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information. The picture partition information indicates how the picture is partitioned. The decoder may therefore divide (335) the picture according to the decoded picture partitioning information. The transform coefficients are de-quantized (340) and inverse transformed (350) to decode the prediction residuals. Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed. The predicted block may be obtained (370) from intra prediction (360) or motion-compensated prediction (i.e., inter prediction) (375). In-loop filters (365) are applied to the reconstructed image. The filtered image is stored at a reference picture buffer (380).
[0106] The decoded picture can further go through post-decoding processing (385), for example, an inverse color transform (e.g. conversion from YCbCr 4:2:0 to RGB 4:4:4) or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (201). The post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream. In an example, the decoded images (e.g., after application of the in-loop filters (365) and/or after post-decoding processing (385), if post-decoding processing is used) may be sent to a display device for rendering to a user.
[0107] FIG. 4 is a diagram showing an example of a system in which various aspects and examples described herein may be implemented. System 400 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 400, singly or in combination, may be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components. For example, in at least one example, the processing and encoder/decoder elements of system 400 are distributed across multiple ICs and/or discrete components. In various examples, the system 400 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports. In various examples, the system 400 is configured to implement one or more of the aspects described in this document.
[0108] The system 400 includes at least one processor 410 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document. Processor 410 can include embedded memory, input output interface, and various other circuitries as known in the art. The system 400 includes at least one memory 420 (e.g., a volatile memory device, and/or a non-volatile memory device). System 400 includes a storage device 440, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive. The storage device 440 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
[0109] System 400 includes an encoder/decoder module 430 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 430 can include its own processor and memory. The encoder/decoder module 430 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 430 may be implemented as a separate element of system 400 or may be incorporated within processor 410 as a combination of hardware and software as known to those skilled in the art.
[0110] Program code to be loaded onto processor 410 or encoder/decoder 430 to perform the various aspects described in this document may be stored in storage device 440 and subsequently loaded onto memory 420 for execution by processor 410. In accordance with various examples, one or more of processor 410, memory 420, storage device 440, and encoder/decoder module 430 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
[0111] In some examples, memory inside of the processor 410 and/or the encoder/decoder module 430 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding. In other examples, however, a memory external to the processing device (for example, the processing device may be either the processor 410 or the encoder/decoder module 430) is used for one or more of these functions. The external memory may be the memory 420 and/or the storage device 440, for example, a dynamic volatile memory and/or a non-volatile flash memory. In several examples, an external non-volatile flash memory is used to store the operating system of, for example, a television. In at least one example, a fast external dynamic volatile memory such as a RAM is used as working memory for video encoding and decoding operations.
[0112] The input to the elements of system 400 may be provided through various input devices as indicated in block 445. Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal. Other examples, not shown in FIG. 4, include composite video.
[0113] In various examples, the input devices of block 445 have associated respective input processing elements as known in the art. For example, the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain examples, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and/or (vi) demultiplexing to select the desired stream of data packets. The RF portion of various examples includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers. The RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband. In one set-top box example, the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band. Various examples rearrange the order of the above-described (and other) elements, remove some of these elements, and/or add other elements performing similar or different functions. Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter. In various examples, the RF portion includes an antenna.
[0114] The USB and/or HDMI terminals can include respective interface processors for connecting system 400 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 410 as necessary. Similarly, aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 410 as necessary. The demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 410, and encoder/decoder 430 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
[0115] Various elements of system 400 may be provided within an integrated housing. Within the integrated housing, the various elements may be interconnected and transmit data therebetween using suitable connection arrangement 425, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
[0116] The system 400 includes communication interface 450 that enables communication with other devices via communication channel 460. The communication interface 450 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 460. The communication interface 450 can include, but is not limited to, a modem or network card and the communication channel 460 may be implemented, for example, within a wired and/or a wireless medium.
[0117] Data is streamed, or otherwise provided, to the system 400, in various examples, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers). The Wi-Fi signal of these examples is received over the communications channel 460 and the communications interface 450 which are adapted for Wi-Fi communications. The communications channel 460 of these examples is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications. Other examples provide streamed data to the system 400 using a set-top box that delivers the data over the HDMI connection of the input block 445. Still other examples provide streamed data to the system 400 using the RF connection of the input block 445. As indicated above, various examples provide data in a non-streaming manner. Additionally, various examples use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth® network.
[0118] The system 400 can provide an output signal to various output devices, including a display 475, speakers 485, and other peripheral devices 495. The display 475 of various examples includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display. The display 475 may be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device. The display 475 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop). The other peripheral devices 495 include, in various examples, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system. Various examples use one or more peripheral devices 495 that provide a function based on the output of the system 400. For example, a disk player performs the function of playing the output of the system 400.
[0119] In various examples, control signals are communicated between the system 400 and the display 475, speakers 485, or other peripheral devices 495 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention. The output devices may be communicatively coupled to system 400 via dedicated connections through respective interfaces 470, 480, and 490. Alternatively, the output devices may be connected to system 400 using the communications channel 460 via the communications interface 450. The display 475 and speakers 485 may be integrated in a single unit with the other components of system 400 in an electronic device such as, for example, a television. In various examples, the display interface 470 includes a display driver, such as, for example, a timing controller (T Con) chip.
[0120] The display 475 and speakers 485 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 445 is part of a separate set-top box. In various examples in which the display 475 and speakers 485 are external components, the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
[0121] The examples may be carried out by computer software implemented by the processor 410 or by hardware, or by a combination of hardware and software. As a non-limiting example, the examples may be implemented by one or more integrated circuits. The memory 420 may be of any type appropriate to the technical environment and may be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples. The processor 410 may be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
[0122] Various implementations involve decoding. "Decoding”, as used in this application, can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display. In various examples, such processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding. In various examples, such processes also, or alternatively, include processes performed by a decoder of various implementations described in this application, for example, obtaining an IntraTMP template associated with a current block, determining LIC model parameter(s) based on a block vector of a set of samples of the IntraTMP template, decoding the current block based on the determined LIC model parameters, etc.
[0123] As further examples, in one example "decoding” refers only to entropy decoding, in another example "decoding” refers only to differential decoding, and in another example "decoding” refers to a combination of entropy decoding and differential decoding. Whether the phrase "decoding process” is intended to refer specifically to a subset of operations or generally to the broader decoding process will be clear based on the context of the specific descriptions and is believed to be well understood by those skilled in the art.
[0124] Various implementations involve encoding. In an analogous way to the above discussion about "decoding”, "encoding” as used in this application can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream. In various examples, such processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding. In various examples, such processes also, or alternatively, include processes performed by an encoder of various implementations described in this application, for example, obtaining an IntraTMP template associated with a current block, determining LIC model parameter(s) based on a block vector of a set of samples of the IntraTMP template, encoding the current block based on the determined LIC model parameters, etc.
[0125] As further examples, in one example "encoding” refers only to entropy encoding, in another example "encoding” refers only to differential encoding, and in another example "encoding” refers to a combination of differential encoding and entropy encoding. Whether the phrase "encoding process” is intended to refer specifically to a subset of operations or generally to the broader encoding process will be clear based on the context of the specific descriptions and is believed to be well understood by those skilled in the art.
[0126] Note that syntax elements as used herein, for example, coding syntax on which weight derivation method may be used (e.g., intra_tmp_fusion_weight_type), etc., are descriptive terms. As such, they do not preclude the use of other syntax element names.
[0127] When a figure is presented as a flow diagram, it should be understood that it also provides a block diagram of a corresponding apparatus. Similarly, when a figure is presented as a block diagram, it should be understood that it also provides a flow diagram of a corresponding method/process.
[0128] The implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
[0129] Reference to "one example” or "an example” or "one implementation” or "an implementation”, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the example is included in at least one example. Thus, the appearances of the phrase "in one example” or "in an example” or "in one implementation” or "in an implementation”, as well any other variations, appearing in various places throughout this application are not necessarily all referring to the same example.
[0130] Additionally, this application may refer to "determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory. Obtaining may include receiving, retrieving, constructing, generating, and/or determining.
[0131] Further, this application may refer to "accessing” various pieces of information. Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
[0132] Additionally, this application may refer to "receiving” various pieces of information. Receiving is, as with "accessing”, intended to be a broad term. Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, "receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
[0133] It is to be appreciated that the use of any of the following "/”, "and/or”, and "at least one of”, for example, in the cases of "A/B”, "A and/or B” and "at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of "A, B, and/or C” and "at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
[0134] Also, as used herein, the word "signal” refers to, among other things, indicating something to a corresponding decoder. Encoder signals may include, for example, template area types, etc. In this way, in an example the same parameter is used at both the encoder side and the decoder side. Thus, for example, an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter. Conversely, if the decoder already has the particular parameter as well as others, then signaling may be used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter. By avoiding transmission of any actual functions, a bit savings is realized in various examples. It is to be appreciated that signaling may be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various examples. While the preceding relates to the verb form of the word "signal”, the word "signal” can also be used herein as a noun.
[0135] As will be evident to one of ordinary skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information can include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry the bitstream of a described example. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on, or accessed or received from, a processor-readable medium.
[0136] Many examples are described herein. Features of examples may be provided alone or in any combination, across various claim categories and types. Further, examples may include one or more of the features, devices, or aspects described herein, alone or in any combination, across various claim categories and types. For example, features described herein may be implemented in a bitstream or signal that includes information generated as described herein. The information may allow a decoder to decode a bitstream, with the encoder, bitstream, and/or decoder operating according to any of the embodiments described. For example, features described herein may be implemented by creating and/or transmitting and/or receiving and/or decoding a bitstream or signal. For example, features described herein may be implemented as a method, process, apparatus, medium storing instructions, medium storing data, or signal. For example, features described herein may be implemented by a TV, set-top box, cell phone, tablet, or other electronic device that performs decoding. The TV, set-top box, cell phone, tablet, or other electronic device may display (e.g., using a monitor, screen, or other type of display) a resulting image (e.g., an image from residual reconstruction of the video bitstream). The TV, set-top box, cell phone, tablet, or other electronic device may receive a signal including an encoded image and perform decoding.
[0137] Intra block copy coding may be performed. Intra block copy (IBC) may be a tool used for screen content coding. IBC may improve the coding efficiency of screen content materials. Since IBC mode may be implemented as a block level coding mode, block matching (BM) may be performed at the encoder to find the optimal block vector or motion vector for each CU. A block vector may indicate the displacement from the current block to a reference block, which may be already reconstructed inside the current picture. The luma block vector of an IBC-coded CU may be in integer precision. The chroma block vector may be rounded to integer precision as well. When combined with AMVR, the IBC mode may switch between 1-pel and 4-pel motion vector precisions. An IBC-coded CU may be treated as the third prediction mode (e.g., other than intra or inter prediction modes). The IBC mode may be applicable to the CUs. For example, IBC mode may be applicable to CUs with width and/or height smaller than or equal to 64 luma samples.
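As a rough illustration of the block matching mentioned above, the following Python sketch (hypothetical function; the validity rule is deliberately simplified and does not reproduce the normative IBC reference-region constraints described below) searches already reconstructed samples for the block vector with the lowest SAD.

import numpy as np

def ibc_block_match(recon, cy, cx, bh, bw):
    # Current block at (cy, cx) of size bh x bw
    cur = recon[cy:cy + bh, cx:cx + bw].astype(np.int32)
    best_sad, best_bv = None, (0, 0)
    h, w = recon.shape
    for ry in range(h - bh + 1):
        for rx in range(w - bw + 1):
            # Simplified validity: reference entirely above the current
            # block, or level with it but entirely to its left
            if not (ry + bh <= cy or (rx + bw <= cx and ry <= cy)):
                continue
            ref = recon[ry:ry + bh, rx:rx + bw].astype(np.int32)
            sad = int(np.abs(cur - ref).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_bv = sad, (rx - cx, ry - cy)
    return best_bv, best_sad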
[0138] At CU level, IBC mode may be signaled with a flag, and it may be signaled as IBC AMVP mode or IBC skip/merge mode.
[0139] For IBC skip/merge mode, a merge candidate index may be used to indicate which of the block vectors in the list from neighboring candidate IBC coded blocks may be used to predict the current block. The merge list may include spatial, HMVP, and pairwise candidates.
[0140] For IBC AMVP mode, a block vector difference may be coded (e.g., as a motion vector difference may be coded). The block vector prediction method may use two candidates as predictors, for example one from the left neighbor and one from the above neighbor (e.g., if IBC coded). When a neighbor (e.g., either neighbor) is unavailable, a default block vector may be used as a predictor. A flag may be signaled to indicate the block vector predictor index.
[0141] Features associated with IBC reference region(s) are provided herein. To limit memory consumption and decoder complexity, the IBC may allow (e.g., only allow) reference to the reconstructed portion of a predefined area, which may include the region of the current CTU and/or some region of the left CTU. FIG. 5 illustrates an example reference region of IBC Mode, where each block represents a 64x64 luma sample unit. A current CTU processing order and its available reference samples in current and left CTU are shown in FIG. 5.
[0142] Depending on the location of the current coding CU within the current CTU, one or more of the following may apply.
[0143] If a current block falls into the top-left 64x64 block of the current CTU, then the current block may refer to the reference samples in the bottom-right 64x64 block of the left CTU (e.g., in addition to the already reconstructed samples in the current CTU), for example using CPR mode. The current block may refer (e.g., also refer) to the reference samples in the bottom-left 64x64 block of the left CTU and the reference samples in the top-right 64x64 block of the left CTU, for example using CPR mode.
[0144] If a current block falls into the top-right 64x64 block of the current CTU, then, if luma location (0, 64) relative to the current CTU has not yet been reconstructed, the current block may refer to the reference samples in the bottom-left 64x64 block and bottom-right 64x64 block of the left CTU (e.g., in addition to the already reconstructed samples in the current CTU), for example using CPR mode; otherwise, the current block may refer (e.g., also refer) to the reference samples in the bottom-right 64x64 block of the left CTU.
[0145] If a current block falls into the bottom-left 64x64 block of the current CTU, then, if luma location (64, 0) relative to the current CTU has not yet been reconstructed, the current block may also refer to the reference samples in the top-right 64x64 block and bottom-right 64x64 block of the left CTU (e.g., in addition to the already reconstructed samples in the current CTU), for example using CPR mode. Otherwise, the current block may refer (e.g., also refer) to the reference samples in the bottom-right 64x64 block of the left CTU, for example using CPR mode.
[0146] If a current block falls into the bottom-right 64x64 block of the current CTU, it may refer (e.g., only refer) to the already reconstructed samples in the current CTU, for example using CPR mode.
[0147] This restriction may allow the IBC mode to be implemented using local on-chip memory for hardware implementations.
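The location-dependent rules above can be summarized with a small Python sketch (hypothetical; 128x128 CTUs split into 64x64 units are assumed, and the "not yet reconstructed" conditions are passed in as flags).

def left_ctu_reference_units(cu_x, cu_y, bl_reconstructed, tr_reconstructed, ctu_size=128):
    # 64x64 quadrant of the current CTU that contains the current block
    qx = (cu_x % ctu_size) // 64
    qy = (cu_y % ctu_size) // 64
    if (qx, qy) == (0, 0):
        # Top-left quadrant: bottom-right, bottom-left and top-right
        # units of the left CTU may be referenced
        return {"bottom-right", "bottom-left", "top-right"}
    if (qx, qy) == (1, 0):
        # Top-right quadrant: depends on whether luma (0, 64) of the
        # current CTU has been reconstructed
        return {"bottom-right"} if bl_reconstructed else {"bottom-left", "bottom-right"}
    if (qx, qy) == (0, 1):
        # Bottom-left quadrant: depends on whether luma (64, 0) of the
        # current CTU has been reconstructed
        return {"bottom-right"} if tr_reconstructed else {"top-right", "bottom-right"}
    # Bottom-right quadrant: only already reconstructed samples of the
    # current CTU may be referenced
    return set()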
[0148] IBC merge/AMVP list construction may be performed.
[0149] The IBC merge/AMVP list construction may be modified, for example by one or more of the following. If an IBC merge/AMVP candidate is valid (e.g., only if the IBC merge/AMVP candidate is valid), the IBC merge/AMVP candidate may be inserted into the IBC merge/AMVP candidate list. Above-right, bottom-left, and/or above-left spatial candidates and one pairwise average candidate may be added into the IBC merge/AMVP candidate list. Template based adaptive reordering (ARMC-TM) may be applied to the IBC merge list.
[0150] The HMVP table size for IBC may be increased to 25 entries. After IBC merge candidates (e.g., up to 20 IBC merge candidates) are derived with full pruning, they may be reordered together. After reordering, a number of first candidates (e.g., the first 6 candidates) with the lowest template matching costs may be selected as the final candidates in the IBC merge list.
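For instance, the reordering and selection step might be sketched as follows (hypothetical Python; tm_cost is assumed to return the template matching cost of a candidate block vector).

def reorder_ibc_merge_list(candidates, tm_cost, keep=6):
    # Rank the (e.g., up to 20) merge candidates by ascending template
    # matching cost and keep the first `keep` as the final IBC merge list
    return sorted(candidates, key=tm_cost)[:keep]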
[0151] The zero-vector candidates used to pad the IBC merge/AMVP list may be replaced with a set of BVP candidates located in the IBC reference region. A zero vector may be invalid as a block vector in IBC merge mode, so, for example, it may be discarded as a BVP in the IBC candidate list.
[0152] FIG. 6 depicts example padding candidates for the replacement of the zero-vector in the IBC list. Three candidates may be located on the nearest corners of the reference region, and three additional candidates may be determined in the middle of the three sub-regions (A, B, and C), whose coordinates may be determined by the width and height of the current block and/or the ΔX and ΔY parameters, as shown in FIG. 6.
[0153] Features associated with IBC reference regions are provided herein.
[0154] The reference for IBC may be extended to two CTU rows above the CTU being processed by the encoder or the decoder. FIG. 7 illustrates an example extended reference area for coding CTU (m,n) (e.g., for IBC). Specifically, for CTU (m,n) to be coded, the reference area may include CTUs with index (m-2, n-2)...(W, n-2), (0, n-1)...(W, n-1), (0, n)...(m, n), where W denotes the maximum horizontal index within the current tile, slice or picture. The per-sample block vector search (e.g., which may be referred to as local search) range may be limited to [-(C << 1), C >> 2] horizontally and/or [-C, C >> 2] vertically, for example to adapt to the reference area extension, where C denotes the CTU size.
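Read as a clamp around a starting block vector, the local search range above might be expressed as follows (a minimal Python sketch with hypothetical names, under that reading).

def local_search_range(bv_x, bv_y, ctu_size):
    c = ctu_size
    # Horizontal range [-(C << 1), C >> 2], vertical range [-C, C >> 2]
    x_min, x_max = bv_x - (c << 1), bv_x + (c >> 2)
    y_min, y_max = bv_y - c, bv_y + (c >> 2)
    return (x_min, x_max), (y_min, y_max)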
[0155] IBC with template matching (TM) may be performed. For example, template matching based motion search and refinement may be applied to the case of IBC.
[0156] An IBC-TM merge mode may be used. The IBC-TM merge mode may involve a merge candidate list for block vector (BV) prediction (e.g., a different merge candidate list than the one used by regular IBC merge mode). The candidates may be selected according to a pruning method with a motion distance between the candidates as in the regular TM merge mode. The zero motion candidates may be replaced by (-W, 0), (0, -H), (-W, -H) MVs.
[0157] In the IBC-TM merge mode, the selected candidates may be refined with the template matching method. The TM-merge flag may be signaled to indicate the template matching merge IBC mode.
[0158] In the IBC-TM AMVP mode, candidates (e.g., up to 3 candidates) may be selected from the IBC-TM merge list. Each of those candidates may be refined according to the usual template matching method and/or may be sorted according to their resulting TM cost.
[0159] TM refinement may be performed at integer pel position (e.g., when used for IBC). In IBC-TM AMVP mode, TM refinement may be performed at integer or 4-pel precision (e.g., depending on the AMVR value). The refinement may be done within the existing IBC reference area.
[0160] IBC may interact with other inter coding tools. The interaction between IBC mode and other inter coding tools, such as pairwise merge candidate, history-based motion vector predictor (HMVP), combined intra/inter prediction mode (CIIP), merge mode with motion vector difference (MMVD), and/or geometric partitioning mode (GPM) may include one or more of the following.
[0161] IBC may be used with pairwise merge candidate and/or HMVP. A new pairwise IBC merge candidate may be generated by averaging two IBC merge candidates. For HMVP, IBC motion may be inserted into history buffer for future referencing.
[0162] IBC may be used in combination with CIIP, MMVD, and GPM.
[0163] IBC may not be used in combination with affine motion.
[0164] IBC may not be allowed for the chroma coding blocks (e.g., if/when DUAL_TREE partition is used).
[0165] The current picture may not be included as one of the reference pictures in the reference picture list 0 for IBC prediction. The derivation process of motion vectors for IBC mode may exclude neighboring blocks (e.g., all neighboring blocks) in inter mode and vice versa.
[0166] One or more of the following IBC design aspects may be applied.
[0167] IBC may share the same process as in regular MV merge (e.g., including with pairwise merge candidate and history-based motion predictor) but may disallow TMVP and zero vector (e.g., because TMVP and zero vector may be invalid for IBC mode).
[0168] Separate HMVP buffers (e.g., 5 candidates each) may be used for conventional MV and IBC.
[0169] Block vector constraints may be implemented. For example, block vector constraints may be implemented in the form of a bitstream conformance constraint: the encoder may (e.g., need to) ensure that no invalid vectors are present in the bitstream, and merge may not be used if the merge candidate is invalid (e.g., out of range or 0). Such a bitstream conformance constraint may be expressed in terms of a virtual buffer (e.g., as described herein).
[0170] For deblocking, IBC may be handled as inter mode.
[0171] If the current block is coded using IBC prediction mode, AMVR may be signaled to indicate (e.g., only indicate) whether the MV precision is integer-pel or 4-pel (e.g., AMVR may not use quarter-pel).
[0172] The number of IBC merge candidates may be signaled in the slice header separately from the numbers of regular, subblock, and/or geometric merge candidates.
[0173] IBC and local illumination compensation (LIC) may be used jointly. LIC is an inter prediction enhancement tool.
[0174] LIC is an inter prediction technique that may be used to model local illumination variation between a current block and its prediction block, for example as a function of local illumination variation between a current block template and a reference block template. The parameters of the function may be denoted by a scale α and an offset β, which may form a linear equation, that is, α*p[x]+β, to compensate for illumination changes, where p[x] may be a reference sample pointed to by MV at a location x on the reference picture. When wrap around motion compensation is enabled, the MV may be clipped with the wrap around offset taken into consideration. Since α and β may be derived based on a current block template and a reference block template, no signaling overhead is required for them, except that an LIC flag may be signaled for AMVP mode to indicate the use of LIC.
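One plausible least-squares derivation of α and β from the two templates is sketched below in Python (floating point for clarity; practical codecs typically use integer approximations, and for IBC-LIC, described later, the reference template would be fetched with a block vector rather than an MV).

import numpy as np

def derive_lic_params(cur_template, ref_template):
    # Fit alpha, beta so that alpha * ref + beta approximates cur over
    # the template samples (linear least squares)
    x = ref_template.astype(np.float64).ravel()
    y = cur_template.astype(np.float64).ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

def apply_lic(pred, alpha, beta, bitdepth=10):
    # alpha * p[x] + beta, clipped to the valid sample range
    out = alpha * pred.astype(np.float64) + beta
    return np.clip(np.rint(out), 0, (1 << bitdepth) - 1).astype(np.int32)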
[0175] LIC compensation may be used for uni-prediction inter CUs with the following modifications.
[0176] Intra neighbor samples may be used in LIC parameter derivation.
[0177] LIC may be disabled for blocks with fewer than 32 luma samples.
[0178] For non-subblock and affine modes, LIC parameter derivation may be performed based on the template block samples corresponding to the current CU (e.g., instead of partial template block samples corresponding to the first top-left 16x16 unit).
[0179] Samples of the reference block template may be generated by using MC with the block MV without rounding it to integer-pel precision.
[0180] Intra block copy with local illumination compensation (IBC-LIC) may aim at compensating the local illumination variation within a picture between the CU coded with IBC and its prediction block with a linear equation. The parameters of the linear equation may be derived in the same way as for LIC in inter prediction, except that the reference template may be generated using the block vector in IBC-LIC. IBC-LIC may be applied to IBC AMVP mode and IBC merge mode. For IBC AMVP mode, an IBC-LIC flag may be signaled to indicate the use of IBC-LIC. For IBC merge mode, the IBC-LIC flag may be inferred from the merge candidate.
[0181] Intra Template Matching Prediction (IntraTMP or ITMP) prediction mode may be performed. Intra TMP may be a special intra prediction mode that may copy the best prediction block from the reconstructed part of the current frame, whose L-shaped template matches the current template. For a predefined search range, the encoder may search for the most similar template to the current template in a reconstructed part of the current frame and may use the corresponding block as a prediction block. The encoder may (e.g., then) signal the usage of this mode, and the same prediction operation may be performed at the decoder side.
[0182] Sum of absolute differences (SAD) may be used as a cost function.
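To make the cost function concrete, a minimal Python sketch of the SAD between the current block's L-shaped template and a candidate's L-shaped template follows (hypothetical names; a template thickness of 4 samples is assumed).

import numpy as np

def template_sad(recon, cur_y, cur_x, cand_y, cand_x, bh, bw, t=4):
    def l_template(y, x):
        # Rows above (including the above-left corner) plus left columns
        top = recon[y - t:y, x - t:x + bw].astype(np.int32)
        left = recon[y:y + bh, x - t:x].astype(np.int32)
        return top, left
    cur_top, cur_left = l_template(cur_y, cur_x)
    cand_top, cand_left = l_template(cand_y, cand_x)
    return int(np.abs(cur_top - cand_top).sum()
               + np.abs(cur_left - cand_left).sum())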
[0183] A given search order of the 6 regions may be utilized, i.e., R4, R5, R6, R1, R2, and R3. Within each region, the decoder may construct a candidate list of template matching block vectors (e.g., up to 19 template matching block vectors) that may be ranked (e.g., in ascending order) according to the template cost (SAD). One or more of the following modes may be supported: single predictor, fusion of multiple predictors, sub-pel precision, or linear filter model.
[0184] For single predictor, a single predictor may be selected from the candidate list.
[0185] For fusion of multiple predictors, multiple predictors may be blended to derive the final prediction block. The blending weights may be either computed from the template matching cost of each predictor (see the sketch following this list of modes), or with a Wiener-filter based weight derivation method.
[0186] When single predictor is used, sub-pel precision may be used with 1/2-pel precision, 1/4-pel precision, and/or 3/4-pel precision, each with 8 possible directions.
[0187] For linear filter model, a linear filter may be learned between the reference template and the current template, and the linear model may be applied to the reference block. In examples, a linear filter model mode may be used for a single predictor when sub-pel precision is not used.
[0188] The dimensions of all regions (SearchRange_w, SearchRange_h) may be set proportional to the block dimensions (BlkW, BlkH) to have a fixed number of SAD comparisons per pixel. That is:

SearchRange_w = min(64, a * BlkW)
SearchRange_h = min(64, a * BlkH)

[0189] where 'a' may be a constant that controls the gain/complexity trade-off. In examples, 'a' may be equal to 5.
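As a non-normative transcription of the search-range rule above (the function name search_range is illustrative):

    def search_range(blk_w, blk_h, a=5):
        # SearchRange_w = min(64, a * BlkW); SearchRange_h = min(64, a * BlkH),
        # where 'a' controls the gain/complexity trade-off (5 in the example above).
        return min(64, a * blk_w), min(64, a * blk_h)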
[0190] FIG. 8 illustrates an example intra template matching search area.
[0191] The search range of all search regions may be subsampled by a factor of 3 (e.g., to speed-up the template matching process). A refinement process may be performed, for example after finding the best match. The refinement may be done via a second template matching search around the best match with a reduced range.
[0192] The Intra template matching tool may be enabled for CUs (e.g., for CUs with size less than or equal to 64 in width and height). The maximum CU size for Intra template matching may be configurable.
[0193] The Intra template matching prediction mode may be signaled at the CU level, for example through a dedicated flag when DIMD is not used for the current CU.
[0194] IntraTMP may be performed based on a linear filter model (IntraTMP-LFM). For example, IntraTMP may be used in combination with a convolution filtering method as follows.
[0195] A 6-tap filter, including a 5-tap plus-sign-shaped spatial component and a bias term, may be adaptively used to enhance the IntraTMP block prediction. The input to the spatial 5-tap component of the filter may include a center (C) sample in the reference block (e.g., which is at the location corresponding to the sample in the current block to be predicted) and its above/north (N), below/south (S), left/west (W), and right/east (E) neighbors, as illustrated in FIG. 9. FIG. 9 depicts an example spatial part of a filter.

[0196] The bias term B may represent a scalar offset between the input and output and may be set to the middle luma value (e.g., 512 for 10-bit content).
[0197] The output of the filter may be calculated as follows:

predLumaVal = c0*C + c1*N + c2*S + c3*E + c4*W + c5*B
[0198] FIG. 10 depicts an example reference area used to derive the filter coefficients. The filter coefficients ci may be calculated by minimizing the MSE between the reference template and the current template, as shown in FIG. 10. The template size and shapes may be the same as in IntraTMP. For example, the template size used for training may be 4 lines above and to the left of the current block, depending on their availability. The extensions to the area shown in blue may support (e.g., be needed to support) the side samples of the plus-shaped spatial filter and/or may be padded (e.g., when in unavailable areas).

[0199] Usage of the IntraTMP-LFM mode may be signaled with a coded CU-level flag. Specifically, IntraTMP-LFM may be considered a sub-mode of IntraTMP. That is, an IntraTMP-LFM flag may only be signaled if the IntraTMP flag is true.
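As a non-normative sketch of the filtering described above, the plus-shaped 6-tap model may be fitted by MSE minimization (least squares) over template positions and then applied sample by sample. Availability checks and padding are omitted, and all names are illustrative.

    import numpy as np

    def plus_inputs(ref, y, x, bias):
        # Return [C, N, S, E, W, B] for position (y, x) of a padded reference area.
        return np.array([ref[y, x], ref[y - 1, x], ref[y + 1, x],
                         ref[y, x + 1], ref[y, x - 1], bias], dtype=np.float64)

    def fit_lfm_coeffs(ref_area, cur_area, positions, bit_depth=10):
        # Least-squares fit of c0..c5 minimizing the MSE over the template positions.
        bias = 1 << (bit_depth - 1)  # middle luma value, e.g., 512 for 10-bit content
        A = np.stack([plus_inputs(ref_area, y, x, bias) for y, x in positions])
        b = np.array([cur_area[y, x] for y, x in positions], dtype=np.float64)
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs

    def apply_lfm(ref_padded, coeffs, bit_depth=10):
        # predLumaVal = c0*C + c1*N + c2*S + c3*E + c4*W + c5*B for each inner sample.
        bias = 1 << (bit_depth - 1)
        h, w = ref_padded.shape
        out = np.empty((h - 2, w - 2), dtype=np.float64)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                out[y - 1, x - 1] = coeffs @ plus_inputs(ref_padded, y, x, bias)
        return np.clip(np.rint(out), 0, (1 << bit_depth) - 1).astype(np.int32)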
[0200] In examples, the above filtering method may be used to apply the linear filter model to IBC predicted blocks.
[0201] This filtered mode may be used as an additional mode for non-merge IBC blocks. For non-merge blocks, this mode may not be applied together with IBC-LIC, IBC-CIIP, or RR-IBC. For IBC merge modes, this filtering mode may be inherited when a merge mode list is constructed (e.g., so there is no extra signaling).
[0202] The adaptive usage of LIC or of the linear filter model for an IBC predicted block, together with block-level signaling of the prediction mode used, may lead to increased compression performance. However, with respect to the IntraTMP prediction mode, the combined use of IntraTMP with the linear filter model may (e.g., only) be considered.
[0203] IntraTMP using LIC may be performed. For example, LIC may be applied to blocks in IntraTMP mode. The LIC usage for an IntraTMP mode may be signaled through a CU-level flag. Usage of LIC and usage of LFM (e.g., CCCM-like filtering) may be mutually exclusive for a given CU. Usage of LIC and usage of fusion in IntraTMP may be mutually exclusive (e.g., similar to LFM).
[0204] Adaptive templates for LIC linear model computation may be used, as well as multiple linear models, for example to further increase the compression efficiency (e.g., for screen content coding).
[0205] A probing method may be performed to infer IntraTMP fusion mode.
[0206] In IntraTMP fusion mode, multiple IntraTMP candidates may be linearly combined according to fusion weights derived from the template matching costs or derived by an MSE minimization method. A flag, intra_tmp_fusion_weight_type, may be signaled to indicate which weight derivation method may be used. An index, intra_tmp_fusion_idx, may be signaled to indicate which candidate set may be selected for IntraTMP fusion.
[0207] An IntraTMP fusion probing method may be used to select a fusion candidate from a fusion candidate list (e.g., with minimum probing cost). FIG. 11 depicts an example template and probing line for IntraTMP.
[0208] For a CU coded in IntraTMP fusion mode, a flag indicating whether this fusion probing mode is enabled may be signaled. When enabled, a fusion candidate with minimum probing cost may be selected from a fusion candidate list without signaling the intra_tmp_fusion_weight_type and/or intra_tmp_fusion_idx syntax elements. The probing cost may be derived as the SAD between the pixels in the probing line of the fused template and the current block's template. The fusion weights derivation may exclude the pixels in the probing line. The template matching cost of the probing line may be weighted (e.g., by 2) in the IntraTMP candidate search process.
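A minimal sketch of this probing-based selection, assuming the probing-line samples of each fused template and of the current block's template are available as arrays (names illustrative):

    import numpy as np

    def probing_cost(fused_probe, cur_probe):
        # SAD between the probing-line pixels of a fused template and the current template.
        return int(np.abs(np.asarray(fused_probe, dtype=np.int64)
                          - np.asarray(cur_probe, dtype=np.int64)).sum())

    def select_fusion_candidate(fused_probes, cur_probe):
        # Pick the fusion candidate with minimum probing cost, avoiding index signaling.
        costs = [probing_cost(f, cur_probe) for f in fused_probes]
        return min(range(len(costs)), key=costs.__getitem__)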
[0209] Feature(s) associated with improving blocks coded in IntraTMP mode and employing LIC-based prediction enhancement are provided herein. For example, different templates may be used to estimate the LIC linear parameters.
[0210] Intra template matching may employ a template area of size 4 around the current block (or coding unit (CU)) being processed. A template area different from the closest above and left lines of samples next to the CU may (e.g., also) be employed for LIC parameter computation.
[0211] A part of the IntraTMP template area may be employed to compute a LIC linear model, and another part of the IntraTMP template may be employed for probing based LIC usage derivation.
[0212] LIC parameters may be computed over an area inside the IntraTMP template area (e.g., which may be different from the closest above and/or left lines of neighboring samples around the considered CU).
[0213] In examples, the first 2, 3, or 4 lines of samples on the left and top of the current CU may be considered to compute the LIC linear model parameters.
[0214] In examples, the set of samples around the current CU may be used to compute a LIC model. The set of samples may include the left, top, and top-left neighboring samples of the current CU.
[0215] A template matching cost may be estimated on the probing sub-part (e.g., included in the IntraTMP template area).
[0216] LIC usage at CU-level may be inferred based on a template-matching (TM) cost comparison on the probing zone between LIC usage and no-LIC usage.
[0217] The CU-level LIC usage may be signaled conditioned on the TM cost comparison between the LIC-on and LIC-off cases. The signaled flag may indicate whether the probing-based LIC usage prediction provides the correct result.
[0218] In examples, the LIC parameters may be estimated based on the above and left lines of samples neighboring the current CU. The probing zone may include the left and above lines next to current CU's boundaries.
[0219] In examples, the LIC parameters may be estimated based on above and left neighboring samples of the current CU. The probing zone may include a left and an above line of neighboring samples (e.g., 1 pel away from current CU's boundaries).
[0220] A LIC mode syntax element may be explicitly signaled to indicate which part of the template area is used to derive the LIC parameters.

[0221] A line index within the template may be signaled to indicate the line of samples used to compute the LIC linear model parameters.
[0222] For Intra-TMP LIC mode, a LIC linear model may be computed over different sub-areas of the Intra-TMP template.
[0223] The LIC parameters for an IntraTMP block (e.g., coding unit (CU)) may be estimated over a set of samples around the current CU, which may be different from the left and/or above lines of samples neighboring the considered CU.
[0224] A template size (e.g., a template size of four) may be considered to compute the block vector of an IntraTMP coding unit. The same template area may be used for LIC model estimation (e.g., without significantly increasing complexity and/or without any additional need for memory access than that which may be used in the standard IntraTMP coding/decoding process).
[0225] FIG. 12 depicts example neighboring samples used to compute an LIC model. FIG. 12 shows the overall template for IntraTMP predictor searches and the template used for LIC model estimation.
[0226] FIG. 13 depicts an example extended set of neighboring samples used to compute (e.g., estimate) an LIC model. As shown in FIG. 13, the template size for LIC model computation may be extended (e.g., to 4 lines above and to the left of the current block being processed).
[0227] FIG. 14 depicts an example extended set of neighboring samples used to compute an LIC model. As shown in FIG. 14, the LIC model may be estimated over the full IntraTMP template (e.g., the same set of reconstructed surrounding samples as those used to search for IntraTMP predictors).
[0228] Extending templates for LIC model computation may increase the accuracy of the LIC mode (e.g., due to a richer set of samples), which may lead to increased video compression performance.
[0229] LIC usage (e.g., whether to apply LIC to a block) may be determined based on a probing condition (e.g., criteria). FIG. 15 depicts an example approach for IntraTMP-LIC usage probing.
[0230] The IntraTMP template area may go beyond the LIC computation template lines, for example to perform LIC usage probing. The LIC model may be estimated on a subset of samples inside the overall IntraTMP template area, as shown by FIG. 15. The estimated LIC model may be used to compute a template matching cost (e.g., a distortion like the sum of absolute differences (SAD)), over a probing area inside the overall IntraTMP template, which may be different from the area used to compute the LIC model.
[0231] The relevance of using LIC for the considered block may be estimated according to the template matching cost reduction obtained by applying LIC, calculated on the probing template area. If the cost reduction is sufficiently high (e.g., exceeds a threshold value), it may indicate LIC usage is appropriate to code/decode the considered block efficiently. Otherwise, LIC for coding/decoding the block may be determined to be skipped or bypassed.
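A hedged sketch of this decision rule follows; cur_probe and ref_probe denote co-located probing-area samples of the current block's template and of the reference block's template, and the threshold is an assumed free parameter rather than a value specified by this description.

    import numpy as np

    def infer_lic_usage(cur_probe, ref_probe, alpha, beta, threshold=0):
        # True when applying the LIC model reduces the probing SAD by more than `threshold`.
        cur = np.asarray(cur_probe, dtype=np.float64)
        ref = np.asarray(ref_probe, dtype=np.float64)
        cost_no_lic = np.abs(cur - ref).sum()
        cost_lic = np.abs(cur - np.rint(alpha * ref + beta)).sum()
        return (cost_no_lic - cost_lic) > threshold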
[0232] FIG. 15 depicts an example LIC model estimation template area and LIC probing area inside an IntraTMP template. In the example shown in FIG. 15, the second line and column of samples away from the block boundaries may be used for LIC model estimation, and the first line and column of samples around the block may be used as LIC probing lines.
[0233] FIGS. 16A-D depict example areas for LIC model estimation and LIC probing. In examples, the LIC model may be computed by means of the sample lines closest to the block boundaries, while the probing lines of samples may be further from the block.
[0234] FIG. 17 depicts an example decoding process of a CU in IntraTMP mode. In an example, the probing technique may be used to infer the usage of LIC for a given CU coded in IntraTMP mode (e.g., at the decoder side and the encoder side).
[0235] The input to the process may be a CU to decode in IntraTMP mode. The predictor of a current CU may be determined based on a template matching based search.
[0236] If the linear filter model (LFM) prediction enhancement is on, LFM taps may be determined based on the template and LFM may be applied.
[0237] If the LFM prediction enhancement is off, then the LIC model parameters (α, β) may be computed for the current block, based on the block vector obtained from the TM-based search. The probing cost associated with the LIC model (α, β) may (e.g., then) be computed. Determining (e.g., computing) the probing cost may comprise applying the LIC model to the probing area of the reference block and measuring the obtained cost (e.g., a distortion) between the so-transformed reference block's probing area and the probing area in the current block's template.
[0238] LIC usage may be inferred based on the determined probing cost. For example, if the probing cost with the LIC model applied to the reference block's probing area is sufficiently lower than the same cost without the LIC model (e.g., the cost difference exceeds a threshold value), then a LIC flag may be set to true; otherwise it may be set to false. The inferred LIC flag may be used to determine whether to apply LIC to the current predicted block or not.

[0239] The residual block may then be decoded. The residual block may be added to the final predicted block to produce the reconstructed block.
[0240] LIC usage may be predicted based on a probing condition (e.g., criteria). The probing condition may be used to lower the rate cost of signaling LIC usage. For example, a CU-level indication, such as an LIC prediction indication (e.g., a flag), may be signaled to indicate if the probing-based LIC usage prediction is correct or not.

[0241] FIG. 18 depicts an example decoder side parsing process. If the LIC prediction flag parsed as shown in FIG. 18 is true, then the LIC usage flag for the considered CU may be set equal to the probing-based LIC usage prediction during the CU decoding process. If the parsed LIC prediction flag is false, then the LIC usage flag for the considered CU may be set equal to the opposite of the probing-based LIC usage prediction during the CU decoding process.
[0242] This probing-prediction-based signaling of LIC usage may lead to a probability of the signaled prediction flag being equal to true that is close to 1, making the entropy of the LIC flag signaling low (e.g., lower than the entropy of other LIC usage signaling). A reduced bitrate and increased compression efficiency may be expected from the example parsing process.
[0243] FIG. 19 depicts an example CU decoding and reconstruction process. The LIC usage determination may represent a difference from the example decoding process shown in FIG. 17.
[0244] The probing cost may be computed in the same way as in the previous method described herein. The probing-cost-based LIC usage criteria may be used to compute the LIC flag predictor (e.g., instead of the LIC usage itself, as in the decoding process shown in FIG. 17). The LIC usage flag of a considered CU may then be determined based on this probing-based LIC flag predictor and/or the parsed LIC prediction flag (e.g., as issued from the process of FIG. 18) as follows:

LIC_usage_flag = Probing_based_predictor, if the parsed prediction flag is true
LIC_usage_flag = !Probing_based_predictor, otherwise
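This rule may be transcribed directly as a non-normative sketch; note that it is logically equivalent to testing whether the probing-based predictor equals the parsed prediction flag.

    def decode_lic_flag(probing_based_predictor, parsed_prediction_flag):
        # Equal to the predictor when the parsed flag is true, otherwise its opposite.
        return probing_based_predictor if parsed_prediction_flag else not probing_based_predictor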
[0245] FIG. 20 depicts an example encoder search process (e.g., an encoder-side choice between various IntraTMP prediction modes) associated with the example decoding process shown in FIG. 19. The encoding process for a CU in IntraTMP may begin with a template matching search of a set of IntraTMP prediction candidates. For each candidate, its associated LIC model may be computed, together with the associated probing cost and the LIC flag predictor based on the LIC probing cost.
[0246] The best IntraTMP predictor, together with its optimal LIC flag, may be searched based on a rate distortion cost minimization process.
[0247] The best IntraTMP coding mode for the current CU may be chosen as the IntraTMP coding mode with minimum rate distortion cost among all IntraTMP prediction candidates, considering LIC usage, LFM usage, or neither LIC nor LFM usage.
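A non-normative sketch of this encoder-side selection loop; rd_cost stands in for the encoder's actual distortion-plus-rate evaluation and is an assumption, not something specified by this description.

    def choose_intra_tmp_mode(candidates, rd_cost):
        # Exhaustive rate-distortion choice over IntraTMP candidates and enhancement modes.
        best = None
        for cand in candidates:
            for mode in ("none", "lic", "lfm"):  # mutually exclusive enhancement options
                cost = rd_cost(cand, mode)
                if best is None or cost < best[0]:
                    best = (cost, cand, mode)
        return best  # (cost, predictor candidate, enhancement mode)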
[0248] The optimal IntraTMP mode for the considered CU may be put in rate distortion competition with other intra coding mode(s) (not shown in FIG. 20).
[0249] The IntraTMP template part used for LIC model computation, and optionally for LIC usage probing, may be explicitly signaled.

[0250] In examples, the type of template area used for LIC model determination for a given CU in IntraTMP mode, and optionally the template area used for probing-based LIC usage prediction, may be explicitly signaled.
[0251] FIG. 21 depicts an example decoder parsing process (e.g., with modified signaling).
[0252] Modifying signaling as shown in FIG. 21 may bring some diversity in the set of possible LIC models that may be employed for the coding of a CU in IntraTMP-LIC mode. This increased diversity may lead to more variety of choices in the rate distortion search of the encoder, which may increase compression efficiency.
[0253] Although features and elements may be described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element may be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims

1. A video decoding device, comprising:
   a processor configured to:
      obtain an intra template matching prediction (IntraTMP) template associated with a current block;
      determine local illumination compensation (LIC) model parameters based on a block vector of a set of samples of the IntraTMP template, wherein at least one sample of the set of samples is non-adjacent to the current block; and
      decode the current block based on the determined LIC model parameters.

2. A video encoding device, comprising:
   a processor configured to:
      obtain an IntraTMP template associated with a current block;
      determine LIC model parameters based on a block vector of a set of samples of the IntraTMP template, wherein at least one sample of the set of samples is non-adjacent to the current block; and
      encode the current block based on the determined LIC model parameters.

3. The device of claim 1, wherein the processor is further configured to:
   determine a template matching cost based on applying the LIC model parameters to a probing area, wherein the probing area comprises a subset of samples of the IntraTMP template; and
   based on a condition that the template matching cost satisfies a threshold, determine to decode the current block based on the LIC model parameters.

4. The device of claim 2, wherein the processor is further configured to:
   determine a template matching cost based on applying the LIC model parameters to a probing area, wherein the probing area comprises a subset of samples of the IntraTMP template; and
   based on a condition that the template matching cost satisfies a threshold, determine to encode the current block based on the LIC model parameters.

5. The device of claim 3 or 4, wherein at least one sample of the set of samples of the IntraTMP template is different than the subset of samples of the IntraTMP template.

6. The device of any one of claims 1-5, wherein the set of samples comprises the IntraTMP template.

7. The device of any one of claims 1, 3, or 5, wherein the processor is further configured to:
   receive, in video data, an indication of a template area type, wherein the set of samples of the IntraTMP template is obtained based on the template area type.

8. The device of any one of claims 3 or 5, wherein the processor is further configured to:
   determine a relevance metric for using the LIC model parameters to decode the current block based on the template matching cost;
   obtain an LIC prediction indication in video data; and
   based on a condition that the relevance metric and the LIC prediction indication satisfy a threshold, determine to apply the LIC model parameters to the current block.

9. A video decoding method, comprising:
   obtaining an intra template matching prediction (IntraTMP) template associated with a current block;
   determining local illumination compensation (LIC) model parameters based on a block vector of a set of samples of the IntraTMP template, wherein at least one sample of the set of samples is non-adjacent to the current block; and
   decoding the current block based on the determined LIC model parameters.

10. A video encoding method, comprising:
   obtaining an IntraTMP template associated with a current block;
   determining LIC model parameters based on a block vector of a set of samples of the IntraTMP template, wherein at least one sample of the set of samples is non-adjacent to the current block; and
   encoding the current block based on the determined LIC model parameters.

11. The method of claim 9, wherein the method further comprises:
   determining a template matching cost based on applying the LIC model parameters to a probing area, wherein the probing area comprises a subset of samples of the IntraTMP template; and
   based on a condition that the template matching cost satisfies a threshold, determining to decode the current block based on the LIC model parameters.

12. The method of claim 10, wherein the method further comprises:
   determining a template matching cost based on applying the LIC model parameters to a probing area, wherein the probing area comprises a subset of samples of the IntraTMP template; and
   based on a condition that the template matching cost satisfies a threshold, determining to encode the current block based on the LIC model parameters.

13. The method of claim 11 or 12, wherein at least one sample of the set of samples of the IntraTMP template is different than the subset of samples of the IntraTMP template.

14. The method of any one of claims 9-13, wherein the set of samples comprises the IntraTMP template.
15. The method of any one of claims 9, 11, or 13, wherein the method further comprises:
   receiving, in video data, an indication of a template area type, wherein the set of samples of the IntraTMP template is obtained based on the template area type.