
WO2025170090A1 - Apparatus and method for performing a quantum dialogue protocol in a quantum communication system - Google Patents

Apparatus and method for performing a quantum dialogue protocol in a quantum communication system

Info

Publication number
WO2025170090A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
node
particle
quantum
communication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/001754
Other languages
English (en)
Korean (ko)
Inventor
박주윤
이상림
이호재
김자영
안병규
이종현
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Priority to PCT/KR2024/001754 priority Critical patent/WO2025170090A1/fr
Publication of WO2025170090A1 publication Critical patent/WO2025170090A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/70Photonic quantum communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords

Definitions

  • the present disclosure relates to a device and method for transmitting information from a receiver, Bob, to Alice without information encoding, using a Quantum Dialogue (QD) protocol in a quantum communication system.
  • QD Quantum Dialogue
  • Various embodiments of the present disclosure propose a protocol that eliminates the encoding process performed by the receiver in the quantum dialogue protocol and ensures secure information transmission over a classical channel.
  • the present disclosure provides a device and method for performing a quantum dialogue protocol in a quantum communication system.
  • the present disclosure provides a device and method for transmitting information from a recipient, Bob, to Alice without information encoding using a Quantum Dialogue (QD) protocol in a quantum communication system.
  • QD Quantum Dialogue
  • a method for operating a first node in a communication system comprising: receiving at least one synchronization signal from a second node; receiving control information from the second node; receiving a first sequence of first particles from the second node based on first particles and second particles constituting an Einstein-Podolsky-Rosen pair (EPR pair); receiving combination information of an initial state of a second sequence of the second particles and a checking particle from the second node; and transmitting a first measurement result for the combination information and transformation information obtained by performing a logical operation on specific information to the second node.
  • EPR pair Einstein-Podolsky-Rosen pair
  • a method for operating a second node in a communication system comprising: transmitting at least one synchronization signal to a first node; transmitting control information to the first node; transmitting a first sequence of first particles to the first node and storing a second sequence of second particles based on first and second particles constituting an Einstein-Podolsky-Rosen pair (EPR pair); transmitting combination information of an initial state of the second sequence and a checking particle to the first node; and receiving, from the first node, a first measurement result for the combination information and transformation information obtained by performing a logical operation on specific information.
  • EPR pair Einstein-Podolsky-Rosen pair
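As a rough illustration of the idea behind the claimed methods above — correlated EPR-pair measurement outcomes acting as a shared mask, so that one node can convey information over the classical channel without a quantum encoding step — the following toy Python sketch models Z-basis measurements of both halves of a |Φ+⟩ pair as identical classical random bits. This is a classical simulation for intuition only, not the claimed protocol; all function names are hypothetical, and the checking-particle and eavesdropping-detection steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def epr_measurements(n):
    """Toy model: measuring both halves of n |Phi+> EPR pairs in the Z basis
    yields perfectly correlated random bits at the two nodes."""
    outcomes = rng.integers(0, 2, size=n)
    return outcomes.copy(), outcomes.copy()  # (first-particle seq, second-particle seq)

def mask(message_bits, outcomes):
    """Logical operation (XOR) of specific information with local measurement results."""
    return message_bits ^ outcomes

message = np.array([1, 0, 1, 1, 0, 0, 1, 0])
a_out, b_out = epr_measurements(len(message))

# One node XOR-masks its message with its own measurement results and sends the
# resulting transformation information over the classical channel.
transform_info = mask(message, b_out)

# The peer node unmasks using its own (correlated) measurement results.
recovered = transform_info ^ a_out
assert np.array_equal(recovered, message)
```

Because the measurement outcomes are random and correlated, the transmitted transformation information alone reveals nothing about the message to a classical-channel eavesdropper in this toy model.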
  • a first node comprising: a transceiver; at least one processor; and at least one memory operably connectable to the at least one processor and storing instructions that, when executed by the at least one processor, perform operations, wherein the operations include all steps of a method of operating the first node according to various embodiments of the present disclosure.
  • a second node comprising: a transceiver; at least one processor; and at least one memory operably connectable to the at least one processor and storing instructions that, when executed by the at least one processor, perform operations, wherein the operations include all steps of a method of operating the second node according to various embodiments of the present disclosure.
  • a control device for controlling a first node in a communication system comprising: at least one processor; and at least one memory operably connected to the at least one processor, wherein the at least one memory stores instructions for performing operations based on being executed by the at least one processor, wherein the operations include all steps of an operating method of the first node according to various embodiments of the present disclosure.
  • a control device for controlling a second node in a communication system comprising: at least one processor; and at least one memory operably connected to the at least one processor, wherein the at least one memory stores instructions for performing operations based on being executed by the at least one processor, wherein the operations include all steps of an operating method of the second node according to various embodiments of the present disclosure.
  • one or more non-transitory computer-readable media storing one or more instructions, wherein the one or more instructions, based on being executed by one or more processors, perform operations, the operations comprising all steps of a method of operating a first node according to various embodiments of the present disclosure, are provided.
  • one or more non-transitory computer-readable media storing one or more instructions, wherein the one or more instructions, when executed by one or more processors, perform operations, the operations comprising all steps of a method of operating a second node according to various embodiments of the present disclosure.
  • the present disclosure can provide a device and method for performing a quantum dialogue protocol in a quantum communication system.
  • the present disclosure can provide a device and method for transmitting information from a recipient, Bob, to Alice without information encoding using a Quantum Dialogue (QD) protocol in a quantum communication system.
  • QD Quantum Dialogue
  • the present disclosure can provide a device and method for performing a secure protocol by eliminating the encoding process performed by a receiver in a quantum dialogue protocol in a quantum communication system and transmitting information through a classical channel.
  • Figure 1 is a diagram illustrating an example of physical channels and general signal transmission used in a 3GPP system.
  • FIG. 2 is a diagram illustrating the system structure of a New Generation Radio Access Network (NG-RAN).
  • NG-RAN New Generation Radio Access Network
  • Figure 3 is a diagram illustrating the functional division between NG-RAN and 5GC.
  • Figure 4 is a diagram illustrating an example of a 5G usage scenario.
  • Figure 5 is a diagram illustrating an example of a communication structure that can be provided in a 6G system.
  • Figure 6 is a schematic diagram illustrating an example of a perceptron structure.
  • Figure 7 is a schematic diagram illustrating an example of a multilayer perceptron structure.
  • Figure 8 is a schematic diagram illustrating an example of a deep neural network.
  • Figure 9 is a schematic diagram illustrating an example of a convolutional neural network.
  • Figure 10 is a schematic diagram illustrating an example of a filter operation in a convolutional neural network.
  • Figure 11 is a schematic diagram illustrating an example of a neural network structure in which a recurrent loop exists.
  • Figure 12 is a diagram schematically illustrating an example of the operating structure of a recurrent neural network.
  • Figure 13 is a diagram illustrating an example of the electromagnetic spectrum.
  • Figure 14 is a diagram illustrating an example of a THz communication application.
  • Fig. 15 is a diagram illustrating an example of an electronic component-based THz wireless communication transmitter and receiver.
  • FIG. 16 is a diagram illustrating an example of a method for generating a THz signal based on an optical element.
  • Fig. 17 is a diagram illustrating an example of an optical element-based THz wireless communication transceiver.
  • Fig. 18 is a diagram illustrating the structure of a photon source-based transmitter.
  • Figure 19 is a diagram illustrating the structure of an optical modulator.
  • FIG. 20 is a diagram illustrating an example of a quantum circuit for generating a Bell state in a system applicable to the present disclosure.
  • FIG. 21 is a diagram illustrating an example of a Bell state measurement circuit in a system applicable to the present disclosure.
  • FIG. 23 is a diagram illustrating an example of a quantum teleportation system applicable to the present disclosure.
  • FIG. 24 is a diagram illustrating an example of quantum direct communication in a system applicable to the present disclosure.
  • FIG. 25 is a diagram illustrating an example of the structure of an existing QD protocol in a system applicable to the present disclosure.
  • FIG. 26 is a diagram illustrating an example of the structure of a QD protocol proposed in a system applicable to the present disclosure.
  • FIG. 31 illustrates a communication system (1) applicable to various embodiments of the present disclosure.
  • Figure 34 illustrates a signal processing circuit for a transmission signal.
  • FIG. 36 illustrates a mobile device applicable to various embodiments of the present disclosure.
  • FIG. 37 illustrates a vehicle or autonomous vehicle applicable to various embodiments of the present disclosure.
  • FIG. 38 illustrates a vehicle applicable to various embodiments of the present disclosure.
  • FIG. 39 illustrates an XR device applicable to various embodiments of the present disclosure.
  • FIG. 40 illustrates a robot applicable to various embodiments of the present disclosure.
  • FIG. 41 illustrates an AI device applicable to various embodiments of the present disclosure.
  • “A or B” may mean “only A,” “only B,” or “both A and B.” In other words, in various embodiments of the present disclosure, “A or B” may be interpreted as “A and/or B.” For example, in various embodiments of the present disclosure, “A, B or C” may mean “only A,” “only B,” “only C,” or “any combination of A, B and C.”
  • a slash (/) or a comma may mean “and/or.”
  • A/B may mean “A and/or B.”
  • A/B may mean “only A,” “only B,” or “both A and B.”
  • A, B, C may mean “A, B, or C.”
  • “at least one of A and B” may mean “only A,” “only B,” or “both A and B.” Furthermore, in various embodiments of the present disclosure, the expressions “at least one of A or B” or “at least one of A and/or B” may be interpreted as equivalent to “at least one of A and B.”
  • “at least one of A, B and C” can mean “only A,” “only B,” “only C,” or “any combination of A, B and C.” Additionally, “at least one of A, B or C” or “at least one of A, B and/or C” can mean “at least one of A, B and C.”
  • parentheses used in various embodiments of the present disclosure may mean “for example.” Specifically, when indicated as “control information (PDCCH),” “PDCCH” may be proposed as an example of “control information.” In other words, “control information” in various embodiments of the present disclosure is not limited to “PDCCH,” and “PDCCH” may be proposed as an example of “control information.” Furthermore, even when indicated as “control information (i.e., PDCCH),” “PDCCH” may be proposed as an example of “control information.”
  • CDMA can be implemented using wireless technologies such as UTRA (Universal Terrestrial Radio Access) or CDMA2000.
  • TDMA can be implemented using wireless technologies such as GSM (Global System for Mobile communications)/GPRS (General Packet Radio Service)/EDGE (Enhanced Data Rates for GSM Evolution).
  • OFDMA can be implemented using wireless technologies such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, and E-UTRA (Evolved UTRA).
  • UTRA is a part of UMTS (Universal Mobile Telecommunications System).
  • 3GPP 3rd Generation Partnership Project
  • LTE Long Term Evolution
  • E-UMTS Evolved UMTS
  • LTE-A LTE-Advanced/LTE-Advanced Pro
  • 3GPP NR New Radio or New Radio Access Technology
  • 3GPP 6G may be an evolved version of 3GPP NR.
  • LTE refers to technology after 3GPP TS 36.xxx Release 8.
  • LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A.
  • LTE technology after 3GPP TS 36.xxx Release 13 is referred to as LTE-A pro.
  • 3GPP NR refers to technology after 3GPP TS 38.xxx.
  • 3GPP 6G may refer to technology after TS Release 17 and/or Release 18.
  • “xxx” refers to a standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • RRC Radio Resource Control
  • Figure 1 is a diagram illustrating an example of physical channels and general signal transmission used in a 3GPP system.
  • a terminal receives information from a base station via the downlink (DL) and transmits it to the base station via the uplink (UL).
  • the information transmitted and received between the base station and the terminal includes data and various control information, and various physical channels exist depending on the type and purpose of the information being transmitted and received.
  • When a terminal is powered on or enters a new cell, it performs an initial cell search operation, such as synchronizing with the base station (S11). To this end, the terminal receives a Primary Synchronization Signal (PSS) and a Secondary Synchronization Signal (SSS) from the base station to synchronize with the base station and obtain information such as a cell ID. Afterwards, the terminal can receive a Physical Broadcast Channel (PBCH) from the base station to obtain broadcast information within the cell. Meanwhile, the terminal can receive a Downlink Reference Signal (DL RS) during the initial cell search phase to check the downlink channel status.
  • PSS Primary Synchronization Signal
  • SSS Secondary Synchronization Signal
  • PBCH Physical Broadcast Channel
  • DL RS Downlink Reference Signal
  • a terminal that has completed initial cell search can obtain more specific system information by receiving a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) based on information contained in the PDCCH (S12).
  • PDCCH physical downlink control channel
  • PDSCH physical downlink shared channel
  • the terminal may perform a random access procedure (RACH) for the base station (S13 to S16).
  • RACH random access procedure
  • the terminal may transmit a specific sequence as a preamble via a physical random access channel (PRACH) (S13 and S15) and receive a response message (RAR (Random Access Response) message) to the preamble via a PDCCH and a corresponding PDSCH.
  • PRACH physical random access channel
  • RAR Random Access Response
  • a contention resolution procedure may additionally be performed (S16).
  • the terminal that has performed the procedure described above can then perform PDCCH/PDSCH reception (S17) and physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) transmission (S18) as general uplink/downlink signal transmission procedures.
  • the terminal can receive downlink control information (DCI) through the PDCCH.
  • DCI downlink control information
  • the DCI includes control information such as resource allocation information for the terminal, and different formats can be applied depending on the purpose of use.
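The initial-access steps S11 through S18 described above can be summarized as an ordered sequence. The sketch below is a paraphrase of the description for reference purposes; the step labels follow this document's S-numbering, the descriptions are condensed from the surrounding text, and none of the names are 3GPP identifiers.

```python
from enum import Enum

class InitialAccess(Enum):
    """Initial access and general signal transmission steps (S11-S18)."""
    S11 = "initial cell search: PSS/SSS synchronization, cell ID, PBCH broadcast info"
    S12 = "system information via PDCCH and the PDSCH it schedules"
    S13 = "random access: preamble transmission via PRACH"
    S14 = "random access response (RAR) via PDCCH and corresponding PDSCH"
    S15 = "additional preamble transmission via PRACH, if needed"
    S16 = "contention resolution"
    S17 = "general DL signal transmission: PDCCH/PDSCH reception"
    S18 = "general UL signal transmission: PUSCH/PUCCH transmission"

for step in InitialAccess:
    print(step.name, "-", step.value)
```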
  • the base station transmits a related signal to the terminal through a downlink channel described below, and the terminal receives the related signal from the base station through a downlink channel described below.
  • PDSCH Physical Downlink Shared Channel
  • PDSCH carries downlink data (e.g., DL-shared channel transport block, DL-SCH TB) and applies modulation methods such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (QAM), 64 QAM, and 256 QAM.
  • Codewords are generated by encoding the TBs.
  • PDSCH can carry multiple codewords. Scrambling and modulation mapping are performed for each codeword, and modulation symbols generated from each codeword are mapped to one or more layers (Layer mapping). Each layer is mapped to resources along with a Demodulation Reference Signal (DMRS), generated as an OFDM symbol signal, and transmitted through the corresponding antenna port.
  • DMRS Demodulation Reference Signal
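As an illustration of the modulation mapping step mentioned above, the following sketch maps bit pairs to Gray-coded QPSK symbols using the NR convention d = ((1 − 2b0) + j(1 − 2b1))/√2. The helper name is hypothetical; the same pattern extends to 16/64/256 QAM with longer bit groups.

```python
import numpy as np

def qpsk_map(bits):
    """Map pairs of bits to unit-energy QPSK symbols:
    d = ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2)."""
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

symbols = qpsk_map([0, 0, 0, 1, 1, 0, 1, 1])
# Every QPSK symbol lies on the unit circle (unit average energy).
assert np.allclose(np.abs(symbols), 1.0)
```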
  • the PDCCH carries downlink control information (DCI) and employs modulation methods such as QPSK.
  • DCI downlink control information
  • a PDCCH consists of 1, 2, 4, 8, or 16 Control Channel Elements (CCEs), depending on the Aggregation Level (AL).
  • CCEs Control Channel Elements
  • Each CCE is comprised of six Resource Element Groups (REGs). Each REG is defined by one OFDM symbol and one (P)RB.
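Given that each CCE comprises six REGs and each REG spans one OFDM symbol by one (P)RB of 12 subcarriers, the resource-element footprint of a PDCCH at each aggregation level follows directly. A quick sketch (constant and function names are illustrative):

```python
REGS_PER_CCE = 6   # each CCE consists of six REGs
RES_PER_REG = 12   # one REG = one OFDM symbol x one (P)RB (12 subcarriers)

def pdcch_resource_elements(aggregation_level):
    """Resource elements spanned by a PDCCH at a given aggregation level."""
    assert aggregation_level in (1, 2, 4, 8, 16)
    return aggregation_level * REGS_PER_CCE * RES_PER_REG

for al in (1, 2, 4, 8, 16):
    print(f"AL {al:2d}: {pdcch_resource_elements(al)} REs")
```

For example, a PDCCH at aggregation level 8 spans 8 × 6 × 12 = 576 resource elements.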
  • the UE obtains DCI transmitted via the PDCCH by performing decoding (also known as blind decoding) on a set of PDCCH candidates.
  • the set of PDCCH candidates decoded by the UE is defined as a PDCCH search space set.
  • the search space set may be a common search space or a UE-specific search space.
  • the UE can obtain DCI by monitoring PDCCH candidates within one or more search space sets established by the MIB or higher layer signaling.
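The blind-decoding loop described above can be caricatured as follows: the UE attempts to decode every PDCCH candidate in its configured search space sets and keeps the ones whose RNTI-masked CRC check passes. The sketch below is a structural illustration only; the CRC/RNTI check is a stand-in placeholder (a hypothetical marker byte), not the actual NR polar decoding and CRC descrambling, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    aggregation_level: int
    index: int
    payload: bytes  # hypothetical decoded bits for this candidate

def crc_matches(payload, rnti):
    """Placeholder for the real check, in which the UE descrambles the
    candidate's CRC with its RNTI and tests it."""
    return payload[:1] == bytes([rnti % 256])

def blind_decode(candidates, rnti):
    """Try every PDCCH candidate in the search space; return the DCIs whose
    (RNTI-masked) CRC check passes."""
    return [c for c in candidates if crc_matches(c.payload, rnti)]

cands = [Candidate(1, 0, b"\x2a\x00\x00"), Candidate(2, 0, b"\x07\x00\x00")]
found = blind_decode(cands, rnti=42)  # 42 == 0x2a, so only the first matches
assert len(found) == 1 and found[0].aggregation_level == 1
```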
  • the terminal transmits a related signal to the base station through the uplink channel described below, and the base station receives the related signal from the terminal through the uplink channel described below.
  • PUSCH Physical Uplink Shared Channel
  • PUSCH carries uplink data (e.g., UL-shared channel transport block, UL-SCH TB) and/or uplink control information (UCI), and is transmitted based on a CP-OFDM (Cyclic Prefix - Orthogonal Frequency Division Multiplexing) waveform, a DFT-s-OFDM (Discrete Fourier Transform - spread - Orthogonal Frequency Division Multiplexing) waveform, etc.
  • CP-OFDM Cyclic Prefix - Orthogonal Frequency Division Multiplexing
  • DFT-s-OFDM Discrete Fourier Transform - spread - Orthogonal Frequency Division Multiplexing
  • PUSCH transmissions can be dynamically scheduled by UL grants in DCI, or semi-statically scheduled (configured grant) based on higher layer (e.g., RRC) signaling (and/or Layer 1 (L1) signaling (e.g., PDCCH)).
  • PUSCH transmissions can be performed in a codebook-based or non-codebook-based manner.
  • PUCCH carries uplink control information (UCI), such as HARQ-ACK and/or scheduling request (SR), and can be divided into multiple PUCCH formats depending on the PUCCH transmission length.
  • new radio access technology (new RAT, NR)
  • As more and more communication devices demand greater communication capacity, the need for improved mobile broadband communication compared to existing radio access technology (RAT) is emerging. Furthermore, massive Machine Type Communications (MTC), which connects numerous devices and objects to provide various services anytime, anywhere, is a key issue to be considered in next-generation communication, as is communication system design that considers reliability- and latency-sensitive services/terminals. The introduction of next-generation radio access technologies that take into account enhanced mobile broadband communication, massive MTC, and Ultra-Reliable and Low Latency Communication (URLLC) is being discussed, and in various embodiments of the present disclosure, these technologies are conveniently referred to as new RAT or NR.
  • FIG. 2 is a diagram illustrating the system structure of a New Generation Radio Access Network (NG-RAN).
  • NG-RAN New Generation Radio Access Network
  • the NG-RAN may include a gNB and/or an eNB that provides user plane and control plane protocol termination to the UE.
  • FIG. 2 illustrates a case where only a gNB is included.
  • the gNB and eNB are connected to each other via an Xn interface.
  • the gNB and eNB are connected to the 5th generation core network (5G Core Network: 5GC) via the NG interface.
  • 5GC 5th generation core network
  • the gNB is connected to the access and mobility management function (AMF) via the NG-C interface
  • the gNB is connected to the user plane function (UPF) via the NG-U interface.
  • AMF access and mobility management function
  • UPF user plane function
  • Figure 3 is a diagram illustrating the functional division between NG-RAN and 5GC.
  • the gNB can provide functions such as inter-cell radio resource management (Inter Cell RRM), radio bearer management (RB control), connection mobility control (Connection Mobility Control), radio admission control (Radio Admission Control), measurement configuration and provision, and dynamic resource allocation.
  • the AMF can provide functions such as NAS security and idle state mobility processing.
  • the UPF can provide functions such as mobility anchoring and PDU processing.
  • SMF Session Management Function
  • Figure 4 is a diagram illustrating an example of a 5G usage scenario.
  • the 5G usage scenario illustrated in FIG. 4 is merely exemplary, and the technical features of various embodiments of the present disclosure can also be applied to other 5G usage scenarios not illustrated in FIG. 4.
  • the three key requirement areas for 5G include (1) enhanced mobile broadband (eMBB), (2) massive machine type communication (mMTC), and (3) ultra-reliable and low latency communications (URLLC).
  • eMBB enhanced mobile broadband
  • mMTC massive machine type communication
  • URLLC ultra-reliable and low latency communications
  • KPI key performance indicator
  • eMBB focuses on improving data speeds, latency, user density, and overall capacity and coverage of mobile broadband connections. It targets throughputs of around 10 Gbps. eMBB significantly exceeds basic mobile internet access, enabling rich interactive experiences, media and entertainment applications in the cloud, and augmented reality. Data is a key driver of 5G, and for the first time, dedicated voice services may not be available in the 5G era. In 5G, voice is expected to be handled as an application, simply using the data connection provided by the communication system. The increased traffic volume is primarily due to the increasing content size and the growing number of applications that require high data rates. Streaming services (audio and video), interactive video, and mobile internet connectivity will become more prevalent as more devices connect to the internet.
  • Cloud storage and applications are rapidly growing on mobile communication platforms, and this can be applied to both work and entertainment.
  • Cloud storage is a particular use case driving the growth of uplink data rates.
  • 5G is also used for remote work in the cloud, requiring significantly lower end-to-end latency to maintain a superior user experience when tactile interfaces are used.
  • cloud gaming and video streaming are other key factors driving the demand for mobile broadband.
  • Entertainment is essential on smartphones and tablets, regardless of location, including in highly mobile environments like trains, cars, and airplanes.
  • Another use case is augmented reality and information retrieval for entertainment, where augmented reality requires extremely low latency and instantaneous data volumes.
  • mMTC is designed to enable communication between a large number of low-cost, battery-powered devices, supporting applications such as smart metering, logistics, and field and body sensors.
  • mMTC targets a battery life of approximately 10 years and/or a population of approximately 1 million devices per square kilometer.
  • mMTC enables seamless connectivity of embedded sensors across all sectors and is one of the most anticipated 5G use cases.
  • the number of IoT devices is projected to reach 20.4 billion by 2020.
  • Industrial IoT is one area where 5G will play a key role, enabling smart cities, asset tracking, smart utilities, agriculture, and security infrastructure.
  • URLLC is ideal for vehicle communications, industrial control, factory automation, remote surgery, smart grids, and public safety applications by enabling devices and machines to communicate with high reliability, very low latency, and high availability.
  • URLLC targets latency on the order of 1 ms.
  • URLLC encompasses new services that will transform industries through ultra-reliable, low-latency links, such as remote control of critical infrastructure and autonomous vehicles. This level of reliability and latency is essential for smart grid control, industrial automation, robotics, and drone control and coordination.
  • 5G can complement fiber-to-the-home (FTTH) and cable-based broadband (or DOCSIS) by delivering streams rated at hundreds of megabits per second to gigabits per second. These high speeds may be required to deliver TV at resolutions beyond 4K (6K, 8K, and beyond), as well as virtual reality (VR) and augmented reality (AR).
  • VR and AR applications include near-immersive sports events. Certain applications may require specialized network configurations. For example, for VR gaming, a gaming company may need to integrate its core servers with the network operator's edge network servers to minimize latency.
  • Automotive is expected to be a significant new driver for 5G, with numerous use cases for in-vehicle mobile communications. For example, passenger entertainment demands both high capacity and high mobile broadband, as future users will consistently expect high-quality connectivity regardless of their location and speed.
  • Another automotive application is augmented reality dashboards.
  • An AR dashboard allows drivers to identify objects in the dark on top of what they see through the windshield. The AR dashboard overlays information to inform the driver about the distance and movement of objects.
  • wireless modules will enable vehicle-to-vehicle communication, information exchange between vehicles and supporting infrastructure, and information exchange between vehicles and other connected devices (e.g., devices accompanying pedestrians).
  • Safety systems can guide drivers to safer driving behaviors, reducing the risk of accidents.
  • the next step will be remotely controlled or autonomous vehicles, which require highly reliable and fast communication between different autonomous vehicles and/or between vehicles and infrastructure.
  • autonomous vehicles will perform all driving tasks, leaving drivers to focus solely on traffic anomalies that the vehicle itself cannot detect.
  • the technological requirements for autonomous vehicles will require ultra-low latency and ultra-high-speed reliability, increasing traffic safety to levels unattainable by humans.
  • Smart cities and smart homes, often referred to as smart societies, will be embedded with dense wireless sensor networks.
  • a distributed network of intelligent sensors will identify conditions for cost- and energy-efficient maintenance of cities or homes. Similar setups can be implemented for individual homes.
  • Temperature sensors, window and heating controllers, burglar alarms, and appliances will all be wirelessly connected. Many of these sensors typically require low data rates, low power, and low cost. However, for example, real-time HD video may be required from certain types of devices for surveillance purposes.
  • Smart grids interconnect these sensors using digital information and communication technologies to collect and act on information. This information can include the behavior of suppliers and consumers, enabling smart grids to improve efficiency, reliability, economic efficiency, sustainable production, and the automated distribution of fuels like electricity. Smart grids can also be viewed as another low-latency sensor network.
  • Telecommunications systems can support telemedicine, which provides clinical care in remote locations. This can help reduce distance barriers and improve access to health services that are otherwise unavailable in remote rural areas. It can also be used to save lives in critical care and emergency situations.
  • Mobile-based wireless sensor networks can provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
  • Wireless and mobile communications are becoming increasingly important in industrial applications. Wiring is expensive to install and maintain. Therefore, the potential to replace cables with reconfigurable wireless links presents an attractive opportunity for many industries. However, achieving this requires wireless connections to operate with similar latency, reliability, and capacity to cables, while simplifying their management. Low latency and extremely low error rates are new requirements for 5G connectivity.
  • Logistics and freight tracking are important use cases for mobile communications, enabling the tracking of inventory and packages anywhere using location-based information systems. Logistics and freight tracking typically require low data rates but may require wide-range and reliable location information.
  • next-generation communications (e.g., 6G)
  • the 6G (wireless communication) system aims to achieve (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) low energy consumption for battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • the vision of the 6G system can be divided into four aspects: intelligent connectivity, deep connectivity, holographic connectivity, and ubiquitous connectivity, and the 6G system can satisfy the requirements as shown in Table 1 below.
  • Table 1 is a table showing an example of the requirements of a 6G system.
  • 6G systems can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine-type communication (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • eMBB enhanced mobile broadband
  • URLLC ultra-reliable low latency communications
  • mMTC massive machine-type communication
  • Figure 5 is a diagram illustrating an example of a communication structure that can be provided in a 6G system.
  • 6G systems are expected to have 50 times the simultaneous wireless connectivity of 5G systems.
  • URLLC, a key feature of 5G, will become even more crucial in 6G communications by providing end-to-end latency of less than 1 ms.
  • 6G systems will have significantly higher volumetric spectral efficiency, compared to the commonly used area spectral efficiency.
  • 6G systems can offer extremely long battery life and advanced battery technologies for energy harvesting, eliminating the need for separate charging for mobile devices in 6G systems.
  • New network characteristics in 6G may include:
  • 6G is expected to integrate with satellites to provide a global mobile network.
  • the integration of terrestrial, satellite, and airborne networks into a single wireless communications system is crucial for 6G.
  • Connected intelligence: unlike previous generations of wireless communication systems, 6G is revolutionary, upgrading the wireless evolution from "connected objects" to "connected intelligence". AI can be applied at every stage of the communication process (or at every signal-processing step, as described below).
  • 6G wireless networks will transfer power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
  • WIET wireless information and energy transfer
  • Small cell networks: the concept of small cell networks was introduced to improve received signal quality in cellular systems by increasing throughput, energy efficiency, and spectral efficiency. Consequently, small cell networks are essential for 5G and beyond-5G (5GB) communication systems. Accordingly, 6G communication systems also adopt the characteristics of small cell networks.
  • High-capacity backhaul: backhaul connections require high-capacity backhaul networks to support high-volume traffic.
  • High-speed fiber optics and free-space optics (FSO) systems may be potential solutions to this problem.
  • High-precision localization (or location-based services) through communications is a key feature of 6G wireless communication systems. Therefore, radar systems will be integrated with 6G networks.
  • Softwarization and virtualization are two critical features that form the foundation of the design process for 5GB networks, ensuring flexibility, reconfigurability, and programmability. Furthermore, billions of devices can share a common physical infrastructure.
  • AI The most crucial and newly introduced technology for 6G systems is AI. 4G systems did not involve AI. 5G systems will support partial or very limited AI. However, 6G systems will fully support AI for automation. Advances in machine learning will create more intelligent networks for real-time communications in 6G. Incorporating AI into communications can streamline and improve real-time data transmission. AI can use numerous analyses to determine how complex target tasks should be performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play a crucial role in machine-to-machine (M2M), machine-to-human, and human-to-machine communications. Furthermore, AI can facilitate rapid communication in brain-computer interfaces (BCIs). AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • M2M machine-to-machine
  • BCIs brain-computer interfaces
  • AI-based physical layer transmission refers to the application of AI-based signal processing and communication mechanisms, rather than traditional communication frameworks, in the fundamental signal processing and communication mechanisms. For example, this may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO mechanisms, and AI-based resource scheduling and allocation.
  • Machine learning can be used for channel estimation and channel tracking, as well as for power allocation and interference cancellation in the physical layer of the downlink (DL). Furthermore, machine learning can be used for antenna selection, power control, and symbol detection in MIMO systems.
  • Deep learning-based AI algorithms require a large amount of training data to optimize training parameters.
  • a large amount of training data is used offline. This means that static training on training data in specific channel environments can lead to conflicts with the dynamic characteristics and diversity of the wireless channel.
  • Machine learning refers to a series of operations that train machines to perform tasks that humans can or cannot perform. Machine learning requires data and a learning model. Data learning methods in machine learning can be broadly categorized into three types: supervised learning, unsupervised learning, and reinforcement learning.
  • Neural network training aims to minimize output error. It involves repeatedly feeding training data into a neural network, calculating the error between the network's output and the target for that data, and backpropagating the error from the output layer to the input layer, updating the weights of each node in the neural network so as to reduce the error.
  • Supervised learning uses labeled training data, while unsupervised learning may not have labeled training data.
  • the training data may be data in which each training data category is labeled.
  • Labeled training data is input to a neural network, and the error can be calculated by comparing the output (categories) of the neural network with the training data labels.
  • the calculated error is backpropagated through the neural network in the backward direction (i.e., from the output layer to the input layer), and the connection weights of each node in each layer of the neural network can be updated through backpropagation.
  • the amount of change in the connection weights of each updated node can be determined by the learning rate.
  • the neural network's calculation of the input data and the backpropagation of the error can constitute a learning cycle (epoch).
  • the learning rate can be applied differently depending on the number of iterations of the neural network's learning cycle. For example, in the early stages of training a neural network, a high learning rate can be used to quickly allow the network to achieve a certain level of performance, thereby increasing efficiency. In the later stages of training, a low learning rate can be used to increase accuracy.
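The effect of such a learning-rate schedule can be sketched with a toy example. Everything here is an illustrative assumption (a one-dimensional quadratic loss standing in for a network, and the specific rates 0.4 and 0.01), not part of this disclosure: a high rate is used in the first half of training and a low rate in the second half.

```python
# Minimal sketch (assumption: a 1-D quadratic loss stands in for a neural
# network) of a high learning rate early in training and a low one later.
def train(epochs=100):
    w, target = 0.0, 3.0          # weight and the value it should reach
    for epoch in range(epochs):
        grad = 2 * (w - target)   # gradient of the loss (w - target)^2
        lr = 0.4 if epoch < epochs // 2 else 0.01  # high early, low late
        w -= lr * grad            # gradient-descent weight update
    return w

print(round(train(), 4))          # converges close to the target 3.0
```

The large early steps move the weight quickly toward the target; the small late steps refine it without overshooting.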
  • Learning methods may vary depending on the characteristics of the data. For example, if the goal is to accurately predict data transmitted by a transmitter in a communication system, supervised learning is preferable to unsupervised learning or reinforcement learning.
  • the learning model corresponds to the human brain; the most basic learning model is a linear model, and the machine learning paradigm that uses highly complex neural network structures, such as artificial neural networks, as learning models is called deep learning.
  • the neural network cores used in learning methods are mainly divided into deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN).
  • DNN deep neural networks
  • CNN convolutional neural networks
  • RNN recurrent neural networks
  • An artificial neural network is an example of a network of multiple perceptrons.
  • a large-scale artificial neural network structure can extend the simplified perceptron structure illustrated in Fig. 6 to apply the input vector to perceptrons of different dimensions. For convenience of explanation, input values or output values are called nodes.
  • the perceptron structure illustrated in Fig. 6 can be explained as consisting of a total of three layers based on input and output values.
  • An artificial neural network in which there are H perceptrons of (d+1) dimensions between the 1st layer and the 2nd layer, and K perceptrons of (H+1) dimensions between the 2nd layer and the 3rd layer can be expressed as in Fig. 7.
  • Figure 7 is a schematic diagram illustrating an example of a multilayer perceptron structure.
  • Figure 8 is a schematic diagram illustrating an example of a deep neural network.
  • the deep neural network illustrated in Figure 8 is a multilayer perceptron consisting of eight hidden layers and an output layer.
  • the multilayer perceptron structure is referred to as a fully connected neural network.
  • In a fully connected neural network, there are no connections between nodes located in the same layer; connections exist only between nodes located in adjacent layers.
  • DNN has a fully connected neural network structure and is composed of a combination of multiple hidden layers and activation functions, and can be usefully applied to identify correlation characteristics between inputs and outputs.
  • the correlation characteristic can mean the joint probability of inputs and outputs.
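A forward pass through such a fully connected structure reduces to repeated weighted sums followed by activation functions. The sketch below is illustrative: the sigmoid activation, layer sizes, and all weight and bias values are assumptions chosen for the example, not values from this disclosure.

```python
import math

# Minimal sketch of a fully connected (DNN-style) forward pass: each layer is
# a weighted sum followed by an activation function. Weights are illustrative.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    # one fully connected layer: weighted sum + activation per output node
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                       # input vector
h = dense(x, [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1])   # hidden layer (2 nodes)
y = dense(h, [[0.7, -0.5]], [0.2])                    # output layer (1 node)
print(len(h), len(y))                                 # 2 1
```

Stacking more `dense` calls gives the multiple hidden layers that characterize a DNN.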
  • Figure 9 is a schematic diagram illustrating an example of a convolutional neural network.
  • Fig. 9 assumes a case where the input nodes are arranged two-dimensionally, with w nodes in width and h nodes in height (the structure of Fig. 9).
  • If a weight is attached to each connection from the input nodes to one hidden-layer node, a total of h×w weights must be considered per hidden node. Since there are h×w nodes in the input layer, a total of h²w² weights are required between two adjacent layers.
  • The fully connected treatment of the structure in Fig. 9 has the problem that the number of weights grows rapidly with the number of connections. Therefore, instead of considering connections between all nodes in adjacent layers, a small filter is assumed to exist, and weighted-sum and activation-function operations are performed on the region where the filter overlaps the input, as in Fig. 10.
  • Figure 10 is a schematic diagram illustrating an example of a filter operation in a convolutional neural network.
  • the above filter performs weighted-sum and activation-function operations while scanning the input layer, moving at fixed horizontal and vertical intervals, and places the resulting output value at the current filter position.
  • This operation method is similar to the convolution operation for images in the field of computer vision, so a deep neural network with this structure is called a convolutional neural network (CNN), and the hidden layer generated as a result of the convolution operation is called a convolutional layer.
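The filter operation described above can be sketched as follows. The image, the 2×2 filter values, and the ReLU activation are illustrative assumptions; the filter slides with stride 1 and no padding, computing one weighted sum per position.

```python
# Minimal sketch of the convolutional filter operation: a small filter scans
# the input, computing a weighted sum at each position (stride 1, no padding),
# followed by a ReLU activation. All values are illustrative.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(max(0, s))        # ReLU activation on the weighted sum
        out.append(row)
    return out

image = [[1, 0, 2],
         [0, 1, 0],
         [2, 0, 1]]
kernel = [[1, 0],
          [0, 1]]                        # 2x2 filter shared across positions
print(conv2d(image, kernel))             # [[2, 0], [0, 2]]
```

Because the same small filter is reused at every position, only kh×kw weights are needed instead of one weight per input-to-hidden connection.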
  • a neural network with multiple convolutional layers is called a deep convolutional neural network (DCNN).
  • a structure in which one element of the data sequence is input at each timestep, and the hidden vector output by the hidden layer at that timestep is input together with the immediately following element of the sequence, is called a recurrent neural network structure.
  • a recurrent neural network is a structure that inputs the elements (x1(t), x2(t), ..., xd(t)) of a data sequence at time point t into a fully connected neural network together with the hidden vectors (z1(t-1), z2(t-1), ..., zH(t-1)) of the immediately preceding time point t-1, and applies a weighted sum and an activation function.
  • the reason for transmitting the hidden vector to the next time point in this way is because the information in the input vectors of the preceding time points is considered to be accumulated in the hidden vector of the current time point.
  • Figure 12 is a diagram schematically illustrating an example of the operating structure of a recurrent neural network.
  • the recurrent neural network operates in a predetermined order of time for the input data sequence.
  • the hidden vector (z1(1), z2(1), ..., zH(1)) is input together with the input vector (x1(2), x2(2), ..., xd(2)) at time point 2, and the hidden-layer vector (z1(2), z2(2), ..., zH(2)) is determined through a weighted sum and an activation function. This process is repeated for time point 3, time point 4, ..., up to time point T.
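The recurrence described above can be sketched as follows. The tanh activation and all weight values are illustrative assumptions; the point is that each step combines the current input x(t) with the previous hidden vector z(t-1) through a weighted sum and an activation function.

```python
import math

# Minimal sketch of the recurrent structure: at each timestep the input vector
# x(t) and the previous hidden vector z(t-1) are combined by a weighted sum
# and an activation function. All weights are illustrative.
def rnn_step(x, z_prev, w_x, w_z, b):
    return [math.tanh(sum(wx * xi for wx, xi in zip(w_x[k], x)) +
                      sum(wz * zi for wz, zi in zip(w_z[k], z_prev)) + b[k])
            for k in range(len(b))]

w_x = [[0.5, -0.2], [0.1, 0.3]]   # input-to-hidden weights
w_z = [[0.4, 0.0], [0.0, 0.4]]    # hidden-to-hidden (recurrent) weights
b = [0.0, 0.0]
z = [0.0, 0.0]                    # initial hidden vector
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):  # sequence of 3 timesteps
    z = rnn_step(x, z, w_x, w_z, b)
print([round(v, 3) for v in z])   # hidden vector after the final timestep
```

The hidden vector carried forward accumulates information from earlier inputs, which is why it is passed to the next time point.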
  • Recurrent neural networks are designed to be useful for processing sequence data (e.g., natural language processing).
  • various deep learning techniques exist, such as DNN, CNN, RNN, Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), and Deep Q-Network, and they can be applied to fields such as computer vision, speech recognition, natural language processing, and speech/signal processing.
  • THz waves, also known as sub-millimeter waves, typically refer to the frequency range between 0.1 THz and 10 THz, with corresponding wavelengths ranging from 0.03 mm to 3 mm.
  • the 100 GHz to 300 GHz band (sub-THz band) is considered a key part of the THz band for cellular communications. Adding the sub-THz band to the mmWave band will increase the capacity of 6G cellular communications.
  • 300 GHz to 3 THz lies in the far infrared (IR) frequency band. While part of the optical band, the 300 GHz to 3 THz band lies at the boundary of the optical band, immediately following the RF band. Therefore, this 300 GHz to 3 THz band exhibits similarities to RF.
  • Figure 13 is a diagram illustrating an example of the electromagnetic spectrum.
  • THz communications: key characteristics include (i) widely available bandwidth to support very high data rates and (ii) high path loss at high frequencies (requiring highly directional antennas).
  • the narrow beamwidths generated by highly directional antennas reduce interference.
  • the small wavelength of THz signals allows for a significantly larger number of antenna elements to be integrated into devices and base stations operating in this band. This enables the use of advanced adaptive array technologies to overcome range limitations.
  • OWC technology is envisioned for 6G communications, in addition to RF-based communications, for all possible device-to-access-network links. These networks also connect to network-to-backhaul/fronthaul links.
  • OWC technology has already been used in 4G communication systems, but it will be used more widely to meet the demands of 6G communication systems.
  • OWC technologies such as light fidelity, visible light communication, optical camera communication, and wideband-based FSO communication are already well-known. Communications based on optical wireless technology can provide very high data rates, low latency, and secure communications.
  • LiDAR can also be used for ultra-high-resolution 4D mapping in 6G communications based on wideband.
  • FSO can be a promising technology for providing backhaul connectivity in 6G systems, in conjunction with fiber-optic networks.
  • FSO supports high-capacity backhaul connectivity for remote and non-remote areas, such as the ocean, space, underwater, and isolated islands.
  • FSO also supports cellular base station (BS) connections.
  • BS base station
  • MIMO technology: one of the key technologies for improving spectral efficiency is the application of MIMO technology. As MIMO technology improves, spectral efficiency also improves. Therefore, massive MIMO technology will be crucial in 6G systems. Because MIMO technology utilizes multiple paths, multiplexing technology and beam generation and operation technologies suitable for the THz band must be considered so that data signals can be transmitted along more than one path.
  • Blockchain will become a crucial technology for managing massive amounts of data in future communication systems.
  • Blockchain is a form of distributed ledger technology.
  • a distributed ledger is a database distributed across numerous nodes or computing devices. Each node replicates and stores an identical copy of the ledger.
  • Blockchains are managed by a peer-to-peer network and can exist without being managed by a central authority or server. Data on a blockchain is collected and organized into blocks. Blocks are linked together and protected using cryptography.
  • Blockchain perfectly complements large-scale IoT with its inherently enhanced interoperability, security, privacy, reliability, and scalability. Therefore, blockchain technology offers several features, such as interoperability between devices, traceability of large amounts of data, autonomous interaction with other IoT systems, and the massive connectivity stability of 6G communication systems.
  • 3D BS will be provided via low-orbit satellites and UAVs. Adding a new dimension in altitude and associated degrees of freedom, 3D connections differ significantly from existing 2D networks.
  • Unsupervised learning and reinforcement learning hold promise in the context of 6G networks. Supervised learning approaches cannot label the massive amounts of data generated by 6G networks, whereas unsupervised learning does not require labeling. Therefore, this technology can be used to autonomously build representations of complex networks. Combining reinforcement learning with unsupervised learning allows truly autonomous network operation.
  • Unmanned Aerial Vehicles will be a key element in 6G wireless communications. In most cases, high-speed wireless connections will be provided using UAV technology.
  • BS entities are installed on UAVs to provide cellular connectivity.
  • UAVs offer specific capabilities not found in fixed BS infrastructure, such as easy deployment, robust line-of-sight links, and controlled mobility. During emergencies such as natural disasters, deploying terrestrial communication infrastructure is not economically feasible, and sometimes, volatile environments make it impossible to provide services. UAVs can easily handle these situations.
  • UAVs will become a new paradigm in wireless communications. This technology facilitates three fundamental requirements for wireless networks: enhanced mobile broadband (eMBB), URLLC, and mMTC.
  • eMBB enhanced mobile broadband
  • URLLC ultra-reliable low latency communications
  • mMTC massive machine-type communication
  • UAVs can also support various purposes, such as enhancing network connectivity, fire detection, disaster emergency services, security and surveillance, pollution monitoring, parking monitoring, and accident monitoring. Therefore, UAV technology is recognized as one of the most important technologies for 6G communications.
  • Tight integration of multiple frequencies and heterogeneous communication technologies is crucial in 6G systems. As a result, users will be able to seamlessly move from one network to another without requiring any manual configuration on their devices. The best network will be automatically selected from available communication technologies. This will break the limitations of the cell concept in wireless communications. Currently, user movement from one cell to another in dense networks results in excessive handovers, resulting in handover failures, handover delays, data loss, and a ping-pong effect. 6G cell-free communications will overcome all of these challenges and provide better QoS. Cell-free communications will be achieved through multi-connectivity and multi-tier hybrid technologies, as well as heterogeneous radios on devices.
  • Autonomous wireless networks are capable of continuously sensing dynamically changing environmental conditions and exchanging information between different nodes.
  • sensing will be tightly integrated with communications to support autonomous systems.
  • Beamforming is a signal processing procedure that adjusts an antenna array to transmit a wireless signal in a specific direction. It is a subset of smart antennas or advanced antenna systems. Beamforming technology offers several advantages, including high signal-to-noise ratio, interference avoidance and rejection, and high network efficiency.
  • Holographic beamforming (HBF) is a novel beamforming method that differs significantly from MIMO systems because it uses software-defined antennas. HBF will be a highly effective approach for efficient and flexible signal transmission and reception in multi-antenna communication devices in 6G.
  • Big data analytics is a complex process for analyzing diverse, large-scale data sets, or "big data.” This process uncovers hidden data, unknown correlations, and customer trends, ensuring complete data management. Big data is collected from various sources, such as video, social networks, images, and sensors. This technology is widely used to process massive amounts of data in 6G systems.
  • THz-band signals propagate in straight lines, which can create many shadow areas behind obstacles.
  • LIS technology, which enables expanded communication coverage, enhanced communication stability, and additional value-added services by installing LIS near these shadow areas, is becoming increasingly important.
  • LIS is an artificial surface made of electromagnetic materials that can alter the propagation of incoming and outgoing radio waves. While LIS can be viewed as an extension of massive MIMO, it differs from massive MIMO in its array structure and operating mechanism. Furthermore, LIS operates as a reconfigurable reflector with passive elements, passively reflecting signals without using active RF chains, which offers the advantage of low power consumption. Furthermore, because each passive reflector in LIS must independently adjust the phase shift of the incoming signal, this can be advantageous for wireless communication channels. By appropriately adjusting the phase shift via the LIS controller, the reflected signal can be collected at the target receiver to boost the received signal power.
  • THz Terahertz
  • THz waves are located between the RF (Radio Frequency)/millimeter (mm) and infrared bands; (i) compared to visible/infrared light, they penetrate non-metallic/non-polar materials well, and (ii) compared to RF/millimeter waves, they have a shorter wavelength, so they exhibit strong straight-line propagation and beams can be focused.
  • Because the photon energy of THz waves is only a few meV, they are harmless to the human body.
  • the frequency bands expected to be used for THz wireless communication may be the D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz), which have low propagation loss due to molecular absorption in the air. Discussions on standardization of THz wireless communication are being centered around the IEEE 802.15 THz working group in addition to 3GPP, and standard documents issued by the IEEE 802.15 Task Group (TG3d, TG3e) may specify or supplement the contents described in various embodiments of the present disclosure. THz wireless communication can be applied to wireless cognition, sensing, imaging, wireless communication, THz navigation, etc.
  • Figure 14 is a diagram illustrating an example of a THz communication application.
  • THz wireless communication scenarios can be categorized into macro networks, micro networks, and nanoscale networks.
  • THz wireless communication can be applied to vehicle-to-vehicle and backhaul/fronthaul connections.
  • THz wireless communication can be applied to fixed point-to-point or multi-point connections, such as indoor small cells, wireless connections in data centers, and near-field communications, such as kiosk downloads.
  • Table 2 below shows examples of technologies that can be used in THz waves.
  • Transceiver Device: available but immature (UTC-PD, RTD and SBD)
  • Modulation and Coding: low-order modulation techniques (OOK, QPSK), LDPC, Reed-Solomon, Hamming, Polar, Turbo
  • Antenna: omni and directional, phased array with a low number of antenna elements
  • Bandwidth: 69 GHz (or 23 GHz) at 300 GHz
  • Channel models: partially available
  • Data rate: 100 Gbps
  • Outdoor deployment: no
  • Free space loss: high
  • Coverage: low
  • Radio measurements: 300 GHz indoor
  • Device size: a few micrometers
  • THz wireless communications can be categorized based on the methods used to generate and receive THz waves.
  • THz generation methods can be categorized as either optical or electronic-based.
  • Fig. 15 is a diagram illustrating an example of an electronic component-based THz wireless communication transmitter and receiver.
  • Methods for generating THz using electronic components include a method using semiconductor components such as a resonant tunneling diode (RTD), a method using a local oscillator and a multiplier, a MMIC (Monolithic Microwave Integrated Circuits) method using an integrated circuit based on a compound semiconductor HEMT (High Electron Mobility Transistor), and a method using a Si-CMOS-based integrated circuit.
  • a multiplier (doubler, tripler, etc.)
  • a multiplier is essential.
  • the multiplier is a circuit whose output frequency is N times its input frequency; it is matched to the desired harmonic frequency, and all other frequencies are filtered out.
  • beamforming can be implemented by applying an array antenna or the like to the antenna of Fig. 15.
  • IF represents intermediate frequency
  • tripler and multiplier represent frequency multipliers
  • PA represents power amplifier
  • LNA low noise amplifier
  • PLL phase-locked loop.
  • FIG. 16 is a diagram illustrating an example of a method for generating a THz signal based on an optical element.
  • Fig. 17 is a diagram illustrating an example of an optical element-based THz wireless communication transceiver.
  • Optical component-based THz wireless communication technology refers to a method of generating and modulating THz signals using optical components.
  • Optical component-based THz signal generation technology generates an ultra-high-speed optical signal using a laser and an optical modulator, and converts it into a THz signal using an ultra-high-speed photodetector. Compared to technologies that use only electronic components, this technology can easily increase the frequency, generate high-power signals, and obtain flat response characteristics over a wide frequency band.
  • optical component-based THz signal generation requires a laser diode, a wideband optical modulator, and an ultra-high-speed photodetector.
  • an EDFA Erbium-Doped Fiber Amplifier
  • a PD Photo Detector
  • an OSA (Optical Sub-Assembly) represents an optical module that integrates various optical communication functions (photoelectric conversion, electro-optical conversion, etc.) into a single component
  • a DSO represents a digital storage oscilloscope.
  • Fig. 18 is a diagram illustrating the structure of a photon source-based transmitter.
  • Figure 19 is a drawing showing the structure of an optical modulator.
  • the phase of a signal can be changed by passing the optical source of a laser through an optical wave guide. At this time, data is loaded by changing the electrical characteristics through a microwave contact, etc. Therefore, the optical modulator output is formed as a modulated waveform.
  • An opto-electrical modulator (O/E converter) can generate THz pulses by optical rectification operation by a nonlinear crystal, photoelectric conversion by a photoconductive antenna, emission from a bunch of relativistic electrons, etc. Terahertz pulses generated in the above manner can have a length in units of femtoseconds to picoseconds.
  • An optical/electronic converter (O/E converter) performs down conversion by utilizing the non-linearity of the device.
  • the available bandwidth can be classified based on the oxygen attenuation of 10⁻² dB/km in the spectrum up to 1 THz. Accordingly, a framework in which the available bandwidth is composed of multiple band chunks can be considered. As an example of the above framework, if the THz pulse length for one carrier is set to 50 ps, the bandwidth (BW) becomes approximately 20 GHz.
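The 50 ps / 20 GHz figure above follows from the rule of thumb that bandwidth is roughly the inverse of the pulse length; the short check below is only a worked illustration of that relation.

```python
# Worked check of the figure in the text: a 50 ps pulse per carrier
# corresponds to a bandwidth of roughly the inverse of the pulse length.
pulse_length_s = 50e-12              # 50 picoseconds
bandwidth_hz = 1 / pulse_length_s    # BW ~ 1 / pulse length
print(round(bandwidth_hz / 1e9, 6))  # 20.0 (GHz)
```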
  • Effective down-conversion from the infrared band (IR band) to the terahertz band (THz band) depends on how to utilize the nonlinearity of the optical/electrical converter (O/E converter).
  • O/E converter optical/electrical converter
  • a terahertz transmission and reception system can be implemented using a single optical-to-electrical converter.
  • the number of optical-to-electrical converters may be equal to the number of carriers. This phenomenon will be particularly noticeable in a multi-carrier system that utilizes multiple broadbands according to the aforementioned spectrum usage plan.
  • a frame structure for the multi-carrier system may be considered.
  • a signal down-converted using an optical-to-electrical converter may be transmitted in a specific resource region (e.g., a specific frame).
  • the frequency region of the specific resource region may include multiple chunks. Each chunk may be composed of at least one component carrier (CC).
  • the present disclosure relates to a method for communicating without encoding information into quantum states in a Quantum Dialogue (QD) protocol in a quantum communication system.
  • QD Quantum Dialogue
  • the present disclosure proposes a protocol for transmitting information after detection without the receiver performing an operation in the quantum dialogue protocol.
  • the Bell states are the simplest example of quantum entanglement and refer to the following four quantum states formed by two qubits in a maximally entangled state: |Φ±⟩ = (|00⟩ ± |11⟩)/√2 and |Ψ±⟩ = (|01⟩ ± |10⟩)/√2. These can be viewed as a maximally entangled basis for the four-dimensional Hilbert space of the two qubits, and are called the Bell basis.
  • FIG. 20 is a diagram illustrating an example of a quantum circuit for generating a bell state in a system applicable to the present disclosure.
  • the Bell state can be generated through a two-qubit quantum circuit consisting of a Hadamard gate and a CNOT gate (controlled not gate), as shown in Fig. 20.
  • Table 3 shows the input/output states of the bell state generation circuit.
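The generation circuit of Fig. 20 can be sketched with a small pure-Python state-vector simulation. This is an illustration under the standard gate conventions (Hadamard then CNOT with the first qubit as control), not the disclosed apparatus: starting from |00⟩, the circuit produces the Bell state (|00⟩ + |11⟩)/√2.

```python
import math

# Pure-Python state-vector sketch of the Bell state generation circuit:
# Hadamard on qubit 0, then CNOT with qubit 0 as control.
def hadamard_q0(state):
    # H on qubit 0 (the most significant bit of the 2-qubit index)
    s = 1 / math.sqrt(2)
    a, b, c, d = state                  # amplitudes of |00>,|01>,|10>,|11>
    return [s * (a + c), s * (b + d), s * (a - c), s * (b - d)]

def cnot(state):
    # CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes
    a, b, c, d = state
    return [a, b, d, c]

state = [1.0, 0.0, 0.0, 0.0]            # input |00>
bell = cnot(hadamard_q0(state))
print([round(x, 4) for x in bell])      # [0.7071, 0.0, 0.0, 0.7071]
```

Feeding the other three computational-basis inputs through the same two gates yields the remaining three Bell states, matching the input/output correspondence of Table 3.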
  • FIG. 21 is a diagram illustrating an example of a bell state measurement circuit in a system applicable to the present disclosure.
  • In Bell state measurement, the goal is to determine which of the four entangled states defined by the Bell states the joint state of two qubits belongs to. If the order of the CNOT gate and the Hadamard gate in the Bell state generation circuit of Fig. 20 is reversed, a Bell state measurement circuit as shown in Fig. 21 is obtained. The measurement results shown in Table 4 can then be obtained for the four entangled states corresponding to the Bell states. Table 4 shows the input and output states of the Bell state measurement circuit.
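The reversed-order circuit can likewise be sketched in pure Python (an illustration under standard gate conventions, not the disclosed apparatus): applying CNOT and then Hadamard maps the Bell state (|00⟩ + |11⟩)/√2 back to |00⟩, so an ordinary computational-basis measurement identifies which Bell state was present.

```python
import math

# Pure-Python sketch of the Bell measurement circuit: CNOT then Hadamard
# (the generation circuit in reverse order) maps each Bell state to a
# distinct computational-basis state.
def cnot(state):
    a, b, c, d = state                  # amplitudes of |00>,|01>,|10>,|11>
    return [a, b, d, c]

def hadamard_q0(state):
    s = 1 / math.sqrt(2)
    a, b, c, d = state
    return [s * (a + c), s * (b + d), s * (a - c), s * (b - d)]

s = 1 / math.sqrt(2)
bell = [s, 0.0, 0.0, s]                 # (|00> + |11>)/sqrt(2)
out = hadamard_q0(cnot(bell))
print([round(x, 4) for x in out])       # [1.0, 0.0, 0.0, 0.0] -> measure |00>
```

Running the other three Bell states through the same two gates yields |01⟩, |10⟩, and |11⟩ respectively, which is the one-to-one correspondence summarized in Table 4.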
  • Quantum teleportation is a technology that transmits quantum information from a sender at a specific location to a receiver a certain distance away. Contrary to the original meaning of the word "teleport", in quantum teleportation the carriers on both sides are fixed: the carriers themselves are not transferred; rather, quantum information is transferred between carriers. This teleportation of information requires an entangled quantum state, or Bell state, which provides statistical correlation between separate physical systems. Since any change experienced by one of the entangled particles causes the other particle to undergo a corresponding change, the two particles behave as if they were in a single quantum state.
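The standard teleportation procedure can be sketched end-to-end in a small pure-Python simulation. The qubit ordering, the gate helpers, and the message amplitudes (0.6, 0.8) are illustrative assumptions: qubit 0 carries the message, qubits 1 and 2 share a Bell pair, the sender performs a Bell measurement on qubits 0 and 1, and the receiver applies X/Z corrections to qubit 2 depending on the two measurement bits.

```python
import math

# Pure-Python sketch of quantum teleportation on a 3-qubit state vector.
# Qubit 0 is the most significant bit of the 3-bit index.
def apply_h(state, q):
    # Hadamard on qubit q
    s, m = 1 / math.sqrt(2), 1 << (2 - q)
    out = [0.0] * 8
    for i, amp in enumerate(state):
        if i & m:
            out[i ^ m] += s * amp       # |1> -> (|0> - |1>)/sqrt(2)
            out[i] += -s * amp
        else:
            out[i] += s * amp           # |0> -> (|0> + |1>)/sqrt(2)
            out[i | m] += s * amp
    return out

def apply_cnot(state, ctrl, tgt):
    # CNOT: flip target bit where the control bit is 1
    cm, tm = 1 << (2 - ctrl), 1 << (2 - tgt)
    return [state[i ^ tm] if i & cm else state[i] for i in range(8)]

a, b = 0.6, 0.8                         # message amplitudes, a^2 + b^2 = 1
s = 1 / math.sqrt(2)
state = [0.0] * 8                       # (a|0> + b|1>) x Bell pair on qubits 1,2
state[0b000] = a * s; state[0b011] = a * s
state[0b100] = b * s; state[0b111] = b * s

state = apply_cnot(state, 0, 1)         # sender: CNOT message -> Bell half
state = apply_h(state, 0)               # sender: Hadamard on message qubit

for m0 in (0, 1):                       # all four possible sender outcomes
    for m1 in (0, 1):
        # post-measurement (unnormalized) amplitudes of the receiver's qubit 2
        amp0 = state[(m0 << 2) | (m1 << 1)]
        amp1 = state[(m0 << 2) | (m1 << 1) | 1]
        if m1:                          # receiver applies X correction
            amp0, amp1 = amp1, amp0
        if m0:                          # receiver applies Z correction
            amp1 = -amp1
        norm = math.hypot(amp0, amp1)
        print(m0, m1, round(amp0 / norm, 3), round(amp1 / norm, 3))
```

Every measurement outcome yields the original amplitudes (0.6, 0.8) on the receiver's qubit after the corrections, illustrating that the message state is recovered without the message carrier itself being transmitted.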
  • Entanglement is a very important property that differentiates quantum systems from classical information. Entanglement refers to a state in which the results of different observations are closely related to each other. The entanglement state of a quantum system is stronger than any entanglement state (correlation) that exists in classical mechanics.
• Two qubits can be represented as a superposition of four fundamental quantum states in Hilbert space. Here, the four fundamental quantum states are {|00⟩, |01⟩, |10⟩, |11⟩}.
• the basic quantum states of two qubits can be expressed through tensor products of the basis states of the individual qubits. If the state of two qubits cannot be expressed as a tensor product of single-qubit states, such a state is called an entangled state.
• the above EPR states are also called Bell states, and the measurement result of the first qubit always determines the measurement outcome of the second qubit. Furthermore, all four of the above pure states are maximally entangled states and constitute an orthogonal basis of the two-qubit Hilbert space.
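The tensor-product criterion above admits a simple numerical test (a sketch under stated conventions, not from the disclosure): a state a|00⟩+b|01⟩+c|10⟩+d|11⟩ factors into single-qubit states exactly when its 2×2 amplitude matrix [[a, b], [c, d]] has rank 1, i.e. ad − bc = 0. The same representation lets us confirm that the four Bell states are mutually orthogonal:

```python
from math import sqrt

h = 1 / sqrt(2)

def is_product(state):
    # a|00> + b|01> + c|10> + d|11> is separable iff ad - bc = 0
    a, b, c, d = state
    return abs(a * d - b * c) < 1e-12

product = [h, h, 0, 0]   # |0> tensor (|0> + |1>)/sqrt(2): separable
bell = {'phi+': [h, 0, 0, h], 'phi-': [h, 0, 0, -h],
        'psi+': [0, h, h, 0], 'psi-': [0, h, -h, 0]}

print(is_product(product))                      # product state
print([is_product(v) for v in bell.values()])   # all entangled

# the four Bell states form an orthogonal basis: all pairwise inner
# products vanish
states = list(bell.values())
dots = [sum(x * y for x, y in zip(states[i], states[j]))
        for i in range(4) for j in range(4) if i < j]
print(all(abs(d) < 1e-12 for d in dots))
```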
• the GHZ state is as follows: |GHZ⟩ = (|000⟩ + |111⟩)/√2; for n qubits, |GHZ_n⟩ = (|0⟩^⊗n + |1⟩^⊗n)/√2.
• the GHZ state can also be expressed by extending it to a d-dimensional system rather than a 2-dimensional one.
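The n-qubit GHZ state can be built by a Hadamard on the first qubit followed by a chain of CNOTs. The sketch below (illustrative; it covers only the 2-level case, not the d-dimensional extension) applies that chain at the amplitude level and recovers (|0…0⟩ + |1…1⟩)/√2:

```python
from math import sqrt

def ghz(n):
    # |GHZ_n> = (|0...0> + |1...1>)/sqrt(2), built by H on qubit 0
    # followed by CNOTs in which qubit i controls qubit i+1
    dim = 2 ** n
    state = [0.0] * dim
    h = 1 / sqrt(2)
    # Hadamard on qubit 0 of |0...0>: (|00...0> + |10...0>)/sqrt(2)
    state[0] = h
    state[1 << (n - 1)] = h
    for i in range(n - 1):
        ctrl, tgt = 1 << (n - 1 - i), 1 << (n - 2 - i)
        for idx in range(dim):
            if idx & ctrl and not idx & tgt:
                j = idx | tgt
                state[idx], state[j] = state[j], state[idx]
    return state

print([round(a, 3) for a in ghz(3)])   # amplitude only at |000> and |111>
```

Note that ghz(2) reduces to the Bell state |Φ+⟩, consistent with the GHZ state being the multi-qubit generalization of the EPR pair.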
  • FIG. 22 is a diagram illustrating an example of three basic properties of quantum information in a system applicable to the present disclosure.
• from the perspective of quantum error-correcting codes, applying them in quantum information systems requires generating codewords, estimating errors in the channel, and then restoring the information during the encoding and restoration process without measuring the information, or at least without measurements that would alter it. In this process, continuously occurring quantum errors are digitized through the measurements used for error estimation.
  • FIG. 23 is a diagram illustrating an example of a quantum teleportation system applicable to the present disclosure.
  • FIG. 24 is a diagram illustrating an example of quantum direct communication in a system applicable to the present disclosure.
• the QDC (quantum direct communication) technology family includes QSDC (quantum secure direct communication), which has the advantage of ensuring high security because no information about the transmitted message is leaked, and includes a two-step QSDC technique that mainly utilizes an entangled light source.
• Two-step QSDC is a technique derived from superdense coding, as shown in Fig. 24, and safely transmits 2 bits of classical information using the four types of entangled photon pairs (EPR pairs) of Mathematical Formula 4 below.
• Superdense coding is a technique that enables classical information to be transmitted using quantum communication.
  • a transmitter can transmit two bits of classical information to a distant receiver using a single qubit through a quantum channel.
  • the transmitter is assumed to possess the first entangled qubit, and the receiver is assumed to possess the second entangled qubit.
  • the qubit that the transmitter wishes to transmit can be in one of four cases: '00', '01', '10', and '11'.
  • the entangled photon pair is not transmitted all at once, but is transmitted in two steps through an upper quantum channel and a lower quantum channel.
  • information on both sides of the entangled photon pair must be known to determine the transmitted information through measurement. Therefore, in the two-step technique, one side of the entangled photon pair is sent first to verify its safety from eavesdropping, and only when safety is guaranteed is the message information to be sent coded and transmitted on the remaining part of the photon pair.
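The superdense-coding step underlying the two-step technique can be sketched numerically (an illustration, not the disclosure's implementation; the operator-to-bit assignment below is one common convention and may differ from Mathematical Formula 4): Alice applies I, X, Z, or both to her half of a shared |Φ+⟩ pair, and Bob's Bell measurement (the circuit of Fig. 21) recovers her two classical bits.

```python
from math import sqrt

h = 1 / sqrt(2)

def encode(bits):
    # shared pair starts in |phi+> = (|00> + |11>)/sqrt(2); Alice applies
    # X to her qubit when bits[1] == '1' and Z when bits[0] == '1'
    st = [h, 0, 0, h]
    if bits[1] == '1':                      # X on qubit 1: swap |0?> <-> |1?>
        st = [st[2], st[3], st[0], st[1]]
    if bits[0] == '1':                      # Z on qubit 1: negate |1?> amplitudes
        st = [st[0], st[1], -st[2], -st[3]]
    return st

def bell_measure(st):
    # Fig. 21 circuit: CNOT, then Hadamard on qubit 1
    st = [st[0], st[1], st[3], st[2]]                     # CNOT
    st = [h * (st[0] + st[2]), h * (st[1] + st[3]),
          h * (st[0] - st[2]), h * (st[1] - st[3])]       # H on qubit 1
    return format(max(range(4), key=lambda i: abs(st[i])), '02b')

for bits in ('00', '01', '10', '11'):
    print(bits, '->', bell_measure(encode(bits)))
```

Each two-bit message maps back to itself at Bob's side, even though only one qubit traveled through the channel after encoding; the second qubit was distributed in advance, which is exactly what the two-step splitting exploits for eavesdropping detection.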
• the present disclosure relates to a method for transmitting information to Alice via a Quantum Dialogue (QD) protocol in which Bob, the receiver, does not perform the information-encoding process that the receiver performs in the conventional QD protocol.
  • the present disclosure proposes a protocol that eliminates the encoding process performed by the receiver and allows secure transmission of information over a classical channel.
  • FIG. 25 is a diagram illustrating an example of the structure of an existing QD protocol in a system applicable to the present disclosure.
  • Alice and Bob are separated from each other and use a quantum channel and a classical channel (authenticated public channel) for communication.
• the EPR pair source is a device that creates an entangled state.
• the quantum memory is a device that can store a quantum state.
• the encoder is a device that can perform the X, Y, Z, and I operations.
• the Bell measurement block is a device that performs Bell measurement.
  • the dotted line indicates that Alice/Bob control the device with an internal signal
  • the red solid line indicates the quantum channel
  • the black solid line indicates the classical channel.
  • the quantum channel is a channel that is not disclosed to eavesdroppers (it can be detected if an attack is performed).
  • the classical channel assumes that Alice and Bob are authenticated to each other but is disclosed to eavesdroppers.
• Alice creates 2(N+δ) EPR pairs, preparing every two adjacent pairs in the same state.
• the superscript indicates the order of the EPR pair, and the subscript indicates the different particles within the pair. For example, the first and second EPR pairs are in the same state.
• the EPR pair used in the protocol is prepared in one of the following Bell states: |Φ±⟩ = (|00⟩ ± |11⟩)/√2, |Ψ±⟩ = (|01⟩ ± |10⟩)/√2.
  • the sequence of Particle 2 is named A 2 .
• if eavesdropping is detected, the protocol is discarded, and if it is determined that there is no eavesdropping, the protocol continues.
  • Step 3 Encoding Alice's information and preparing for the second security check
• Alice applies a unitary operation; it is performed only on the particles with the larger superscripts among the particles in the group.
  • Alice mixes the checking single particles with sequence A 2 and sends them to Bob.
  • Bob measures the checking particles on the same basis as Alice and then checks the error rate to determine whether there is an eavesdropper.
• group 1', group 2', ..., group N'
• Bob applies his operation only on the particles with the larger superscripts among the particles in the group.
  • the secret information exchange in group 1 is as follows.
• Figure 26 is a diagram illustrating the structure of the proposed technique. Compared to Figure 25, Bob's encoder is absent, meaning that detection results are encoded in the form of bit flipping by Bob's computer, rather than optically (that is, encoding through changes in phase, polarization, etc.).
  • Step 1 Create two sequences for communication
• the superscript indicates the order of the EPR pair, and the subscript indicates the different particles within the pair. For example, the first and second EPR pairs are in the same state.
• the EPR pair used in the protocol is prepared in one of the following Bell states: |Φ±⟩ = (|00⟩ ± |11⟩)/√2, |Ψ±⟩ = (|01⟩ ± |10⟩)/√2.
  • Alice picks out particle 1 and names it sequence A 1 .
  • the sequence of Particle 2 is named A 2 .
• if eavesdropping is detected, the protocol is discarded, and if it is determined that there is no eavesdropping, the protocol continues.
  • Step 3 Encoding Alice's information and preparing for the second security check
  • the N groups remaining for Alice and Bob are used to share secret information.
• Alice applies a unitary operation; it is performed only on the particles with the larger superscripts among the particles in the group.
  • Alice mixes the checking single particles with sequence A 2 and sends them to Bob.
  • Bob measures the checking particles on the same basis as Alice and then checks the error rate to determine whether there is an eavesdropper.
  • the secret information exchange in group 1 is as follows.
• Bob, as a result of measuring, knows the initial state prepared by Alice.
• the information encoded by Alice can be determined from the measurement result. If the measurement results of the pairs are 11 and 10, Bob can know that Alice sent 01.
  • Alice knows that Bob will measure 10 based on the initial state and encoding information she prepared, so she can know that Bob's secret message is 11 based on the information 01 that Bob disclosed.
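The bookkeeping of the exchange above reduces to XOR arithmetic on 2-bit Bell-state indices, since applying I/X/Z/XZ XORs the encoded bits onto the index. The following is a hedged classical sketch (the exact bit conventions in the disclosure may differ; the values are chosen to match the worked example above):

```python
def xor2(x, y):
    # bitwise XOR of two 2-bit strings
    return format(int(x, 2) ^ int(y, 2), '02b')

initial = '11'     # Bell-state index Alice prepared (known only to Alice and Bob)
alice_msg = '01'   # Alice's 2-bit secret, encoded as a unitary on the pair
measured = xor2(initial, alice_msg)      # Bob's Bell-measurement result: '10'

# Bob recovers Alice's message from the initial state he learned in step 1
assert xor2(measured, initial) == alice_msg

bob_msg = '11'
disclosed = xor2(measured, bob_msg)      # Bob publishes this classically: '01'
# Alice predicts 'measured' from the initial state and her own encoding,
# so she recovers Bob's secret without any quantum encoding by Bob
assert xor2(disclosed, measured) == bob_msg
print(measured, disclosed)
```

An eavesdropper sees only the disclosed value; without the initial Bell-state index, the XOR mask, neither Alice's nor Bob's message can be recovered.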
• the expected effects of Example 1 are as follows.
• in step 5 of the existing QD protocol, Bob performs a Bell measurement immediately after encoding.
• since encoding is performed at a point where eavesdroppers can no longer intervene, Bob can immediately measure the quantum state received from Alice without encoding it, and Bob's information can be reflected in the disclosed measurement results. Therefore, according to Example 1, by removing the unnecessary encoding at Bob, potential operational errors can be reduced, leading to improved throughput.
  • the proposed QD protocol (Example 2) operates as follows.
• Alice creates 2(N+δ) EPR pairs, preparing every two adjacent pairs in the same state.
  • Superscript indicates the order of the EPR pair, and subscript indicates the different particles within the pair.
• the EPR pair used in the protocol is prepared in one of the following Bell states: |Φ±⟩ = (|00⟩ ± |11⟩)/√2, |Ψ±⟩ = (|01⟩ ± |10⟩)/√2.
  • the sequence of Particle 2 is named A 2 in the same way.
  • Bob measures the sequence A 1 received from Alice in a random basis (X or Z). At this time, the same group is measured in the same basis.
• Bob randomly selects δ groups from A 1 and informs Alice of them through classical communication, disclosing to Alice the measurement results and measurement bases belonging to those δ groups.
  • Alice uses the same basis to measure the corresponding particles of A 2 and compares them with Bob's measurements to determine whether there is eavesdropping on the channel.
• if eavesdropping is detected, the protocol is discarded, and if it is determined that there is no eavesdropping, the protocol continues.
  • Step 3 Encoding Alice's information and preparing for the second security check
  • the N groups remaining for Alice and Bob are used to share secret information.
• Alice arbitrarily selects the groups to be used for sharing secret information and performs unitary operations, applied only to the particles with the larger superscripts among the particles in each selected group.
• N_check = αN (0 < α < 1) is the number of groups used to check for eavesdropping.
• the remaining groups not used for secret information are used for security checking, so arbitrary operations are performed only on the particles with the larger superscripts.
  • Alice reveals the location of the security checking groups among A 2 's groups.
  • Bob communicates with Alice by revealing the measurement results of A 2 .
• suppose the measurement results of group k' are known to Alice and Bob. If the 1-bit information that Bob wants to send is u, Bob discloses a corresponding value through the public channel, and Alice, who can predict the measurement result from the initial state, can see that Bob sent u. Since the eavesdropper does not know the initial state, the eavesdropper cannot learn the information shared between Alice and Bob using only the disclosed information.
  • the secret information exchange in group 1 is as follows.
• Bob, as a result of measuring, knows the initial state prepared by Alice.
• the information encoded by Alice can be determined from the measurement result. If the measurement results of the pairs are 11 and 00, Bob can know that Alice sent 1.
  • Alice knows that Bob will measure 00 based on the initial state and encoding information she prepared, so she can know that Bob's secret message is 0 based on the information 00 that Bob disclosed.
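Example 2's 1-bit disclosure step can be sketched classically under an assumed convention (the disclosure's exact mapping may differ): Bob publishes his measurement result as-is to send u = 0, or bit-flipped to send u = 1, and only Alice, who can predict the true result from the initial state, can tell which case occurred. The `send` helper below is hypothetical:

```python
def send(measured, u):
    # disclose the result unchanged for u = 0, bit-flipped for u = 1
    if u == 0:
        return measured
    return ''.join('1' if b == '0' else '0' for b in measured)

predicted = '00'   # Alice's prediction of Bob's result from the initial state
print(send(predicted, 0))   # disclosed '00' -> Alice decodes u = 0
print(send(predicted, 1))   # disclosed '11' -> Alice decodes u = 1
```

This matches the worked example: Bob disclosed 00, Alice predicted 00, so Bob's secret bit is 0. An eavesdropper who cannot predict the measurement result learns nothing from the disclosed bits.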
• the expected effects of Example 2 are as follows.
• in step 5 of the existing QD protocol, Bob performs a Bell measurement immediately after encoding.
• since encoding is performed at a point where eavesdroppers can no longer intervene, Bob can immediately measure the quantum state received from Alice without encoding it, and Bob's information can be reflected in the disclosed measurement results. Therefore, Example 2 can reduce potential operational errors by eliminating the unnecessary encoding at Bob, thereby improving throughput.
  • this technique first reveals the security checking groups and then encodes the remaining groups, preventing information loss for Bob. Therefore, according to Example 2, retransmission due to information loss is not required.
  • Table 6 shows a comparison of the existing protocol and the protocol proposed according to Example 2.
  • FIG. 27 is a diagram illustrating an example of the structure of an existing QD protocol in a system applicable to the present disclosure.
• (1) EPR pairs are generated and sequence transmission is performed.
• (2) QBER measurement and confirmation of eavesdropping are performed; if eavesdropping exists, the process repeats from step (1). If there is no eavesdropping, (3) Alice's information encoding and transmission of the checking particles are performed. (4) QBER measurement and confirmation of eavesdropping are performed again; if eavesdropping exists, the process repeats from step (1). If there is no eavesdropping, (5) two-way communication is performed through Bob's information encoding and disclosure of the measurement results. After this, quantum communication is terminated.
  • FIG. 28 is a diagram illustrating an example of the structure of a QD protocol (Example 1) proposed in a system applicable to the present disclosure.
• (1) EPR pairs are generated and sequence transmission is performed.
• (2) QBER measurement and confirmation of eavesdropping are performed; if eavesdropping exists, the process repeats from step (1). If eavesdropping does not exist, (3) Alice's information group selection and information encoding are performed. (4) QBER measurement and confirmation of eavesdropping are performed again; if eavesdropping exists, the process repeats from step (1). If eavesdropping does not exist, (5) two-way communication is performed through disclosure of the measurement results. After this, quantum communication is terminated.
  • Various embodiments of the present disclosure propose a QD protocol that does not require any operation for receiver-side information encoding.
  • FIG. 29 is a diagram illustrating an example of the operation process of the first node in a system applicable to the present disclosure.
  • a method performed by a first node in a communication system is provided.
  • the embodiment of FIG. 29 may further include, before step S2901, one or more of the following steps: a step in which the first node receives one or more synchronization signals from the second node; a step in which the first node receives system information from the second node; a step in which the first node receives configuration information from the second node; and a step in which the first node receives control information from the second node.
  • the embodiment of FIG. 29 may further include, before step S2901, one or more of the following steps: a step in which the first node transmits a random access preamble to the second node; a step in which the first node receives a random access response (RAR) from the second node; a step in which the first node transmits a random access message 3 to the second node; and a step in which the first node receives a contention resolution message from the second node.
• Message 3 is the first PUSCH transmission, scheduled by the UL grant included in the RAR.
• in step S2901, the first node receives, from the second node, a first sequence of the first particles, based on the first particles and the second particles constituting EPR (Einstein-Podolsky-Rosen) pairs.
• in step S2902, the first node receives, from the second node, the initial state of the second sequence of the second particles and the combination information of the checking particles.
  • control unit (120) may transmit information stored in the memory unit (130) to an external device (e.g., another communication device) via a wireless/wired interface through the communication unit (110), or store information received from an external device (e.g., another communication device) via a wireless/wired interface in the memory unit (130).
  • the additional element (140) may be configured in various ways depending on the type of the wireless device.
  • the additional element (140) may include at least one of a power unit/battery, an input/output unit (I/O unit), a driving unit, and a computing unit.
  • the wireless device may be implemented in the form of a robot (Fig. 31, 100a), a vehicle (Fig. 31, 100b-1, 100b-2), an XR device (Fig. 31, 100c), a portable device (Fig. 31, 100d), a home appliance (Fig. 31, 100e), an IoT device (Fig.
  • Wireless devices may be mobile or stationary depending on the use/service.
  • various elements, components, units/parts, and/or modules within the wireless device (100, 200) may be entirely interconnected via a wired interface, or at least some may be wirelessly connected via a communication unit (110).
  • the control unit (120) and the communication unit (110) may be wired, and the control unit (120) and a first unit (e.g., 130, 140) may be wirelessly connected via the communication unit (110).
  • each element, component, unit/part, and/or module within the wireless device (100, 200) may further include one or more elements.
  • the control unit (120) may be composed of a set of one or more processors.
• control unit (120) may be composed of a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphics processor, a memory control processor, etc.
  • memory unit (130) may be composed of RAM (Random Access Memory), DRAM (Dynamic RAM), ROM (Read Only Memory), flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
  • FIG 36 illustrates a mobile device applicable to various embodiments of the present disclosure.
  • the mobile device may include a smartphone, a smart pad, a wearable device (e.g., a smartwatch, smartglasses), or a portable computer (e.g., a laptop, etc.).
  • the mobile device may be referred to as a Mobile Station (MS), a User Terminal (UT), a Mobile Subscriber Station (MSS), a Subscriber Station (SS), an Advanced Mobile Station (AMS), or a Wireless Terminal (WT).
  • the portable device (100) may include an antenna unit (108), a communication unit (110), a control unit (120), a memory unit (130), a power supply unit (140a), an interface unit (140b), and an input/output unit (140c).
  • the antenna unit (108) may be configured as a part of the communication unit (110).
  • Blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 35, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., data, control signals, etc.) with other wireless devices and base stations.
  • the control unit (120) can control components of the mobile device (100) to perform various operations.
  • the control unit (120) can include an AP (Application Processor).
  • the memory unit (130) can store data/parameters/programs/codes/commands required for operating the mobile device (100). In addition, the memory unit (130) can store input/output data/information, etc.
  • the power supply unit (140a) supplies power to the mobile device (100) and can include a wired/wireless charging circuit, a battery, etc.
  • the interface unit (140b) can support connection between the mobile device (100) and other external devices.
  • the interface unit (140b) can include various ports (e.g., audio input/output ports, video input/output ports) for connection with external devices.
  • the input/output unit (140c) can input or output video information/signals, audio information/signals, data, and/or information input from a user.
  • the input/output unit (140c) may include a camera, a microphone, a user input unit, a display unit (140d), a speaker, and/or a haptic module.
  • the input/output unit (140c) obtains information/signals (e.g., touch, text, voice, image, video) input by the user, and the obtained information/signals can be stored in the memory unit (130).
  • the communication unit (110) converts the information/signals stored in the memory into wireless signals, and can directly transmit the converted wireless signals to other wireless devices or to a base station.
  • the communication unit (110) can receive wireless signals from other wireless devices or base stations, and then restore the received wireless signals to the original information/signals.
  • the restored information/signals can be stored in the memory unit (130) and then output in various forms (e.g., text, voice, image, video, haptic) through the input/output unit (140c).
  • FIG. 37 illustrates a vehicle or autonomous vehicle applicable to various embodiments of the present disclosure.
  • Vehicles or autonomous vehicles can be implemented as mobile robots, cars, trains, manned or unmanned aerial vehicles (AVs), ships, etc.
  • a vehicle or autonomous vehicle may include an antenna unit (108), a communication unit (110), a control unit (120), a driving unit (140a), a power supply unit (140b), a sensor unit (140c), and an autonomous driving unit (140d).
  • the antenna unit (108) may be configured as a part of the communication unit (110).
  • Blocks 110/130/140a to 140d correspond to blocks 110/130/140 of FIG. 35, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., data, control signals, etc.) with external devices such as other vehicles, base stations (e.g., base stations, road side units, etc.), and servers.
  • the control unit (120) can control elements of the vehicle or autonomous vehicle (100) to perform various operations.
  • the control unit (120) can include an ECU (Electronic Control Unit).
  • the drive unit (140a) can drive the vehicle or autonomous vehicle (100) on the ground.
  • the drive unit (140a) can include an engine, a motor, a power train, wheels, brakes, a steering device, etc.
  • the power supply unit (140b) supplies power to the vehicle or autonomous vehicle (100) and can include a wired/wireless charging circuit, a battery, etc.
  • the sensor unit (140c) can obtain vehicle status, surrounding environment information, user information, etc.
  • the sensor unit (140c) may include an IMU (inertial measurement unit) sensor, a collision sensor, a wheel sensor, a speed sensor, an incline sensor, a weight detection sensor, a heading sensor, a position module, a vehicle forward/backward sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, a pedal position sensor, etc.
  • the autonomous driving unit (140d) may implement a technology for maintaining a driving lane, a technology for automatically controlling speed such as adaptive cruise control, a technology for automatically driving along a set path, a technology for automatically setting a path and driving when a destination is set, etc.
  • the communication unit (110) can receive map data, traffic information data, etc. from an external server.
  • the autonomous driving unit (140d) can generate an autonomous driving route and driving plan based on the acquired data.
  • the control unit (120) can control the drive unit (140a) so that the vehicle or autonomous vehicle (100) moves along the autonomous driving route according to the driving plan (e.g., speed/direction control).
• the communication unit (110) can aperiodically/periodically acquire the latest traffic information data from an external server and can acquire surrounding traffic information data from surrounding vehicles.
  • the sensor unit (140c) can acquire vehicle status and surrounding environment information.
  • the autonomous driving unit (140d) can update the autonomous driving route and driving plan based on newly acquired data/information.
  • the communication unit (110) can transmit information regarding the vehicle location, autonomous driving route, driving plan, etc. to the external server.
  • External servers can predict traffic information data in advance using AI technology or other technologies based on information collected from vehicles or autonomous vehicles, and provide the predicted traffic information data to the vehicles or autonomous vehicles.
  • Figure 38 illustrates a vehicle applicable to various embodiments of the present disclosure.
  • the vehicle may also be implemented as a means of transportation, a train, an aircraft, a ship, or the like.
  • the vehicle (100) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a), and a position measurement unit (140b).
  • blocks 110 to 130/140a to 140b correspond to blocks 110 to 130/140 of FIG. 35, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., data, control signals, etc.) with other vehicles or external devices such as base stations.
  • the control unit (120) can control components of the vehicle (100) to perform various operations.
  • the memory unit (130) can store data/parameters/programs/codes/commands that support various functions of the vehicle (100).
  • the input/output unit (140a) can output AR/VR objects based on information in the memory unit (130).
  • the input/output unit (140a) can include a HUD.
  • the position measurement unit (140b) can obtain position information of the vehicle (100).
  • the position information can include absolute position information of the vehicle (100), position information within a driving line, acceleration information, position information with respect to surrounding vehicles, etc.
  • the position measurement unit (140b) can include GPS and various sensors.
  • the communication unit (110) of the vehicle (100) can receive map information, traffic information, etc. from an external server and store them in the memory unit (130).
  • the location measurement unit (140b) can obtain vehicle location information through GPS and various sensors and store the information in the memory unit (130).
  • the control unit (120) can create a virtual object based on the map information, traffic information, and vehicle location information, and the input/output unit (140a) can display the created virtual object on the vehicle window (1410, 1420).
  • the control unit (120) can determine whether the vehicle (100) is being driven normally within the driving line based on the vehicle location information.
  • control unit (120) can display a warning on the vehicle window through the input/output unit (140a). Additionally, the control unit (120) can broadcast a warning message regarding driving abnormalities to surrounding vehicles through the communication unit (110). Depending on the situation, the control unit (120) can transmit vehicle location information and information regarding driving/vehicle abnormalities to relevant authorities through the communication unit (110).
  • FIG 39 illustrates an XR device applicable to various embodiments of the present disclosure.
  • the XR device may be implemented as an HMD, a head-up display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, and the like.
  • the XR device (100a) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a), a sensor unit (140b), and a power supply unit (140c).
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 35, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., media data, control signals, etc.) with external devices such as other wireless devices, portable devices, or media servers.
  • the media data can include videos, images, sounds, etc.
  • the control unit (120) can control components of the XR device (100a) to perform various operations.
  • the control unit (120) can be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, metadata generation and processing, etc.
  • the memory unit (130) can store data/parameters/programs/codes/commands required for driving the XR device (100a)/generating XR objects.
  • the input/output unit (140a) can obtain control information, data, etc.
  • the input/output unit (140a) can include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module, etc.
  • the sensor unit (140b) can obtain the XR device status, surrounding environment information, user information, etc.
  • the sensor unit (140b) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar.
  • the power supply unit (140c) supplies power to the XR device (100a) and may include a wired/wireless charging circuit, a battery, etc.
  • the memory unit (130) of the XR device (100a) may include information (e.g., data, etc.) required for creating an XR object (e.g., AR/VR/MR object).
• the input/output unit (140a) may obtain a command to operate the XR device (100a) from the user, and the control unit (120) may operate the XR device (100a) according to the user's operating command. For example, when a user attempts to watch a movie, news, etc. through the XR device (100a), the control unit (120) may transmit content request information to another device (e.g., a mobile device (100b)) or a media server through the communication unit (110).
• the communication unit (110) may download/stream content such as movies and news from another device (e.g., a mobile device (100b)) or a media server to the memory unit (130).
  • the control unit (120) controls and/or performs procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing for content, and can generate/output an XR object based on information about surrounding space or real objects acquired through the input/output unit (140a)/sensor unit (140b).
  • the XR device (100a) is wirelessly connected to the mobile device (100b) through the communication unit (110), and the operation of the XR device (100a) can be controlled by the mobile device (100b).
  • the mobile device (100b) can act as a controller for the XR device (100a).
  • the XR device (100a) can obtain three-dimensional position information of the mobile device (100b), and then generate and output an XR object corresponding to the mobile device (100b).
  • Figure 40 illustrates robots applicable to various embodiments of the present disclosure. Robots may be classified into industrial, medical, household, military, and other categories depending on their intended use or field.
  • the robot (100) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a), a sensor unit (140b), and a driving unit (140c).
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 35, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., driving information, control signals, etc.) with external devices such as other wireless devices, other robots, or control servers.
  • the control unit (120) can control components of the robot (100) to perform various operations.
  • the memory unit (130) can store data/parameters/programs/codes/commands that support various functions of the robot (100).
  • the input/output unit (140a) can obtain information from the outside of the robot (100) and output information to the outside of the robot (100).
  • the input/output unit (140a) can include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit (140b) can obtain internal information of the robot (100), surrounding environment information, user information, etc.
  • the sensor unit (140b) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, a radar, etc.
  • the driving unit (140c) may perform various physical operations such as moving the robot joints. In addition, the driving unit (140c) may enable the robot (100) to drive on the ground or fly in the air.
  • the driving unit (140c) may include an actuator, a motor, wheels, brakes, propellers, etc.
  • FIG. 41 illustrates an AI device applicable to various embodiments of the present disclosure.
  • AI devices can be implemented as fixed or mobile devices, such as TVs, projectors, smartphones, PCs, laptops, digital broadcasting terminals, tablet PCs, wearable devices, set-top boxes (STBs), radios, washing machines, refrigerators, digital signage, robots, and vehicles.
  • the AI device (100) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a/140b), a learning processor unit (140c), and a sensor unit (140d).
  • Blocks 110 to 130/140a to 140d correspond to blocks 110 to 130/140 of FIG. 35, respectively.
  • the communication unit (110) can transmit and receive wired and wireless signals (e.g., sensor information, user input, learning models, control signals, etc.) with external devices such as other AI devices (e.g., FIG. W1, 100x, 200, 400) or AI servers (200) using wired and wireless communication technology.
  • the communication unit (110) can transmit information within the memory unit (130) to the external device or transfer a signal received from the external device to the memory unit (130).
  • the control unit (120) may determine at least one executable operation of the AI device (100) based on information determined or generated using a data analysis algorithm or a machine learning algorithm. In addition, the control unit (120) may control components of the AI device (100) to perform the determined operation. For example, the control unit (120) may request, search, receive, or utilize data from the learning processor unit (140c) or the memory unit (130), and may control components of the AI device (100) to perform at least one executable operation, a predicted operation, or an operation determined to be desirable.
  • the control unit (120) may collect history information including the operation contents of the AI device (100) or user feedback on the operation, and store the collected history information in the memory unit (130) or the learning processor unit (140c), or transmit the collected history information to an external device such as an AI server (FIG. W1, 400).
  • the collected history information may be used to update a learning model.
  • the memory unit (130) can store data that supports various functions of the AI device (100).
  • the memory unit (130) can store data obtained from the input unit (140a), data obtained from the communication unit (110), output data of the learning processor unit (140c), and data obtained from the sensor unit (140d).
  • the memory unit (130) can store control information and/or software codes necessary for the operation/execution of the control unit (120).
  • the input unit (140a) can obtain various types of data from the outside of the AI device (100).
  • the input unit (140a) can obtain learning data for model learning, input data to which the learning model will be applied, etc.
  • the input unit (140a) may include a camera, a microphone, and/or a user input unit.
  • the output unit (140b) may generate output related to sight, hearing, or touch.
  • the output unit (140b) may include a display unit, a speaker, and/or a haptic module, etc.
  • the sensor unit (140d) can obtain at least one of internal information of the AI device (100), information about the surrounding environment of the AI device (100), and user information using various sensors.
  • the sensor unit (140d) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar, etc.
  • the claims described in the various embodiments of the present disclosure may be combined in various ways.
  • the technical features of the method claims of the various embodiments of the present disclosure may be combined and implemented as a device, and the technical features of the device claims of the various embodiments of the present disclosure may be combined and implemented as a method.
  • in addition, the technical features of the method claims and the technical features of the device claims of the various embodiments of the present disclosure may be combined and implemented as a device, and may be combined and implemented as a method.
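The history-collection flow described above for the AI device's control unit (120) — collect operation records and user feedback, buffer them in the memory unit, and hand them off toward a server-side learning-model update — can be sketched as follows. This is a minimal illustrative sketch only: the class and method names are assumptions for illustration and are not part of the disclosure.

```python
from collections import deque

class HistoryCollector:
    """Buffers operation/feedback records (the role played here by the
    memory unit (130)) until they are flushed toward an AI server for a
    learning-model update. All names are illustrative, not from the
    disclosure."""

    def __init__(self, capacity=100):
        # Bounded buffer: oldest records are dropped once capacity is hit.
        self.buffer = deque(maxlen=capacity)

    def record(self, operation, feedback=None):
        # History includes the operation contents and any user feedback.
        self.buffer.append({"operation": operation, "feedback": feedback})

    def flush(self):
        # Hand the batch to the transport (e.g., communication unit (110)
        # toward an AI server) and clear the local copy.
        batch = list(self.buffer)
        self.buffer.clear()
        return batch

collector = HistoryCollector(capacity=10)
collector.record("navigate", feedback="ok")
collector.record("grasp", feedback="missed")
batch = collector.flush()
```

The flushed batch is what would be transmitted to the external server and used to update the learning model; the local buffer is then empty until new history accumulates.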

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

According to various embodiments of the present disclosure, a method of operating a first node in a communication system is provided, the method comprising the steps of: receiving at least one synchronization signal from a second node; receiving a first sequence of a first particle from the second node on the basis of the first particle and a second particle constituting an Einstein-Podolsky-Rosen pair (EPR pair); receiving, from the second node, an initial state of a second sequence of the second particle and combination information of a verification particle; and transmitting, to the second node, a first measurement result for the combination information and conversion information obtained by performing a logical operation on specific information.
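The message flow summarized in the abstract (EPR-pair distribution, announcement of initial states, and a logical operation applied to the information to be exchanged) can be illustrated with a purely classical toy simulation. This is a hedged sketch, not the claimed protocol: each EPR pair is modelled as a single correlation bit shared by both nodes, XOR stands in for the logical operation, and the verification-particle (decoy) eavesdropping check is omitted; all function and variable names are illustrative.

```python
import secrets

def quantum_dialogue_sim(msg_a, msg_b):
    """Toy model of a two-way (dialogue) exchange over shared EPR pairs.

    Each pair is reduced to one correlation bit that both nodes know once
    the initial states are announced; a real protocol would interleave
    verification (decoy) particles to detect eavesdropping first.
    """
    n = len(msg_a)
    assert len(msg_b) == n
    # Node B 'prepares' the EPR pairs: one shared correlation bit per pair.
    pairs = [secrets.randbelow(2) for _ in range(n)]
    # Each node encodes its message bits with a logical operation (XOR)
    # against its half of the pair and announces the result publicly.
    announce_a = [m ^ p for m, p in zip(msg_a, pairs)]
    announce_b = [m ^ p for m, p in zip(msg_b, pairs)]
    # Knowing the announced initial states, each node decodes the other's
    # message; an observer who sees only the announcements (without the
    # pair bits) learns nothing about either message.
    recovered_by_a = [b ^ p for b, p in zip(announce_b, pairs)]  # B's message
    recovered_by_b = [a ^ p for a, p in zip(announce_a, pairs)]  # A's message
    return recovered_by_a, recovered_by_b

# Round-trip check: both messages are exchanged in a single pass.
a_out, b_out = quantum_dialogue_sim([1, 0, 1, 1], [0, 0, 1, 0])
```

The round trip shows the dialogue property: each side recovers the other's message from the public announcements plus the shared pair states, which is the structure the abstract's transmission and measurement steps follow.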
PCT/KR2024/001754 2024-02-06 2024-02-06 Apparatus and method for performing a quantum dialogue protocol in a quantum communication system Pending WO2025170090A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/001754 WO2025170090A1 (fr) 2024-02-06 2024-02-06 Apparatus and method for performing a quantum dialogue protocol in a quantum communication system

Publications (1)

Publication Number Publication Date
WO2025170090A1 2025-08-14

Family

ID=96700153

Country Status (1)

Country Link
WO (1) WO2025170090A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8155318B2 (en) * 2004-07-06 2012-04-10 Mitsubishi Electric Corporation Quantum cryptography communication system
WO2023101371A1 (fr) * 2021-12-01 2023-06-08 LG Electronics Inc. Device and method for implementing quantum secure direct communication with reduced complexity in a quantum communication system
US20230188222A1 (en) * 2021-12-02 2023-06-15 Qulabz Inc. Measurement device independent quantum secure direct communication with user authentication
WO2023128603A1 (fr) * 2022-01-03 2023-07-06 LG Electronics Inc. Device and method for detecting and correcting entanglement error with respect to an arbitrary n-qubit entanglement state in a quantum communication system
US20230353348A1 (en) * 2022-04-27 2023-11-02 Cisco Technology, Inc. Systems and methods for providing user authentication for quantum-entangled communications in a cloud environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24924053

Country of ref document: EP

Kind code of ref document: A1