WO2014065651A1 - System for data throughput - Google Patents
System for data throughput
- Publication number
- WO2014065651A1 (PCT/MY2013/000171)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- slice
- cache
- packets
- information
- slice identifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9057—Arrangements for supporting packet reassembly or resequencing
Definitions
- The present invention generally relates to a system for data throughput; more particularly, the present invention relates to a system for increasing data throughput in a network.
- US patent publication no. 2009/0316815 discloses methods for reducing the overall overhead in a wireless communication network, which in turn increases throughput in the wireless network, by scheduling information for first and second portions of user terminals with less and more desirable channel conditions in different frames with higher and lower downlink MAP (DL-MAP) repetition, respectively.
- US patent publication no. 2011/0280195 discloses methods for increasing transmission power on the wireless communications link and introduces a feedback system in which the automatic repeat request (ARQ) block size can be adaptively selected to provide a maximum protocol data unit size that achieves the target packet error rate at the receiver device under low signal-to-noise ratio conditions.
- The automatic repeat request will be reverted to the initial block size once the signal-to-noise ratio improves.
- US patent publication no. 2010/0135323 discloses a method of slicing a series of network packets, comprising the steps of obtaining packets from the network and determining one or more protocols used by the packets, then analysing the header to determine the position of the first data payload and creating a modified packet by removing or masking the first data payload based on the determined position.
- The present invention aims to provide a system for increasing data throughput by tackling the problem of transmitting duplicate data during packet transmission at the sender and receiver nodes, in that the present invention teaches to detect duplicate bits in the packets and to transmit smaller bits that can be reconstructed at the receiver node.
- It is an object of the present invention to provide a system for data throughput comprising a sender node, and a receiver node communicating with the sender node for receiving transmitted packets of information.
- The present invention aims to achieve reduced bandwidth utilization in a network link by transmitting across the network only the bits necessary for packet re-creation at the receiver node. Instead of transmitting full packets, the present invention teaches to transmit the corresponding bit identifiers, significantly smaller bits, from the sender node, which can be reconstructed at the receiver node.
- Figure 1 illustrates the system architecture for data throughput in accordance with the present invention.
- Figure 2 illustrates the process flow for the operation at the sender node in accordance with the present invention.
- Figure 3 illustrates the process flow for the operation at the sender node in accordance with the present invention.
- Figure 4 illustrates the process flow for the operation at the receiver node in accordance with the present invention.
- Figure 5 illustrates the schematic representation during data throughput in accordance with the present invention.
- Figure 6 illustrates the schematic representation during data throughput in accordance with the present invention.
- Figure 7 illustrates the schematic representation during data throughput in accordance with the present invention.
- Figure 8 illustrates the schematic representation during data throughput in accordance with the present invention.
- Figure 9 illustrates the schematic representation during optimum data throughput in accordance with the present invention.
- Figure 10 illustrates the flowchart for the overall operation at the sender node in accordance with the present invention.
- Figure 11 illustrates the flowchart for the overall operation at the receiver node in accordance with the present invention.
- Figure 1 illustrates the system architecture for data throughput in accordance with the present invention, comprising a sender node (101) for transmitting packets of information, and a receiver node (102) communicating with the sender node (101) for receiving the transmitted packets of information.
- The preferred embodiment of the present invention provides that the sender node (101) and the receiver node (102) each maintain a map cache, referred to as the dynamic cache (104), at their respective nodes for mapping information of the packets for effective and efficient data transmission between the nodes.
- The preferred embodiment further comprises at least an ancillary cache (105) for detecting duplicate packets and continuously updating the dynamic cache (104) with information of fresh packets.
- The dynamic cache (104) at the sender node (101) stores a predetermined number of divided packets, each consisting of at least a bit identifier and a corresponding bit, known in the present invention as the slice identifier (90) and the corresponding slice (91), together forming a couple.
- An identical dynamic cache (104) is then maintained at the receiver node (102).
- The ancillary cache (105) in the preferred embodiment comprises at least a candidate cache and at least a bit cache, hereinafter referred to as the second cache (202) and the third cache (203) respectively, whereby the second cache (202) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91), any one of which slice identifiers (90) provides a match for the slice identifier (90) in the dynamic cache (104).
- Stored data in the second cache (202) consists of every non-unique or indistinctive slice identifier (90), in that every new entry of a slice identifier (90) matches one of the existing slice identifiers (90) in the third cache (203) but does not yet exist in the second cache (202).
- The slice identifier (80) and corresponding slice (81) from the packet are detected as duplicates when matched with an identical slice identifier (90) and corresponding slice (91) existing in the ancillary cache (105).
- The third cache (203) stores a predetermined number of bit identifiers, namely at least a slice identifier (90), that do not exist in the second cache (202).
- Stored data in the third cache (203) consists of every unique or distinctive slice identifier (90), in that every new entry of a slice identifier (90) does not match any of the existing slice identifiers (90) in the second cache (202).
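The three caches described above can be sketched as bounded maps. This is an illustrative sketch in Python: the oldest-first eviction policy and the capacities are assumptions, since the patent only specifies that each cache holds "a predetermined number" of entries.

```python
from collections import OrderedDict

class BoundedCache(OrderedDict):
    """Map cache holding a predetermined number of entries; the oldest-first
    eviction policy is an assumption -- the patent does not specify one."""
    def __init__(self, capacity):
        super().__init__()
        self.capacity = capacity

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.capacity:
            self.popitem(last=False)  # evict the oldest entry

# Illustrative capacities; the patent leaves the sizes to the implementation.
dynamic_cache = BoundedCache(1024)  # slice identifier (90) -> corresponding slice (91)
second_cache = BoundedCache(1024)   # non-unique couples staged for the dynamic cache
third_cache = BoundedCache(4096)    # identifiers seen once (value unused)
```

The bounded map gives all three caches the same shape; only what is stored in the value slot differs between them.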
- Figures 2 and 3 illustrate the process flow for the operation at the sender node (101) in accordance with the present invention, whereby the sender node (101) further comprises at least a packet slicer (11), at least a corresponding bit generator (12), and at least a packet builder (13).
- The process begins in Figure 2 by first dividing the packets outputted from the egress interface of the sender node (101) using a packet slicer (11).
- Each of the divided packets is a slice of bit information, and herein referred to as the slice (81) in the present invention.
- Each slice (81) is then computed for a slice identifier (80) using the corresponding bit generator (12), which is assigned thereto, the two subsisting as a couple for further processing.
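The slicing and identifier-generation step can be sketched as follows; the 64-byte slice length, the 8-byte identifier width, and the choice of a truncated SHA-256 digest for the corresponding bit generator (12) are all assumptions, as the patent does not specify them.

```python
import hashlib

SLICE_SIZE = 64  # assumed fixed slice length; the patent does not fix one
ID_BYTES = 8     # assumed slice-identifier width

def slice_packet(payload, size=SLICE_SIZE):
    """Packet slicer (11): divide a payload into slices; the last may be shorter."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def slice_identifier(sl):
    """Corresponding bit generator (12): compute a short identifier for a slice.
    A truncated SHA-256 digest is one plausible choice, not the patent's."""
    return hashlib.sha256(sl).digest()[:ID_BYTES]

# Each slice and its identifier subsist as a couple for further processing.
packet = bytes(range(200))
couples = [(slice_identifier(sl), sl) for sl in slice_packet(packet)]
```

Because the identifier is derived deterministically from the slice content, identical slices always produce identical identifiers, which is what makes cache lookups at both nodes agree.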
- Prior to transmitting the packet consisting of at least the slice identifier (80) and the corresponding slice (81), the preferred embodiment first checks for identical couples of the slice identifier (80) and the corresponding slice (81) in the dynamic cache (104) at the sender node (101) using a comparison module. For an existing slice identifier (90) and corresponding slice (91) in the dynamic cache (104), the system then uses only the slice identifier (80) of the couple and compiles the information at the packet builder (13).
- The preferred system checks for identical couples of the slice identifier (80) and the corresponding slice (81) in the second cache (202) using a comparison module for detecting duplicates; thereafter the system updates the dynamic cache (104) with every slice identifier (90) and corresponding slice (91) found in the second cache (202) using a cache maker. Meanwhile, the slice identifier (80) and the corresponding slice (81) are sent to the packet builder (13) for compilation.
- The system checks for an identical slice identifier (90) in the third cache (203) using a comparison module for detecting duplicates.
- When an identical slice identifier (80) is found in the third cache (203), the preferred embodiment creates the slice identifier (90) and the corresponding slice (91) in the second cache (202) using a cache maker for detecting duplicates in forthcoming transmissions. For a slice identifier (90) not existing in the third cache (203), the preferred embodiment creates a new entry of the slice identifier (90) in the third cache (203) for forthcoming transmissions. Subsequently, the preferred embodiment at the sender node (101) builds all the compiled data information from the foregoing processes into a data packet using a packet builder (13) and transmits the packet to the receiver node (102).
- The compiled data information consists of only the slice identifier (80) if the matching corresponding slice (91) exists in the dynamic cache (104); the slice identifier (80) and the corresponding slice (81) if both the matching slice identifier (90) and corresponding slice (91) exist in the second cache (202); and only the corresponding slice (81) if the corresponding slice (91) exists in neither the dynamic cache (104) nor the second cache (202).
- Figure 4 illustrates the process flow for the operation at the receiver node (102) in accordance with the present invention, whereby the receiver node (102) further comprises at least a packet divider (21), at least a slice checker (22), and at least a packet rebuilder (23).
- The receiver node (102) receives packet information from an egress interface and divides the packet using a packet divider (21). Upon dividing the packet, the slice checker (22) inspects the divided packets, consisting of at least the slice identifier (80) or the corresponding slice (81) or a combination thereof, which is used for matching information with the dynamic cache (104) existing at the receiver node (102).
- For the divided packets consisting of only the slice identifier (80), the preferred embodiment utilizes the identical dynamic cache (104) at the receiver node (102) to reconstruct the corresponding slice (81) and forwards the information to the packet rebuilder (23). For the divided packets consisting of the slice identifier (80) and the corresponding slice (81), the preferred embodiment updates the dynamic cache (104) at the receiver node (102) with the fresh packet information and forwards the information to the packet rebuilder (23).
- For the divided packets consisting of only the corresponding slice (81), the preferred embodiment forwards the information to the packet rebuilder (23).
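The receiver-side dispatch of the slice checker (22) can be sketched correspondingly; the field tags mirror the illustrative sender-side layout and are an assumption.

```python
def check_field(field, dynamic_cache):
    """Slice checker (22): map one received field back to its corresponding
    slice, updating the receiver's dynamic cache so that it stays identical
    to the sender's. Caches are modeled as plain dicts."""
    tag = field[0]
    if tag == "id":                   # identifier only: recover the cached slice
        return dynamic_cache[field[1]]
    if tag == "both":                 # identifier and slice: learn the fresh couple
        dynamic_cache[field[1]] = field[2]
        return field[2]
    return field[1]                   # slice only: forward as-is
```

Because every "both" field updates the receiver's cache exactly as the sender updated its own, the two dynamic caches remain identical, which is what later "id"-only fields rely on.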
- Figures 5 to 8 illustrate the schematic representations during data throughput in accordance with the present invention, whereby at Figure 5 it is shown that the packet data is first intercepted at the egress interface of the sender node (101) for packet slicing.
- Each slice (81) is then computed for a slice identifier (80) using the corresponding bit generator (12) for assignment thereto, the two subsisting as a couple for further processing.
- The preferred embodiment first checks for identical couples of the slice identifier (80) and the corresponding slice (81) in the dynamic cache (104) at the sender node (101) using a comparison module. For an existing slice identifier (90) and corresponding slice (91) in the dynamic cache (104), the system then uses only the slice identifier (80) of the couple and compiles the information first.
- Every slice (81) is compiled with the slice identifier (80) and the corresponding slice (81) compiled earlier in the preceding process for data transmission to the receiver node (102).
- The system checks for an identical slice identifier (90) in the third cache (203) using a comparison module. If the corresponding slice (91) exists in neither the dynamic cache (104) nor the second cache (202), only the corresponding slice (81) will be compiled. The data information is then received at the receiver node (102), which inspects the transmitted packet. For the divided packets consisting of only the slice identifier (80), the preferred embodiment utilizes the identical dynamic cache (104) at the receiver node (102) to reconstruct the corresponding slice (81).
- For the divided packets consisting of the slice identifier (80) and the corresponding slice (81), the preferred embodiment updates the dynamic cache (104) at the receiver node (102) with the fresh packet information and forwards the corresponding slice (81) to the packet rebuilder (23).
- For the divided packets consisting of only the corresponding slice (81), the preferred embodiment forwards the information to the packet rebuilder (23).
- Figure 9 illustrates the schematic representation during optimum data throughput in accordance with the present invention.
- The optimum packet transmission, or best-case scenario, occurs when the compiled packet only runs through the process of checking identical couples of the slice identifier (80) and the corresponding slice (81) against the dynamic cache (104) at the sender node (101) using a comparison module, thereafter yielding only every slice identifier (80) for compilation and subsequent transmission to the receiver node (102).
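A quick arithmetic check of this best case, using assumed sizes (64-byte slices and 8-byte slice identifiers; the patent fixes neither):

```python
SLICE_SIZE = 64    # assumed slice length
ID_BYTES = 8       # assumed identifier width
PACKET_LEN = 1536  # illustrative payload, an exact multiple of the slice size

full_cost = PACKET_LEN                             # transmitting every slice in full
best_cost = (PACKET_LEN // SLICE_SIZE) * ID_BYTES  # every slice already cached
# best_cost / full_cost == 0.125, an eightfold reduction under these assumptions
```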
- Figure 10 illustrates the flowchart for the overall operation at the sender node (101) in accordance with the present invention, whereby the preferred processes in the present invention begin with dividing packets into slices of information and computing bit values of each slice (81) for assigning a slice identifier (80) to each slice (81).
- The packet is essentially intercepted at the egress interface of the sender node (101) and is checked for duplicate information.
- The process then continues with comparing each slice identifier (80) with every slice identifier (90) existing in the dynamic cache (104) for matches, and compiling only the slice identifier (80) when the slice identifier (80) and the corresponding slice (81) match the slice information existing in the dynamic cache (104).
- The process flow continues by comparing each slice identifier (80) with every slice identifier (90) existing in a second cache (202).
- When the slice identifier (80) matches a slice identifier (90) existing in the second cache (202), the dynamic cache (104) is updated with the slice identifier (90) and the corresponding slice (91) by moving the entry to the dynamic cache (104), and the updated slice identifier (80) and corresponding slice (81) are consequently compiled for packet transmission to the next hop.
- The preferred embodiment compares each slice identifier (80) with every slice identifier (90) existing in a third cache (203), and updates the second cache (202) with the slice identifier (90) and the corresponding slice (91) when the slice identifier (80) matches a slice identifier (90) existing in the third cache (203), by moving the entry to the second cache (202). Consequently, the preferred embodiment uses only the corresponding slice (81) for transmission.
- For a slice identifier (80) not found in the third cache (203), the preferred embodiment creates a new entry of the slice identifier (80) in the third cache (203) and consequently compiles the corresponding slice (81), further transmitting the compiled data packet with congruent packet information to the receiver node (102).
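The sender flow above can be sketched as one pass over the couples. The cache promotion order (third cache to second cache to dynamic cache) follows the description, while the data structures and field tags are illustrative assumptions.

```python
def sender_pass(couples, dynamic, second, third):
    """One pass of the sender flowchart for a list of (identifier, slice)
    couples: a dynamic-cache hit compiles the identifier only; a second-cache
    hit moves the entry to the dynamic cache and compiles both; a third-cache
    hit moves the identifier to the second cache and compiles the slice;
    otherwise the identifier is recorded in the third cache and the slice is
    compiled. Caches are a dict, a dict, and a set; capacities are omitted."""
    compiled = []
    for sid, sl in couples:
        if sid in dynamic:
            compiled.append(("id", sid))
        elif sid in second:
            dynamic[sid] = second.pop(sid)  # promote couple to the dynamic cache
            compiled.append(("both", sid, sl))
        elif sid in third:
            third.discard(sid)
            second[sid] = sl                # promote identifier to the second cache
            compiled.append(("slice", sl))
        else:
            third.add(sid)                  # new entry for forthcoming transmissions
            compiled.append(("slice", sl))
    return compiled
```

Running the same couple through four passes shows the staged promotion: slice only, slice only (now staged in the second cache), identifier plus slice (promoted to the dynamic cache), and finally identifier only.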
- The dynamic cache (104) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91).
- The second cache (202) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91).
- The third cache (203) stores a predetermined number of divided packets consisting of at least a slice identifier (90).
- All the compiled data information, comprising either at least a slice identifier (80) or at least a corresponding slice (81) or a combination thereof from the foregoing processes, will be built into the required data packet for transmission to the next hop.
- Figure 11 illustrates the flowchart for the overall operation at the receiver node (102) in accordance with the present invention, whereby the process begins with receiving packets from the sender node (101), intercepted at the egress interface.
- The process then continues with dividing the received packets into slices of information, inspecting every slice identifier (80) and the corresponding slice (81) using slice information from the dynamic cache (104) at the receiver node (102), and reconstructing a corresponding slice (81) when the received packets contain only the slice identifier (80) by recovering the information from the dynamic cache at the receiver node.
- The dynamic cache (104) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91), wherein the dynamic cache (104) at the sender node (101) and the receiver node (102) is identical.
- If the packets contain the slice identifier (80) and the corresponding slice (81), the preferred embodiment creates an entry in the dynamic cache (104), thereby updating the dynamic cache (104) with the fresh packet information for forthcoming processes, and forwards the corresponding slice (81) to the packet rebuilder (23) for compilation. If the packets contain only the corresponding slice (81) information, the preferred embodiment uses the corresponding slice (81) information for compilation.
- The compiled data information, comprising either at least a slice identifier (80) or at least a corresponding slice (81) or a combination thereof from the foregoing processes, will then be rebuilt into the required packet data and sent to the upper layers in the system.
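Putting the two halves together, a minimal round-trip sketch. For brevity this version keeps only the dynamic cache (the second/third-cache staging is omitted) and learns fresh couples immediately at both nodes; the identifier scheme and the sizes are assumptions.

```python
import hashlib

def sid(sl):
    return hashlib.sha256(sl).digest()[:8]  # illustrative 8-byte identifier

def send(payload, cache, size=64):
    """Simplified sender node (101): slice, then send either the identifier
    alone (duplicate slice) or the couple (fresh slice, learned on the fly)."""
    fields = []
    for i in range(0, len(payload), size):
        sl = payload[i:i + size]
        key = sid(sl)
        if key in cache:
            fields.append(("id", key))
        else:
            cache[key] = sl
            fields.append(("both", key, sl))
    return fields

def receive(fields, cache):
    """Simplified receiver node (102): rebuild the payload, mirroring every
    cache update so both dynamic caches stay identical."""
    out = []
    for f in fields:
        if f[0] == "id":
            out.append(cache[f[1]])
        else:
            cache[f[1]] = f[2]
            out.append(f[2])
    return b"".join(out)
```

On a repeated payload the second transmission carries only 8-byte identifiers per 64-byte slice, an eightfold reduction under the assumed sizes, while the receiver still rebuilds the payload exactly.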
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention generally relates to a system for data throughput, and more particularly to a system for increasing data throughput in a network. The system comprises a sender node (101) and a receiver node (102) communicating with the sender node (101) for receiving the transmitted packets of information, the sender node (101) and the receiver node (102) maintaining a dynamic cache (104), and the sender node (101) further comprising an ancillary cache (105).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| MYPI2012700827A MY157082A (en) | 2012-10-25 | 2012-10-25 | A system for data throughput |
| MYPI2012700827 | 2012-10-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014065651A1 true WO2014065651A1 (fr) | 2014-05-01 |
Family
ID=49552394
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/MY2013/000171 Ceased WO2014065651A1 (fr) | 2012-10-25 | 2013-09-27 | Système de maniement de la quantité de données |
Country Status (2)
| Country | Link |
|---|---|
| MY (1) | MY157082A (fr) |
| WO (1) | WO2014065651A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4412306A (en) * | 1981-05-14 | 1983-10-25 | Moll Edward W | System for minimizing space requirements for storage and transmission of digital signals |
| US6038231A (en) * | 1997-05-02 | 2000-03-14 | Northern Telecom Limited | Data suppression and regeneration |
| US6285686B1 (en) * | 1998-03-19 | 2001-09-04 | Hewlett-Packard Company | Using page registers for efficient communication |
- 2012-10-25: MY application MYPI2012700827 filed (patent MY157082A, status unknown)
- 2013-09-27: PCT application PCT/MY2013/000171 filed (publication WO2014065651A1, ceased)
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109104338A * | 2018-11-12 | 2018-12-28 | 北京天融信网络安全技术有限公司 | Intelligent link detection method, storage medium and computer device |
| CN109104338B * | 2018-11-12 | 2020-10-16 | 北京天融信网络安全技术有限公司 | Intelligent link detection method, storage medium and computer device |
Also Published As
| Publication number | Publication date |
|---|---|
| MY157082A (en) | 2016-04-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| RU2510894C2 | Apparatus and method for forming a MAC protocol data unit in a wireless communication system | |
| EP2932613B1 | Adaptive channel reuse mechanism in communication networks | |
| US8532015B2 | Methods and apparatuses for transmitting downlink control signaling on wireless relay link | |
| JP5174161B2 | Medium access control protocol for networks using multi-user wireless channels | |
| JP4778066B2 | Four-way handshake for robust channel estimation and rate prediction | |
| US11856588B2 | Terminal and communication method with two step downlink control information | |
| JP4058450B2 | Wireless packet communication method and wireless packet communication apparatus | |
| US20110286377A1 | Method and apparatus for multicast block acknowledgment | |
| US20120188873A1 | Communication system, communication method, receiving apparatus, and transmitting apparatus | |
| US20110093540A1 | Method and system for communications using cooperative helper nodes | |
| BRPI0706595A2 | Radio communication apparatus and retransmission packet transmission method | |
| KR20160048220A | Apparatus and method for media access control header compression | |
| US9832242B2 | Simultaneous acknowledgments for multicast packets | |
| CN101882978A | Method and apparatus for downlink cooperative retransmission at a relay station | |
| CN111163025B | Method and network device for obtaining and configuring a subframe structure | |
| EP1981226A1 | Method and apparatus for securing quality of service | |
| WO2014065651A1 | System for data throughput | |
| FI20205138A1 | Solution for separating transmissions from different networks | |
| US12342178B2 | Method for detecting neighbouring nodes able to communicate by powerline and by a radio channel | |
| EP3932112A1 | Mode selection for communication in a mesh network | |
| CN112188524B | Method for enhancing link transmission performance in a wireless ad hoc network | |
| JP2018506220A | Method and apparatus for multicast block acknowledgment | |
| EP1873995A2 | Data transmission device, data reception device, and data communication method | |
| CN121000346A | Data transmission method and data transmission apparatus | |
| Arango et al. | Compressing MAC headers on shared wireless media | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13788795; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 13788795; Country of ref document: EP; Kind code of ref document: A1 |