WO2014065651A1 - A system for data throughput - Google Patents
A system for data throughput
- Publication number
- WO2014065651A1 (PCT/MY2013/000171)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- slice
- cache
- packets
- information
- slice identifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9057—Arrangements for supporting packet reassembly or resequencing
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present invention generally relates to a system for data throughput, more particularly the present invention relates to a system for increasing data throughput in a network, comprising a sender node (101) and a receiver node (102) communicating with the sender node (101) for receiving transmitted packets of information, whereby the sender node (101) and the receiver node (102) maintain a dynamic cache (104), and the sender node (101) further comprises an ancillary cache (105).
Description
A SYSTEM FOR DATA THROUGHPUT
TECHNICAL FIELD
The present invention generally relates to a system for data throughput, more particularly the present invention relates to a system for increasing data throughput in a network.
BACKGROUND OF INVENTION
With the increasing number of users on the network, bandwidth utilization has increased significantly in recent years. Bandwidth resources are scarce and limited. Conventionally, an entire packet of data is transmitted across the network. By virtue of that, duplicate or redundant data is also transmitted across the network and consumes large amounts of bandwidth.
Several prior art references disclose systems and methods for increasing throughput across a network. US patent publication no. 2009/0316815 discloses methods for reducing the overall overhead in a wireless communication network, which in turn increases throughput in the wireless network, by scheduling information for first and second portions of user terminals, having less and more desirable channel conditions, in different frames with higher and lower downlink MAP (DL-MAP) repetition respectively.
US patent publication no. 2011/0280195 discloses methods for increasing transmission power in a wireless communications link and introduces a feedback system in which the automatic repeat request (ARQ) block size can be adaptively selected to provide a maximum protocol data unit size that achieves the target packet error rate at the receiver device under low signal-to-noise ratio conditions. The automatic repeat request is reverted to the initial block size once the signal-to-noise ratio improves.
US patent publication no. 2010/0135323 discloses a method of slicing a series of network packets which comprises the steps of obtaining packets from the network and determining one or more protocols used by the packets, then analysing the header to determine the position of the first data payload and creating a modified packet by removing or masking the first data payload based on the determined position.
These prior art references, however, do not address reducing redundant data within a packet by using supporting caches to identify duplicate data, which would maximize bandwidth and improve data throughput. It is therefore important to utilize network bandwidth efficiently so that data throughput can be maximized, reducing costs for the network operator and improving network traffic.
SUMMARY OF INVENTION
The present invention aims to provide a system for increasing data throughput by tackling the problem of transmitting duplicate data during packet transmission at the sender and receiver nodes, in that the present invention teaches detecting duplicate bits in the packets and transmitting smaller bits from which the packets can be reconstructed at the receiver node.
It is an object of the present invention to provide a system for data throughput comprising a sender node, and a receiver node communicating with the sender node for receiving transmitted packets of information.
It is another object of the present invention to provide a map cache, hereinafter referred to as the dynamic cache, maintained at both the sender node and the receiver node for mapping information of the packets for data transmission between the nodes.
It is yet an object of the present invention to provide an ancillary cache to the dynamic cache, comprising a candidate cache and a bit cache, hereinafter referred to as the second cache and the third cache respectively, to detect duplicate packets and continuously update the dynamic cache.
It is yet another object of the present invention to provide the sender node with a first packet slicer for dividing packets into bit slices of predetermined sizes, a corresponding bit generator for generating corresponding slice identifiers for the divided packets, and a packet builder for compiling the divided slices or the corresponding slice identifiers or a combination thereof.
It is a further object of the present invention to provide the receiver node with a second packet slicer for dividing packets received from the sender node into slices, a slice checker for inspecting each of the slices, and a second packet builder for compiling the slices after checking at the slice checker.
It is yet a further object of the present invention to provide a process flow for data throughput in a network at the sender node, and to provide a process flow for data throughput in a network at the receiver node.
Ultimately, the present invention aims to achieve reduced bandwidth utilization in a network link by transmitting across the network only the bits necessary for packet re-creation at the receiver node. Instead of transmitting full packets, the present invention teaches transmitting corresponding bit identifiers, which are significantly smaller, from the sender node, so that the packets can be reconstructed at the receiver node.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 illustrates the system architecture for data throughput in accordance to the present invention.
Figure 2 illustrates the process flow for the operation at the sender node in accordance to the present invention.
Figure 3 illustrates the process flow for the operation at the sender node in accordance to the present invention.
Figure 4 illustrates the process flow for the operation at the receiver node in accordance to the present invention.
Figure 5 illustrates the schematic representation during data throughput in accordance to the present invention.
Figure 6 illustrates the schematic representation during data throughput in accordance to the present invention.
Figure 7 illustrates the schematic representation during data throughput in accordance to the present invention.
Figure 8 illustrates the schematic representation during data throughput in accordance to the present invention.
Figure 9 illustrates the schematic representation during optimum data throughput in accordance to the present invention.
Figure 10 illustrates the flowchart for the overall operation at the sender node in accordance to the present invention.
Figure 11 illustrates the flowchart for the overall operation at the receiver node in accordance to the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Described below are preferred embodiments of the present invention with reference to the accompanying drawings. Each of the following preferred embodiments describes an example not limiting in any aspect.
Referring to Figure 1, the figure illustrates the system architecture for data throughput in accordance to the present invention, comprising a sender node (101) for transmitting packets of information, and a receiver node (102) communicating with the sender node (101) for receiving the transmitted packets of information.
The preferred embodiment of the present invention provides that the sender node (101) and the receiver node (102) maintain a map cache, hereinafter referred to as the dynamic cache (104), at their respective nodes for mapping information of the packets for effective and efficient data transmission between the nodes. The preferred embodiment further comprises at least an ancillary cache (105) for detecting duplicate packets and continuously updating the dynamic cache (104) with information of fresh packets.
The dynamic cache (104) at the sender node (101) stores a predetermined number of divided packets consisting of at least a bit identifier and a corresponding bit, preferably known as the slice identifier (90) and the corresponding slice (91), which form a couple in the present invention. An identical dynamic cache (104) is then maintained at the receiver node (102).
The ancillary cache (105) in the preferred embodiment comprises at least a candidate cache and at least a bit cache, hereinafter referred to as a second cache (202) and a third cache (203) respectively, whereby the second cache (202) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91), of which any one of the slice identifier (90) provides a match for the slice identifier (90) in the dynamic cache (104).
Stored data in the second cache (202) consists of every non-unique or indistinctive slice identifier (90), in that every new entry of a slice identifier (90) matches one of the existing slice identifiers (90) in the third cache (203) but does not yet exist in the second cache (202).
The slice identifier (80) and corresponding slice (81) from the packet are detected as duplicates when matched with any identical slice identifier (90) and corresponding slice (91) existing in the ancillary cache (105).
The third cache (203) stores a predetermined number of bit identifiers, namely at least a slice identifier (90), that do not exist in the second cache (202). Stored data in the third cache (203) consists of every unique or distinctive slice identifier (90), in that every new entry of a slice identifier (90) does not match any of the existing slice identifiers (90) in the third cache (203). Processes for employing the system architecture at the sender node (101) and the receiver node (102) in the preceding description will be more apparent in the following descriptions, whereby each packet transmitted out from the sender node (101) will be divided into slices of information to execute the preferred processes in the present invention.
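As a rough illustration of the cache arrangement just described, the sketch below models the dynamic cache (104), the second or candidate cache (202) and the third or bit cache (203) as bounded lookup tables. This is a minimal sketch in Python and not taken from the patent: the FIFO eviction policy, the capacities and the name BoundedMap are assumptions, since the patent specifies neither a replacement scheme nor cache sizes.

```python
from collections import OrderedDict


class BoundedMap:
    """A lookup table holding at most `capacity` entries, evicting the oldest first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def __contains__(self, key):
        return key in self.entries

    def get(self, key):
        return self.entries.get(key)

    def put(self, key, value=None):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[key] = value

    def pop(self, key):
        return self.entries.pop(key, None)


# Dynamic cache (104): slice identifier -> corresponding slice, mirrored at both nodes.
dynamic_cache = BoundedMap(capacity=4096)
# Second / candidate cache (202): couples whose identifier has already been seen once.
second_cache = BoundedMap(capacity=4096)
# Third / bit cache (203): identifiers seen for the first time, stored without their slices.
third_cache = BoundedMap(capacity=16384)
```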
Referring now to Figure 1 to Figure 3, Figure 2 and 3 illustrate the process flow for the operation at the sender node (101) in accordance to the present invention, whereby the sender node (101) further comprises at least a packet slicer (11), at least a corresponding bit generator (12), and at least a packet builder (13).
It is shown in the figures that the process begins in Figure 2 by first dividing the packets outputted from the egress interface of the sender node (101) using a packet slicer (11). Each of the divided packets is a slice of bit information, herein referred to as the slice (81) in the present invention. A slice identifier (80) is then computed for each slice (81) using the corresponding bit generator (12) and assigned thereto, the two subsisting as a couple for further processing.
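A minimal sketch of how the packet slicer (11) and the corresponding bit generator (12) might operate is given below. The fixed 64-byte slice size and the truncated SHA-1 digest used as the slice identifier (80) are illustrative assumptions only; the patent leaves both the slice size and the identifier function unspecified.

```python
import hashlib
from typing import List, Tuple

SLICE_SIZE = 64   # assumed slice length in bytes
ID_SIZE = 8       # assumed slice-identifier length in bytes


def slice_packet(packet: bytes) -> List[bytes]:
    """Packet slicer (11): divide a packet into fixed-size slices (81)."""
    return [packet[i:i + SLICE_SIZE] for i in range(0, len(packet), SLICE_SIZE)]


def slice_identifier(slice_: bytes) -> bytes:
    """Corresponding bit generator (12): compute a slice identifier (80) for a slice."""
    return hashlib.sha1(slice_).digest()[:ID_SIZE]


def make_couples(packet: bytes) -> List[Tuple[bytes, bytes]]:
    """Return (slice identifier, corresponding slice) couples for further processing."""
    return [(slice_identifier(s), s) for s in slice_packet(packet)]
```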
Prior to transmitting the packet consisting of at least the slice identifier (80) and the corresponding slice (81), the preferred embodiment first checks for identical couples of the slice identifier (80) and the corresponding slice (81) with the dynamic cache (104) at the sender node (101) using a comparison module. For an existing slice identifier (90) and the corresponding slice (91) in the dynamic cache (104), the system then only uses the slice identifier (80) of the couple and compiles the information at the packet builder (13).
For a slice identifier (80) and corresponding slice (81) that do not match any entry in the dynamic cache (104), the preferred system checks for identical couples of the slice identifier (80) and the corresponding slice (81) in the second cache (202) using a comparison module for detecting duplicates, and the system then updates the dynamic cache (104) with every slice identifier (90) and corresponding slice (91) found in the second cache (202) using a cache maker. Meanwhile, the slice identifier (80) and the corresponding slice (81) are sent to the packet builder (13) for compilation.
On every occasion that the slice identifier (80) and the corresponding slice (81) do not match a slice identifier (90) and a corresponding slice (91) in either the dynamic cache (104) or the second cache (202), the system checks for an identical slice identifier (90) in the third cache (203) using a comparison module for detecting duplicates.
When an identical slice identifier (80) is found in the third cache (203), the preferred embodiment creates the slice identifier (90) and the corresponding slice (91) in the second cache (202) using a cache maker, for detecting duplicates in forthcoming transmissions. For a slice identifier (90) not existing in the third cache (203), the preferred embodiment creates a new entry of the slice identifier (90) in the third cache (203) for forthcoming transmissions. Subsequently, the preferred embodiment at the sender node (101) builds all the compiled data information from the foregoing processes into a data packet using a packet builder (13) and transmits the packet to the receiver node (102).
The compiled data information consists of only the slice identifier (80) if the matching corresponding slice (91) exists in the dynamic cache (104), the slice identifier (80) and the corresponding slice (81) if both the matching slice identifier (90) and the corresponding slice (91) exist in the second cache (202), and only the corresponding slice (81) if the corresponding slice (91) does not exist in any one of the dynamic cache (104) and second cache (202).
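The sender-side decision described above can be read as a three-way lookup per couple. The sketch below is one possible interpretation, with plain Python dictionaries and a set standing in for the caches; the tags ID_ONLY, COUPLE and SLICE_ONLY are illustrative markers for what the packet builder (13) would compile, not terminology from the patent.

```python
# Compiled-output tags for the packet builder (13); the names are illustrative only.
ID_ONLY, COUPLE, SLICE_ONLY = "id_only", "couple", "slice_only"


def encode_slice(slice_id: bytes, slice_: bytes,
                 dynamic_cache: dict, second_cache: dict, third_cache: set):
    """Decide what the sender compiles for one (slice identifier, slice) couple."""
    if dynamic_cache.get(slice_id) == slice_:
        # Couple already mapped at both nodes: compile only the slice identifier (80).
        return (ID_ONLY, slice_id)
    if second_cache.get(slice_id) == slice_:
        # Couple known to the candidate cache: promote it into the dynamic cache (104)
        # and compile both the identifier and the slice so the receiver can do the same.
        dynamic_cache[slice_id] = second_cache.pop(slice_id)
        return (COUPLE, slice_id, slice_)
    if slice_id in third_cache:
        # Identifier seen once before: record the full couple in the candidate cache (202).
        third_cache.discard(slice_id)
        second_cache[slice_id] = slice_
    else:
        # First sighting: record the identifier alone in the bit cache (203).
        third_cache.add(slice_id)
    # No usable cache entry: compile only the corresponding slice (81).
    return (SLICE_ONLY, slice_)
```

Under this reading, a couple graduates from the third cache to the second cache on its second sighting and into the dynamic cache on its third, after which only its identifier crosses the link.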
Referring now to Figure 1 and Figure 4, Figure 4 illustrates the process flow for the operation at the receiver node (102) in accordance to the present invention, whereby the receiver node (102) further comprises at least a packet divider (21), at least a slice checker (22), and at least a packet rebuilder (23).
It is shown in Figure 4 that the receiver node (102) receives packet information from an egress interface and divides the packet using a packet divider (21). Upon dividing the packet, the slice checker (22) inspects the divided packets consisting of at least the slice identifier (80) or the corresponding slice (81) or a combination thereof, which is used for matching information with a dynamic cache (104) existing at the receiver node (102).
For the divided packets consisting of only the slice identifier (80), the preferred embodiment utilizes the identical dynamic cache (104) at the receiver node (102) to reconstruct the corresponding slice (81), and forwards the information to the packet rebuilder (23). For the divided packets consisting of the slice identifier (80) and the corresponding slice (81), the preferred embodiment updates the dynamic cache (104) at the receiver node (102) with the fresh packet information, and forwards the information to the packet rebuilder (23).
For the divided packets consisting of only the corresponding slice (81), the preferred embodiment forwards the information to the packet rebuilder (23).
At the packet rebuilder (23), data information compiled from the foregoing processes at the receiver node (102) will be then rebuilt into the required packet data, and then sent to the upper layers in the system of the receiver node (102).
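Mirroring the sender, the receiver-side handling of each divided packet can be sketched as follows, again a hedged illustration using a plain dictionary for the receiver's copy of the dynamic cache (104) and the same illustrative item tags as the sender sketch above.

```python
def decode_item(item, dynamic_cache: dict) -> bytes:
    """Slice checker (22): recover the corresponding slice (81) for one received item."""
    kind = item[0]
    if kind == "id_only":
        # Only the slice identifier (80) arrived: reconstruct from the dynamic cache.
        _, slice_id = item
        return dynamic_cache[slice_id]
    if kind == "couple":
        # Fresh couple: update the receiver's dynamic cache so it stays identical to the sender's.
        _, slice_id, slice_ = item
        dynamic_cache[slice_id] = slice_
        return slice_
    # Slice-only item: forward the corresponding slice (81) unchanged.
    _, slice_ = item
    return slice_


def rebuild_packet(items, dynamic_cache: dict) -> bytes:
    """Packet rebuilder (23): concatenate the recovered slices into the original packet."""
    return b"".join(decode_item(i, dynamic_cache) for i in items)
```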
Referring now to Figures 5 to 9, the figures illustrate the schematic representations during data throughput in accordance to the present invention, whereby at Figure 5 it is shown that the packet data is first intercepted at the egress interface at the sender node (101) for packet slicing.
Still referring to Figure 5, a slice identifier (80) is then computed for each slice (81) using the corresponding bit generator (12) and assigned thereto, the two subsisting as a couple for further processing.
The preferred embodiment first checks for identical couples of the slice identifier (80) and the corresponding slice (81) with the dynamic cache (104) at the sender node (101) using a comparison module. For an existing slice identifier (90) and the corresponding slice (91) in the dynamic cache (104), the system then only uses the slice identifier (80) of the couple and compiles the information first.
In Figure 6, it is shown that for slice identifier (90) and the corresponding slice (91) not existing in the dynamic cache (104), the preferred system checks for identical couples of the slice identifier (80) and the corresponding slice (81) in the second cache (202) using a comparison module for detecting duplicates, and the system then updates the dynamic cache (104) with every slice identifier (90) and the corresponding slice (91) found in the second cache (202) using a cache maker as shown in Figure 7.
For packet slices (91) that do not exist in the second cache (202), every slice (81) is compiled with the slice identifier (80) and the corresponding slice (81) compiled earlier in the preceding process for data transmission to the receiver node (102).
On every occasion that the slice identifier (80) and the corresponding slice (81) do not match a slice identifier (90) and a corresponding slice (91) in either the dynamic cache (104) or the second cache (202), the system checks for an identical slice identifier (90) in the third cache (203) using a comparison module. If the corresponding slice (91) does not exist in any one of the dynamic cache (104) and second cache (202), only the corresponding slice (81) will be compiled. The data information is then received at the receiver node (102), which inspects the transmitted packet.
For the divided packets consisting of only the slice identifier (80), the preferred embodiment utilizes the identical dynamic cache (104) at the receiver node (102) to reconstruct the corresponding slice (81). For the divided packets consisting of the slice identifier (80) and the corresponding slice (81), the preferred embodiment updates the dynamic cache (104) at the receiver node (102) with the fresh packet information, and forwards the corresponding slice (81) to the packet rebuilder (23).
For the divided packets consisting of only the corresponding slice (81), the preferred embodiment forwards the information to the packet rebuilder (23).
At the packet rebuilder (23), data information compiled from the foregoing processes at the receiver node (102) will be then rebuilt into the required packet data, and then sent to the upper layers in the system of the receiver node (102). It is shown in Figure 8 that for a slice identifier (90) not existing in the third cache (203), the preferred embodiment creates a new entry of the slice identifier (90) into the third cache (203) for forthcoming transmissions.
Referring to Figure 9, the figure illustrates the schematic representation during optimum data throughput in accordance to the present invention.
The optimum packet transmission, or best-case scenario, occurs when the compiled packet only runs through the process of checking identical couples of the slice identifier (80) and the corresponding slice (81) with the dynamic cache (104) at the sender node (101) using a comparison module, thereafter yielding only every slice identifier (80) for compilation and subsequent transmission to the receiver node (102).
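As a rough, back-of-the-envelope illustration of this best case, assume 64-byte slices and 8-byte slice identifiers (figures chosen for illustration, not taken from the patent): a fully cached 1500-byte packet would then be represented on the wire by identifiers alone.

```python
import math

PACKET_BYTES = 1500   # assumed packet size
SLICE_BYTES = 64      # assumed slice size
ID_BYTES = 8          # assumed slice-identifier size

slices = math.ceil(PACKET_BYTES / SLICE_BYTES)   # 24 slices
wire_bytes = slices * ID_BYTES                   # 192 bytes of identifiers
print(f"{wire_bytes} bytes transmitted instead of {PACKET_BYTES} "
      f"({wire_bytes / PACKET_BYTES:.0%} of the original)")   # roughly 13%
```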
Referring now to Figure 10, the figure illustrates the flowchart for the overall operation at the sender node (101) in accordance to the present invention, whereby the preferred processes in the present invention begin with dividing packets into slices of information, and computing bit values of each slice (81) for assigning a slice identifier (80) to each slice (81).
The packet is essentially intercepted at the egress interface of the sender node (101) and is checked for duplicate information.
The process then continues with comparing each slice identifier (80) with every slice identifier (90) existing in the dynamic cache (104) for matches, and compiling only the slice identifier (80) when the slice identifier (80) and the corresponding slice (81) match the slice information existing in the dynamic cache (104).
If the slice identifier (80) and the corresponding slice (81) do not match the slice information existing in the dynamic cache (104), the process flow continues by comparing each slice identifier (80) with every slice identifier (90) existing in a second cache (202).
When the slice identifier (80) matches a slice identifier (90) existing in the second cache (202), the dynamic cache (104) is updated with the slice identifier (90) and the corresponding slice (91) by moving the entry to the dynamic cache (104), and consequently compiling the updated slice identifier (80) and the corresponding slice (81) for packet transmission, or rather the next hop. If the slice identifier (80) does not match the slice information existing in the second cache (202), the preferred embodiment compares each slice identifier (80) with every slice identifier (90) existing in a third cache (203), and updates the second cache (202) with the slice identifier (90) and the corresponding slice (91) when the slice identifier (80) matches a slice identifier (90) existing in the third cache (203) by moving the entry to the second cache (202). Consequently, the preferred embodiment uses only the corresponding slice (81) for transmission.
If the slice identifier (80) does not match the slice information existing in the third cache (203), the preferred embodiment creates a new entry of the slice identifier (80) in the third cache (203), and consequently compiles the corresponding slice (81), transmitting the compiled data packet with congruent packet information to the receiver node (102).
The dynamic cache (104) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91), the second cache (202) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91), and the third cache (203) stores a predetermined number of divided packets consisting of at least a slice identifier (90).
All the compiled data information, comprising either at least a slice identifier (80) or at least a corresponding slice (81) or a combination thereof from the foregoing processes, will be built into the required data packet for transmission to the next hop.
Referring now to Figure 11, the figure illustrates the flowchart for the overall operation at the receiver node (102) in accordance to the present invention, whereby the process begins with receiving packets from the sender node (101), intercepted at the egress interface.
The process then continues with dividing the received packets into slices of information for inspecting every slice identifier (80) and the corresponding slice (81) using slice information from the dynamic cache (104) at the receiver node (102), and reconstructing a corresponding slice (81) when the received packets contain only slice identifier (80) by recovering the information from the dynamic cache at the receiver node.
The dynamic cache (104) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91), wherein the dynamic cache (104) at the sender node (101) and the receiver node (102) is identical.
If the packets contain both the slice identifier (80) and the corresponding slice (81) information, the preferred embodiment creates an entry in the dynamic cache (104), updating it with the fresh packet information, and forwards the corresponding slice (81) to the packet rebuilder (23) for compilation. If the packets contain only the corresponding slice (81) information, the preferred embodiment uses the corresponding slice (81) information for compilation.
Consequently, the compiled data information, comprising either at least a slice identifier (80) or at least a corresponding slice (81) or a combination thereof from the foregoing processes, will be rebuilt into the required packet data and then sent to the upper layers in the system.
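To tie the sketches together, the short check below sends the same packet four times through the hypothetical slice_packet, slice_identifier, encode_slice and rebuild_packet helpers sketched earlier (all illustrative names assumed to be in scope, not part of the patent), showing the sender and receiver dynamic caches converging until only identifiers remain on the wire.

```python
# Reuses the illustrative helpers sketched above; assumed behaviour, not the patented method.
sender_dyn, second_cache, third_cache = {}, {}, set()
receiver_dyn = {}

# 24 distinct 64-byte slices form a 1536-byte example payload.
packet = b"".join(bytes([i]) * 64 for i in range(24))

for attempt in range(1, 5):
    items = [encode_slice(slice_identifier(s), s, sender_dyn, second_cache, third_cache)
             for s in slice_packet(packet)]
    assert rebuild_packet(items, receiver_dyn) == packet   # dynamic caches stay in step
    wire = sum(len(part) for item in items for part in item[1:])
    print(f"transmission {attempt}: {wire} bytes on the wire")
# Expected trend under these assumptions: 1536, 1536, 1728 (identifier + slice couples),
# then 192 bytes once every couple sits in both dynamic caches.
```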
In as much as the present invention is subject to many variations, modifications and changes in detail, it is intended that all matter contained in the foregoing description or shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
Claims
1. A system for data throughput in a network comprises:
a sender node (101) for transmitting packets of information; and a receiver node (102) communicating with the sender node (101) for receiving the transmitted packets of information;
characterized in that the sender node (101) and the receiver node (102) maintain a dynamic cache (104) at respective nodes for mapping information of the packets for data transmission between the nodes, whilst at least an ancillary cache (105) detects duplicate packets and continuously updates the dynamic cache (104) with information of fresh packets.
2. A system for data throughput in a network in accordance to claim 1, wherein the sender node (101) further comprises either at least a packet slicer (11) for dividing packets into predetermined sizes of bits, at least a corresponding bit generator (12) for generating bit identifiers to the corresponding divided packets, or at least packet builder (13) for compiling the divided packets or the corresponding bit identifiers or a combination thereof.
3. A system for data throughput in a network in accordance to claim 1, wherein the receiver node (102) further comprises either at least a packet slicer (21) for dividing packets received from the sender node (101), at least slice checker (22) for inspecting each divided packets, or at least a packet rebuilder (23) for compiling the divided packets after checking at the slice checker (22).
4. A system for data throughput in a network in accordance to claim 1, wherein the sender node (101) transmits packets of information through a process comprising the steps of:
dividing packets into slices of information, and computing bit values of each slice (81) for assigning a slice identifier (80) to each slice (81);
comparing each slice identifier (80) with every slice identifier (90) existing in the dynamic cache (104) for matches, and compiling the slice
identifier (80) when the slice identifier (80) and the corresponding slice (81) match the slice information existing in the dynamic cache (104);
comparing each slice identifier (80) with every slice identifier (90) existing in a second cache (202) for mismatches in the dynamic cache (104); updating the dynamic cache (104) with the slice identifier (80) and the corresponding slice (81) when the slice identifier (80) matches a slice identifier (90) existing in the second cache (202), and consequently compiling the updated slice identifier (80) and the corresponding slice (81);
comparing each slice identifier (80) with every slice identifier (90) existing in a third cache (203) for mismatches in the second cache (202); updating the second cache (202) with the slice identifier (80) and the corresponding slice (81) when the slice identifier (80) matches a slice identifier (90) existing in the third cache (203), and consequently compiling the corresponding slice (81) in accord for packet transmission;
creating new entry of the slice identifier (80) into the third cache (203) for mismatches in the third cache (203), and consequently compiling the corresponding slice (81) in accord for packet transmission; and
transmitting compiled packet information to the receiver node (102).
5. A system for data throughput in a network in accordance to claim 4, wherein the dynamic cache (104) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91).
6. A system for data throughput in a network in accordance to claim 4, wherein the second cache (202) stores a predetermined number of divided packets consisting of at least a slice identifier (90) and corresponding slice (91).
7. A system for data throughput in a network in accordance to claim 4, wherein the third cache (203) stores a predetermined number of divided packets consisting of at least a slice identifier (90).
8. A system for data throughput in a network in accordance to claim 4, wherein the receiver node (102) receives the transmitted packets of information through a process comprising the steps of:
receiving packets from the sender node (101);
dividing the received packets into slices of information for inspecting the slice identifier (80) and the corresponding slice (81);
using slice information from a dynamic cache (104) at the receiver node (102) identical to the dynamic cache (104) at the sender node (101) for reconstructing a corresponding slice (81) when the received packets contain only slice identifier (80) information thereof;
compiling the corresponding slice (81) for packets received containing both the slice identifier (80) and the corresponding slice (81) information; creating new entry into the dynamic cache (104) at the receiver node (102) for packets received containing both the slice identifier (80) and the corresponding slice (81) information; and
compiling the corresponding slice (81) for packets received containing the corresponding slice (81) information for forwarding compiled information to upper layers.
9. A system for data throughput in a network in accordance to claim 4, wherein the sender node (101) and the receiver node (102) comprise an egress interface for intercepting the packet.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| MYPI2012700827 | 2012-10-25 | ||
| MYPI2012700827A MY157082A (en) | 2012-10-25 | 2012-10-25 | A system for data throughput |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014065651A1 (en) | 2014-05-01 |
Family
ID=49552394
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/MY2013/000171 (WO2014065651A1, Ceased) | A system for data throughput | 2012-10-25 | 2013-09-27 |
Country Status (2)
| Country | Link |
|---|---|
| MY (1) | MY157082A (en) |
| WO (1) | WO2014065651A1 (en) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4412306A (en) * | 1981-05-14 | 1983-10-25 | Moll Edward W | System for minimizing space requirements for storage and transmission of digital signals |
| US6038231A (en) * | 1997-05-02 | 2000-03-14 | Northern Telecom Limited | Data suppression and regeneration |
| US6285686B1 (en) * | 1998-03-19 | 2001-09-04 | Hewlett-Packard Company | Using page registers for efficient communication |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109104338A (en) * | 2018-11-12 | 2018-12-28 | 北京天融信网络安全技术有限公司 | Link intelligent detection method, storage medium and computer equipment |
| CN109104338B (en) * | 2018-11-12 | 2020-10-16 | 北京天融信网络安全技术有限公司 | Link intelligent detection method, storage medium and computer equipment |
Also Published As
| Publication number | Publication date |
|---|---|
| MY157082A (en) | 2016-04-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| RU2510894C2 (en) | Apparatus and method for generating mac protocol data unit in wireless communication system | |
| EP2932613B1 (en) | Adaptive channel reuse mechanism in communication networks | |
| JP5174161B2 (en) | Medium access control protocol for networks using multi-user radio channels | |
| JP4778066B2 (en) | 4-way handshaking for robust channel estimation and rate prediction | |
| JP4058450B2 (en) | Wireless packet communication method and wireless packet communication device | |
| US20110286377A1 (en) | Method and apparatus for multicast block acknowledgment | |
| US20140010146A1 (en) | Methods and Apparatuses For Transmitting Downlink Control Signaling On Wireless Relay Link | |
| US8908536B2 (en) | Density-based power outage notification transmission scheduling in frequency-hopping networks | |
| US20120188873A1 (en) | Communication system, communication method, receiving apparatus, and transmitting apparatus | |
| US20110093540A1 (en) | Method and system for communications using cooperative helper nodes | |
| KR20160048220A (en) | Apparatus and methods for media access control header compression | |
| Brown et al. | Key performance aspects of an LTE FDD based smart grid communications network | |
| CN101882978A (en) | A method and device for relay station downlink cooperative retransmission | |
| US9350645B2 (en) | Simultaneous acknowledgments for multicast packets | |
| CN105634977A (en) | Method and device for discovering a path maximum transmission unit (PMTU) | |
| CN111163025B (en) | A method and network device for acquiring and configuring subframe structure | |
| EP1981226A1 (en) | Quality of service securing method and apparatus | |
| WO2014065651A1 (en) | A system for data throughput | |
| US12015453B2 (en) | Method and device for transmitting a message | |
| Zhang et al. | Enhancing vehicular internet connectivity using whitespaces, heterogeneity, and a scouting radio | |
| FI20205138A1 (en) | A solution for separating transmissions from different networks | |
| US12342178B2 (en) | Method for detecting neighbouring nodes able to communicate by powerline and by a radio channel | |
| CN112188524B (en) | Method for enhancing transmission performance of wireless ad hoc network link | |
| EP1873995A2 (en) | Data transmission device, data reception device and data communication method | |
| KR20170109539A (en) | Method and apparatus for multicast block acknowledgement |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13788795; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 13788795; Country of ref document: EP; Kind code of ref document: A1 |