CN103309940B - Method for sorting an out-of-order data stream - Google Patents
Method for sorting an out-of-order data stream
- Publication number
- CN103309940B (application CN201310161560.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- caching
- file
- sequence number
- order
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention provides a method for sorting a high-speed out-of-order data stream with high performance and high availability under memory constraints, comprising: inserting each arriving datum, in order, into the appropriate position of a first cache of fixed size and, if no suitable position exists in the first cache, inserting it in order into the appropriate position of a second cache; reading data sequentially from the first cache and sending them downstream, waiting whenever a position holds no data or invalid data; and reloading the data in the second cache, in order, back into the first cache. Because the size of the first cache is fixed, the method still works well in systems with limited first-level cache capacity, while the second cache can be organized flexibly, which increases cache capacity and extensibility, shortens the wait after a missing sequence number, reduces data congestion, and avoids discarding data.
Description
Technical field
The present invention relates to methods for sorting data streams, and more particularly to a method for sorting an out-of-order data stream with high performance and high availability in a memory-constrained system.
Background art
In a data processing system, a data source emits large volumes of data, and data of the same type form a data stream. A so-called out-of-order data stream is one in which the order in which the data arrive differs from the order in which they were produced. The downstream part of a data processing system, however, usually requires its input data to arrive in production order (by sequence number), so an intermediate sorting stage is needed to reorder the data. Because the data are inherently out of order, a datum with a smaller sequence number (i.e., produced earlier) may arrive later; when the order in which the upstream system emits data cannot be changed, the sorting module has to cache all data until the datum with the correct sequence number arrives.
The patent "Method for realizing packet ordering in a multi-engine parallel processor" (Patent No. 200510093220) marks upstream data with the channel to which they belong, builds queuing caches on different engines, and achieves ordering by having each datum select the output channel; its applicability is limited, however, when memory is constrained. The patent "Data sorting system and data sorting method in portable apparatus" (Patent No. 200910261953) counts the frequency of data containing certain attributes, computes a ranking value for each datum, and generates the sorted result with a dedicated sorting unit; since this method predicts the input pattern from statistics, it offers little advantage when the arrival order is random. The patent "Method and device for reordering packets" (Patent No. 01125541) groups the input data into buckets, sorts the data within each bucket, and emits a bucket once the data order in it is complete; sending data bucket by bucket is not efficient and is still restricted by memory.
Traditional sorting methods require an input of specified length and can only wait when a sequence number is missing. In high-performance, high-availability applications, the data congestion, or even data loss, caused by waiting after a missing sequence number is unacceptable, and caching all data is infeasible in a memory-constrained system.
Content of the invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide a method that can sort a high-speed out-of-order data stream with high performance and high availability under memory constraints, so as to solve the problems that traditional sorting methods require an input of specified length and can only wait when a sequence number is missing, that in high-performance, high-availability applications the waiting after a missing sequence number may cause data congestion or even data loss, and that caching all data cannot be implemented in a memory-constrained system.
To solve the above problems, the present invention provides a method for sorting an out-of-order data stream, applied in a data processing device, characterized in that the data processing device comprises at least a first cache and a second cache, the size of the first cache being fixed, and the method comprises: reading an arriving datum and inserting it into a predetermined position of the first cache according to its sequence number, or, if it cannot be inserted into the first cache, inserting it into a predetermined position of the second cache; reading data sequentially from the first cache, judging whether the read position holds valid data, sending the data downstream if so, and otherwise waiting until the position holds valid data that can be read and sent downstream; and reloading the data in the second cache back into the first cache.
Preferably, in the above sorting method of the invention, the second cache may be one or more files.
Because the size of the first cache is fixed in the sorting method of the invention, the method works well even when the first-level cache, for example memory, is limited, and since the second cache can be organized in various ways, the method is widely applicable.
The preferred embodiments of the invention use files as the second cache, which increases cache capacity, scales well and supports random reads, thereby shortening the wait after a missing sequence number, reducing data congestion, and avoiding data loss.
Description of the drawings
Fig. 1 shows the data-flow diagram of the method for sorting an out-of-order data stream according to the present invention.
Fig. 2 shows a schematic diagram of insertion operation 11 in Fig. 1.
Fig. 3 shows a flow chart of insertion operation 12 in Fig. 1.
Fig. 4 shows a flow chart of sending operation 2 in Fig. 1.
Fig. 5 shows a flow chart of reload operation 3 in Fig. 1.
Figs. 6-15 show schematic diagrams of the sorting of a data stream in one embodiment of the method for sorting an out-of-order data stream according to the present invention.
Specific embodiments
The embodiments of the present invention are described below by way of specific examples, from which those skilled in the art can readily understand other advantages and effects of the invention from the content disclosed in this specification. The invention may also be carried out or applied through other different specific embodiments, and the details in this specification may be modified or changed from different viewpoints and for different applications without departing from the spirit of the present invention.
Referring now to the drawings, it should be noted that, unless otherwise specified, identical operations or components in the drawings are given identical reference marks. Referring first to Fig. 1, Fig. 1 shows the data-flow diagram of the method for sorting an out-of-order data stream according to the present invention. The upstream data in Fig. 1 represent the out-of-order data stream; operation 1 is the insertion operation, during which the data are sorted. As the figure shows, insertion operation 1 is divided into sub-operations 11 and 12, where sub-operation 11 inserts data into the first cache 4 and sub-operation 12 inserts data into the second cache 5. After an upstream datum arrives, it is inserted into the first cache 4 or the second cache 5 according to certain rules, and the insertion itself completes the sorting. Sending operation 2 then sends the data in the first cache 4 downstream according to certain rules, yielding ordered downstream data.
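Purely for illustration, this overall flow can be summarized by the following single-threaded Python sketch; the helper names (insert_window, store, send_step, reload_step) are hypothetical stand-ins for the operations detailed in the following sections, and the actual design runs the operations concurrently.

```python
# A highly simplified, single-threaded rendering of the data flow in Fig. 1.
# All helper names are hypothetical stand-ins for operations 11, 12, 2 and 3.
def sort_stream(upstream, insert_window, store, send_step, reload_step):
    for seq_n, payload in upstream:            # out-of-order upstream data
        if not insert_window(seq_n, payload):  # insertion operation 11 (first cache)
            store(seq_n, payload)              # insertion operation 12 (second cache)
        send_step()                            # sending operation 2
        reload_step()                          # reload operation 3
```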
Each part of this embodiment of the method for sorting an out-of-order data stream is described in detail below with reference to Figs. 2-5.
Referring to Fig. 2, Fig. 2 shows a schematic diagram of insertion operation 11 in Fig. 1. The marks in Fig. 2 and the important parameters of insertion operation 11 are described in detail below:
Mark 4 corresponds to the first cache in Fig. 1; in this embodiment the first cache is placed in memory and is called the in-memory sorting window. The series of squares pointed to by mark 41 represents the data stream arriving from upstream: each square represents a datum in the stream, the top-to-bottom order represents the arrival order, and seqN, seq1, ... denote the order in which the data were produced at the source, called the data sequence numbers. As the figure shows, the arrival order of the data does not coincide with their production order, so mark 41 is an out-of-order data stream; after sorting, the data order should become seq1, seq2, ..., seqN. Mark 42 denotes the head position of the sorting window 4, and seqStart denotes the sequence number of the datum at the head position 42 of the in-memory sorting window 4. WindowSize denotes the size of the in-memory sorting window. lastSendSeq denotes the largest sequence number that has been sorted and already sent downstream.
When a datum of data stream 41 arrives, its sequence number is first checked against the formula:
lastSendSeq ≤ seqN ≤ lastSendSeq + WindowSize (1)
to judge whether the datum lies in the data range that the sorting window 4 can accommodate. When the sequence number satisfies formula (1), the datum lies within the range that the sorting window 4 can accommodate and can be inserted into the sorting window. If formula (1) is not satisfied, the datum lies outside the range that the sorting window 4 can accommodate, and insertion operation 12 must be executed; insertion operation 12 is described in detail below with reference to Fig. 3. In this example the rule of insertion operation 11 is first to compute, by the formula:
Index = seqN mod WindowSize (2)
the position of the datum in the sorting window 4. In formula (2), seqN denotes the data sequence number, WindowSize denotes the size of the sorting window 4, and mod is the modulo operation (when the modulo result is 0, those skilled in the art can correct Index as needed). The Index obtained from formula (2) is the position in the sorting window 4 at which the datum with sequence number seqN should be stored. In this way each datum is naturally placed at the position it should occupy after sorting, which realizes the sorting function.
It should be noted that those skilled in the art may change the above strategy for inserting data into the first cache, for example, but not limited to, computing the position of the datum in the sorting window 4 in a way different from formula (2), or mapping the data sequence number to a certain position in the sorting window 4 while mapping that position to the memory cell holding the data content, and so on; such modifications do not depart from the design of the present invention.
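As an illustration only, insertion operation 11 can be sketched as follows in Python, assuming the sorting window is held as a fixed-size in-memory list; the variable and function names are illustrative and not taken from the patent.

```python
# A minimal sketch of insertion operation 11 under the stated assumptions.
WINDOW_SIZE = 4                       # WindowSize
window = [None] * (WINDOW_SIZE + 1)   # slot 0 unused; positions 1..WindowSize
last_send_seq = 0                     # lastSendSeq: last sequence number sent downstream

def try_insert_window(seq_n, payload):
    """Insert a datum into the sorting window if formula (1) admits it."""
    # Formula (1): lastSendSeq <= seqN <= lastSendSeq + WindowSize
    if not (last_send_seq <= seq_n <= last_send_seq + WINDOW_SIZE):
        return False                  # caller falls back to insertion operation 12
    # Formula (2): Index = seqN mod WindowSize (a result of 0 is corrected to WindowSize)
    index = seq_n % WINDOW_SIZE or WINDOW_SIZE
    window[index] = (seq_n, payload)
    return True
```

For example, with WindowSize = 4 a datum with sequence number 5 lands at position 5 mod 4 = 1, which is exactly where it must sit once positions 1-4 have been sent.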
If the received datum cannot be inserted into the first cache 4, i.e., in this example the datum lies outside the data range that the sorting window 4 can accommodate, insertion operation 12 must be executed to store the datum in the second cache 5. In this embodiment the second cache 5 is implemented with cache files. The size of each entry in a cache file has a fixed upper limit, which is a static value, and the number of entries in a cache file is also fixed, so the size of each cache file is always fixed. There may be multiple cache files. Each cache file has the following properties:
beginPos: the sequence number of the datum at the start position of the file;
endPos: the sequence number of the datum at the end position of the file;
FileSize: the size of the cache file, a static value;
lastReadPos: the sequence number of the datum last read from the cache file;
flag bit: each datum in the cache file has a corresponding flag bit indicating the validity of the datum, i.e., whether the datum has already been reloaded.
In this example, for convenience of description, all cache files have the same size, and FileSize, endPos and beginPos of each cache file and the size WindowSize of the sorting window 4 satisfy the following relation:
endPos - beginPos = N*WindowSize = FileSize (3)
Referring now to Fig. 3, Fig. 3 shows a flow chart of insertion operation 12 in Fig. 1. In insertion operation 12, each cache file is traversed first; if the sequence number of the arriving datum falls between the beginPos and endPos of the current cache file, the datum is inserted into the current cache file. If all cache files have been traversed and no file is found whose beginPos and endPos enclose the sequence number of the arriving datum, the cache files are traversed again to find an expired file. A so-called expired file is one whose endPos is smaller than lastSendSeq, the largest sequence number that has been sorted and already sent downstream. Once an expired file is found, its beginPos and endPos are refreshed by the following formula (4), where the round function denotes rounding down:
beginPos = round[(seqN - WindowSize)/FileSize]*FileSize + WindowSize (4)
endPos = beginPos + FileSize - 1
all the original data in the file are deleted, and the arriving datum is then inserted into the file. If, after traversing the cache files, no expired file is found, a new cache file is created and the arriving datum is inserted into the new cache file.
A method of inserting a datum into a cache file is described below. First the insertion position insertPos and the offset offset are computed by formula (5):
insertPos = seqN - beginPos + 1 (5)
offset = insertPos*dataSize
where dataSize denotes the size of a datum. The datum is inserted into the cache file at the computed insertion position insertPos, and after the datum has been inserted, the flag bit of that datum in the file is set to valid. Insertion operation 12 is then complete.
It should be noted that those skilled in the art may modify insertion operation 12 of this example, for example, but not limited to: implementing the second cache with an in-memory database or in other ways, using cache files of different sizes, letting the cache file attributes and the sorting window satisfy relations other than formula (4), or adopting other strategies when inserting data into a file; such changes do not depart from the design of the present invention.
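A condensed sketch of insertion operation 12 is given below, modelling each cache file as an in-memory record rather than an on-disk file for brevity; the class name CacheFile and its fields mirror the attributes listed above, but the code is only an illustration under these assumptions, not the patent's implementation.

```python
# A sketch of insertion operation 12: a second cache built from fixed-size "files".
import math

WINDOW_SIZE = 4          # WindowSize
FILE_SIZE = 2            # FileSize: fixed number of entries per cache file
last_send_seq = 0        # lastSendSeq
cache_files = []         # the second cache: a list of CacheFile records

class CacheFile:
    def __init__(self):
        self.begin_pos = 0                       # beginPos
        self.end_pos = 0                         # endPos
        self.last_read_pos = 0                   # lastReadPos
        self.slots = [None] * (FILE_SIZE + 1)    # entries; slot 0 unused
        self.valid = [False] * (FILE_SIZE + 1)   # per-entry flag bits

    def covers(self, seq_n):
        return self.begin_pos <= seq_n <= self.end_pos

    def rebind(self, seq_n):
        """Refresh an expired (or new) file's range with formula (4); round = floor."""
        self.begin_pos = (math.floor((seq_n - WINDOW_SIZE) / FILE_SIZE)
                          * FILE_SIZE + WINDOW_SIZE)
        self.end_pos = self.begin_pos + FILE_SIZE - 1
        self.slots = [None] * (FILE_SIZE + 1)    # delete all original data
        self.valid = [False] * (FILE_SIZE + 1)

    def insert(self, seq_n, payload):
        """Formula (5): insertPos = seqN - beginPos + 1; mark the entry valid."""
        insert_pos = seq_n - self.begin_pos + 1
        self.slots[insert_pos] = (seq_n, payload)
        self.valid[insert_pos] = True

def store(seq_n, payload):
    """Place a datum that the sorting window cannot hold (flow of Fig. 3)."""
    for f in cache_files:                        # 1) a file already covering seqN
        if f.covers(seq_n):
            f.insert(seq_n, payload)
            return
    for f in cache_files:                        # 2) otherwise reuse an expired file
        if f.end_pos < last_send_seq:
            f.rebind(seq_n)
            f.insert(seq_n, payload)
            return
    f = CacheFile()                              # 3) otherwise create a new file
    f.rebind(seq_n)
    cache_files.append(f)
    f.insert(seq_n, payload)
```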
Referring now to Fig. 4, Fig. 4 shows a flow chart of sending operation 2 in Fig. 1. Once data have been inserted into the sorting window 4 and the sorting of part of the data is complete, the data in the sorting window 4 can be sent downstream according to a certain method to obtain an ordered downstream data stream. In this example, sending operation 2 cyclically scans the sorting window 4 from its head position to its tail position. If a datum is found at the scanned position and its sequence number is continuous, the datum is sent downstream and lastSendSeq is increased; otherwise the operation waits. In this example, a continuous sequence number means that the sequence number of the datum is greater than, and adjacent to, the value of lastSendSeq.
It should be noted that the sending strategy of sending operation 2 must correspond to insertion operation 11. In this example, because insertion operation 11 computes the position of each datum in the sorting window 4 with formula (2) before inserting it, sending operation 2 scans the sorting window 4 cyclically in order. If insertion operation 11 adopts another insertion scheme, the strategy of sending operation 2 must change accordingly; such changes are readily conceivable to those skilled in the art and do not depart from the design of the present invention.
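Sending operation 2 can be sketched as a scanning loop over the same window layout as above; send_downstream is a stand-in for the real downstream interface, and clearing a sent slot is an assumption made here so that a waiting position can be distinguished from a filled one.

```python
# A minimal sketch of sending operation 2 (the Read thread) under the stated assumptions.
import time

def read_loop(window, window_size, send_downstream):
    """Cyclically scan the sorting window head-to-tail and emit data in order."""
    last_send_seq = 0
    position = 1
    while True:                                   # the Read thread runs continuously
        entry = window[position]
        # A datum is sendable only when it is present and its sequence number is
        # the immediate successor of lastSendSeq (the "continuous" condition).
        if entry is not None and entry[0] == last_send_seq + 1:
            send_downstream(entry[1])
            window[position] = None               # assumption: free the slot after sending
            last_send_seq += 1
            position = position % window_size + 1 # wrap from tail back to head
        else:
            time.sleep(0.001)                     # block/wait for the missing datum
```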
Referring now to Fig. 5, Fig. 5 shows a flow chart of reload operation 3 in Fig. 1. Reload operation 3 reloads the data in the second cache into the positions of the sorting window 4 that sending operation 2 has already scanned and sent, so that sending operation 2 can scan them and send them downstream. As shown in Fig. 5, reload operation 3 cyclically scans the cache files. For the current file it first judges whether the file data are still valid, i.e., whether the lastReadPos of the current file has exceeded its endPos; if it has, all data in the current file have already been reloaded and no further reload is needed. If not, it further judges whether the datum at position lastReadPos is valid and lies within the data range that the sorting window 4 can accommodate; if the datum is valid and lies within that range, the datum is read and inserted into the sorting window, its flag bit is set to invalid, and lastReadPos is increased by 1.
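Reload operation 3 can be sketched as one pass of the Load thread over the cache files of the earlier sketch; the field names reuse the CacheFile illustration above, and the exact traversal step within a file is an assumption.

```python
# One pass of reload operation 3 (the Load thread) under the stated assumptions.
def load_step(cache_files, window, window_size, last_send_seq):
    for f in cache_files:
        # The file is exhausted once lastReadPos has moved past endPos.
        if f.last_read_pos > f.end_pos:
            continue
        pos = max(f.last_read_pos, f.begin_pos)   # next sequence number to read back
        slot = pos - f.begin_pos + 1
        entry = f.slots[slot]
        if (f.valid[slot] and entry is not None
                and last_send_seq <= pos <= last_send_seq + window_size):
            seq_n, payload = entry
            window[seq_n % window_size or window_size] = (seq_n, payload)
            f.valid[slot] = False                 # mark the reloaded datum invalid in the file
            f.last_read_pos = pos + 1             # advance lastReadPos
```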
Each operation of the method for sorting an out-of-order data stream according to the present invention has been described in detail above. The present invention can sort a single out-of-order data stream as well as multiple out-of-order data streams. To make the principle of the invention clearer, an embodiment of the invention that sorts two out-of-order data streams is introduced below with reference to Figs. 6-15.
Fig. 6 shows the initial state of the two upstream data input streams, the in-memory sorting window 4 and the cache files 5. In each input queue every square represents a datum, the bottom-to-top order in the queue represents the arrival order of the data, and the number in a square represents the order in which the datum was actually produced, a smaller number meaning earlier production. As the figure shows, the data in the two input streams are out of order and need to be merged into a single ordered stream.
For convenience of description, the marks and parameters of Fig. 6 are agreed as follows: the sorting window size WindowSize is 4; cache file 1 is denoted by mark 51; the size FileSize of each cache file is 2; AN denotes that the datum comes from queue A with data sequence number N; SN denotes the datum located at data sequence number N in the sorting window; BUFn_N denotes the datum with global sequence number N located in cache file n. Insert A/B denote the two first-cache insertion threads, which read data from input queues A and B respectively and insert them into the sorting window 4; the two Insert A/B threads are independent, and for ease of illustration the description does not cover all 16 possible arrival orders of the two queues' data under a global clock but describes only one of them. Read denotes the sending thread, which reads data from the sorting window 4 and sends them downstream. Store denotes the second-cache insertion thread, which caches data into the second cache 5. Load denotes the reload thread, which reads data back from the second cache 5 into the sorting window 4. In the subsequent Figs. 7-14 the same marks, unless otherwise specified, also follow the above conventions.
In the initial state, seqStart is 1, and lastSendSeq and the beginPos and endPos of cache file 51 are all 0. Sorting starts: Insert A/B first read A3, B1 and B4 from queues A/B. According to formula (1):
lastSendSeq = 0 < 1, 3, 4 ≤ 4 = lastSendSeq + WindowSize
the sequence numbers of all three data lie within the range of the sorting window 4, so their positions in the sorting window 4 are computed by formula (2) as 1, 3 and 4 (in this example the result of the modulo operation is corrected to 4 when it is 0; those skilled in the art can adopt other correction strategies as needed), and the three data are inserted into the sorting window 4. The state after insertion is shown in Fig. 7.
The Read thread works concurrently with the Insert A/B threads. It cyclically scans the sorting window 4, sends datum 1 downstream, and updates lastSendSeq to 1; the state after sending is shown in Fig. 8. The Read thread then reads position 2 of the sorting window 4 in order but finds no datum there, so it blocks and waits for data.
While the Read thread works, the Insert A/B threads read A5 and A2 from queues A/B. For A5, by formula (1):
lastSendSeq = 1 ≤ 5 ≤ lastSendSeq + WindowSize = 1 + 4 = 5
it lies within the data range of the sorting window 4, and its position in the sorting window is computed by formula (2) as:
5 mod 4 = 1
so A5 is placed at position 1 of the sorting window 4, and in the same way A2 is placed at position 2 of the sorting window. The state after inserting A5 and A2 is shown in Fig. 9.
Insert A/B continue by reading datum B7. According to formula (1), B7 does not belong to the data range accommodated by the sorting window 4, so the Store thread must insert it into the second cache. Moreover, because the beginPos and endPos of cache file 51 are both 0 initially, according to the flow shown in Fig. 3 above, B7 cannot be inserted into cache file 51 either. The Store thread therefore looks for an expired file and finds that cache file 51 is expired, so according to formula (4) it refreshes the beginPos of cache file 51 to 6 and its endPos to 7, and according to formula (5) it computes the insertion position of B7 in cache file 51 as 2; the Store thread then inserts B7 at position 2 of cache file 51. The state after insertion is shown in Fig. 10.
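As a check, substituting B7's numbers (seqN = 7, WindowSize = 4, FileSize = 2) into formulas (4) and (5) reproduces the values used above:
beginPos = round[(7 - 4)/2]*2 + 4 = 1*2 + 4 = 6
endPos = 6 + 2 - 1 = 7
insertPos = 7 - 6 + 1 = 2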
Insert A/B continue by reading datum B8. According to formula (1), B8 likewise does not belong to the data range accommodated by the sorting window 4, so the Store thread must also insert it into the second cache 5. Since B8 cannot be stored in cache file 51 (8 > 7) and there is no expired file at this moment, a new cache file 52 is created; its beginPos is computed as 8 and its endPos as 9, and according to formula (5) the insertion position of B8 in cache file 52 is 1, so B8 is inserted at position 1 of cache file 52. The state after insertion is shown in Fig. 11.
Insert A/B continue by reading datum A6. According to formula (1), A6 likewise does not belong to the data range accommodated by the sorting window 4, so the Store thread must also insert it into the second cache 5. The computation shows that A6 should be inserted into cache file 51 (beginPos = 6 ≤ 6 ≤ 7 = endPos), and according to formula (5) A6 should be inserted at position 1 of cache file 51. The state after insertion is shown in Fig. 12.
While the Store and Insert A/B threads work, the Read thread keeps working. When the Read thread finds datum S2 at position 2 of the sorting window 4 (because the Insert thread has inserted A2), it stops waiting, sends S2 downstream, and continues to scan and send in order. After scanning the tail position of the sorting window 4 and sending datum S4, it wraps around to the head position of the sorting window 4, updates the seqStart value of the sorting window 4 to 5, and continues to scan and send in order. After sending datum S5 at the head position, it updates lastSendSeq to 5; continuing the scan, it finds no datum at position 2 of the sorting window 4 and waits again, as shown in Fig. 13.
The Load thread works concurrently with the Store, Insert A/B and Read threads. It scans the cache files one by one and, whenever it finds that a valid datum in a file belongs to the data range accommodated by the sorting window 4, reloads that datum into the sorting window 4 in order. The state after the reload is complete is shown in Fig. 14.
When the Read thread finds datum S6 at position 2 of the sorting window 4, it stops waiting, sends S6 downstream, and continues to scan and send until all data have been processed and sent. The state after processing is complete is shown in Fig. 15.
Through the above process, the out-of-order upstream data stream is sorted and delivered downstream.
The above embodiments merely illustrate the principle of the present invention and its effects and are not intended to limit the invention. Those skilled in the art can modify and change the above embodiments as needed, for example, but not limited to: modifying the insertion, sending or reload strategies, or using a non-file form as the second cache. Therefore, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.
Claims (4)
1. A method for sorting an out-of-order data stream, applied in a data processing device, characterized in that the data processing device comprises at least a first cache and a second cache, the size of the first cache is fixed, and the method comprises:
reading an arriving datum and inserting the arriving datum into a predetermined position of the first cache according to the sequence number of the arriving datum, or, if the arriving datum cannot be inserted into the first cache, inserting the arriving datum into a predetermined position of the second cache; the second cache being one or more files, and the strategy for inserting the arriving datum into the files being: traversing each file, and if the sequence number of the arriving datum falls between the start sequence number and the end sequence number of the data in a file, inserting the arriving datum into that file in order; if no suitable file is found by the traversal, finding an expired file, multiplexing the expired file and inserting the arriving datum into the expired file; and if no expired file is found, creating a new file and inserting the arriving datum into the new file; wherein the strategy for multiplexing an expired file is: deleting all data in the file, and updating the start sequence number beginPos of the data in the file and the end sequence number endPos of the data in the file according to the following formulas:
beginPos = round[(seqN - WindowSize)/FileSize]*FileSize + WindowSize
endPos = beginPos + FileSize - 1
wherein seqN denotes the sequence number of the arriving datum, WindowSize denotes the size of the first cache, FileSize denotes the size of the file, and round denotes rounding down;
reading data sequentially from the first cache, judging whether the read position holds valid data, sending the data downstream if so, and otherwise waiting until the position holds valid data that are read and sent downstream;
reloading the data in the second cache into the first cache: traversing the files; if a file is valid, traversing the data in the file; and if a datum is valid and belongs to the range of the first cache, inserting the datum into the first cache and setting the datum to invalid.
2. The method for sorting an out-of-order data stream according to claim 1, characterized in that the step of reading data sequentially from the first cache, judging whether the read position holds valid data, sending the data downstream if so, and otherwise waiting until the position holds valid data that are read and sent downstream comprises: after sending the data downstream, increasing the value of lastSendSeq, updating the value of seqStart, and, after reading the tail position of the first cache, returning to the head position of the first cache to continue reading data in order, wherein lastSendSeq denotes the largest sequence number that has been sorted and sent downstream, and seqStart denotes the sequence number of the datum that should be stored at the head position of the first cache.
3. The method for sorting an out-of-order data stream according to claim 1, characterized in that, in the step of reading an arriving datum and inserting the arriving datum into a predetermined position of the first cache according to the sequence number of the arriving datum, or, if the arriving datum cannot be inserted into the first cache, inserting the arriving datum into a predetermined position of the second cache, the strategy for inserting the datum into the first cache is: if the sequence number SeqN of the arriving datum satisfies:
lastSendSeq ≤ SeqN ≤ lastSendSeq + WindowSize
then inserting the arriving datum into the first cache according to the following rule:
Index = SeqN mod WindowSize
wherein lastSendSeq denotes the largest sequence number that has been sorted and sent downstream, WindowSize denotes the size of the first cache, mod is the modulo operation, and Index denotes the position in the first cache at which the arriving datum should be stored.
4. The method for sorting an out-of-order data stream according to claim 1, characterized in that an expired file is a file whose end sequence number of data is smaller than the largest sequence number that has been sorted and sent downstream.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310161560.0A CN103309940B (en) | 2013-05-03 | 2013-05-03 | A kind of method to the sequence of out of order data flow |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103309940A CN103309940A (en) | 2013-09-18 |
CN103309940B true CN103309940B (en) | 2017-03-08 |
Family
ID=49135158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310161560.0A Active CN103309940B (en) | 2013-05-03 | 2013-05-03 | A kind of method to the sequence of out of order data flow |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103309940B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729135B (en) * | 2016-08-11 | 2021-03-16 | 创新先进技术有限公司 | Method and device for parallel data processing in sequence |
CN106375329B (en) * | 2016-09-20 | 2019-08-06 | 腾讯科技(深圳)有限公司 | A kind of data push method and sequence controller and data delivery system |
CN109039549B (en) * | 2018-07-13 | 2021-07-23 | 新华三技术有限公司 | Message retransmission method and device |
CN113014547B (en) * | 2021-01-29 | 2022-11-01 | 深圳市风云实业有限公司 | Sequencing mapping-based direct data transmission system and method |
US12066943B1 (en) | 2023-06-15 | 2024-08-20 | Rivai Technologies (Shenzhen) Co., Ltd. | Alias processing method and system based on L1D-L2 caches and related device |
CN116431529B (en) * | 2023-06-15 | 2023-08-29 | 睿思芯科(深圳)技术有限公司 | Alias processing system, method and related equipment based on L1D-L2 cache |
CN117420962B (en) * | 2023-12-14 | 2024-05-14 | 深圳市德兰明海新能源股份有限公司 | Data access management method, single chip microcomputer product and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1866905A (en) * | 2005-05-17 | 2006-11-22 | 华为技术有限公司 | Method and apparatus for shaping transmission service stream in network |
EP1062590B1 (en) * | 1998-03-17 | 2008-03-12 | Microsoft Corporation | A scalable system for clustering of large databases |
KR20110070739A (en) * | 2009-12-18 | 2011-06-24 | 한국전자통신연구원 | Apparatus and method for managing index information of high dimensional data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |