
CN1405685A - Structure and method for updating data in cache memory - Google Patents


Info

Publication number
CN1405685A
CN1405685A (application CN02146219A)
Authority
CN
China
Prior art keywords
memory
data
aforementioned
buffer blocks
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN02146219A
Other languages
Chinese (zh)
Other versions
CN1225700C (en)
Inventor
林志钢
陈维彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Via Technologies Inc
Original Assignee
Via Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Via Technologies Inc filed Critical Via Technologies Inc
Priority to CNB021462194A priority Critical patent/CN1225700C/en
Publication of CN1405685A publication Critical patent/CN1405685A/en
Application granted granted Critical
Publication of CN1225700C publication Critical patent/CN1225700C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a structure and method for updating cache memory data in a local processor. It exploits cache-control behavior by forcibly mapping the packet header in a header buffer to several different addresses of a memory space, so that every time the processor accesses its internal cache memory it finds a cache miss. The processor is thereby forced to alternately request data from buffer block A and buffer block B of the header buffer in a host channel adapter and to load the whole block into the cache memory, which improves cache-update efficiency and increases packet-access speed.

Description

Structure and method for updating cache memory data
Technical field
The invention relates to a structure and method for updating cache memory data inside a processor.
Background art
High-speed transfer networks now cover a very wide range, for example InfiniBand, cable modems, optical networks, and Serial ATA. Taking InfiniBand as an example: it covers the layer-2 link layer and layer-3 network layer of the seven-layer OSI (Open Systems Interconnection) reference model. Its purpose is to move the frequent I/O transfers inside a server, and the distributed/switched data streams, entirely out of the server system and to manage them node-to-node. In the operation of many medium and large network servers or cluster systems, this eliminates the computing resources wasted on repeatedly decoding and encoding datagrams, and reduces the response latency of external network services.
InfiniBand manages I/O reads and writes in a one-to-one or one-to-many node fashion; some nodes may be defined into a subnet and can be authorized to manage the data flow or configuration beneath that node. By specification, InfiniBand reaches a transfer speed of 2.5 Gbps on a single link and 10 Gbps over four links, and with a maximum of 12 lanes transmitting simultaneously the theoretical peak transfer rate is as high as 30 Gbps.
The signal-transmission principle of InfiniBand is a crosswise interleaved, switched fabric. It can be applied over both copper wiring and optical fiber, and the connectable products and applications range from servers, hub switches, and routers to the associated interface cards and endpoint management software.
Referring to Fig. 1, a block diagram of a packet-receiving structure used with InfiniBand. As shown in Fig. 1, it mainly comprises a host channel adapter 1, whose hardware module supports two or more physical-layer ports to receive packets from a physical layer 2, one host bus interface, and two local-processor interfaces. A dynamic random access memory 4 is shared by the local processors, namely a receiving processor 5 and a transmitting processor 8, and a static random access memory (SRAM) 3 serves as a packet buffer, that is, as the store for packets transmitted and received between the host bus interface and the network. The hardware module of the host channel adapter 1 contains several direct memory access (DMA) engines that transfer data between the SRAM 3 and host memory under instructions issued by the local processors. Each port corresponds to two hardware engines, one for transmitting and the other for receiving. Through this host channel adapter 1, for example, a host CPU can be attached to an InfiniBand network.
Continuing with Fig. 1: when packets are continually sent from the physical layer 2 through the host channel adapter 1 into host memory, each complete packet is stored in the SRAM 3, and at the same time a copy of its packet header is stored in a header buffer 6. The receiving processor 5 therefore does not need to shuttle packet headers repeatedly between the shared dynamic random access memory 4 and the SRAM 3; it can quickly fetch and process packet headers, which relieves the access load on the SRAM 3 and reduces the load on the dynamic random access memory 4.
According to the prior art, whenever the receiving processor 5 wants to access a packet header it must issue an instruction cycle to the dynamic random access memory 4 and can access only one unit of data at a time until the packet access completes — a so-called non-cached access. Although this reduces the time spent carrying packets between host memory and the dynamic random access memory 4, the access time still leaves room for improvement, and the speed at which the receiving processor 5 handles packets may affect the whole operation. Moreover, some embedded receiving processors 5 use an internal cache memory to access the packet headers that enter the header buffer 6. If the cache cannot detect an external data update and no invalidation occurs, then when a new packet header arrives and the content of the header buffer 6 is updated without being loaded into the internal cache memory (or while absent from it), the receiving processor 5 — because the address it accesses for each packet header is the same — gets a cache hit and reads the previous, stale information, so it cannot process the updated packet header.
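The stale-read problem described above can be illustrated with a toy model. The following is a sketch only, assuming a minimal direct-mapped cache written in Python; the class and variable names are hypothetical and do not come from the patent.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: 4 lines of 4 entries over a list-backed memory."""
    def __init__(self, memory, line_size=4, lines=4):
        self.memory = memory           # backing store
        self.line_size = line_size
        self.lines = lines
        self.tags = [None] * lines     # tag currently held by each line
        self.data = [None] * lines     # cached copy of each line

    def read(self, addr):
        line = (addr // self.line_size) % self.lines
        tag = addr // (self.line_size * self.lines)
        if self.tags[line] != tag:     # cache miss: fetch the whole line
            base = (addr // self.line_size) * self.line_size
            self.data[line] = self.memory[base:base + self.line_size]
            self.tags[line] = tag
        return self.data[line][addr % self.line_size]

memory = [0] * 16
cache = DirectMappedCache(memory)
first = cache.read(0)      # miss: line loaded, returns 0
memory[0] = 99             # external update, e.g. a new packet header arrives
stale = cache.read(0)      # hit on the same address: the update is invisible
print(first, stale)        # 0 0 -- the processor keeps reading stale data
```

Because the second read hits the cache, the externally updated value 99 never reaches the processor; this is exactly the failure the invention addresses.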
Summary of the invention
Therefore, the primary object of the present invention is to provide a structure for updating cache memory data inside a local processor, so as to improve cache-update performance and packet-access speed.
The structure for updating cache memory data of the present invention is applied to the cache system of a local processor to access the received data of a host channel adapter, through which host channel adapter a host central processing unit can be attached to an InfiniBand network. It comprises:
a buffer memory for temporarily storing the received data, the aforementioned buffer memory being divided into several buffer blocks;
a cache memory, located inside the aforementioned local processor, which addresses the aforementioned buffer blocks by mapping them into a memory space; and
a data-loading mechanism, which maps each aforementioned buffer block to a plurality of different address areas of the aforementioned memory space, so that when the local processor reads a different address area associated with an aforementioned buffer block, the cache memory takes a cache miss and loads the data of that buffer block, whereby the aforementioned cache memory obtains the updated received header.
Another structure for updating cache memory data of the present invention is applied to the cache system of a processor and comprises:
an external memory for temporarily storing the data received by the processor, the aforementioned external memory being divided into several buffer blocks;
a cache memory, located inside the aforementioned processor, which addresses the aforementioned buffer blocks by mapping them into a memory space; and
a data-loading mechanism, which maps each aforementioned buffer block into the aforementioned memory space in turn, so that when the processor reads the addresses of the aforementioned memory space in sequence, the cache memory takes a cache miss and loads the updated received data of that buffer block.
Another object of the present invention is to provide a method of updating data for the cache system of an embedded processor, which uses the cache misses produced by forced mapping of the memory space to load updated buffer blocks from an external buffer memory into the cache memory, thereby solving the cache-invalidation problem that arises when the system cannot detect external data updates.
The method of updating cache memory data of the present invention is applied to the cache system of a processor and comprises:
dividing the external memory of the aforementioned processor into several buffer blocks, to temporarily store the data received by the processor;
addressing different address areas of a memory space to the same buffer block of the aforementioned external memory; and
when the processor reads the aforementioned different address areas, causing the cache system to take a cache miss and, according to the addressing of the aforementioned different address areas, to load the same buffer block of the aforementioned external memory, thereby obtaining the updated data of that buffer block.
Another method of updating cache memory data of the present invention is applied to the cache system of a processor and comprises:
dividing an external memory into several buffer blocks, to temporarily store the data received by the processor;
mapping the aforementioned buffer blocks into a memory space in turn; and, when the processor reads the aforementioned memory space in sequence, causing the cache system to take a cache miss, loading the contents of the aforementioned buffer blocks in turn, and thereby obtaining the updated data of the buffer blocks.
A further method of updating cache memory data of the present invention is applied to the cache memory of a processor and comprises:
dividing an external memory into several buffer blocks, to temporarily store the data received by the processor;
mapping the aforementioned buffer blocks in turn to memory address ranges that the cache memory can address; and
when the processor reads the aforementioned memory address ranges in sequence, causing a cache miss to occur in the cache memory, loading the data of the aforementioned buffer blocks in turn, and thereby obtaining the updated data of the buffer blocks.
In the prior art, when the local processor of a high-speed transfer network wants to access a packet header in the header buffer, it cannot tell whether the data inside its cache memory has been updated, so it may read stale information; and if a non-cached access mode is adopted instead, packet processing slows down. The data-update structure for a cache system provided by the present invention lets the cache system of a common embedded processor exploit its cache-read-control behavior: the packet header in the header buffer is forcibly mapped to different addresses of a memory space, so that every time the processor tries to access a packet header it finds a cache miss and is forced to alternately request new data from the external buffer memory. This improves cache-update performance and increases packet-access speed.
Description of drawings
Fig. 1 is a block diagram of a packet-receiving structure for InfiniBand.
Fig. 2 is a block diagram of packet reception with the cache memory data-update structure of the present invention.
Fig. 3 is a schematic diagram of the cache memory data update of the present invention.
Fig. 4 is a schematic diagram of the memory-space addresses to which the external memory buffer blocks of the present invention are mapped.
Reference numerals
1 host channel adapter, 2 physical layer
3 SRAM, 4 dynamic random access memory
5 receiving processor, 6 header buffer
7 memory space, 8 transmitting processor
51 cache memory, 61 buffer block
Embodiment
The invention provides a method of updating cache memory data, applied to the cache system of an embedded processor whose interior includes a cache memory for accessing the data of an external memory by mapping. The method comprises: dividing the external memory into several buffer blocks; addressing different address areas of a memory space to the same buffer block of the external memory, so that the content of the buffer block is simultaneously mapped into the memory space addressed by these different address areas; and having the processor read the aforementioned different address areas associated with the same buffer block, so that the cache system takes a cache miss — that is, the processor reads an address area expected to miss because it is not in the area currently mapped in the cache memory — forcing the cache system to load the updated content from that buffer block.
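The forced-mapping step above can be sketched as follows: the same buffer block is made visible at two address regions, so a read through the alternate region carries a different cache tag and must miss, pulling in the updated content. This is again a toy Python model under stated assumptions (a tiny direct-mapped cache, one 16-entry block imaged at two regions); all names are hypothetical, not the patented hardware.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: 4 lines of 4 entries."""
    def __init__(self, memory, line_size=4, lines=4):
        self.memory = memory
        self.line_size = line_size
        self.lines = lines
        self.tags = [None] * lines
        self.data = [None] * lines

    def read(self, addr):
        line = (addr // self.line_size) % self.lines
        tag = addr // (self.line_size * self.lines)
        if self.tags[line] != tag:     # miss: fetch the line from memory
            base = (addr // self.line_size) * self.line_size
            self.data[line] = self.memory[base:base + self.line_size]
            self.tags[line] = tag
        return self.data[line][addr % self.line_size]

class AliasedHeaderBuffer:
    """One 16-entry buffer block made visible at two address regions
    (addresses 0-15 and 16-31), so region-1 reads carry a new tag."""
    def __init__(self):
        self.block = [0] * 16
    def __getitem__(self, sl):         # the cache fetches a line slice
        return [self.block[i % 16] for i in range(sl.start, sl.stop)]

buf = AliasedHeaderBuffer()
cache = DirectMappedCache(buf)
old = cache.read(0)        # miss through region 0: returns 0
buf.block[0] = 99          # the header buffer is externally updated
new = cache.read(16)       # same entry through region 1: tag differs -> miss
print(old, new)            # 0 99 -- the forced miss picks up the update
```

Reading the alternate image (address 16 instead of 0) defeats the cache hit that caused the stale read in the background-art example.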
In one embodiment of the invention, in a packet-receiving environment such as the InfiniBand environment described above (see Fig. 2), a receiving processor 5 with a cache system loads received packet headers from the header buffer 6 of the host channel adapter 1. The method of updating cache memory data of the present invention comprises: dividing the header buffer 6 into several buffer blocks 61 to store the received packet headers; addressing different address areas of a memory space to the same buffer block 61 of the header buffer 6 — in brief, different address areas of the memory space mapped by the cache memory 51 of the receiving processor 5 can be addressed to the same buffer block 61 of the header buffer 6; and, when the receiving processor 5 reads the packet header of a buffer block 61, addressing a different address area associated with that buffer block 61, one not in the memory space currently mapped by the cache memory 51, so that the cache system takes a cache miss and then loads the updated packet header from that buffer block 61.
Referring to Fig. 3: in this embodiment of the invention, the header buffer 6 is divided into buffer blocks A and B to temporarily store the headers of received packets. When the receiving processor 5 wants to access a packet header, it first reads its internal cache memory 51. Through a data-loading mechanism, each buffer block 61 of the header buffer 6 is forcibly mapped to a plurality of different address areas of a memory space, all of which lie within the address range that the receiving processor 5 can address and read. As a result, every time the receiving processor 5 tries to access the packet header of a buffer block 61, it can address that buffer block 61 through an address area outside the range currently mapped in the cache memory 51, so its cache system always finds a cache-miss state; the receiving processor 5 is thereby forced to request updated data from the host channel adapter 1, alternately from buffer block A and buffer block B of the header buffer 6, so that the cache memory 51 of the receiving processor 5 is updated.
The above-mentioned data-loading mechanism repeatedly and forcibly maps a buffer block 61 of the header buffer 6 to different addresses of a memory space 7 (as shown in Fig. 4); when the receiving processor 5 reads the aforementioned different addresses, the cache memory 51 takes a cache miss and, according to those different addresses, loads the same buffer block 61 of the header buffer 6, so that the cache memory 51 obtains the updated packet header. Alternatively, different address areas of the memory space 7 are addressed to the same buffer block 61 of the header buffer 6, and the addressing performed when the local processor 5 reads those different address areas loads the same buffer block 61 of the header buffer 6, so that the cache memory 51 obtains the updated packet header.
Referring to Fig. 4 in conjunction with Fig. 3, which shows the memory space mapped in an embodiment of the cache memory data-update structure of the present invention: suppose the known address range of the memory space 7, address 1000 to address 6000, is the range that the receiving processor 5 can address and read; the content of each buffer block 61 of the header buffer 6 is then forcibly mapped to different address areas of this memory space 7. For example, the content of buffer block A is forcibly mapped to addresses 1000-2000 and addresses 3000-4000 of the memory space 7 simultaneously. When a packet is sent from the physical layer 2 through the host channel adapter 1 into host memory, the complete packet is stored in the SRAM 3, and at the same time the header of that packet is copied into buffer block A of the header buffer 6; the receiving processor 5 can therefore read the packet header of buffer block A by addressing either address 1000 or address 3000.
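The two-image layout of this example (block A visible at 1000-2000 and again at 3000-4000) amounts to simple address arithmetic. A sketch under the stated assumptions — the decimal bases and the helper names are taken from the example above and are illustrative only:

```python
BLOCK_A_IMAGES = (1000, 3000)   # the two base addresses of block A in the text
IMAGE_SIZE = 1000               # each image spans 1000 addresses

def buffer_offset(addr):
    """Translate either aliased address back to an offset inside block A."""
    for base in BLOCK_A_IMAGES:
        if base <= addr < base + IMAGE_SIZE:
            return addr - base
    raise ValueError("address is not inside an image of block A")

def alternate(addr):
    """The other image of the same entry, used to force the next read to miss."""
    off = buffer_offset(addr)
    a, b = BLOCK_A_IMAGES
    return (b if addr < b else a) + off

print(alternate(1234), buffer_offset(1234) == buffer_offset(alternate(1234)))
# 3234 True
```

Both addresses resolve to the same offset in block A, so either read returns the same header entry, while their differing cache tags guarantee the alternate read misses.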
When the receiving processor 5 reads the address area 1000-2000 mapped in the cache memory 51, the packet header of buffer block A has already been loaded into the cache memory 51. When a new packet header arrives and the content of buffer block A is updated before being loaded into the internal cache memory 51 (or while absent from the cache memory 51), the receiving processor 5 can read the address area 3000-4000 instead, causing its cache system to find a cache-miss state and forcing the receiving processor 5 to request the updated packet header data from buffer block A of the header buffer 6 in the host channel adapter 1.
In this way, buffer block A and buffer block B are mapped in turn to different address areas of the memory space 1000-6000, and the receiving processor 5 reads the contents of the address range 1000-6000 in sequence, prompting the data in the cache memory 51 to be updated in turn and enabling fast packet processing.
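The sequential scan just described can be sketched as a walk over the mapped regions. The region-to-block layout below is an assumption for illustration — the patent does not fix the exact partition of addresses 1000-6000 between blocks A and B:

```python
REGION_TO_BLOCK = {             # hypothetical base-address -> block layout
    1000: "A", 2000: "B", 3000: "A", 4000: "B", 5000: "A",
}

def sequential_walk():
    """Reading region bases in ascending address order alternates between
    blocks A and B, so each revisit of a block arrives through a fresh tag
    (a forced miss) and picks up that block's latest content."""
    return [REGION_TO_BLOCK[base] for base in sorted(REGION_TO_BLOCK)]

print(sequential_walk())        # ['A', 'B', 'A', 'B', 'A']
```

A plain ascending scan of the memory space thus keeps both blocks' cached copies fresh without any explicit invalidation.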
The structure and method for updating cache memory data of the present invention have several advantages and features. The invention exploits a characteristic of cache control: it is known that certain addresses will miss the cache when the local processor accesses them, so part of the blocks of the header buffer are forcibly mapped to a plurality of different address areas of a memory space. When the local processor handles the packet header of such a block, it reads different address areas that are expected to miss, which address the same memory block to be loaded into the cache memory; it thus obtains the updated packet header and can correctly process incoming packets, effectively increasing packet-processing speed.
Although the present invention has been described above with preferred embodiments, those skilled in the art may modify the invention described herein while still obtaining its effects. The above description is therefore to be understood by those skilled in the art as a broad disclosure, and its content is not intended to limit the present invention.

Claims (10)

1. A structure for updating cache memory data, applied to the cache system of a local processor to access the received data of a host channel adapter, through which host channel adapter a host central processing unit can be attached to an InfiniBand network, characterized by comprising:
a buffer memory for temporarily storing the received data, the aforementioned buffer memory being divided into several buffer blocks;
a cache memory, located inside the aforementioned local processor, which addresses the aforementioned buffer blocks by mapping them into a memory space; and
a data-loading mechanism, which maps each aforementioned buffer block to a plurality of different address areas of the aforementioned memory space, so that when the local processor reads a different address area associated with an aforementioned buffer block, the cache memory takes a cache miss and loads the data of that buffer block, whereby the aforementioned cache memory obtains the updated received header.
2. The structure for updating cache memory data as claimed in claim 1, characterized in that: the received data temporarily stored in said buffer memory is a packet header.
3. The structure for updating cache memory data as claimed in claim 1, characterized in that: the buffer blocks of said buffer memory are mapped into the aforementioned memory space in turn.
4. The structure for updating cache memory data as claimed in claim 3, characterized in that: when said local processor reads the addresses of said memory space in sequence, the cache memory takes a cache miss and loads the updated received data of the buffer block.
5. A structure for updating cache memory data, applied to the cache system of a processor, characterized by comprising:
an external memory for temporarily storing the data received by the processor, the aforementioned external memory being divided into several buffer blocks;
a cache memory, located inside the aforementioned processor, which addresses the aforementioned buffer blocks by mapping them into a memory space; and
a data-loading mechanism, which maps each aforementioned buffer block into the aforementioned memory space in turn, so that when the processor reads the addresses of the aforementioned memory space in sequence, the cache memory takes a cache miss and loads the updated received data of that buffer block.
6. The structure for updating cache memory data as claimed in claim 5, characterized in that: a buffer block of said external memory is forcibly mapped to a plurality of different address areas of said memory space.
7. The structure for updating cache memory data as claimed in claim 6, characterized in that: said processor reads the different address areas associated with said buffer block, so that the cache memory takes a cache miss and then loads the updated received data of that buffer block.
8. A method of updating cache memory data, applied to the cache system of a processor, comprising:
dividing the external memory of the aforementioned processor into several buffer blocks, to temporarily store the data received by the processor;
addressing different address areas of a memory space to the same buffer block of the aforementioned external memory; and
when the processor reads the aforementioned different address areas, causing the cache system to take a cache miss and, according to the addressing of the aforementioned different address areas, to load the same buffer block of the aforementioned external memory, thereby obtaining the updated data of that buffer block.
9. A method of updating cache memory data, applied to the cache system of a processor, comprising:
dividing an external memory into several buffer blocks, to temporarily store the data received by the processor;
mapping the aforementioned buffer blocks into a memory space in turn; and, when the processor reads the aforementioned memory space in sequence, causing the cache system to take a cache miss, loading the contents of the aforementioned buffer blocks in turn, and thereby obtaining the updated data of the buffer blocks.
10. A method of updating cache memory data, applied to the cache memory of a processor, comprising:
dividing an external memory into several buffer blocks, to temporarily store the data received by the processor;
mapping the aforementioned buffer blocks in turn to memory address ranges that the cache memory can address; and
when the processor reads the aforementioned memory address ranges in sequence, causing a cache miss to occur in the cache memory, loading the data of the aforementioned buffer blocks in turn, and thereby obtaining the updated data of the buffer blocks.
CNB021462194A 2002-10-16 2002-10-16 Device and method for caching memory update data Expired - Lifetime CN1225700C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB021462194A CN1225700C (en) 2002-10-16 2002-10-16 Device and method for caching memory update data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB021462194A CN1225700C (en) 2002-10-16 2002-10-16 Device and method for caching memory update data

Publications (2)

Publication Number Publication Date
CN1405685A true CN1405685A (en) 2003-03-26
CN1225700C CN1225700C (en) 2005-11-02

Family

ID=4751029

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB021462194A Expired - Lifetime CN1225700C (en) 2002-10-16 2002-10-16 Device and method for caching memory update data

Country Status (1)

Country Link
CN (1) CN1225700C (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1928841B (en) * 2005-09-08 2010-10-13 松下电器产业株式会社 Cache memory analyzing method
CN102455879A (en) * 2010-10-21 2012-05-16 群联电子股份有限公司 Memory storage device, memory controller and method for automatically generating filling file
CN102455879B (en) * 2010-10-21 2014-10-15 群联电子股份有限公司 Memory storage device, memory controller and method for automatically generating filling files

Also Published As

Publication number Publication date
CN1225700C (en) 2005-11-02

Similar Documents

Publication Publication Date Title
US10042804B2 (en) Multiple protocol engine transaction processing
CN1277216C Method and apparatus for scalable disambiguated coherence in shared storage hierarchies
CN1143230C (en) Device and method for partition memory protection in multiprocessor system
US20030233388A1 (en) Transaction management in systems having multiple multi-processor clusters
US20030225938A1 (en) Routing mechanisms in systems having multiple multi-processor clusters
US7069392B2 (en) Methods and apparatus for extended packet communications between multiprocessor clusters
US20090287902A1 (en) Distributed computing system with universal address system and method
US20190087352A1 (en) Method and system transmitting data between storage devices over peer-to-peer (p2p) connections of pci-express
JP2002519785A (en) Split directory-based cache coherency technique for multiprocessor computer systems
WO2014142473A1 (en) Key value-based data storage system and operation method thereof
CN106843772A (en) A kind of system and method based on uniformity bus extension nonvolatile memory
US8930640B2 (en) Multiprocessor computer system with reduced directory requirement
CN1991795A (en) System and method for information processing
CN1018098B (en) Microprocessor Bus Interface Unit
CN116795736A (en) Data pre-reading method, device, electronic equipment and storage medium
CN116414563A (en) Memory control device, cache consistency system and cache consistency method
US7653790B2 (en) Methods and apparatus for responding to a request cluster
CN100339836C (en) Method, system and apparatus for memory-based data transfer
CN1860452A (en) Methods and apparatus for providing early responses from a remote data cache
US20040268052A1 (en) Methods and apparatus for sending targeted probes
CN1405685A (en) Structure and method for updating data in cache memory
US7114031B2 (en) Structure and method of cache memory data update
US5895496A System for and method of efficiently controlling memory accesses in a multiprocessor computer system
JP2005323359A (en) Method and apparatus for providing interconnected network functionality
CN1464415A (en) Multi-processor system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20051102