
HK1030293A - Hierachical prefetch for semiconductor memories - Google Patents


Info

Publication number
HK1030293A
HK1030293A (application HK01100999.0A)
Authority
HK
Hong Kong
Prior art keywords
semiconductor memory
data
prefetch
stage
bit
Prior art date
Application number
HK01100999.0A
Other languages
Chinese (zh)
Inventor
B. Ji
T. Kirihata
G. Mueller
D. Hanson
Original Assignee
Infineon Technologies North America Corp.
International Business Machines Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infineon Technologies North America Corp. and International Business Machines Corporation
Publication of HK1030293A publication Critical patent/HK1030293A/en

Description

Hierarchical prefetch for semiconductor memory
The present disclosure relates to semiconductor memories, and more particularly, to a hierarchical prefetch method and apparatus for increasing the overall data rate or bandwidth of a semiconductor memory.
Dynamic Random Access Memory (DRAM) is used to store large amounts of digitally encoded information in a variety of electronic systems. The data rate of DRAMs has become important as microprocessors operate at ever higher clock rates: to remain synchronized with the microprocessor, DRAM devices need extremely high read and write data rates. The data rate of a DRAM is limited by its access speed from address input to data input/output, since each transfer requires a signal to pass through many circuits: receivers, drivers, decoders and sense amplifiers. Without process improvements that yield faster devices, it is not easy to increase this access speed.
Therefore, many circuit techniques have been developed to increase the data rate. One such technique, known as "prefetching," is disclosed in U.S. Patent No. 5,285,421, issued February 8, 1994, entitled "Eliminating Page Bound Restrictions on Initial Accesses to a Serially Connected Access Memory," and in U.S. Patent No. 5,392,239, issued February 21, 1995 to Margulis et al., entitled "Burst Mode DRAM."
"Prefetch" techniques take advantage of burst access patterns by latching into a register, in addition to the data corresponding to the specified address, additional data for the subsequent burst. With prefetching, only a starting address is received externally; subsequent addresses are generated internally by the DRAM. Internal address generation is much faster than receiving subsequent addresses externally, so access for the remainder of the burst is substantially improved if the subsequent data is already available. By storing the additionally fetched data in the register as a prefetch, that data can be accessed as soon as the subsequent address is generated. In this way, the total time to complete a long sequential access is reduced, increasing the data rate of the burst access mode by up to the prefetch depth.
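The mechanism described above can be sketched as a toy model (hypothetical and illustrative only; the class and names are invented for this sketch): one slow array access latches a block of sequential bits into a prefetch register, after which the burst is served from the register using internally generated addresses.

```python
# Toy model of a prefetch register (illustrative; names are invented).
class PrefetchedMemory:
    def __init__(self, data, depth=2):
        self.data = data          # backing array (the DRAM cells)
        self.depth = depth        # prefetch depth
        self.register = []        # prefetch register
        self.base = None          # base address of the latched block

    def read(self, addr):
        """Return the bit at addr, serving from the prefetch register when possible."""
        if self.base is None or not (self.base <= addr < self.base + self.depth):
            # Slow path: one array access latches `depth` sequential bits.
            self.base = addr - (addr % self.depth)
            self.register = self.data[self.base:self.base + self.depth]
        # Fast path: serve from the register (internally generated address).
        return self.register[addr - self.base]

mem = PrefetchedMemory([0, 1, 1, 0, 1, 0, 0, 1], depth=2)
burst = [mem.read(a) for a in range(4)]   # only two array accesses for four bits
```

With a depth of 2, a 4-bit burst costs two array accesses instead of four, which is the doubling of data rate the text attributes to a 2-bit prefetch.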
A data rate of 200 Mb/s or higher can be achieved in a 256Mb DRAM with a 2-bit prefetch. The prior art routes two read/write driver (RWD) bus lines to each DQ (input/output pin). This doubles the data rate relative to no prefetching. However, increasing the prefetch depth in this way incurs a high cost in chip size overhead.
Accordingly, there is a need for a hierarchical prefetch method and apparatus for increasing data rate or bandwidth while maintaining low chip size overhead for semiconductor memories.
A semiconductor memory according to the present invention includes a data path comprising a plurality of hierarchical levels, each level having a bit data rate different from the other levels. At least two prefetch circuits are disposed between the stages. The at least two prefetch circuits include at least two latches for receiving data bits and storing them until the next stage in the hierarchy is able to receive them. The at least two prefetch circuits are connected between the stages such that the total data rate of each stage is substantially the same. A control signal controls the at least two latches so that the prefetch circuits maintain the total bit transfer rate between the stages.
In alternative embodiments, the prefetch circuitry preferably has an 8-bit depth. The plurality of stages may include a first stage at a lower hierarchical level and a second stage at a higher hierarchical level, with the prefetch circuit between the two stages having a depth greater than or equal to the quotient of the bit rate of the first stage divided by the bit rate of the second stage, rounding any fraction up to the next integer. A stage may include one of a sense amplifier and a first-in/first-out circuit. The semiconductor memory preferably provides an overall data rate greater than 400 megabits/second. The hierarchy may be configured using hierarchical data lines and read/write drivers on the memory cell array. The total data rate between stages may be calculated by multiplying the prefetch depth by the bit data rate of the stage.
The semiconductor memory chip includes a memory array having partitions, each partition having four quadrants, each quadrant including odd and even columns of memory cells. The data path associated with each quadrant includes local data lines for transmitting memory data. The local data lines are coupled to a first stage comprising a first sense amplifier circuit, which is coupled through main data lines to a second stage comprising a second sense amplifier circuit. The second stage is connected through read/write driver lines to a third stage comprising a first-in/first-out/off-chip driver circuit, which is connected to the input/output leads. At least two latches are provided within the stages to give prefetch capability to data transmitted through the data path; the latches receive data bits and store them until the next stage in the data path is able to receive them. The latches are associated with the stages such that the total data rate between stages is substantially equal to the required data rate of each stage. A control signal controls the latch circuits to provide the prefetch capability that maintains the data rate between the stages.
In alternative embodiments, the semiconductor memory chip preferably has a prefetch depth of 8 bits. The prefetch depth may be allocated as 4 bits at the second stage and 2 bits at the third stage, or as 2 bits at the first stage, 2 bits at the second stage, and 2 bits at the third stage. The prefetch depth has a value greater than or equal to the quotient of the bit rate of one level divided by the bit rate of another level, rounding any fraction up to the next integer. The semiconductor memory chip preferably provides an overall data rate greater than 400 megabits/second. The control signals may include pointer signals for transferring data between stages in the correct burst sequence. The semiconductor memory chip is preferably a synchronous DRAM chip. The second stage may include a switch for activating the second stage, the control signal including a pointer signal for activating and deactivating the switch; likewise, the third stage may include a switch for activating the third stage, the control signal including a pointer signal for activating and deactivating that switch. The bit rate of the first stage is about 20 ns per bit, the bit rate of the second stage is between 10 ns and 20 ns per bit, and the bit rate of the third stage is approximately 5 ns per bit. The semiconductor memory chip may further include a control circuit for incrementing an address from an odd or even start address to provide the sequential addresses that generate the control signals, or for formulating an address from the odd or even start address to provide interleaved addresses for generating the control signals. The total data rate between levels may be determined by multiplying the prefetch depth by the bit data rate of the level.
These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
The disclosure is illustrated in detail in the following description of preferred embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram representing a memory circuit having a hierarchical prefetch data path in accordance with the present invention;
FIG. 2 is a block diagram representation of another embodiment of the memory circuit of FIG. 1 with hierarchical prefetch data paths in accordance with the present invention;
FIG. 3 is a block diagram in general form of a hierarchical prefetch circuit according to the present invention;
FIG. 4 is a schematic/block diagram of a 1Gb SDRAM chip employing hierarchical prefetch with FIFOs in accordance with the present invention; and
FIG. 5 is a schematic diagram of an SSA and FIFO control circuit according to the present invention.
The present invention relates to semiconductor memories, and more particularly to a hierarchical prefetch method and apparatus for increasing the data rate or bandwidth of a semiconductor memory with low chip size overhead. Increasing the operating frequency and bandwidth of Dynamic Random Access Memories (DRAMs) has attracted considerable attention in current designs. In the present invention, hierarchical prefetching of read and write accesses on the column/data path of a DRAM is disclosed. The invention implements two or more prefetch stages hierarchically in multiple circuit blocks on the column/data path, e.g., FIFO/OCD (first-in/first-out/off-chip driver), SSA (second sense amplifier), SA (sense amplifier), etc. According to the present invention, the bit data rate is optimized for each hierarchical data path level using hierarchical prefetching, which increases the total data rate with small design size overhead.
One aspect of the present invention is to set an optimized prefetch at each hierarchical data path level, where the optimal prefetch depth for a given level is selected so that the level does not become the bottleneck of the data path. One example implementation is an 8-bit hierarchical prefetch for synchronous DRAMs (SDRAMs), where a 2-bit prefetch at the FIFO/OCD directs RWDe and RWDo to each DQ, and a 4-bit prefetch at the SSAs directs MDQ<0:3> to each RWD. Other chips that may be used with the present invention include Rambus DRAMs (RDRAMs) and SyncLink DRAMs (SLDRAMs). Sequential and interleaved bursts with odd or even start addresses are supported according to the invention.
Referring now in detail to the drawings, in which like numerals represent the same or similar components throughout the several views, and beginning with FIG. 1, a hierarchical prefetch scheme for the column/data path of a high density memory device 10 is schematically depicted. Various circuit blocks on the column data path, such as one or more FIFO/OCDs, SSAs, SAs, etc., provide the prefetching scheme. The advantages of the invention are described in more detail below.
Hierarchical prefetching according to the present invention provides an increased data rate with little or no increase in chip size overhead. Advantageously, this is achieved by providing prefetch at two or more hierarchical levels of the data path: the local data lines (LDQ), main data lines (MDQ), read/write drivers (RWD), and off-chip drivers (OCD), i.e., the data outputs. Since no individual hierarchical level limits the data rate, the overall data rate of the read/write column/data path improves, yielding an optimal total data rate. The prefetch scheme is determined using circuit delay information. Prefetching is designed into the system, for example into a DRAM or SDRAM chip, to minimize the delay at the bottleneck, i.e., at the slowest circuit, so that it keeps up with the circuits before and after it. Note that without prefetching, if one data path stage, such as the SSA, is slower than the other stages, the chip data rate cannot be increased. This is an example of a bottleneck.
Unlike conventional single-level prefetching, hierarchical prefetching in different circuit blocks (at different locations) also saves routing/layout area and power consumption. FIG. 1 schematically shows an example of an 8-bit hierarchical prefetch circuit 12 in which a 4-bit prefetch is performed in the Second Sense Amplifier (SSA) double latches to support one RWD per 4 MDQs (note that the SSA is itself a latch; preferably an additional latch is added to facilitate prefetching), and a 2-bit prefetch is performed in the FIFO (first-in/first-out)/OCD double latches to support one DQ per 2 RWDs. For a read access, multiple SSAs are activated at the same time by the read command (FIG. 1 shows 8 SSAs for each DQ; other counts could be used, e.g., 4 or 16), the array is read, and the data is held in the SSA master-slave double latches. Predecoded pointer control signals (see FIG. 5) then transfer the SSA double-latch data to multiple RWDs (FIGS. 1 and 2 schematically show two RWDs for each DQ). Finally, the data arriving as two 4-bit packets on RWDe and RWDo is latched into the OCD/FIFO double latches and directed to the DQ leads by control pointers (see FIG. 5) in an 8-bit burst operation, thus performing an 8-bit prefetch. A write operation proceeds in the opposite direction.
In the example shown in FIG. 1, since a 2-bit prefetch occurs at the FIFO/OCD, if the goal is a total data rate of about 400 Mb/s (2.5 ns per bit), then the RWD bit (transmission) rate need only be about 5 ns per bit. Since an additional 4-bit prefetch is performed at the SSA, the data (transfer) rate required of the MDQ is only about 20 ns per bit, which is preferably also the access time for the LDQ. Thus, the RWD, MDQ and LDQ per-bit (transmission) rates are relaxed by factors of 2, 8 and 8 relative to the DQ data rate (i.e., 5/2.5, 20/2.5 and 20/2.5), respectively. The per-bit rates in this example are illustrative only and not limiting; the prefetching scheme can be extended to achieve higher or lower data rates, depending on the circuit design. Note that this two-level 8b hierarchical prefetch is more area-efficient than a conventional single-level 8b prefetch, since a single-level 8b prefetch requires four times the RWDs per DQ (8 instead of 2).
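The timing arithmetic in this example can be checked directly. The figures below are the ones quoted in the text (2.5 ns/bit at the DQ, 5 ns at the RWD, 20 ns at the MDQ/LDQ), and the invariant is that prefetch depth divided by bit time gives the same throughput at every level, so no level is a bottleneck.

```python
# Throughput bookkeeping for the FIG. 1 example (values from the text).
stages = {
    # name: (prefetch depth seen at this level, bit time in ns)
    "DQ":  (1, 2.5),   # target: ~400 Mb/s per pin
    "RWD": (2, 5.0),   # 2-bit prefetch at the FIFO/OCD
    "MDQ": (8, 20.0),  # additional 4-bit prefetch at the SSA (2 x 4 = 8 total)
    "LDQ": (8, 20.0),
}

def throughput_mbps(depth, bit_time_ns):
    # `depth` bits are delivered every `bit_time_ns` nanoseconds
    return depth / bit_time_ns * 1000.0

rates = {name: throughput_mbps(d, t) for name, (d, t) in stages.items()}
# Every level sustains the same 400 Mb/s.
```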
Referring to FIG. 2, another illustrative example of an 8-bit hierarchical prefetch circuit 112 for a semiconductor memory 110 is shown, in which a 2-bit prefetch is performed at the SA double latch (the SA latch plus one additional latch), a 2-bit prefetch at the SSA double latch (the SSA latch plus one additional latch), and a 2-bit prefetch at the FIFO/OCD double latch. Thus, the RWD, MDQ and LDQ per-bit data (transmission) rates are relaxed by factors of 2, 4 and 8 relative to the total data rate of about 400 Mb/s (2.5 ns per bit), i.e., 5/2.5, 10/2.5 and 20/2.5, respectively.
The 8-bit hierarchical prefetch architecture of the present invention supports sequential and interleaved burst operations with any starting address. The transfer of bits between prefetch stages is controlled by pointer signals generated by pointer control circuit 270 (FIG. 5). The pointers are designated PNTe<0:3> and PNTo<0:3> (FIG. 4). PNTe<0:3> preferably increments from address 0, and PNTe<0:3> and PNTo<0:3> are used together to transfer data from the SSAs to the FIFOs in the correct burst sequence, allowing, for example, a sequential burst starting from an odd address (0+1); an even starting address may also be used. If address n is the starting address (odd or even), the address is incremented by 1 (e.g., n+1) up to the prefetch depth (e.g., n+7 for an 8-bit prefetch) to provide the next address information for the pointers. This is used for sequential bursts. For interleaved bursts, the next address within the prefetch depth range (n+7 for an 8-bit prefetch) is determined by a formula that selects the address of the next bit based on the starting address (odd or even). The control circuit 270 preferably includes logic circuitry for generating the addresses for the pointer signals.
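The two address orderings can be sketched as below. This assumes the conventional SDRAM conventions (wrapping within the burst length for sequential mode, XOR of the bit offset for interleaved mode); the patent text itself only states that addresses increment up to the prefetch depth or follow "a formula," so the exact formulas here are an assumption.

```python
def sequential_burst(start, length=8):
    """Sequential burst: increment from the start address, wrapping
    within the burst length (conventional SDRAM behavior, assumed here)."""
    return [(start + i) % length for i in range(length)]

def interleaved_burst(start, length=8):
    """Interleaved burst: XOR the bit offset into the start address,
    the usual SDRAM interleaved-mode formula (assumed here)."""
    return [start ^ i for i in range(length)]

seq = sequential_burst(1)    # odd starting address (0+1)
ilv = interleaved_burst(1)
```

Both orderings visit all eight burst addresses exactly once, which is why either can drive the PNTe/PNTo pointer sequence.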
Referring to FIG. 3, a generalized data path circuit 212 according to the present invention is shown, comprising three data path stages A-C. Stages A-C have different per-bit data rates/signal times a, b and c, respectively. One design goal is to have the chip data rate meet the per-bit rate c with little design overhead. Note that the data rate of each stage is the product of the rate per signal path and the number of signal paths, i.e., the prefetch depth. The goal is achieved by setting a prefetch of depth m between stages A and B, where m is the quotient of per-bit time a divided by per-bit time b, with any fractional part rounded up to the next integer, and a prefetch of depth n between stages B and C, where n is the quotient of per-bit time b divided by per-bit time c, likewise rounded up. m and n are preferably multiples of 2 and can be adjusted accordingly. To change the prefetch depth of each level, pointers are designed to correspond to the prefetch depth; this is an important aspect of the present invention and is described in detail below. The pointer signals are provided by control circuit 214, which synchronizes the latches included in the prefetch circuitry to latch the data sequentially, allowing optimal timing and increasing the overall data rate of the data path.
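The depth rule for stages A-C can be written out directly. Taking the per-bit times a = 20 ns (stage A), b = 5 ns (stage B) and c = 2.5 ns (stage C) from the earlier example, m and n come out to 4 and 2, giving the 8-bit total:

```python
import math

def prefetch_depth(slow_ns_per_bit, fast_ns_per_bit):
    """Minimum prefetch depth between two adjacent stages: the quotient of
    the slower stage's bit time by the faster stage's, rounded up."""
    return math.ceil(slow_ns_per_bit / fast_ns_per_bit)

# Per-bit times for stages A, B, C (example values from the text).
a, b, c = 20.0, 5.0, 2.5
m = prefetch_depth(a, b)   # depth between stages A and B
n = prefetch_depth(b, c)   # depth between stages B and C
total = m * n              # overall hierarchical prefetch depth
```

Rounding up (rather than to the nearest integer) guarantees the slower stage never starves the faster one.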
Referring now to FIG. 4, an illustrative implementation of an 8-bit hierarchical prefetch for a 1Gb SDRAM 200 is shown. The 1Gb SDRAM 200 includes four 256Mb quadrants or partitions 202. The two partitions 204 and 206 in the left half of the chip are associated with the 8 DQs at the left edge of the chip, and the two partitions 208 and 210 in the right half are associated with the 8 DQs at the right edge. Each partition 202 is further divided into quadrants 201 (64Mb units each) that are logically divided into even 212 and odd 214 32Mb column address areas. Each of the regions 212 and 214 includes 8 sets of 4 even MDQs (MDQe<0:3>) or 8 sets of 4 odd MDQs (MDQo<0:3>), respectively. In the illustrative circuit, each set of MDQe<0:3> and MDQo<0:3> supports the 8 burst bits of the corresponding DQ as a hierarchical 8b prefetch; using 8 sets of MDQe<0:3> and MDQo<0:3> per column access, 64 bits (8 burst bits x 8 DQs) may be read or written simultaneously. One of the four pointers (PNTe<0:3> for MDQe<0:3> and PNTo<0:3> for MDQo<0:3>) then selects two of the 8 burst bits on MDQe<0:3> and MDQo<0:3>, so that two consecutive burst bits are transferred simultaneously to the corresponding RWDe and RWDo. For example, PNTe<0:3> increments from address 0, allowing a sequential burst sequence starting with an odd address (0+1). Alternatively, a formula for selecting addresses may be used for an interleaved burst sequence. When the FIFO input pointer (PNTI) is enabled, the two even and odd bits on RWDe and RWDo are prefetched into two first-in/first-out circuits (FIFO0 and FIFO1). The actual PNTI includes a transition so that the even and odd bits are stored as the first and second burst bits within FIFO0 and FIFO1, respectively. The 8 RWDe and 8 RWDo lines for the 8 DQs are routed through the center of each column decoder (CDEC) and shared with the adjacent 64Mb unit, eliminating 32 wires and saving about 75 μm per chip.
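The bit counts in this organization multiply out as follows (all figures taken from the text; the variable names are ours):

```python
# Bits moved per column access in the FIG. 4 organization.
dqs = 8                  # DQs served per chip half
burst_bits_per_dq = 8    # hierarchical 8b prefetch
even_mdq_per_set = 4     # MDQe<0:3>
odd_mdq_per_set = 4      # MDQo<0:3>
sets_per_access = 8      # one MDQe/MDQo set pair per DQ

bits_per_access = sets_per_access * (even_mdq_per_set + odd_mdq_per_set)
# 64 bits per column access, i.e. 8 burst bits for each of 8 DQs.
```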
Row decoders (RDECs) are also shown. This hierarchical 8b prefetch architecture reduces the array and data path frequencies to 1/8 and 1/2 of the I/O frequency, respectively, increasing the column burst rate above 400 Mb/s per DQ.
Referring to FIG. 5, one example of a circuit for the data path from the MDQs to the DQ of FIG. 1 is shown. For an 8-bit burst, all of PNTe<0:3> and PNTo<0:3> are activated. In other embodiments, fewer pointers are activated for shorter bursts: for a 4-bit burst, two of PNTe<0:3> and two of PNTo<0:3> are activated, and for a 2-bit burst, one of each. A chip clock signal may be used to generate PNTe<0:3> and PNTo<0:3>, or they may be generated as feedback from the data path circuitry. The pointer signals are generated by pointer control circuit 270, which outputs PNTe<0:3> and PNTo<0:3> to activate circuits 250 and 251. PNTe<0:3> and PNTo<0:3> are used together to transfer data from the SSAs to FIFO latch circuits 258 and 259 in the correct burst sequence. The pointer preferably increments from address 0, allowing a sequential burst sequence starting with an odd address (an even address may also be used as the starting address). If address n is the starting address (odd or even), the address is incremented by 1 (e.g., n+1) up to the prefetch depth (e.g., n+7 for an 8-bit prefetch) to provide the next internal address for the pointer. This is used for sequential bursts. For interleaved bursts, the next internal address is determined by a formula that selects the address of the next pointer based on the starting address (odd or even). The start address information is input to control circuit 270, which generates the pointer addresses according to the sequential or interleaved scheme.
The circuit shown in FIG. 5 is divided into odd-column and even-column circuits. Each circuit 251 represents an SSA, at least one latch 253, and at least one switch 255 for activating the circuit 251. The even-column circuit 250 likewise includes an SSA, at least one latch 252, and at least one switch 254 for enabling the circuit 250. Each of the circuits 250 and 251 takes an MDQ line and its complement as inputs; in this embodiment, 8 SSAs and 8 MDQ/complement pairs are included. The SSAs include an additional latch beyond latches 252 and 253. Switches 254 and 255 enable circuits 250 and 251 for data transfer according to the sequential burst sequence. The second sense amplifier enable signal SSAE activates the SSAs and is also used for synchronous data transfer. The additional latches 252 and 253, which store data in the SSAs (a portion of which is shown), help buffer the data until the FIFO/OCD is ready to receive and transfer it within a 4-bit burst. In this way, circuits 250 and 251 and latches 252 and 253, controlled by the pointer signals PNTe<0:3> and PNTo<0:3>, implement a 4-bit prefetch. The transfer of data from circuits 250 and 251 continues through RWDo and RWDe, which include latches 256 and 257.
The control signals PNTI, PNTO<0> and PNTO<1> alternate the transfer of data through FIFO latches 258 and 259 and are preferably controlled by the pointer signals PNTo and PNTe. A switch 260 (the FIFO output switch), controlled by PNTO<0> and PNTO<1>, enables and disables the transfer of data through the FIFOs to provide a 2-bit prefetch. PNTI may be provided by the control circuit 270 or from another source. Using the structure shown in FIG. 5, an 8-bit prefetch is thus achieved between the MDQs and the DQ according to the invention. The structure of FIG. 5 may be extended to provide deeper prefetching, and the circuit may also be used in the embodiment shown in FIG. 2 as well as in other circuits.
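The 2-bit FIFO/OCD stage can be sketched as a ping-pong pair of latches. This is a hypothetical behavioral simulation, not the actual circuit: PNTI-like input events load an even/odd bit pair from RWDe/RWDo, and an alternating output select (the PNTO<0>/PNTO<1> analog) drains FIFO0 then FIFO1 to reconstruct the serial burst.

```python
from collections import deque

class FifoPrefetch:
    """Toy model of the 2-bit FIFO/OCD prefetch stage: bit pairs arrive on
    RWDe/RWDo together, and bits leave one at a time toward the DQ pad."""
    def __init__(self):
        self.fifo0 = deque()  # even burst bits (from RWDe)
        self.fifo1 = deque()  # odd burst bits (from RWDo)
        self.toggle = 0       # output select (PNTO<0>/PNTO<1> analog)

    def load(self, even_bit, odd_bit):
        # One PNTI event latches a pair from RWDe/RWDo.
        self.fifo0.append(even_bit)
        self.fifo1.append(odd_bit)

    def drain(self):
        # Alternate FIFO0/FIFO1 to reconstruct the serial burst order.
        out = []
        while self.fifo0 or self.fifo1:
            src = self.fifo0 if self.toggle == 0 else self.fifo1
            out.append(src.popleft())
            self.toggle ^= 1
        return out

f = FifoPrefetch()
for pair in [(0, 1), (1, 0), (0, 0), (1, 1)]:  # four even/odd packets
    f.load(*pair)
serial = f.drain()  # 8-bit burst serialized onto the DQ
```

The output interleaves the even-bit stream with the odd-bit stream, which is exactly the 2:1 serialization the text attributes to switch 260.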
Having described preferred embodiments for hierarchical prefetching for semiconductor memories (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above description. It is therefore to be understood that changes may be made in the particular embodiments of the invention disclosed which are within the scope and spirit of the invention as outlined by the appended claims. Having thus described the invention with the details and particularity required by the patent laws, what is claimed and desired protected by letters patent is set forth in the appended claims.

Claims (23)

1. A semiconductor memory, comprising:
a data path comprising a plurality of hierarchical memory levels, each level comprising a different bit rate than the other levels;
at least two prefetch circuits disposed between the stages, the at least two prefetch circuits including at least two latches for receiving data bits and storing the data bits until a next stage in the hierarchy is able to receive the data bits, the at least two prefetch circuits disposed between the stages such that a total data rate between the stages is substantially the same; and
a control signal for controlling the at least two latches so that the prefetch circuitry can maintain the overall data rate between the stages.
2. The semiconductor memory according to claim 1, wherein the prefetch circuit has a depth of 8 bits.
3. The semiconductor memory as recited in claim 1, wherein the plurality of stages includes a first stage at a lower level and a second stage at a higher level, with a prefetch circuit between the two stages having a depth greater than or equal to a quotient of a bit rate of the first stage divided by a bit rate of the second stage, rounding any decimal number to a nearest integer.
4. The semiconductor memory as recited in claim 1, wherein the stage includes one of a sense amplifier and a first-in/first-out/off-chip driver circuit.
5. The semiconductor memory of claim 1, wherein the total data rate is greater than 400 megabits per second.
6. The semiconductor memory as recited in claim 1, wherein hierarchical levels are configured using hierarchical data lines on the memory cell array and the read/write drivers.
7. The semiconductor memory according to claim 1, wherein an overall data rate between stages is determined by multiplying a prefetch depth by a bit data rate of the stage.
8. A semiconductor memory chip comprising:
a memory array having partitions, each partition having four quadrants, each quadrant including an odd column and an even column of memory cells;
a data path associated with each quadrant, each quadrant including a local data line for transmitting memory data, the local data line connected to a first stage including a first sense amplifier circuit, the first stage connected to a second stage including a second sense amplifier circuit through a main data line, the second stage connected to a third stage including a first in/first out/off line driver circuit through a read/write driver line, the first in/first out/off line driver circuit connected to the input/output lead;
at least two latch circuits disposed within the stages for providing a prefetch capability for transferring data through the data paths, the at least two latch circuits for receiving data bits and storing the data bits until a next stage in the data paths is able to receive the data bits, the at least two latch circuits being associated with the stages such that a data rate between the stages is substantially equal to a required data rate for each stage; and
a control signal for controlling the at least two latch circuits to provide a prefetch capability to maintain a data rate between the stages.
9. The semiconductor memory chip of claim 8, wherein the latch circuit provides an 8-bit prefetch depth.
10. The semiconductor memory chip as recited in claim 9, wherein the prefetch depth is allocated as 4 bits at the second stage and 2 bits at the third stage.
11. The semiconductor memory chip as recited in claim 9, wherein the prefetch depth is allocated as 2 bits at the first stage, 2 bits at the second stage, and 2 bits at the third stage.
12. The semiconductor memory chip as recited in claim 8, wherein the prefetch depth is a value greater than or equal to a quotient of a bit rate of one level divided by a bit rate of another level in a case where any decimal is rounded to a nearest integer.
13. The semiconductor memory chip of claim 8, wherein the semiconductor memory chip comprises a total data rate greater than 400 megabits per second.
14. The semiconductor memory chip of claim 8, wherein the control signal comprises a pointer signal for transferring data between stages in the correct burst sequence.
15. The semiconductor memory chip according to claim 8, wherein the semiconductor memory chip is one of a synchronous DRAM chip, a rambus DRAM chip, and a SyncLink DRAM chip.
16. The semiconductor memory chip of claim 8, wherein the second stage includes a switch for enabling the second stage, and the control signal includes a pointer signal for activating and deactivating the switch.
17. The semiconductor memory chip of claim 8, wherein the third stage includes a switch for activating the third stage, and the control signal includes a pointer signal for activating and deactivating the switch.
18. The semiconductor memory chip of claim 8, wherein the first level bit data rate is about 20ns per bit.
19. The semiconductor memory chip of claim 8, wherein the bit rate of the second level is between about 10ns per bit to about 20ns per bit.
20. The semiconductor memory chip of claim 8, wherein the third level data rate is about 5ns per bit.
21. The semiconductor memory chip of claim 8, further comprising a control circuit for incrementing an address from one of an even and odd starting address to provide a sequential address for generating the control signal.
22. The semiconductor memory chip of claim 8, further comprising a control circuit for formulating an address from one of an even and odd starting address to provide an interleaved address for generating the control signal.
23. The semiconductor memory chip according to claim 8, wherein an overall data rate between stages is determined by a product of a prefetch depth and a bit data rate of the stage.
HK01100999.0A 1999-02-11 2001-02-12 Hierachical prefetch for semiconductor memories HK1030293A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US60/119,713 1999-02-11
US09/333,539 1999-06-15

Publications (1)

Publication Number Publication Date
HK1030293A true HK1030293A (en) 2001-04-27


Similar Documents

Publication Publication Date Title
US6081479A (en) Hierarchical prefetch for semiconductor memories
US5883855A (en) High speed semiconductor memory with burst mode
US8078821B2 (en) Semiconductor memory asynchronous pipeline
US6172893B1 (en) DRAM with intermediate storage cache and separate read and write I/O
JP2003249077A (en) Semiconductor memory device and its control method
US8140783B2 (en) Memory system for selectively transmitting command and address signals
US20070028027A1 (en) Memory device and method having separate write data and read data buses
KR100362193B1 (en) Data Output Device of DDR SDRAM
JP3183159B2 (en) Synchronous DRAM
US6205084B1 (en) Burst mode flash memory
USRE38955E1 (en) Memory device having a relatively wide data bus
CN1279541C (en) Hierarchical prefetch for semiconductor memory
US6219283B1 (en) Memory device with local write data latches
US6628565B2 (en) Predecode column architecture and method
HK1030293A (en) Hierachical prefetch for semiconductor memories
KR100532444B1 (en) Memory device implementing 2N bit prefetch scheme using N bit prefetch structure and 2N bit prefetching method and auto-precharge method
US12537052B2 (en) Memories, operation methods thereof and memory systems
US7729198B2 (en) Synchronous memory circuit
KR950008663B1 (en) Dram access control apparatus
MXPA96004528A (en) System memory unit with performance / improved cost, using dynamic random access memory with extend data output