
HK1166386B - Architecture for address mapping of managed non-volatile memory - Google Patents


Info

Publication number
HK1166386B
HK1166386B (application HK12106897.7A)
Authority
HK
Hong Kong
Prior art keywords
simultaneously addressable
block
nvm
command
read
Prior art date
Application number
HK12106897.7A
Other languages
Chinese (zh)
Other versions
HK1166386A1 (en)
Inventor
塔霍马.托尔科斯
尼尔.雅各布.瓦卡拉特
肯尼思.L.赫曼
巴利.科勒帝
威蒂姆.克梅尔尼特斯基
安东尼.珐
丹尼尔.杰弗里.波斯特
张晓翰
Original Assignee
Apple Inc. (苹果公司)
Priority claimed from US 12/614,369 (US8370603B2)
Application filed by Apple Inc. (苹果公司)
Publication of HK1166386A1
Publication of HK1166386B

Description

Architecture for address mapping for managed non-volatile memory
RELATED APPLICATIONS
This application claims priority from U.S. Provisional Patent Application No. 61/140,436, filed December 23, 2008, and U.S. Patent Application No. 12/614,369, filed November 6, 2009, each of which is incorporated herein by reference in its entirety.
Technical Field
The present subject matter relates generally to accessing and managing managed non-volatile memory.
Background
Flash memory is a type of electrically erasable programmable read-only memory (EEPROM). Because flash memory is non-volatile and relatively dense, it is used to store files and other persistent objects in handheld computers, mobile phones, digital cameras, portable music players, and many other devices in which other storage schemes (e.g., magnetic disks) are not suitable.
NAND is a type of flash memory that is accessible like a block device, such as a hard disk or memory card. Each block consists of a number of pages. A typical page size is 512 bytes, and a typical block size is 32 such pages, or 16 KB. Associated with each page are a number of bytes (e.g., 12-16 bytes) for storing an error detection and correction checksum. Reading and programming are performed page by page, erasing is performed block by block, and data within a block can only be written sequentially. NAND relies on error correction codes (ECC) to compensate for bits that may flip during normal device operation. When performing an erase or program operation, a NAND device can detect blocks that fail to program or erase and mark those blocks as bad in a bad block map. The data can then be written to a different good block and the bad block map updated.
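The bad-block handling described above can be sketched as follows. This is an illustrative Python model, not part of the patent; the class and method names are hypothetical.

```python
# Hypothetical model of NAND bad-block handling: on a program failure,
# the block is marked bad in the bad block map and the data is written
# to a different good block.

class NandDevice:
    def __init__(self, failing_blocks=()):
        self.bad_block_map = set()          # blocks marked bad
        self.failing = set(failing_blocks)  # blocks that fail to program
        self.data = {}                      # (block, page) -> payload

    def program(self, block, page, payload):
        """Program one page; return True on success, False on failure."""
        if block in self.failing:
            return False
        self.data[(block, page)] = payload
        return True

    def safe_program(self, block, page, payload):
        """Program with bad-block handling; return the block actually used."""
        while not self.program(block, page, payload):
            self.bad_block_map.add(block)   # mark the failed block bad
            block += 1                      # try the next block
            while block in self.bad_block_map:
                block += 1                  # skip known-bad blocks
        return block

nand = NandDevice(failing_blocks={3})
used = nand.safe_program(3, 0, b"data")   # block 3 fails; data lands in block 4
```

In this sketch the replacement policy (next sequential good block) is one of many possible choices; real controllers typically draw replacements from a reserved spare area.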
Managed NAND devices combine raw NAND with a memory controller that handles error correction and detection, as well as memory management functions for the NAND memory. Managed NAND is commercially available in Ball Grid Array (BGA) packages or other Integrated Circuit (IC) packages that support standardized processor interfaces, such as MultiMediaCard (MMC) and Secure Digital (SD) cards. A managed NAND device can include multiple NAND devices or dies that are accessed using one or more chip select signals. A chip select is a control line used in digital circuits to select one chip from several chips connected to the same bus. The chip select is typically a control pin on most IC packages that connects an input pin on the device to internal circuitry inside the device. When the chip select pin is held in the inactive state, the chip or device ignores changes on its input pins. When the chip select pin is held in the active state, the chip or device responds as if it were the only chip on the bus.
The Open NAND Flash Interface Working Group (ONFI) has developed a low-level interface for NAND flash chips to allow interoperability between compliant devices from different vendors. ONFI specification version 1.0 specifies: a standard physical interface (pin-out) for NAND flash in TSOP-48, WSOP-48, LGA-52, and BGA-63 packages; a standard command set for reading, writing, and erasing NAND flash memory chips; and a mechanism for self-identification. ONFI specification version 2.0 supports a dual-channel interface, with odd chip selects (also referred to as chip enables or "CEs") connected to channel 1 and even CEs connected to channel 2. The physical interface has no more than 8 CEs for the entire package.
While the ONFI specification allows interoperability, the current ONFI specification does not fully utilize managed NAND schemes.
Disclosure of Invention
The disclosed architecture uses address mapping to map block addresses on a host interface to internal block addresses of a non-volatile memory (NVM) device. The block address is mapped to an internal chip select for selecting a simultaneously addressable unit (CAU) identified by the block address. The disclosed architecture supports generic non-volatile memory commands for read, write, erase and get status operations. The architecture also supports an extended command set for supporting read and write operations that leverage the multi-CAU architecture.
Drawings
FIG. 1 is a block diagram of an example memory system including a host processor coupled to a managed NVM package.
FIG. 2A illustrates an example address mapping for a managed NVM package.
FIG. 2B is a block diagram of the example NVM package of FIG. 1 implementing the address mapping of FIG. 2A.
FIG. 2C illustrates an example address mapping scheme for the managed NVM package of FIG. 1.
FIG. 2D illustrates the address mapping scheme of FIG. 2C including bad block replacement.
FIG. 3 is a flow diagram of example operations using a read command with an address.
FIG. 4 is a flow diagram of example operations using a write command with an address.
FIG. 5 is a flow diagram of example operations using an erase command with an address.
FIGS. 6A-6B are flow diagrams of example operations using a StrideRead command.
FIG. 7 is a flowchart of an example operation using a StrideWrite command.
FIG. 8 illustrates the use of a command queue in the NVM package of FIG. 1.
FIG. 9 is a flow diagram of an example process for reordering commands in the command queues shown in FIG. 8.
Detailed Description
Memory System overview
Fig. 1 is a block diagram of an example memory system 100 including a host processor 102 coupled to a managed NVM package 104 (i.e., a managed NAND package). NVM package 104 can be a BGA package or other IC package that includes multiple NVM devices 108 (e.g., multiple raw NAND devices). The memory system 100 can be used in a variety of devices, including but not limited to: handheld computers, mobile phones, digital cameras, portable music players, toys, thumb drives, email devices, and any other device in which non-volatile memory is desired or required. As used herein, a raw NVM is a memory device or package that is managed by an external host processor, and a managed NVM is a memory device or package that includes at least one internal memory management function, such as error correction, wear leveling, bad block management, and the like.
In some implementations, NVM package 104 can include a controller 106 for accessing and managing the NVM devices 108 over internal channels using internal chip select signals. An internal channel is a data path between controller 106 and an NVM device 108. The controller 106 can perform memory management functions (e.g., wear leveling, bad block management) and can include an error correction (ECC) engine 110 for detecting and correcting data errors (e.g., flipped bits). In some implementations, the ECC engine 110 can be implemented as a hardware component in the controller 106 or as a software component executed by the controller 106. In some implementations, the ECC engine 110 can be located in the NVM devices 108. A pipeline management module 112 can be included to manage data throughput efficiently.
In some implementations, host processor 102 and NVM package 104 can transmit information (e.g., control commands, addresses, data) over a communication channel visible to the host ("host channel"). The host channel may support a standard interface, such as the original NAND interface or a dual channel interface, as described in ONFI specification version 2.0. Host processor 102 may also provide a host Chip Enable (CE) signal. The host CE is visible to the host processor 102 for selecting a host channel.
In the example memory system 100, NVM package 104 supports CE hiding. CE hiding allows a single host CE to be used for each internal channel in NVM package 104, thereby reducing the number of signals required to support the interface of NVM package 104. As described with reference to FIG. 2A, memory accesses can be mapped to internal channels and NVM devices 108 using an address space and address mapping. Individual NVM devices 108 can be enabled with internal CE signals generated by the controller 106.
Example Address mapping
FIG. 2A illustrates an example address mapping for managed NVM. Controller 106 maps a block address received on the host channel to a specific block address internal to an NVM device 108. To assist in address mapping, controller 106 provides host processor 102 with geometry parameters including, but not limited to: die size, block size, page size, metadata size (MDS), run, and stride.
The run and stride parameters enable the host processor 102 to generate an efficient sequence of page addresses. The run parameter identifies the number of CAUs in NVM package 104 that can be addressed simultaneously using the host CE and the address mapping. A CAU can be a portion of an NVM device 108, accessible from a single host channel, that can be written to or read from at the same time as another CAU. A CAU can also be an entire NVM device 108. The stride parameter identifies the number of blocks within a CAU used by vendor-specific operation commands.
In the example block mapping shown in FIG. 2A, NVM package 104 has a run of 2 (i.e., two CAUs) and a stride of 4 (i.e., four blocks per CAU), allowing host processor 102 to generate a slice of 8 blocks: b0, b1, b2, b3, b4, b5, b6, and b7. A slice is thus a set of blocks numbering run times stride. NVM packages with different run and stride values can be fabricated based on the desired application or memory architecture. Note that block identifiers are italicized in FIGS. 2A and 2B to visually distinguish blocks belonging to different CAUs.
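The run/stride geometry can be illustrated with a short sketch. This is hypothetical Python, assuming the even/odd interleave described later with reference to FIG. 2B (even blocks on one CAU, odd blocks on the other).

```python
# Sketch of the slice geometry of FIG. 2A: run = 2 CAUs, stride = 4
# blocks per CAU, so a slice contains run * stride = 8 blocks (b0..b7)
# interleaved across the CAUs.

RUN = 2     # number of simultaneously addressable units (CAUs)
STRIDE = 4  # blocks per CAU used by stride operations

def map_block(logical_block):
    """Map a host-visible block number to (CAU, block within CAU)."""
    return logical_block % RUN, logical_block // RUN

slice_blocks = [map_block(b) for b in range(RUN * STRIDE)]
```

With this mapping, consecutive logical blocks land on alternating CAUs, which is what lets a host generate page-address sequences that keep both CAUs busy.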
The MDS parameter identifies the number of bytes per page that are reserved for metadata. The page size is the size of the data area of a page of the non-volatile memory. The Perfect Page Size (PPS) is the page size plus the MDS, in bytes. The Raw Page Size (RPS) is the size of a physical page of the non-volatile memory.
Example NVM Package implementing Address mapping
Fig. 2B is a block diagram of the example managed NVM package 104 of FIG. 1 implementing the address mapping of FIG. 2A. NVM package 104 can include a host interface having a host channel, a Command Latch Enable (CLE) input, an Address Latch Enable (ALE) input, a Chip Enable (CE) input, and a ready/busy (R/B) signal. The host interface can include more or fewer inputs. In this example, the host interface receives a logical address from the host processor 102. The logical address can include bits representing the fields [block address : page address : offset], which is typical of NVM addressing.
In some implementations, the controller 106 reads the logical address from the host channel and maps the block address to a specific internal block address using the address mapping of FIG. 2A. For example, if the logical address is [0, 0, 0], the block address is 0. The block address is mapped to an internal chip enable that selects NVM device 108a. The block address, page address, and offset form the physical address of the PPS used to access data in the selected CAU. In this example, the CAU comprises the entire physical NVM device 108a, whereas CAU 202 comprises only a portion of NVM device 108b. Thus, the block address performs two functions: 1) selecting a CAU or a physical NVM device, by mapping bits of the block address to an internal CE for that CAU or NVM device; and 2) providing a physical address used to access a block in the selected CAU or NVM device.
In this example, even blocks are mapped to NVM device 108a and odd blocks are mapped to CAU 202 in NVM device 108b. When the controller 106 detects an even-numbered block address, the controller 106 activates internal chip enable CE0 for NVM device 108a, and when the controller 106 detects an odd-numbered block address, the controller 106 activates internal chip enable CE1 for NVM device 108b. The address mapping scheme can be extended to any desired number of CAUs and/or NVM devices in a managed NVM package. In some implementations, the most significant bits of the block address can be used to select an internal CE, and the remaining block address bits, or the entire block address, can be combined with the page address and offset into a physical address used to access the block. In some implementations, decode logic can be added to the NVM package or controller 106 to decode the block address and select the internal CE to activate.
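The alternative decode just mentioned, in which the most significant bits of the block address select the internal CE, might look like the following sketch. The 8-bit block-address width and function name are assumptions for illustration.

```python
# Hypothetical decode logic: the MSB of an 8-bit block address selects
# the internal chip enable, and the remaining bits form the physical
# block address within the selected CAU or NVM device.

BLOCK_ADDR_BITS = 8  # assumed block-address width for illustration

def decode_block_address(block_addr):
    """Return (internal CE number, physical block address)."""
    ce = block_addr >> (BLOCK_ADDR_BITS - 1)             # MSB -> CE0 or CE1
    phys = block_addr & ((1 << (BLOCK_ADDR_BITS - 1)) - 1)
    return ce, phys
```

With more CAUs, more of the high-order bits would be consumed by the CE field, exactly as the mapping scheme described above allows.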
An advantage of the above address mapping scheme is that the host interface of NVM package 104 can be simplified (reduced pin count) while still supporting generic raw NVM commands (e.g., raw NAND commands) for read, write, erase, and get-status operations. In addition, extended commands can be used to leverage the multi-CAU architecture. Similar to the interleaved commands of a conventional raw NVM architecture (e.g., a raw NAND architecture), NVM package 104 supports simultaneous read and write operations.
In some implementations, the ECC engine 110 performs error correction on the data and sends a status to the host processor through the host interface. The status informs the host processor whether the operation failed, allowing the host processor to adjust the block address to access a different CAU or NVM device. For example, if a large number of errors occur in response to operations on a particular CAU, the host processor can modify the block address to avoid activating the internal CE for the defective NVM device.
Fig. 2C illustrates an example address mapping scheme for managed NVM package 104 of fig. 1. In particular, the mapping may be used for managed NAND devices that include multiple dies, where each die may potentially include multiple planes (planes). In some implementations, the address map operates on a Concurrently Addressable Unit (CAU). A CAU is a portion of physical storage accessible from a single host channel that can be read, programmed, or erased simultaneously or in parallel with other CAUs in the NVM package. The CAU may be, for example, a single plane or a single die. The CAU size is the number of erasable blocks in the CAU.
The mapping will be described using an example memory architecture. For this example architecture, the block size is defined as the number of pages in the erasable block. In some implementations, 16 bytes of metadata are available for every 4 kilobytes of data. Other memory architectures are possible. For example, the metadata may be allocated with more or fewer bytes.
The address mapping scheme shown in FIG. 2C allows use of the raw NAND protocol for reading/programming/erasing NAND blocks, as well as other commands that enable performance optimization. NVM package 104 includes an ECC engine (e.g., ECC engine 110) for managing the data reliability of the NAND. Thus, the host processor 102 need not include the ECC engine 110 or otherwise process data for reliability.
NVM package 104 defines a CAU as an area that can be accessed (e.g., moving data from NAND memory cells to internal registers) simultaneously or in parallel with other CAUs. In this example architecture, it is assumed that all CAUs include the same number of blocks. In other implementations, the CAU may have a different number of blocks. Table I below describes an example row address format for accessing a page in a CAU.
TABLE I - Example Row Address Format

R[X+Y : X+Y+Z-1]   R[X : X+Y-1]   R[0 : X-1]
CAU                Block          Page
Referring to Table I, an example n-bit (e.g., 24-bit) row address can be presented to a controller in a NAND device in the format [CAU : Block : Page]. The CAU is a number (e.g., an integer) representing a die or plane. The Block is the block offset within the CAU identified by the CAU number, and the Page is the page offset within the identified Block. For example, in a device with 128 pages per block, 8192 blocks per CAU, and 6 CAUs: X will be 7 (2^7 = 128), Y will be 13 (2^13 = 8192), and Z will be 3 (2^2 < 6 ≤ 2^3).
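The field packing of Table I can be sketched directly from these widths. The Python helper names are illustrative; the widths follow the example device just described.

```python
# Row-address packing for the example device of Table I: 128 pages/block
# (X = 7), 8192 blocks/CAU (Y = 13), and 6 CAUs (Z = 3), giving a
# [CAU : Block : Page] row address.

X, Y, Z = 7, 13, 3  # bit widths of the Page, Block, and CAU fields

def pack_row(cau, block, page):
    """Pack (CAU, block offset, page offset) into a row address."""
    return (cau << (X + Y)) | (block << X) | page

def unpack_row(row):
    """Split a row address back into (CAU, block offset, page offset)."""
    page = row & ((1 << X) - 1)
    block = (row >> X) & ((1 << Y) - 1)
    cau = row >> (X + Y)
    return cau, block, page
```

The page field occupies the low-order bits, so sequential row addresses walk pages within a block before crossing into the next block, matching the page-by-page access order of NAND.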
The example NVM package 104 shown in FIG. 2C includes two NAND dies 204a, 204b, each having two planes. For example, die 204a includes planes 206a, 206b, and die 204b includes planes 206c, 206d. In this example, each plane is a CAU, and each CAU has 2048 multi-level cell (MLC) blocks with 128 pages per block. Program and erase operations can be performed on a stride of blocks (one block from each CAU). A stride is defined as an array of blocks, each from a different CAU. In the illustrated example, "stride 0" comprises block 0 from each of CAUs 0-3, "stride 1" comprises block 1 from each of CAUs 0-3, "stride 2" comprises block 2 from each of CAUs 0-3, and so forth.
The NVM package includes an NVM controller 202 that communicates with the CAUs over a control bus 208 and an address/data bus 210. During operation, NVM controller 202 receives commands from a host controller (not shown) and, in response, asserts control signals on control bus 208 and addresses or data on address/data bus 210 to perform an operation (e.g., a read, program, or erase operation) on one or more of the CAUs. In some implementations, a command includes a row address in the form [CAU : Block : Page], as described above with reference to Table I.
FIG. 2D illustrates the address mapping scheme of FIG. 2C with bad block replacement. In this example, a stride address has been issued by the host controller for an NVM package 104 having three CAUs, where one of the CAUs contains a bad block at the stride's block offset. The "stride 4" address would normally access CAU 0: block 4, CAU 1: block 4, and CAU 2: block 4. In this example, however, bad block CAU 1: block 4 is replaced by CAU 1: block 2000.
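The replacement lookup can be sketched as follows. This is hypothetical Python; the remap table and block numbers follow the FIG. 2D example.

```python
# Sketch of FIG. 2D's bad-block replacement: a stride address fans out
# to the same block offset in every CAU, except where a bad block has
# been remapped to a replacement block in the same CAU.

remap = {(1, 4): 2000}  # (CAU, bad block) -> replacement block

def stride_targets(stride_block, num_caus=3):
    """Blocks actually accessed by a stride address, after remapping."""
    return [(cau, remap.get((cau, stride_block), stride_block))
            for cau in range(num_caus)]
```

Because the replacement stays within the same CAU, the stride operation still touches one block per CAU and remains fully parallel.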
Example Command set
NVM package 104 can support a transparent mode. The transparent mode enables access to the memory array without ECC and can be used to evaluate the performance of the controller 106. NVM package 104 also supports generic raw NVM commands for read, write, and get-status operations. Tables 1-3 describe example read, write, and commit operations. As with conventional raw NVM, the NVM device should be ready before a write command is issued. As described with reference to Table 4, readiness can be determined using a status read operation.
TABLE 1 example read operation
TABLE 2-example write operation (write mode)
TABLE 3 example write operation (commit mode)
TABLE 4 example State read operation
In addition to the above operations, the controller 106 can support various other commands. A Page Parameter Read command returns geometry parameters from NVM package 104. Some examples of geometry parameters include, but are not limited to: die size, block size, page size, MDS, run, and stride. An Abort command causes the controller 106 to complete the current operation and stop subsequent stride operations in progress. A Reset command stops the current operation, invalidating the contents of any memory cells being changed, and clears the command register in the controller 106 in preparation for the next command. A Read ID command returns the product identification. A Read Timing command returns the setup, hold, and delay times for write and erase commands. A Read Device Parameters command returns the specific identification of NVM package 104, including specification support, device version, and firmware version.
An example command set is described in table 5 below.
TABLE 5 - Example Command Set

Function                    1st Set      2nd Set
Page read                   00h          30h
Page read with address      07h          37h
Stride read                 09h...09h    39h
Page write                  80h          10h
Page write with address     87h          17h
Stride write                89h...89h    19h
Block erase                 60h          D0h
Block erase with address    67h          D7h
Read status                 70h          -
Read status with address    77h          -
Read bit-flip counter       72h          -
Read ID                     90h          -
Read timing                 91h          -
Read device parameters      92h          -
Reset                       FFh          -
Abort                       99h          -
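For reference, the command set above can be captured as a lookup table. The opcodes are transcribed from the table; the Python structure and command names are illustrative.

```python
# Table 5 as a lookup table: command name -> (first-set opcode,
# second-set opcode or None for single-cycle commands).

COMMANDS = {
    "PageRead":              (0x00, 0x30),
    "PageReadWithAddress":   (0x07, 0x37),
    "StrideRead":            (0x09, 0x39),
    "PageWrite":             (0x80, 0x10),
    "PageWriteWithAddress":  (0x87, 0x17),
    "StrideWrite":           (0x89, 0x19),
    "BlockErase":            (0x60, 0xD0),
    "BlockEraseWithAddress": (0x67, 0xD7),
    "ReadStatus":            (0x70, None),
    "ReadStatusWithAddress": (0x77, None),
    "ReadBitFlipCounter":    (0x72, None),
    "ReadID":                (0x90, None),
    "ReadTiming":            (0x91, None),
    "ReadDeviceParameters":  (0x92, None),
    "Reset":                 (0xFF, None),
    "Abort":                 (0x99, None),
}

def first_opcode(name):
    """First-set opcode for a command name."""
    return COMMANDS[name][0]
```

The two-set pattern mirrors conventional raw NAND command cycles, where the first opcode latches the command and address and the second confirms the operation.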
Example read, write and Erase operations
To leverage the multi-CAU architecture in NVM package 104, NVM package 104 can use an extended command set that supports access to all or several CAUs. NVM package 104 can support the following extended commands, where all addresses are aligned to the PPS: read with address, write with address, erase with address, and status with address. FIGS. 3-7 indicate where interleaving across CAUs may occur. The points at which interleaving may occur ("interleave points") are indicated by circles: the start and end points of each operation appear as a white circle and a cross-hatched circle, respectively, and intermediate points where interleaving may occur are indicated by striped circles. FIGS. 3-7 assume that the NVM package is in a fully ready state after a series of operations.
FIG. 3 is a flow diagram of example operations 300 using a read command with an address. At step 302, the host processor issues a read command with an address to the NVM package. At step 304, the host processor performs a wait-for-status-with-address sequence until the NVM package provides a status indicating that the address is ready to be read. At step 306, the host processor issues a confirm command with an address to the NVM package. At step 308, the controller in the NVM package transmits PPS bytes of data to the host processor over the host channel. Error correction is applied to the bytes in the NVM package using an ECC engine (e.g., ECC engine 110). In this example read command operation with an address, an interleave point can occur at the beginning and end of the operation and between steps 302 and 304 and between steps 304 and 306.
An example read command operation with an address for a single page spanning two CAUs (run 2 and stride 1) may be as follows:
(Read) [ Block 0 Page 0]
(Read) [ Block 1 Page 0]
(GetPageStatus) [ Block 0 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 1 Page 0] W4R { data + metadata }
FIG. 4 is a flow diagram of example operations 400 using a write command with an address. At step 402, the host processor issues a write command with an address. At step 404, the host processor transfers PPS bytes of data to the controller in the NVM package over the host channel. Error correction is applied to these bytes using an ECC engine. At step 406, the host processor issues a commit command with an address, which commits the uncommitted write to the CAU corresponding to the address. Any corresponding ECC syndromes are also committed. At step 408, the host processor performs a wait-for-status-with-address sequence until the NVM package provides a status indicating that the data has been written to the address. In this example write command operation with an address, interleave points can occur at the beginning and end of the operation and between steps 406 and 408.
An example write command operation with an address for a single page spanning two CAUs (run 2 and stride 1) may be as follows:
(StrideWrite) [ Block 0 Page 0] < data + metadata >
(StrideWrite) [ Block 1 Page 0] < data + metadata >
(GetPageStatus) [ Block 0 Page 0] W4R { State }
(GetPageStatus) [ Block 1 Page 0] W4R { State }
(Commit) [ Block 0 Page 0]
(Commit) [ Block 1 Page 0]
FIG. 5 is a flow diagram of example operations 500 using an erase command with an address. At step 502, the host processor issues an erase command with an address. At step 504, the host processor performs a wait-for-status-with-address sequence until the NVM package provides a status indicating that the address is ready to be erased. In this example erase command operation with an address, interleave points can occur at the beginning and end of the operation and between steps 502 and 504.
Example stride operation
To leverage vendor-specific commands, NVM packages support multiple-page operations across CAUs. In particular, the NVM package supports StrideRead and StrideWrite commands.
Fig. 6A and 6B are flow diagrams of example operations 600 using a StrideRead command with an address. Referring to step 602 of FIG. 6A, given the number of blocks S in a stride and the number of pages N to be read per block, the number of remaining pages to be read, P, can be set equal to the product of S and N. At step 604, the host processor initiates the next stride by setting a counter I equal to zero. At step 606, P is compared to S. If P = 0, operation 600 ends. If P > S, then at step 608 the host processor issues a StrideRead command with an address. If P ≤ S, then at step 610 the host processor issues a LastStrideRead command with an address.
At step 612, counter I is incremented by 1. At step 614, I is compared to S. If I < S, operation 600 returns to step 606. If I = S, operation 600 begins the transfer of pages in the stride, as described with reference to FIG. 6B.
Referring to step 616 in FIG. 6B, a counter T is set equal to zero. At step 618, the host processor performs a wait-for-status-with-address sequence until the NVM package provides a status indicating that the address is ready to be read. At step 620, the host processor issues a confirm command with the address. At step 622, the NVM package transmits PPS bytes of data to the host processor. At step 624, counter T is incremented by 1. At step 626, counter T is compared to S. If T < S, operation 600 returns to step 618. If T = S, the number of remaining pages to be read, P, is decreased by S at step 628, and operation 600 returns to step 604.
An example StrideRead operation with address for eight pages extending across two CAUs and four strides (run 2 and stride 4) may be as follows:
(StrideRead) [ Block 0 Page 0]
(StrideRead) [ Block 1 Page 0]
(StrideRead) [ Block 2 Page 0]
(StrideRead) [ Block 3 Page 0]
(StrideRead) [ Block 4 Page 0]
(StrideRead) [ Block 5 Page 0]
(LastStrideRead) [ Block 6 Page 0]
(LastStrideRead) [ Block 7 Page 0]
(GetPageStatus) [ Block 0 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 1 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 2 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 3 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 4 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 5 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 6 Page 0] W4R { data + metadata }
(GetPageStatus) [ Block 7 Page 0] W4R { data + metadata }
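The listing above can be generated mechanically. The following is a hypothetical Python sketch that reproduces the example sequence: the last `run` commands of the slice are issued as LastStrideRead, and then page statuses are collected for every page.

```python
# Generate the StrideRead command sequence shown above for run = 2 and
# stride = 4: StrideRead for the first blocks of the slice,
# LastStrideRead for the final `run` blocks, then GetPageStatus for all.

def stride_read_sequence(run, stride, page=0):
    total = run * stride
    seq = []
    for b in range(total):
        cmd = "LastStrideRead" if b >= total - run else "StrideRead"
        seq.append((cmd, b, page))
    for b in range(total):
        seq.append(("GetPageStatus", b, page))
    return seq

seq = stride_read_sequence(2, 4)  # 6 StrideRead, 2 LastStrideRead, 8 statuses
```

Issuing every read command before any data transfer lets all CAUs fetch pages into their registers in parallel, which is the point of the stride commands.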
FIG. 7 is a flow diagram of example operations 700 using a StrideWrite command with an address. Referring to step 702, given the number of blocks S in a stride and the number of pages N to be written per block, the number of remaining pages to be written, P, can be set equal to the product of S and N. At step 704, the host processor compares P to S. If P = 0, operation 700 ends. If P > S, then at step 706 the host processor issues a StrideWrite command with an address. If P ≤ S, then at step 708 the host processor issues a LastStrideWrite command with an address.
At step 710, the host processor transfers PPS bytes of data to the NVM package. At step 712, the host processor issues a commit command with an address to commit the write to the memory array. At step 714, the host processor performs a wait-for-status-with-address sequence until the NVM package provides a status indicating that the data has been committed to memory. At step 716, the number of remaining pages to be written is decremented by 1, and operation 700 returns to step 704.
An example StrideWrite operation with address for eight pages extending across two CAUs and four strides (run 2 and stride 4) may be as follows:
(StrideWrite) [ Block 0 Page 0] < data + metadata >
(StrideWrite) [ Block 1 Page 0] < data + metadata >
(GetPageStatus) [ Block 0 Page 0] W4R { State }
(StrideWrite) [ Block 2 Page 0] < data + metadata >
(GetPageStatus) [ Block 1 Page 0] W4R { State }
(StrideWrite) [ Block 3 Page 0] < data + metadata >
(GetPageStatus) [ Block 2 Page 0] W4R { State }
(StrideWrite) [ Block 4 Page 0] < data + metadata >
(GetPageStatus) [ Block 3 Page 0] W4R { State }
(StrideWrite) [ Block 5 Page 0] < data + metadata >
(GetPageStatus) [ Block 4 Page 0] W4R { State }
(LastStrideWrite) [ Block 6 Page 0] < data + metadata >
(GetPageStatus) [ Block 5 Page 0] W4R { State }
(LastStrideWrite) [ Block 7 Page 0] < data + metadata >
(GetPageStatus) [ Block 6 Page 0] W4R { State }
(GetPageStatus) [ Block 7 Page 0] W4R { State }
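The pipelined pattern above, in which each write is followed by the status check for the preceding block, can likewise be generated. This is a hypothetical Python sketch of that interleave.

```python
# Generate the StrideWrite sequence shown above for run = 2, stride = 4:
# each write is followed by the GetPageStatus for the previous block,
# overlapping data transfer with programming; the final status check
# trails the last write.

def stride_write_sequence(run, stride, page=0):
    total = run * stride
    seq = []
    for b in range(total):
        cmd = "LastStrideWrite" if b >= total - run else "StrideWrite"
        seq.append((cmd, b, page))
        if b >= 1:
            seq.append(("GetPageStatus", b - 1, page))  # lags one block
    seq.append(("GetPageStatus", total - 1, page))      # status of last block
    return seq

wseq = stride_write_sequence(2, 4)
```

The one-block lag means the host is always streaming data for one CAU while the previously addressed CAU is programming, keeping both busy.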
Example queue configuration
FIG. 8 illustrates the use of command queues in an NVM package. In some implementations, the NVM package 800 can include one or more queues 804 accessible by the controller 802. The queues can be FIFO queues. Commands received from the host controller can be stored in the queues 804. In the illustrated example, there are three queues: one each for read commands, program commands, and erase commands. In response to a triggering event, the controller 802 can reorder one or more commands in one or more of the queues 804 to optimize performance during memory operations. For example, one triggering event is the top entry in a queue (and buffer) targeting a plane or CAU that is busy with another operation.
FIG. 9 is a flow diagram of an example process 900 for reordering commands in the command queues shown in FIG. 8. In some implementations, the process 900 begins by receiving commands from a host controller (902). The commands direct operations on the non-volatile memory. The commands are stored in one or more queues (904). For example, three separate queues can store read commands, program commands, and erase commands, respectively. In response to a triggering event, the controller reorders the commands for the non-volatile memory (906).
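One way process 900 might play out is sketched below: per-type FIFO queues with busy-CAU-aware dispatch. All names here are invented for illustration; the patent does not specify a dispatch policy.

```python
# Sketch of process 900: commands are stored in per-type FIFO queues,
# and the controller dispatches the first command whose target CAU is
# not busy, deferring commands aimed at busy CAUs.

from collections import deque

queues = {"read": deque(), "program": deque(), "erase": deque()}

def enqueue(kind, cau, block, page=0):
    queues[kind].append((cau, block, page))

def dispatch(kind, busy_caus):
    """Pop the first queued command whose CAU is free; defer the rest."""
    q = queues[kind]
    deferred = []
    while q:
        cmd = q.popleft()
        if cmd[0] in busy_caus:
            deferred.append(cmd)              # target CAU busy: defer
        else:
            q.extendleft(reversed(deferred))  # restore deferred, in order
            return cmd
    q.extend(deferred)                        # nothing dispatchable
    return None

enqueue("read", 0, 10)
enqueue("read", 1, 20)
served = dispatch("read", busy_caus={0})      # CAU 0 busy; serves the CAU 1 read
```

Skipping past a command whose CAU is busy, rather than stalling the whole queue, is exactly the reordering opportunity the triggering event described above creates.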
While this specification contains many specifics, these should not be construed as limitations on the scope of what may be claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while the operations in the figures are described in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be beneficial. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments have been described. Other embodiments are within the scope of the following claims.
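The run and stride parameters used for block-address mapping can be illustrated as below. The striping order shown (stride-sized groups of blocks rotated across the run of CAUs behind one chip enable) is one plausible reading and is an assumption; the function name and signature are likewise illustrative.

```python
def map_block_address(block_addr, run, stride, blocks_per_cau):
    """Map a host block address to (CAU index, block index within that CAU).

    Assumed layout: blocks are striped across the `run` CAUs reachable
    through one host chip enable, in groups of `stride` consecutive blocks.
    """
    stripe, offset = divmod(block_addr, stride)   # which stride-sized group
    cau = stripe % run                            # CAU behind this chip enable
    block = (stripe // run) * stride + offset     # block index inside that CAU
    assert block < blocks_per_cau, "block address out of range"
    return cau, block

# With run=2 and stride=4, addresses 0-3 land in CAU 0, 4-7 in CAU 1,
# 8-11 back in CAU 0, and so on.
print(map_block_address(9, run=2, stride=4, blocks_per_cau=8))  # (0, 5)
```

Because the host can query the run and stride (see the claims below on parameter requests), it can generate addresses so that a single command sequence touches several CAUs simultaneously.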

Claims (15)

1. A non-volatile memory (NVM) package, comprising:
an interface configured to receive a block address;
a plurality of simultaneously addressable units, each simultaneously addressable unit containing a plurality of blocks; and
a processor coupled to the interface and the plurality of simultaneously addressable units, the processor operable to map a block address to a block in one of the plurality of simultaneously addressable units identified by the block address; and
a host interface operable to receive a host chip enable signal from a host processor coupled to the host interface, wherein the processor is operable to map the host chip enable signal to an internal chip enable signal within one of the simultaneously addressable units, the internal chip enable signal for activating the simultaneously addressable unit;
wherein the processor is further configured to map the block address to a block in the one of the plurality of simultaneously addressable units according to a mapping relationship comprising a run parameter and a stride parameter, wherein the run parameter comprises a number of simultaneously addressable units accessible using the host chip enable signal, and the stride parameter comprises a number of blocks within one simultaneously addressable unit for an operation command.
2. The NVM package of claim 1, wherein the processor receives from the interface a command for a read or write operation, the command being part of a sequence of read or write commands for performing simultaneous primitive transactions on one or more simultaneously addressable units.
3. The NVM package of claim 2, wherein an amount of data read from or written to the simultaneously addressable unit is equal to a product of a stride N of the simultaneously addressable unit and a number of bytes: the number of bytes equals the page size plus the number of bytes allowed per page for metadata, where N is a positive integer representing the number of pages to read or write, and the stride is the number of blocks within the simultaneously addressable unit for an operation command.
4. The package of claim 1, further comprising:
an error correction engine to apply error correction to a block of data read from or written to the simultaneously addressable units.
5. The NVM package of claim 4, wherein the error correction engine is included in one or more of the simultaneously addressable units.
6. The package of claim 1, further comprising:
a pipeline management engine to manage throughput of simultaneously addressable units.
7. The NVM package of claim 1, wherein the processor performs simultaneous read or write operations on two or more simultaneously addressable units.
8. A method performed by a non-volatile memory (NVM) package coupled to a host processor, comprising:
receiving a block address from the host processor; and
mapping the block address to a block in one of a plurality of concurrently addressable units identified by the block address;
receiving a host chip enable signal from the host processor; and
mapping the host chip enable signal to an internal chip enable signal within one of the simultaneously addressable units; and
activating the internal chip enable signal;
wherein mapping the block address further comprises mapping the block address according to a mapping relationship comprising a run parameter and a stride parameter, wherein the run parameter comprises a number of simultaneously addressable units accessible using the host chip enable signal, and the stride parameter comprises a number of blocks within one simultaneously addressable unit for an operation command.
9. The method of claim 8, further comprising:
receiving a command for a read or write operation; and
performing one or more simultaneous primitive transactions on one or more simultaneously addressable units in accordance with the command.
10. The method of claim 9, wherein the amount of data read from or written to the simultaneously addressable unit is equal to the product of a stride N of the simultaneously addressable unit and a number of bytes: the number of bytes equals the page size plus the number of bytes allowed per page for metadata, where N is a positive integer representing the number of pages to read or write, and the stride is the number of blocks within the simultaneously addressable unit for an operation command.
11. A system for operating on data stored in a non-volatile memory (NVM) package, comprising:
an interface to send a request for parameters to the NVM package, the NVM package including a plurality of simultaneously addressable units, and the interface to receive a run parameter and a stride parameter, wherein the run parameter indicates a number of simultaneously addressable units within the NVM package that are accessible with a single chip enable signal provided by a host processor, and wherein the stride parameter indicates a number of blocks for an operation command within one simultaneously addressable unit; and
a processor coupled to the interface, the processor operable to send a command sequence to the NVM package for simultaneous execution of primitive transactions on one or more simultaneously addressable units, the command sequence including an address generated by the host processor based on the run parameter and the stride parameter.
12. The system of claim 11, wherein the processor is operable to send data with a write command to the NVM package, wherein the size of the data is equal to the product of the stride N and a number of bytes: the number of bytes is equal to the page size plus the number of bytes allowed per page for metadata, where N is a positive integer representing the number of pages to write.
13. The system of claim 11, wherein the processor is operable to send a read command to the NVM package, wherein a size of the data to read is equal to the product of the stride N and a number of bytes: the number of bytes is equal to the page size plus the number of bytes allowed per page for metadata, where N is a positive integer representing the number of pages to read.
14. A method performed by a host processor coupled to a non-volatile memory (NVM) package, comprising:
sending a request for parameters to the NVM package, the NVM package including a plurality of simultaneously addressable units;
receiving, in response to the request, a run parameter and a stride parameter, wherein the run parameter indicates a number of simultaneously addressable units within the NVM package that are accessible using a single chip enable signal provided by the host processor, and wherein the stride parameter indicates a number of blocks within one simultaneously addressable unit for an operation command; and
sending a command sequence to the NVM package for concurrently performing a primitive transaction on one or more simultaneously addressable units, the command sequence including an address generated by the host processor based on the run parameter and the stride parameter.
15. The method of claim 14, further comprising:
sending data with a write command to the NVM package, wherein a size of the data is equal to a product of the stride N and a number of bytes: the number of bytes is equal to the page size plus the number of bytes allowed per page for metadata, where N is a positive integer representing the number of pages to write.
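The data-size computation recited in claims 3, 10, 12, 13 and 15 above is a simple product, sketched below. The page and metadata sizes shown are illustrative figures only; actual values are device-specific.

```python
def transfer_size(n_pages, page_size, metadata_per_page):
    # Size = N * (page size + number of metadata bytes allowed per page),
    # as recited in claims 12-13.
    return n_pages * (page_size + metadata_per_page)

# E.g., N = 4 pages of 4096 bytes with 128 metadata bytes per page:
print(transfer_size(4, 4096, 128))  # 16896 bytes
```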
HK12106897.7A 2008-12-23 2009-11-24 Architecture for address mapping of managed non-volatile memory HK1166386B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US14043608P 2008-12-23 2008-12-23
US61/140,436 2008-12-23
US12/614,369 2009-11-06
US12/614,369 US8370603B2 (en) 2008-12-23 2009-11-06 Architecture for address mapping of managed non-volatile memory
PCT/US2009/065804 WO2010074876A1 (en) 2008-12-23 2009-11-24 Architecture for address mapping of managed non-volatile memory

Publications (2)

Publication Number Publication Date
HK1166386A1 HK1166386A1 (en) 2012-10-26
HK1166386B true HK1166386B (en) 2015-07-24
