US10895991B2 - Solid state device with improved sustained data writing speed - Google Patents
- Publication number: US10895991B2
- Application number: US16/191,193
- Authority: United States
- Prior art keywords: data, storing, addresses, NVM, sub
- Legal status: Active, expires (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0613—Improving I/O performance in relation to throughput
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0631—Configuration or reconfiguration of storage systems by allocating resources to storage systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0643—Management of files
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/17—Embedded application
- G06F2212/171—Portable consumer electronics, e.g. mobile phone
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/21—Employing a record carrier using a specific recording technology
- G06F2212/214—Solid state disk
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7202—Allocation control and policies
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7205—Cleaning, compaction, garbage collection, erase control
Definitions
- the present invention relates to solid state devices (SSDs), and more particularly to a solid state device (SSD) that includes improved sustained data writing speed.
- These non-volatile memories may include one or more flash memory devices, such as NAND flash memories.
- Although SSDs provide very fast writing speeds relative to hard disk drives, there is an ongoing need to improve the sustained writing speeds of SSDs.
- One example where writing speed is important is when a host records video data.
- Hosts, such as digital video cameras, are capable of capturing high-density and high-quality videos. Recording high-density videos creates a large amount of data that must be stored.
- SSDs must have writing speeds that can keep up with the speed at which data is created when the host is recording high-density videos, such as 4K video. Otherwise, data created by the host may be lost and never stored at the SSD.
- Although SSDs can be fabricated with memory cells that have very fast writing speeds, SSDs are often not optimized to take full advantage of the maximum writing speeds of these memory cells. SSDs are often slowed down by the need to perform garbage collection during a writing operation, which reduces their effective writing speed.
- An improved SSD is proposed that provides improved sustained maximum writing speeds.
- Such an improved SSD may be implemented as a memory card that can be used to support real time recording and storing of high density and high-quality videos.
- a data storage apparatus includes a non-volatile memory (NVM) and a controller.
- the NVM includes a first NVM portion and a second NVM portion.
- the first NVM portion includes a plurality of first cell types.
- the first NVM portion includes a first sub-portion that is allocated to store file management data.
- the second NVM portion includes a plurality of second cell types.
- the controller is coupled to the NVM.
- the controller is configured to receive a plurality of payload data and a plurality of file management data; store the plurality of file management data at the first sub-portion of the first NVM portion; and store the plurality of payload data at the NVM.
- a method for operating a data storage apparatus receives data at a controller coupled to a non-volatile memory (NVM).
- the NVM includes a plurality of first cell types and a plurality of second cell types.
- the plurality of first cell types includes a first plurality of addresses allocated to store only file management data; and a second plurality of addresses allocated to store only payload data.
- the plurality of second cell types includes a third plurality of addresses.
- the method determines whether the received data includes payload data or file management data.
- the method stores the received data at one or more addresses from the first plurality of addresses, when the received data includes file management data.
- the method stores the received data at one or more addresses from the second plurality of addresses, when the received data includes payload data.
- a data storage apparatus includes means for non-volatile storing of data, and means for controlling the means for non-volatile storing of data.
- the means for non-volatile storing of data includes means for first non-volatile storing of data and means for second non-volatile storing of data.
- the means for first non-volatile storing of data includes a first plurality of addresses allocated to store file management data; and a second plurality of addresses allocated to store payload data.
- the means for second non-volatile storing of data includes a third plurality of addresses.
- the means for controlling the means for non-volatile storing of data includes: means for receiving data; means for determining whether the received data includes payload data or file management data; means for storing the received data at one or more addresses from the first plurality of addresses, when the received data includes file management data; and means for storing the received data at one or more addresses from the second plurality of addresses, when the received data includes payload data.
- FIG. 1 illustrates a block diagram of a solid state device (SSD) in accordance with embodiments of the present disclosure.
- FIG. 2 illustrates a block diagram of a non-volatile memory (NVM) with several portions and sub-portions.
- FIG. 3 illustrates a block diagram of different data being queued to be transmitted from a host to an SSD in accordance with embodiments of the present disclosure.
- FIG. 4 illustrates a block diagram of a translation table in an SSD in accordance with embodiments of the present disclosure.
- FIG. 5 illustrates a block diagram of different types of data being routed to different locations of an NVM of an SSD in accordance with embodiments of the present disclosure.
- FIG. 6 illustrates a graph of an exemplary writing speed of an SSD in accordance with embodiments of the present disclosure.
- FIG. 7 illustrates a graph of exemplary writing speeds of an SSD under different routing schemes in accordance with embodiments of the present disclosure.
- FIG. 8 illustrates a block diagram of garbage collection being performed on an NVM of an SSD in accordance with embodiments of the present disclosure.
- FIG. 9 illustrates a block diagram of a queue of different types of data to be transmitted from a host to an SSD in accordance with embodiments of the present disclosure.
- FIG. 10 illustrates an exemplary flow diagram of a method for writing to an SSD in accordance with embodiments of the present disclosure.
- FIG. 11 illustrates an exemplary flow diagram of a method for routing different types of data to different portions of an NVM of an SSD in accordance with embodiments of the present disclosure.
- FIG. 12 illustrates a block diagram of an NVM with several portions and sub-portions.
- FIG. 13 illustrates a block diagram of blocks for different portions and sub-portions of an NVM of an SSD in accordance with embodiments of the present disclosure.
- the present disclosure provides a data storage device/apparatus.
- the data storage device/apparatus may be a solid state device (SSD).
- the SSD may be a memory card.
- a data storage apparatus, such as an SSD (e.g., memory card) may include a non-volatile memory (NVM) and a controller.
- the NVM includes a first NVM portion and a second NVM portion.
- the first NVM portion includes a plurality of first cell types (e.g., plurality of single level cells (SLCs)).
- the first NVM portion includes a first sub-portion that is allocated to store file management data (e.g., File Allocation Table (FAT) data).
- the second NVM portion includes a plurality of second cell types (e.g., MLCs, TLCs).
- the controller is coupled to the NVM.
- the controller is configured to receive a plurality of payload data and a plurality of file management data; store the plurality of file management data at the first sub-portion of the first NVM portion; and store the plurality of payload data at the NVM.
- different types of data may be initially routed to different portions and/or sub-portions of the SSD. These different portions and/or sub-portions may be allocated to store only certain types of data.
- routing different types of data to different portions of the SSD may prevent garbage collection from being triggered during the writing of data by the SSD when a host is recording video.
- the SSD is able to provide sustained high writing speeds that can at least match the speed at which a host is capturing video data, thus providing support for real time recording and storing of video data.
- FIG. 1 illustrates a block diagram of a device 100 that includes a solid state device (SSD).
- the device 100 includes a solid state device (SSD) 102 and a host 104 .
- the SSD 102 may be an example of a data storage apparatus.
- the SSD 102 may be implemented as a memory card.
- the SSD 102 may be implemented as a solid state drive.
- the SSD 102 is coupled to the host 104 . Commands and data that travel between the SSD 102 and the host 104 may be referred to as I/O overhead.
- the SSD 102 includes a controller 130 , a non-volatile memory (NVM) interface 140 and a non-volatile memory (NVM) 150 , such as NAND Flash memory.
- the controller 130 includes a host interface 120 , a processor 132 (or alternatively, an NVM processor 132 ) and a memory 134 (e.g., random access memory (RAM)).
- the NVM interface 140 may be implemented within the controller 130 .
- the host interface 120 may be implemented outside of the controller 130 .
- the controller 130 is configured to control the NVM 150 through the NVM interface 140 .
- the controller 130 may be implemented in a System on Chip (SoC).
- the processor 132 may be a processor die
- the memory 134 may be a memory die.
- two or more of the above components (e.g., processor, memory) may be implemented in the same die.
- the host interface 120 facilitates communication between the host 104 and other components of the SSD 102 , such as the controller 130 , the processor 132 , and/or the memory 134 .
- the host interface 120 may be any type of communication interface, such as an Integrated Drive Electronics (IDE) interface, a Universal Serial Bus (USB) interface, a Serial Peripheral (SP) interface, an Advanced Technology Attachment (ATA) or Serial Advanced Technology Attachment (SATA) interface, a Small Computer System Interface (SCSI), an IEEE 1394 (Firewire) interface, Non Volatile Memory Express (NVMe), or the like.
- the host interface 120 of the SSD 102 may be in communication with the SSD interface 160 of the host 104 .
- the processor 132 is coupled to the RAM memory 134 .
- the processor 132 is also coupled to the NVM 150 via the NVM interface 140 .
- the processor 132 controls operation of the SSD 102 .
- the processor 132 receives commands from the host 104 through the host interface 120 and performs the commands to transfer data between the host 104 and the NVM 150 .
- the processor 132 may manage reading from and writing to the memory 134 for performing the various functions effected by the processor 132 and to maintain and manage cached information stored in memory 134 .
- the processor 132 may receive data through a buffer (not shown) and/or send data through the buffer (not shown).
- the buffer may be part of the memory 134 or separate from the memory 134 .
- the processor 132 may include any type of processing device, such as a microprocessor, a microcontroller, an embedded controller, a logic circuit, software, firmware, or the like, for controlling operation of the SSD 102 .
- some or all of the functions described herein as being performed by the processor 132 may instead be performed by another component of the SSD 102 .
- the SSD 102 may include a microprocessor, a microcontroller, an embedded controller, a logic circuit, software, firmware, or any kind of processing device, for performing one or more of the functions described herein as being performed by the processor 132 .
- one or more of the functions described herein as being performed by the processor 132 are instead performed by the host 104 .
- some or all of the functions described herein as being performed by the processor 132 may instead be performed by another component such as a processor in a hybrid drive including both non-volatile memory elements and magnetic storage elements.
- the memory 134 may be any memory, computing device, or system capable of storing data.
- the memory 134 may be a random-access memory (RAM), a dynamic random-access memory (DRAM), a double data rate (DDR) DRAM, a static random-access memory (SRAM), a synchronous dynamic random-access memory (SDRAM), a flash storage, an erasable programmable read-only-memory (EPROM), an electrically erasable programmable read-only-memory (EEPROM), or the like.
- the processor 132 uses the memory 134 , or a portion thereof, to store data during the transfer of data between the host 104 and the NVM 150 .
- the memory 134 or a portion of the memory 134 may be a cache memory.
- the memory 134 may be a shared memory that is accessible by different components, such as the processor 132 .
- the NVM 150 receives data from the processor 132 via the NVM interface 140 and stores the data.
- the NVM 150 may be any type of non-volatile memory, such as a flash storage system, a NAND-type flash memory, a solid state storage device, a flash memory card, a secure digital (SD) card, a universal serial bus (USB) memory device, a CompactFlash card, a SmartMedia device, a flash storage array, or the like.
- the NVM interface 140 may be a flash memory interface.
- the NVM 150 may include a first NVM portion and a second NVM portion.
- the second NVM portion may include a first sub-portion and a second sub-portion.
- the NVM 150 may include different cell types. Examples of cell types include a single level cell (SLC), a multi-level cell (MLC), and a triple level cell (TLC). These and other cell types are further described below.
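The cell types named above differ in how many bits each cell stores, which in turn trades capacity against programming speed. The following sketch (not from the patent; the cell counts are made up for illustration) shows the conventional bits-per-cell values and the resulting raw capacity of the same physical cell array under each type.

```python
# Illustrative bits-per-cell values for the common NAND cell types
# (conventional values, not defined by the patent).
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

def raw_capacity_bytes(num_cells: int, cell_type: str) -> int:
    """Raw capacity of `num_cells` cells of the given type, in bytes."""
    return num_cells * BITS_PER_CELL[cell_type] // 8

# The same array holds 3x more data as TLC than as SLC, which is why SLC
# portions are typically smaller but faster to program.
cells = 8 * 1024 * 1024 * 1024           # hypothetical cell count
print(raw_capacity_bytes(cells, "SLC"))  # 1073741824 (1 GiB)
print(raw_capacity_bytes(cells, "TLC"))  # 3221225472 (3 GiB)
```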
- the host 104 may be any device and/or system having a need for data storage or retrieval and a compatible interface for communicating with the SSD 102 .
- the host 104 may include a computing device, a personal computer, a portable computer, a workstation, a server, a router, a network device, a personal digital assistant, a digital camera, a digital phone, a digital video camera, or combinations thereof.
- the host 104 can include several hosts.
- the host 104 may be a separate (e.g., physically separate) device from the SSD 102 .
- the host 104 includes the SSD 102 .
- the SSD 102 may be a memory card that is inserted in the host 104 .
- the SSD 102 is remote with respect to the host 104 or is contained in a remote computing system communicatively coupled with the host 104 .
- the host 104 may communicate with the SSD 102 through a wireless communication link.
- the host 104 may include an SSD interface 160 , a processor 170 , and a memory 180 (e.g., random access memory (RAM)).
- the SSD interface 160 is coupled to the processor 170 .
- the processor 170 is coupled to the memory 180 .
- the SSD interface 160 facilitates communication between the SSD 102 and other components of the host 104 , such as the processor 170 and the memory 180 .
- the host 104 provides commands to the SSD 102 for transferring data between the host 104 and the SSD 102 .
- the host 104 may provide a write command to the SSD 102 for writing data to the SSD 102 , or a read command to the SSD 102 for reading data from the SSD 102 .
- the SSD 102 may provide a response to the write command or the read command to the host 104 .
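The command/response exchange described above can be sketched as follows. This is an illustrative model only; the command names, fields, and status values are invented for the example and are not an actual host or SSD interface.

```python
# Hedged sketch of the host-to-SSD command flow: the host issues a write or
# read command, and the SSD returns a response. `ssd_storage` stands in for
# the NVM; all names here are hypothetical.
def handle_command(ssd_storage: dict, cmd: dict) -> dict:
    if cmd["op"] == "write":
        ssd_storage[cmd["lba"]] = cmd["data"]   # store data at the logical address
        return {"status": "ok"}
    if cmd["op"] == "read":
        return {"status": "ok", "data": ssd_storage.get(cmd["lba"])}
    return {"status": "unsupported"}

storage = {}
resp = handle_command(storage, {"op": "write", "lba": 7, "data": b"abc"})
print(resp["status"])                                             # ok
print(handle_command(storage, {"op": "read", "lba": 7})["data"])  # b'abc'
```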
- the processor 170 may be similar to the processor 132 .
- the processor 170 may include any type of processing device, such as a microprocessor, a microcontroller, an embedded controller, a logic circuit, software, firmware, or the like, for controlling operation of the host 104 .
- some or all of the functions described herein as being performed by the processor 170 may instead be performed by another component of the host 104 .
- the host 104 may include a microprocessor, a microcontroller, an embedded controller, a logic circuit, software, firmware, or any kind of processing device, for performing one or more of the functions described herein as being performed by the processor 170 .
- one or more of the functions described herein as being performed by the processor 170 are instead performed by the SSD 102 .
- some or all of the functions described herein as being performed by the processor 170 may instead be performed by another component.
- the memory 180 may be any memory, computing device, or system capable of storing data.
- the memory 180 may be a random-access memory (RAM), a dynamic random-access memory (DRAM), a double data rate (DDR) DRAM, a static random-access memory (SRAM), a synchronous dynamic random-access memory (SDRAM), a flash storage, an erasable programmable read-only-memory (EPROM), an electrically erasable programmable read-only-memory (EEPROM), or the like.
- the processor 170 uses the memory 180 , or a portion thereof, to store data.
- the memory 180 or a portion of the memory 180 may be a cache memory.
- the memory 180 may be a shared memory that is accessible by different components, such as the processor 170 .
- FIG. 2 illustrates a block diagram of the NVM 150 .
- the NVM 150 includes a first NVM portion 210 and a second NVM portion 220 .
- the first NVM portion 210 includes a first sub-portion 212 and a second sub-portion 214 .
- the first NVM portion 210 may include one or more first dies (e.g., memory die), and the second NVM portion 220 may include one or more second dies (e.g., memory die).
- the first NVM portion 210 may be a first physical partition and/or a first logical partition of the NVM 150 .
- the second NVM portion 220 may be a second physical partition and/or a second logical partition of the NVM 150 .
- the first NVM portion 210 may include a plurality of single level cells (SLCs).
- the second NVM portion 220 may include a plurality of multi-level cells (MLCs).
- An MLC may include cells that are configured to store 2 or more bits per cell.
- the plurality of MLCs may include a plurality of triple level cells (TLCs).
- the first NVM portion 210 may include a plurality of first physical addresses (e.g., memory physical address), and the second NVM portion 220 may include a plurality of second physical addresses.
- the first NVM portion 210 includes a first plurality of cells (e.g., memory cells) that has a first maximum writing speed
- the second NVM portion includes a second plurality of cells that has a second maximum writing speed that is lower than the first maximum writing speed
- the second plurality of cells may include MLCs and/or TLCs, which may have a second maximum writing speed that is lower than the first maximum writing speed of SLCs (which are an example of the first plurality of cells).
- the first plurality of cells may be MLCs and the second plurality of cells may be TLCs, where the TLCs have a second maximum writing speed that is lower than the first maximum writing speed of the MLCs.
- the first plurality of cells and the second plurality of cells may be the same type of cells that have different maximum writing speeds.
- the first plurality of cells may be a first plurality of SLCs with a first maximum writing speed
- the second plurality of cells may be a second plurality of SLCs with a second maximum writing speed.
- the first NVM portion 210 includes a first sub-portion 212 and a second sub-portion 214 .
- the first sub-portion 212 may include a subset of dies from the first dies of the first NVM portion 210 .
- the first sub-portion 212 may be a physical partition and/or a logical partition of the first NVM portion 210 .
- the first sub-portion 212 may include a first plurality of physical addresses from the first physical addresses of the first NVM portion 210 .
- the second sub-portion 214 may include a subset of dies from the first dies of the first NVM portion 210 .
- the second sub-portion 214 may be a physical partition and/or a logical partition of the first NVM portion 210 . In some implementations, the second sub-portion 214 may include a second plurality of physical addresses from the first physical addresses of the first NVM portion 210 .
- FIG. 2 illustrates an example of how the NVM 150 may be divided into different portions and/or partitions.
- the first sub-portion 212 may be allocated to store (e.g., only store, initially store) file management data (e.g., File Allocation Table (FAT) data).
- the second sub-portion 214 may be allocated to store (e.g., only store, initially store) payload data (e.g., audio video data).
- the second NVM portion 220 may store file management data and/or payload data.
- the NVM 150 may have different configurations of portions and/or partitions, which may be reserved or allocated to store different types of data.
- the NVM 150 may include more than two NVM portions and/or more than two sub-portions.
- different types of data may be stored (e.g., initially stored) in different portions of the NVM 150 to provide an SSD 102 that is capable of sustained high performance writing speeds. Moreover, storing different types of data in different portions may provide reduced data loss and/or reduced errors in data that is stored in the SSD 102 .
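The portion layout of FIG. 2 can be sketched as a small model. This is an assumed structure for illustration only, not the patent's implementation: a fast first NVM portion is split into a FAT sub-portion and a payload sub-portion, and each sub-portion enforces its allocated data type.

```python
# Minimal sketch of sub-portions allocated to a single data type,
# mirroring sub-portions 212 (FAT data) and 214 (payload data) of FIG. 2.
from dataclasses import dataclass, field

@dataclass
class SubPortion:
    name: str
    allowed_type: str                      # "fat" or "payload" (assumed labels)
    data: list = field(default_factory=list)

    def store(self, kind: str, payload: bytes) -> None:
        # Enforce the allocation: a sub-portion accepts only its own data type.
        if kind != self.allowed_type:
            raise ValueError(f"{self.name} is allocated for {self.allowed_type} data only")
        self.data.append(payload)

# First NVM portion (fast cells) with its two sub-portions.
fat_sub = SubPortion("sub-portion 212", "fat")
payload_sub = SubPortion("sub-portion 214", "payload")

fat_sub.store("fat", b"FAT entry")
payload_sub.store("payload", b"video frame")
```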
- FIG. 3 illustrates a block diagram of how data may be stored in a host memory and then queue up to be transmitted to an SSD.
- FIG. 3 illustrates the memory 180 of a host 104 storing a plurality of data 300 .
- the plurality of data 300 may include a plurality of File Allocation Table (FAT) data 310 , such as a first FAT data 311 and a second FAT data 312 .
- FAT data may include entry data, bitmap data and FAT information.
- the plurality of data 300 may also include a plurality of payload data 320 , such as payload data 321 - 327 . Examples of payload data include audio video data.
- FIG. 3 illustrates that the payload data are stored in recording units (RUs) in the memory 180 , with each recording unit having a physical address.
- An RU is a unit into which the memory 180 may be divided.
- the plurality of FAT data 310 and the plurality of payload data 320 may be stored in the memory 180 in any manner or order (e.g., random order, sequential order). However, in some implementations, the plurality of FAT data 310 may be stored in physical addresses that are near each other.
- FIG. 3 also illustrates how the plurality of FAT data 310 and the plurality of payload data 320 may be queued up to be transmitted to the SSD 102 .
- the submission queue 330 illustrates an exemplary order that the host 104 may use to transmit data to the SSD 102 .
- the plurality of payload data 320 are grouped into several allocation units (AUs). This allows some data to be stored in the same physical address (e.g., physical memory address) or same block of physical addresses of the SSD 102 .
- the AU 1 includes a group of payload data 321 - 324
- the AU 2 includes a group of payload data 325 - 326 .
- the payload data 321 - 324 are transmitted, followed by the first FAT data 311 , then the payload data 325 - 326 , and then the second FAT data 312 . It is noted that the order of transmission or reception of the payload data and the FAT data may vary.
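The queue ordering above (RUs grouped into AUs, with a FAT update after each AU) can be sketched as follows. The function name, the AU size, and the item labels are illustrative assumptions, not values taken from the patent.

```python
# Hedged sketch of building a submission queue: payload recording units (RUs)
# are grouped into allocation units (AUs), and each AU is followed by a FAT
# update, matching the exemplary order of submission queue 330.
def build_submission_queue(rus, au_size, fat_updates):
    """Interleave AUs of up to `au_size` RUs with FAT updates, AU first."""
    queue = []
    aus = [rus[i:i + au_size] for i in range(0, len(rus), au_size)]
    for au, fat in zip(aus, fat_updates):
        queue.extend(au)    # e.g., payload data 321-324 form AU 1
        queue.append(fat)   # e.g., first FAT data follows AU 1
    return queue

q = build_submission_queue(
    ["RU321", "RU322", "RU323", "RU324", "RU325", "RU326"],
    au_size=4,
    fat_updates=["FAT311", "FAT312"],
)
# q == ["RU321", "RU322", "RU323", "RU324", "FAT311", "RU325", "RU326", "FAT312"]
```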
- the SSD 102 may direct different types of data to be stored in different portions of the NVM 150 .
- the data that is received is routed or directed to specific portions and/or sub-portions of the NVM 150 , based on pre-defined routing schemes.
- FIG. 3 illustrates that the first FAT data 311 and the second FAT data 312 are directed to be stored at the first sub-portion 212 of the first NVM portion 210 .
- the first sub-portion 212 may be allocated to store (e.g., store only) file management data.
- the payload data 321 - 327 are directed to be stored at the second sub-portion 214 of the first NVM portion 210 .
- the second sub-portion 214 may be allocated to store (e.g., store only) payload data.
- the destination addresses which are represented as T1, T2, etc., are logical addresses that may be specified by the host 104 .
- a translation table (e.g., a flash translation layer (FTL) table) may be used to convert these logical addresses to physical addresses of the NVM 150 .
- FAT data may include a logical address specified by the host 104 .
- the FAT data may use a pre-defined set of logical addresses for all FAT data.
- the SSD 102 may specify a physical address that is located in the first sub-portion 212 .
- the SSD 102 may identify data received from the host 104 as being FAT data using various methods.
- data may be identified as FAT data based on the logical block addressing (LBA) and/or the command size (e.g., CMD size) of the data.
- data that is associated with a particular pre-defined logical address may be considered FAT data.
- data that is below a certain threshold size may be considered FAT data.
- data that is associated with a certain command may be considered FAT data.
- the SSD 102 may use one or more of the above methods for determining whether data is FAT data.
- combinations of the above methods may be used to determine whether data is FAT data.
- the SSD 102 may use other methods for determining that data is FAT data.
- the SSD 102 may determine that the data is payload data.
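The identification heuristics above can be sketched as a simple classifier. The reserved LBA range, the size threshold, and the command names below are illustrative assumptions for the example, not values taken from the patent.

```python
# Sketch of the FAT-detection heuristics described above. The reserved LBA
# range, the size threshold, and the command set are assumptions made for
# illustration, not values from the specification.

FAT_LBA_RANGE = range(0, 8192)      # hypothetical pre-defined FAT logical addresses
FAT_SIZE_THRESHOLD = 32 * 1024      # hypothetical: small writes may be FAT updates
FAT_COMMANDS = {"FUA", "RMW"}       # commands whose data is routed like FAT data

def is_fat_data(lba, cmd_size, command=None):
    """Return True if a write looks like FAT (file management) data."""
    if lba in FAT_LBA_RANGE:           # associated with a pre-defined logical address
        return True
    if cmd_size < FAT_SIZE_THRESHOLD:  # below a certain threshold size
        return True
    if command in FAT_COMMANDS:        # associated with a certain command
        return True
    return False                       # otherwise treat it as payload data
```

As the text notes, an SSD may use one, several, or none of these heuristics; the checks here simply combine all three with a logical OR.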
- these routing schemes allow the SSD 102 to provide sustained high-performance writing speeds during a recording of video (e.g., high definition (HD) video, 4K video) by the host 104 . This enables high quality video to be recorded and stored in real time. Moreover, these routing schemes reduce data loss and/or errors in data.
- FIG. 4 illustrates an example of a translation table 400 that may be used to convert logical addresses to physical addresses.
- the translation table 400 may be a flash translation layer (FTL) table.
- the translation table 400 may include instructions and/or commands that convert a logical address to a physical address in the SSD 102 .
- the translation table 400 may manage the NVM 150 in terms of blocks (e.g., memory blocks) for ease of management. Every block has a physical address which may be pre-determined or pre-defined in the NVM 150 .
- the assignment of the logical address to the physical address may be done at the flash management unit (FMU) level, which is generally 4 KB.
- the translation table 400 may assign the next available physical memory to the incoming logical address. Once the physical address is assigned to a logical address, the physical address is stored in the translation table 400 . Whenever the data in a physical address X, is moved to a physical address Y, the translation table 400 is updated with the updated physical address for the logical address. Every I/O operation that requires a physical address for a user data may use the translation table 400 .
- FIG. 4 illustrates an example where there is a command to write to logical address T7.
- the command may be from the host 104 .
- the logical address T7 is associated with the physical address 3 of the NVM 150 .
- the physical address 3 is part of the block 0 (e.g., memory block) of the NVM 150 .
- when the host 104 specifies that data be written to the logical address T7, the data is stored in the physical address 3 of block 0 of the NVM 150 .
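The table behavior described above (assign the next available physical address on first write, update the entry when data moves) can be sketched as follows, including the T7 example from FIG. 4. The class interface and the number of pages per block are illustrative assumptions.

```python
# Minimal sketch of the translation table described above: logical addresses
# map to physical addresses; the next available physical address is assigned
# on first write, and the entry is updated whenever data is relocated.
# PAGES_PER_BLOCK is an assumption for illustration.

PAGES_PER_BLOCK = 4  # hypothetical: physical addresses 0-3 form block 0, etc.

class TranslationTable:
    def __init__(self, num_physical):
        self.l2p = {}                          # logical address -> physical address
        self.free = list(range(num_physical))  # next-available physical addresses

    def write(self, logical):
        """Assign the next available physical address to a logical address."""
        if logical not in self.l2p:
            self.l2p[logical] = self.free.pop(0)
        return self.l2p[logical]

    def relocate(self, logical, new_physical):
        """Data moved from physical address X to Y: update the table entry."""
        self.l2p[logical] = new_physical

    def block_of(self, logical):
        """Memory block containing the data for a logical address."""
        return self.l2p[logical] // PAGES_PER_BLOCK
```

For example, if T7 is the fourth logical address written, it is assigned physical address 3, which falls in block 0, mirroring FIG. 4.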
- FIG. 5 illustrates a block diagram of how data may be stored in the NVM 150 during a recording of video by a host that is coupled to the SSD.
- one or more FAT data from the plurality of FAT data 310 is directed by the SSD 102 , to be stored (e.g., initially stored) at the first sub-portion 212 of the first NVM portion 210 of the NVM 150 .
- the SSD 102 may direct one or more FAT data from the plurality of FAT data 310 , to be stored at the second NVM portion 220 .
- the SSD 102 may make a determination as to whether there is available space at the first sub-portion 212 , to store FAT data. When there is available space (e.g., when the first sub-portion 212 is not full), the SSD 102 may store the FAT data at the first sub-portion 212 . However, when there is not available space at the first sub-portion 212 , the SSD 102 may store the FAT data at the second NVM portion 220 . This process may be iteratively performed for each FAT data that is received by the SSD 102 .
- FIG. 5 also illustrates one or more payload data from the plurality of payload data 320 being directed by the SSD 102 , to be stored (e.g., initially stored) at the second sub-portion 214 of the first NVM portion 210 of the NVM 150 .
- the SSD 102 may direct one or more payload data from the plurality of payload data 320 , to be stored at the second NVM portion 220 .
- the SSD 102 may make a determination as to whether there is available space at the second sub-portion 214 , to store payload data.
- the SSD 102 may store the payload data at the second sub-portion 214 . However, when there is not available space at the second sub-portion 214 , the SSD 102 may store the payload data at the second NVM portion 220 . This process may be iteratively performed for each payload data that is received by the SSD 102 .
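The spill-over routing just described (primary sub-portion first, second NVM portion when full) can be sketched as below. The region names and capacities are illustrative assumptions.

```python
# Sketch of the routing scheme above: direct each data type to its allocated
# fast sub-portion while space remains, otherwise fall back to the second
# (MLC/TLC) NVM portion. Names and capacities are illustrative assumptions.

class Region:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.used = name, capacity, 0

    def has_space(self, size):
        return self.used + size <= self.capacity

    def store(self, size):
        self.used += size
        return self.name

def route_write(data_type, size, fat_sub, payload_sub, second_portion):
    """Route FAT data to the first sub-portion and payload data to the
    second sub-portion; spill to the second NVM portion when full."""
    primary = fat_sub if data_type == "fat" else payload_sub
    target = primary if primary.has_space(size) else second_portion
    return target.store(size)
```

The same check is repeated for every received data item, matching the iterative behavior described in the text.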
- the SSD 102 may first attempt to write payload data and/or FAT data (which is an example of file management data) at a portion (e.g., portion that includes SLCs) of the NVM 150 that has a first maximum writing speed before attempting to write data at another portion (e.g., portion that includes MLCs and/or TLCs) of the NVM 150 that has a second maximum writing speed that is lower than the first maximum writing speed.
- FIG. 5 illustrates that the NVM 150 includes portions and sub-portions that are reserved or allocated for a particular type of data.
- particular physical addresses or blocks of physical addresses of the NVM 150 are reserved or allocated to store only a particular type of data.
- reserving and/or allocating physical addresses or blocks of physical addresses of the NVM 150 for storing only a particular type of data may be done during and/or after a formatting of the NVM 150 .
- formatting or reformatting the NVM 150 may result in different physical addresses or blocks of physical addresses to be reserved and/or allocated to store a particular type of data.
- the NVM 150 is divided into different portions, sub-portions, partitions (e.g., physical partition, logical partitions) or combinations thereof, where some of the portions and/or sub-portions may have different properties (e.g., storage capabilities per cell, writing speeds, reliability). Some of these different portions are made of different configurations of cells that store data.
- the first NVM portion 210 may include SLCs
- the second NVM portion 220 may include MLCs and/or TLCs.
- An SLC is a memory cell that can store a single bit of data per cell.
- An SLC is faster than other cells at storing and retrieving data, is more reliable (e.g., less error prone) and longer lasting than other cells. However, an SLC is more expensive than other cells.
- An MLC is a memory cell that can store multiple bits of data per cell (e.g., two or more bits of data per cell). An MLC is not as fast as an SLC and is more error prone than an SLC, but an MLC is cheaper to fabricate than an SLC.
- A TLC is a memory cell that can store 3 bits of data per cell. A TLC is cheaper to fabricate than an SLC, but is not as reliable (e.g., is more error prone) as an SLC.
- the first NVM portion 210 includes a plurality of SLCs. Thus, when data is initially stored, a routing scheme will first attempt to store data at the first NVM portion 210 , which is faster at storing data than the second NVM portion 220 .
- the NVM 150 is divided in such a way that garbage collection may not be needed to be performed when writing to the NVM 150 during a video recording by the host 104 .
- about 1 percent (%) or more of the total capacity of the NVM 150 is reserved for the first NVM portion 210 .
- about 1 percent (%) or more of the total capacity of the NVM 150 is reserved for the first sub-portion 212 of the first NVM portion 210 .
- reserving about 1 percent or more of the total storage for storing FAT data is enough that garbage collection is not triggered during video recording by the host 104 . This is because, in a worst-case scenario, FAT data will not be more than about 1 percent of the total audio video data.
- FIGS. 6 and 7 illustrate two graphs that show exemplary writing performance by an SSD under different scenarios.
- FIGS. 6 and 7 assume SLCs with a maximum writing speed of about 1200 megabytes per second (MBps) and TLCs with a maximum writing speed of about 850 MBps. These speeds are merely examples. Other SLCs, TLCs and MLCs may have different maximum writing speeds.
- FIG. 6 illustrates a graph 600 that shows writing speeds relative to how much data is stored in the NVM for an SSD that does not use a specialized routing scheme. As shown in FIG. 6 , the SSD is able to sustain a high writing speed (e.g., about 850 MBps) for up to a particular amount of data stored in the SSD.
- the high writing speed is no longer sustainable because the SSD has to perform garbage collection. This causes the writing speed to drop to around 110 MBps.
- the writing speed picks up again for a short period of time, until the SSD has to perform garbage collection again. This process may repeat itself several times until the SSD is full. As shown in FIG. 6 , the SSD is not able to sustain high writing speeds for a long period of time.
- the assumption in FIG. 6 is that there is no pre-defined area for the FAT data.
- the payload data and the FAT data are both directed to a common first type of cells (e.g., SLC), which consumes the first type of cells at a fast rate (in the case of FIG. 6 , around 110 GB of user data for an SSD with a storage capacity of around 512 GB), which in turn leads to reaching the threshold number of blocks for the first type of cells.
- the threshold for the first type of cells is defined so that a minimum number of blocks is always available in the SSD 102 to accept FAT data and/or data associated with Forced Unit Access (FUA) commands, for faster turnaround time from the SSD 102 .
- the minimum number of blocks may vary with different embodiments.
- a garbage collection mechanism is triggered. This is a mechanism where the contents of blocks of the first type of cells are transferred to blocks of a second type of cells (e.g., TLC) in order to free up blocks of the first type of cells and maintain the threshold. Since the first type of cells is common to all kinds of host data, this activity occurs in the foreground, which directly impacts the ability of the SSD 102 to accept host data, leading to the performance dips shown in FIG. 6 . Foreground garbage collection may occur until the number of available blocks is returned above the minimum number of blocks. FIG. 6 illustrates that after about 110 GB, the SSD 102 alternates between garbage collection and storage, which is highly inefficient.
- the first type of cells may have smaller block sizes than the block sizes for the second type of cells (e.g., TLCs).
- SLCs may have block sizes of about 128 MB
- TLCs may have block sizes of about 384 MB.
- the smaller block sizes of the SLCs means that it is more likely that the minimum number of blocks available for FAT data and/or data associated with Forced Unit Access (FUA) commands will be reached, and thus more likely that garbage collection is triggered.
- FIG. 7 illustrates a graph 700 that shows writing speeds relative to how much data is stored in the NVM for an SSD that uses the routing schemes described herein.
- the NVM of FIG. 7 may use the same configuration of SLCs and TLCs as that of FIG. 6 .
- the graph 700 shows that by routing certain types of data to certain locations, a high writing speed may be achieved even when the SSD is full or near capacity.
- the term full or full capacity of an SSD may refer to the state in which data can no longer be stored in the SSD. This specialized routing scheme avoids or reduces the triggering of garbage collection by the SSD (e.g., during a video recording by a host).
- the SSD is able to sustain a high writing speed (e.g., about 850 MBps) while the SSD is being used to store data received from a host (e.g., video recording data from the host).
- the graph 700 illustrates writing speeds using some or all of the routing methods and schemes described in the present disclosure.
- FIG. 7 shows an initial writing speed of 1200 MBps, which may be attributed to the fact that data may be stored initially in the SLCs.
- the routing of data is performed so that a certain type of data (e.g., FAT data) is stored in a first sub-portion 212 and another type of data (e.g., payload data) is first stored in a second sub-portion 214 .
- the partition or division of the NVM 150 is such that, even in a worst-case scenario, FAT data will always be stored in the first sub-portion 212 .
- the first sub-portion 212 may represent about 1 percent of all the storage of the NVM 150 , so that all of the FAT data and/or commands can be stored in the first sub-portion 212 . The end result is that foreground garbage collection should not be triggered during recording.
- the SSD 102 may nonetheless perform garbage collection (e.g., background garbage collection) when the host 104 is idle (e.g., not video recording, not storing data at the SSD).
- FIG. 8 illustrates a block diagram of garbage collection (e.g., background garbage collection) being performed at the NVM 150 , when the host 104 is idle (e.g., not performing video recording).
- the SSD 102 may move or relocate data from the first sub-portion 212 to the second NVM portion 220 ; and/or move or relocate data from the second sub-portion 214 to the second NVM portion 220 .
- data from a first location of the second NVM portion 220 may be moved or relocated to a second location of the second NVM portion 220 .
- Moving or relocating data may include moving or relocating data from a first physical address (e.g., physical memory address) to a second physical address.
- garbage collection may move data within the respective sub-portions. For example, garbage collection may include moving data at a first location of the first sub-portion 212 to a second location of the first sub-portion 212 . Similarly, garbage collection may include moving data at a first location of the second sub-portion 214 to a second location of the second sub-portion 214 . However, different implementations may perform garbage collection differently.
- FIG. 9 illustrates how data is processed by the host 104 and the SSD 102 , and how that may affect data loss and/or errors in data.
- FIG. 9 illustrates a submission queue 900 (which may also be known as a transmission queue) that includes a plurality of payload data and file management data (e.g., FAT data).
- the host 104 may queue up the data in such a way that data that are related to each other stay as close as possible in the submission queue or transmission queue.
- FIG. 9 also illustrates a cache 910 (e.g., FAT entry cache) that is used to group FAT data together, so that they can be stored together in the same block of the NVM 150 .
- the cache 910 may be used by the SSD 102 .
- FAT data or any other file management data, does not usually take up a lot of space (relative to payload data), and to optimize space usage, FAT data may be grouped together for storage by the SSD 102 . So as the host 104 is transmitting various types of data, FAT data is stored in a cache until there is enough FAT data (e.g., FAT data 1, FAT data 2, FAT data 3) to store in the NVM.
- payload data (e.g., AU 1, AU 2, AU 3) may not be stored in the NVM until the corresponding FAT data is also stored. Thus, if an ungraceful shutdown (UGSD) occurs before the cached FAT data is written, the associated payload data may be lost or corrupted.
- the above issue can be reduced by storing the FAT data in a portion of the NVM 150 that includes SLCs, which are faster and more reliable than other memory cells, like MLCs and TLCs.
- By doing so, the SSD 102 reduces the likelihood of data loss and/or errors in data.
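The FAT-entry cache described above can be sketched as a small buffer that flushes a group of entries to the same NVM block once enough have accumulated. The flush threshold and class interface are illustrative assumptions.

```python
# Sketch of the FAT-entry cache described above: FAT updates are held in a
# small cache and written to the NVM as a group, so they land in the same
# block. The flush threshold is an assumption for the example.

class FatEntryCache:
    def __init__(self, flush_threshold=3):
        self.entries = []                 # cached FAT entries not yet on NVM
        self.flush_threshold = flush_threshold
        self.flushed_groups = []          # stands in for blocks written to NVM

    def add(self, fat_entry):
        """Cache a FAT entry; flush the group once enough have accumulated."""
        self.entries.append(fat_entry)
        if len(self.entries) >= self.flush_threshold:
            # store the grouped FAT data together in the same block
            self.flushed_groups.append(tuple(self.entries))
            self.entries.clear()
```

Until a flush occurs, the cached entries exist only in volatile memory, which is why a sudden power loss during this window risks the associated payload data.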
- FIG. 10 illustrates a flow chart of a method 1000 for writing data at a solid state device (SSD).
- the method 1000 shown in FIG. 10 may be performed by any of the SSDs described in the present disclosure, such as the SSD 102 . Also, for purposes of clarity, the method shown in FIG. 10 does not necessarily show all the operations performed by the SSD. In some implementations, the method shown in FIG. 10 may include other operations that can be performed by the SSD. In some implementations, the order of the operations may be changed or rearranged.
- the method 1000 may be performed by a controller or a processor of the SSD, as described above. Some parts or all of the method 1000 may be performed by the SSD 102 , when the host 104 is recording video.
- the method receives (at 1002 ) data.
- the data may be received from a host (e.g., 104 ) through the host interface 120 .
- the data may include various types of data, such as file management data (e.g., FAT data) and audio video data.
- the method determines (at 1004 ) the type of data that has been received.
- Data can include payload data (e.g., audio video data) and file management data, such as FAT data.
- Data can also include data associated with a Force Unit Access (FUA) command and/or a read modify write (RMW) command.
- Different implementations may use different methods for determining the type of data that is received.
- the SSD 102 may identify data received from the host 104 as being FAT data using various methods.
- data may be identified as FAT data based on the logical block addressing (LBA) and/or the command size (e.g., CMD size) of the data.
- data that is below a certain threshold size may be considered FAT data.
- data that is associated with a certain command may be considered FAT data.
- the SSD 102 may use one or more of the above methods for determining whether data is FAT data. In some implementations, combinations of the above methods may be used to determine whether data is FAT data. However, it is noted that the SSD 102 may use other methods for determining that data is FAT data. For example, the method may determine that data is payload data or FAT data by looking at the header of the data and/or looking at the size of the data. The host 104 may specify the type of data that is transmitted to the SSD 102 . In some implementations, when the SSD 102 determines that data is not FAT data, the SSD 102 may determine that the data is payload data.
- the method stores (at 1006 ) the received data at an appropriate location based (i) on the type of data received and (ii) how much space or capacity is available at one or more of the portions and/or sub-portions of the NVM 150 .
- the method 1000 may determine whether the first sub-portion 212 is full or if there is enough available space at the first sub-portion 212 . When there is available space (e.g., when the first sub-portion 212 is not full), the method 1000 may store the FAT data at the first sub-portion 212 . However, when the first sub-portion 212 is full or there is not enough available space at the first sub-portion 212 , the method 1000 may direct the FAT data to be stored at the second NVM portion 220 .
- the method 1000 may determine whether the second sub-portion 214 is full or if there is enough available space at the second sub-portion 214 . When there is available space (e.g., when the second sub-portion 214 is not full), the method may store the payload data at the second sub-portion 214 . However, when the second sub-portion 214 is full or there is not enough available space at the second sub-portion 214 , the method 1000 may direct the payload data to be stored at the second NVM portion 220 .
- the method 1000 may determine whether the first sub-portion 212 is full or if there is enough available space at the first sub-portion 212 . When there is available space (e.g., when the first sub-portion 212 is not full), the method 1000 may store the data associated with the FUA command or the RMW command at the first sub-portion 212 . However, when the first sub-portion 212 is full or there is not enough available space at the first sub-portion 212 , the method 1000 may direct the data associated with the FUA command or the RMW command to be stored at the second NVM portion 220 .
- the first sub-portion 212 and/or the second sub-portion 214 may be full or near capacity, and the method 1000 may perform (at 1008 ) a foreground garbage collection (e.g., garbage collection performed while host is recording video) to free up space in the first sub-portion 212 and/or the second sub-portion 214 .
- the method 1000 may perform foreground garbage collection when the data that is received is associated with the FUA command or the RMW command, and space (e.g., physical addresses) in the first sub-portion 212 that are allocated for storing data associated with the FUA command or the RMW command is full or near capacity.
- the method determines (at 1010 ) whether there is more data. If so, the method proceeds back to receive (at 1002 ) more data. If not, the method 1000 may determine that the host is idle, and the method 1000 may perform (at 1012 ) a background garbage collection. As mentioned above, background garbage collection may occur when the host is idle (e.g., not recording video, not capturing images). Garbage collection may include moving or relocating data (e.g., FAT data, payload data) from a first physical address to a second physical address. The second physical address may be located within the same sub-portion or portion as the first physical address, or the second physical address can be located in a different sub-portion or different portion than the first physical address. Examples of garbage collection are described in FIG. 8 .
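The overall control flow of method 1000 can be sketched as a loop over incoming writes. The callback names below (classify, store, background_gc) are illustrative stand-ins for steps 1004, 1006 and 1012, not names from the patent.

```python
# Skeleton of the write loop of method 1000: classify each received write,
# store it according to its type, and run background garbage collection once
# there is no more data (host idle). Function names are illustrative.

def handle_writes(commands, classify, store, background_gc):
    """commands: iterable of (data, meta) pairs received from the host.
    classify, store, background_gc stand in for steps 1004, 1006, 1012."""
    for data, meta in commands:
        data_type = classify(meta)   # step 1004: determine the type of data
        store(data_type, data)       # step 1006: route by type and free space
    background_gc()                  # step 1012: no more data, host is idle
```

Foreground garbage collection (step 1008) would slot into the loop body when the targeted sub-portion is near capacity; it is omitted here to keep the skeleton minimal.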
- FIG. 11 illustrates a flow chart of a method 1100 for writing data at a solid state device (SSD).
- the method shown in FIG. 11 may be performed by any of the SSDs described in the present disclosure, such as the SSD 102 . Also, for purposes of clarity, the method shown in FIG. 11 does not necessarily show all the operations performed by the SSD. In some implementations, the method shown in FIG. 11 may include other operations that can be performed by the SSD. In some implementations, the order of the operations may be changed or rearranged.
- the method 1100 may be performed by a controller or a processor of the SSD, as described above. Some parts or all of the method 1100 may be performed by the SSD 102 , when the host 104 is recording video.
- the method receives (at 1102 ) data.
- the data may be received from a host (e.g., 104 ) through the host interface 120 .
- the method determines (at 1104 ) the type of data that has been received.
- Data can include payload data (e.g., audio video data) and file management data, such as FAT data. Examples of how to determine the type of data are described in FIG. 10 .
- the method 1100 proceeds to determine (at 1106 ) whether the first sub-portion 212 is full or if there is enough available space at the first sub-portion 212 . When there is available space (e.g., when the first sub-portion 212 is not full), the method 1100 stores (at 1108 ) the file management data at the first sub-portion 212 , which may include storing data at one or more addresses from a first plurality of addresses of the first NVM portion 210 .
- the method proceeds to store (at 1110 ) the file management data at the second NVM portion 220 , which may include storing data at one or more addresses from a third plurality of addresses from the second NVM portion 220 .
- the method 1100 proceeds to determine (at 1112 ) whether the second sub-portion 214 is full or if there is enough available space at the second sub-portion 214 . When there is available space (e.g., when the second sub-portion 214 is not full), the method 1100 stores (at 1114 ) the payload data at the second sub-portion 214 , which may include storing data at one or more addresses from a second plurality of addresses of the first NVM portion 210 .
- the method proceeds to store (at 1116 ) the payload data at the second NVM portion 220 , which may include storing data at one or more addresses from a third plurality of addresses from the second NVM portion 220 .
- the method 1100 determines (at 1118 ) whether there is more data. If so, the method proceeds back to receive (at 1102 ) more data. If not, the method 1100 may end or wait for more data.
- FIG. 12 illustrates a block diagram of the NVM 150 .
- the NVM 150 includes the first NVM portion 210 and the second NVM portion 220 .
- the first NVM portion 210 includes the first sub-portion 212 and the second sub-portion 214 .
- the first NVM portion 210 may include a plurality of single level cells (SLCs).
- the second NVM portion 220 may include a plurality of multi-level cells (MLCs).
- the plurality of MLCs may include a plurality of triple level cells (TLCs).
- the first NVM portion 210 may include a plurality of first physical addresses (e.g., memory physical address), and the second NVM portion 220 may include a plurality of second physical addresses.
- the first NVM portion 210 includes the first sub-portion 212 and the second sub-portion 214 .
- the first sub-portion 212 may include a first plurality of physical addresses from the first physical addresses of the first NVM portion 210 .
- the second sub-portion 214 may include a second plurality of physical addresses from the first physical addresses of the first NVM portion 210 .
- FIG. 12 illustrates an example of how the NVM 150 may be divided for a particular storage size.
- FIG. 12 illustrates that the second NVM portion has about 512 GB of storage space or more.
- the first sub-portion 212 has about 6 GB of storage space.
- About 5 GB of the storage space of the first sub-portion 212 is allocated for storing (e.g., storing only) file management data (e.g., FAT data).
- about 1 GB of the storage space of the first sub-portion 212 is allocated for storing (e.g., only storing) special writing data (e.g., data associated with FUA command or RMW command).
- the second sub-portion 214 has about 1 GB of storage or more that is allocated for storing (e.g., storing only) payload data.
- this allocation of storage space enables the SSD to provide a full card write of the SSD without having to perform garbage collection.
- this configuration assumes that the FAT data will not take up more than 1 percent of the payload data. Thus, if the SSD is capable of storing about 512 GB of data, then this configuration assumes that no more than about 5 GB is needed for the FAT data.
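The sizing assumption above can be checked numerically: if FAT data never exceeds about 1 percent of the stored data, a 512 GB drive needs roughly 5 GB reserved for FAT data, which matches the FIG. 12 allocation.

```python
# Numeric check of the sizing rule above: FAT data is assumed to be at most
# about 1 percent of the payload, so ~5 GB suffices for a 512 GB SSD. The
# 5 GB / 1 GB split mirrors the FIG. 12 allocation.

total_capacity_gb = 512
fat_worst_case_fraction = 0.01              # worst case: FAT <= ~1% of data
fat_bound_gb = total_capacity_gb * fat_worst_case_fraction  # ~5.12 GB bound

fat_reserved_gb = 5                         # FIG. 12: ~5 GB for FAT data
special_write_gb = 1                        # FIG. 12: ~1 GB for FUA/RMW data
first_sub_portion_gb = fat_reserved_gb + special_write_gb   # ~6 GB total
```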
- FIG. 12 illustrates one example of how the NVM 150 may be partitioned and/or divided for a particular storage size. However, different implementations may use a NVM 150 with different storage sizes and/or different partitions and/or divisions.
- FIG. 13 illustrates a block diagram of different blocks for the SSD.
- Each block may represent pages and/or physical addresses of the NVM 150 .
- the blocks may have different sizes (e.g., 128 MB).
- the blocks may be managed by a translation table (e.g., a flash translation layer (FTL) table).
- the translation table is configured to convert a logical address of a particular data into a physical address at the NVM 150 .
- the translation table may help route data towards a particular block (e.g., memory block) of the NVM 150 .
- the blocks of memory may be specified and allocated during and/or after a formatting of the NVM 150 .
- reformatting the NVM 150 may result in different blocks (e.g., different physical addresses) being allocated and/or reserved for different data types.
- FIG. 13 illustrates one or more payload data from the plurality of payload data 320 that are first stored (at 1302 ) at blocks at the second sub-portion 214 of the first NVM portion 210 of the NVM 150 .
- 9 blocks (1-9) may be allocated for the second sub-portion 214 .
- one or more payload data from the plurality of payload data 320 are stored (at 1304 ) at the second NVM portion 220 .
- FIG. 13 also illustrates that one or more FAT data from the plurality of FAT data 310 is stored (at 1312 ) at the first sub-portion 212 of the first NVM portion 210 of the NVM 150 .
- 50 blocks (1-50) may be allocated for the first sub-portion 212 .
- one or more FAT data from the plurality of FAT data 310 is stored (at 1314 ) at the second NVM portion 220 .
- FIG. 13 illustrates that about 17 blocks (51-66) of the first sub-portion 212 may be allocated as buffer for storing special write data, such as data associated with the FUA command and/or the RMW command.
- a foreground garbage collection may be performed (at 1322 ) to free up space for more data associated with the FUA command and/or the RMW command.
- Background garbage collection may be performed (at 1330 ), when the host 104 is idle (e.g., not recording video) or when no data is received by the SSD 102 . Background garbage collection may move or relocate data from blocks of the second sub-portion 214 and/or the first sub-portion 212 to blocks of the second NVM portion 220 .
- blocks that are freed may be added (at 1340 ) to a list of free blocks 1300 that keeps track of which blocks are available to store data.
- when blocks are freed (e.g., when a file is deleted), the freed blocks may be added to the list of free blocks 1300 .
- the list of free blocks 1300 helps the SSD 102 manage and determine where data can be stored.
- the list of free blocks 1300 may also help the SSD 102 ensure that one block is not storing data more often than other blocks.
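The free-block list and its wear-evening role can be sketched as below. Tracking an erase count per block and allocating the least-worn block are assumptions made to illustrate how the list might keep one block from being written far more often than the others.

```python
# Sketch of the free-block list described above: freed blocks are returned
# to the list, and allocation prefers the block with the fewest erases so no
# block stores data much more often than the others. Per-block erase-count
# tracking is an assumption for the illustration.
import heapq

class FreeBlockList:
    def __init__(self, block_ids):
        # (erase_count, block_id) pairs; all blocks start unworn
        self.heap = [(0, b) for b in block_ids]
        heapq.heapify(self.heap)

    def allocate(self):
        """Hand out the least-worn free block."""
        erase_count, block = heapq.heappop(self.heap)
        return block, erase_count

    def free(self, block, erase_count):
        """A block was erased (e.g., its file was deleted): return it, worn."""
        heapq.heappush(self.heap, (erase_count + 1, block))
```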
Abstract
Description
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/191,193 US10895991B2 (en) | 2018-11-14 | 2018-11-14 | Solid state device with improved sustained data writing speed |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200150873A1 US20200150873A1 (en) | 2020-05-14 |
| US10895991B2 true US10895991B2 (en) | 2021-01-19 |
Family
ID=70551365
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/191,193 Active 2039-04-06 US10895991B2 (en) | 2018-11-14 | 2018-11-14 | Solid state device with improved sustained data writing speed |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US10895991B2 (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11288185B2 (en) | 2019-01-03 | 2022-03-29 | Silicon Motion, Inc. | Method and computer program product for performing data writes into a flash memory |
| US11086737B2 (en) * | 2019-01-16 | 2021-08-10 | Western Digital Technologies, Inc. | Non-volatile storage system with rapid recovery from ungraceful shutdown |
| KR102817611B1 (en) * | 2019-08-02 | 2025-06-11 | 삼성전자주식회사 | Memory device including a plurality of buffer area for supporting fast write and fast read and storage device including the same |
| JP7362349B2 (en) * | 2019-08-23 | 2023-10-17 | キヤノン株式会社 | Control device |
| US12099742B2 (en) * | 2021-03-15 | 2024-09-24 | Pure Storage, Inc. | Utilizing programming page size granularity to optimize data segment storage in a storage system |
| US11960735B2 (en) * | 2021-09-01 | 2024-04-16 | Micron Technology, Inc. | Memory channel controller operation based on data types |
| CN116417025B (en) * | 2023-03-01 | 2025-09-16 | 超聚变数字技术有限公司 | Power failure processing method, solid state disk and computing device |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080126680A1 (en) * | 2006-11-03 | 2008-05-29 | Yang-Sup Lee | Non-volatile memory system storing data in single-level cell or multi-level cell according to data characteristics |
| US20160110126A1 (en) * | 2014-10-16 | 2016-04-21 | Futurewei Technologies, Inc. | All-flash-array primary storage and caching appliances implementing triple-level cell (tlc)-nand semiconductor microchps |
Also Published As
| Publication number | Publication date |
|---|---|
| US20200150873A1 (en) | 2020-05-14 |
Similar Documents
| Publication | Title |
|---|---|
| US10895991B2 (en) | Solid state device with improved sustained data writing speed |
| CN114510434B (en) | Data aggregation in ZNS drives | |
| KR101486987B1 (en) | Semiconductor memory device including nonvolatile memory and commnand scheduling method for nonvolatile memory | |
| US8171239B2 (en) | Storage management method and system using the same | |
| US7962685B2 (en) | Portable data storage device incorporating multiple flash memory units | |
| US10102118B2 (en) | Memory system and non-transitory computer readable recording medium | |
| US20190251039A1 (en) | Methods and apparatus for implementing a logical to physical address mapping in a solid state drive | |
| US8458394B2 (en) | Storage device and method of managing a buffer memory of the storage device | |
| JP2013242908A (en) | Solid state memory, computer system including the same, and operation method of the same | |
| KR101204163B1 (en) | Semiconductor memory device | |
| US8819350B2 (en) | Memory system | |
| US20120159050A1 (en) | Memory system and data transfer method | |
| US11853565B2 (en) | Support higher number of active zones in ZNS SSD | |
| US20180081799A1 (en) | Memory device and non-transitory computer readable recording medium | |
| KR20200032404A (en) | Data Storage Device and Operation Method Thereof, Storage System Having the Same | |
| US12072797B2 (en) | Memory system and non-transitory computer readable recording medium | |
| KR101070511B1 (en) | Solid state drive controller and method for operating of the solid state drive controller | |
| US11640254B2 (en) | Controlled imbalance in super block allocation in ZNS SSD | |
| US20220229775A1 (en) | Data storage device and operating method thereof | |
| CN112947845B (en) | Thermal data identification methods and storage devices | |
| CN106205707B (en) | memory device | |
| US20200327069A1 (en) | Data storage device and operation method thereof, controller using the same | |
| CN106205708A (en) | Cache device | |
| CN106201326B (en) | Information processing apparatus | |
| US12475032B2 (en) | Efficient consolidation for two layer FTL |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS AGENT, ILLINOIS Free format text: SECURITY INTEREST;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:052915/0566 Effective date: 20200113 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST AT REEL 052915 FRAME 0566;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:059127/0001 Effective date: 20220203 |
|
| AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., ILLINOIS Free format text: PATENT COLLATERAL AGREEMENT - A&R LOAN AGREEMENT;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:064715/0001 Effective date: 20230818 Owner name: JPMORGAN CHASE BANK, N.A., ILLINOIS Free format text: PATENT COLLATERAL AGREEMENT - DDTL LOAN AGREEMENT;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:067045/0156 Effective date: 20230818 |
|
| AS | Assignment |
Owner name: SANDISK TECHNOLOGIES, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:067567/0682 Effective date: 20240503 Owner name: SANDISK TECHNOLOGIES, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNOR'S INTEREST;ASSIGNOR:WESTERN DIGITAL TECHNOLOGIES, INC.;REEL/FRAME:067567/0682 Effective date: 20240503 |
|
| AS | Assignment |
Owner name: SANDISK TECHNOLOGIES, INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:SANDISK TECHNOLOGIES, INC.;REEL/FRAME:067982/0032 Effective date: 20240621 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
| AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS THE AGENT, ILLINOIS Free format text: PATENT COLLATERAL AGREEMENT;ASSIGNOR:SANDISK TECHNOLOGIES, INC.;REEL/FRAME:068762/0494 Effective date: 20240820 |
|
| AS | Assignment |
Owner name: SANDISK TECHNOLOGIES, INC., CALIFORNIA Free format text: PARTIAL RELEASE OF SECURITY INTERESTS;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS AGENT;REEL/FRAME:071382/0001 Effective date: 20250424 Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, ILLINOIS Free format text: SECURITY AGREEMENT;ASSIGNOR:SANDISK TECHNOLOGIES, INC.;REEL/FRAME:071050/0001 Effective date: 20250424 |