US20220107835A1 - Time to Live for Memory Access by Processors - Google Patents
- Publication number
- US20220107835A1 (U.S. application Ser. No. 17/553,051)
- Authority
- US
- United States
- Prior art keywords
- memory
- command
- processor
- signal
- controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1673—Details of memory controller using buffers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4418—Suspend and resume; Hibernate and awake
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2213/00—Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F2213/0026—PCI express
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- At least some embodiments disclosed herein relate to processors and memory systems in general, and more particularly, but not limited to time to live for memory access by processors.
- a memory sub-system can include one or more memory components that store data.
- a memory sub-system can be a data storage system, such as a solid-state drive (SSD), or a hard disk drive (HDD).
- a memory sub-system can be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SODIMM), or a non-volatile dual in-line memory module (NVDIMM).
- the memory components can be, for example, non-volatile memory components and volatile memory components. Examples of memory components include memory integrated circuits. Some memory integrated circuits are volatile and require power to maintain stored data. Some memory integrated circuits are non-volatile and can retain stored data even when not powered. Examples of non-volatile memory include flash memory, Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), and Electrically Erasable Programmable Read-Only Memory (EEPROM).
- Examples of volatile memory include Dynamic Random-Access Memory (DRAM) and Static Random-Access Memory (SRAM).
- a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components.
- a computer can include a host system and one or more memory sub-systems attached to the host system.
- the host system can have a central processing unit (CPU) in communication with the one or more memory sub-systems to store and/or retrieve data and instructions.
- Instructions for a computer can include operating systems, device drivers, and application programs.
- An operating system manages resources in the computer and provides common services for application programs, such as memory allocation and time sharing of the resources.
- a device driver operates or controls a particular type of device in the computer; the operating system uses the device driver to offer the resources and/or services provided by that type of device.
- a central processing unit (CPU) of a computer system can run an operating system and device drivers to provide the services and/or resources to application programs.
- the central processing unit (CPU) can run an application program that uses the services and/or resources.
- an application program implementing a type of applications of computer systems can instruct the central processing unit (CPU) to store data in the memory components of a memory sub-system and retrieve data from the memory components.
- a host system can communicate with a memory sub-system in accordance with a pre-defined communication protocol, such as Non-Volatile Memory Host Controller Interface Specification (NVMHCI), also known as NVM Express (NVMe), which specifies the logical device interface protocol for accessing non-volatile memory via a Peripheral Component Interconnect Express (PCI Express or PCIe) bus.
- Some commands manage the infrastructure in the memory sub-system and/or administrative tasks such as commands to manage namespaces, commands to attach namespaces, commands to create input/output submission or completion queues, commands to delete input/output submission or completion queues, commands for firmware management, etc.
- FIG. 1 shows a system having a processor controlling time to live for accessing a memory sub-system.
- FIGS. 2 and 3 show methods of implementing time to live for a processor to load data from a memory sub-system.
- FIG. 4 illustrates an example computing system in which time to live techniques can be implemented.
- FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.
- At least some aspects of the present disclosure are directed to time to live for processors to load data from memory devices.
- a parameter can be stored in a processor to indicate the desired time to live for loading data from a memory system for the processor.
- the processor sends a load command to the memory system to load an item from a memory address
- the memory system may or may not be able to provide the data from the memory address to the processor within the desired time to live specified by the parameter, especially when the memory system has multiple memory components that have different latencies in memory access.
- when the processor determines that the memory system has failed to provide the item from the memory address within the time duration specified by the time to live parameter, the processor can terminate its processing of the command and send a signal to the memory system to abort the command, instead of having to wait for the completion of the load operation on the low speed memory component.
- the signal to abort the command causes the memory system to adjust the data hosting of the memory address.
- the memory address can be moved from the low speed memory component in the memory system to the high speed memory component.
- the data item at the memory address specified in the load command can be moved from the low speed memory component to the high speed memory component; and the memory address can be remapped from the low speed memory component to the high speed memory component.
- the data item at the memory address specified in the load command can be retrieved from the low speed memory component and cached in the high speed memory component.
- the memory system can select the high speed memory component for hosting the memory address based on the time gap between the load command and the signal to abort the command, which is indicative of the desired time to live of the processor.
- the high speed memory component can be selected to meet the desired time to live of the processor.
- the processor can resend the command to the memory system to load the item from the memory address. Since the memory system has adjusted the data hosting of the memory address to meet the desired time to live of the processor, the memory system can now provide the data from the memory address to the processor within the desired time to live of the processor.
- the processor can free the resource associated with the command such that the freed resource can be used to perform other operations.
- the memory system can have NAND flash, NVRAM, and DRAM that have different memory access speeds. If the memory system maps the memory address to a lower speed memory (e.g., NAND flash) and the processor aborts the load command in accordance with its time to live parameter, the memory system can relocate the item and the memory address to a higher speed memory (e.g., DRAM, NVRAM) such that, when the processor resends the load command, the memory system can provide the item from the memory address within the desired time to live of the processor.
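The abort-and-promote flow described above can be sketched as a small simulation. The latency figures, tier names, and the `load`/`abort` interface below are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical access latencies in seconds; real figures depend on the
# actual DRAM, NVRAM, and NAND devices.
LATENCY = {"DRAM": 0.0001, "NVRAM": 0.001, "NAND": 0.01}

class MemorySubsystem:
    def __init__(self):
        self.placement = {}                  # memory address -> media type

    def load(self, address, ttl):
        """Serve a load only if the hosting medium can respond within
        the processor's time to live; otherwise return None (timeout)."""
        medium = self.placement.get(address, "NAND")
        return f"data@{address}" if LATENCY[medium] <= ttl else None

    def abort(self, address, observed_gap):
        """On an abort signal, promote the address to the slowest medium
        that still fits within the observed command-to-abort gap."""
        for medium in ("NVRAM", "DRAM"):     # try the cheaper tier first
            if LATENCY[medium] <= observed_gap:
                self.placement[address] = medium
                return

def load_with_ttl(mem, address, ttl):
    """Processor-side flow: load, abort on TTL expiry, then resend."""
    data = mem.load(address, ttl)
    if data is None:
        mem.abort(address, ttl)              # signal to abort the command
        data = mem.load(address, ttl)        # resend the load command
    return data
```

With a 5 ms time to live, an address initially hosted on NAND misses the deadline, is promoted to NVRAM on abort, and is served successfully on the resend.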
- the processor can send other commands to the memory system to load other items from the memory system.
- Such memory load operations with a time to live requirement provide the processor with the flexibility to optionally skip, or temporarily postpone, the processing of certain non-critical data without having to wait for an excessive amount of time.
- the processor can optionally relax the time to live parameter.
- the technique can improve the efficiency of resource usage while the processor accesses memories having different speeds.
- FIG. 1 shows a system having a register 101 storing a time to live parameter 109 in the processor 100 for loading data from a memory sub-system 110 .
- the memory sub-system 110 has different types of memory, such as dynamic random access memory (DRAM) 117, non-volatile random access memory (NVRAM) 119, and/or NAND flash memory 111.
- the different types of memory in the memory sub-system 110 can be addressed using a load command 107 specifying a memory address.
- the time to live parameter/requirement 109 is stored/specified in a processor 100 (e.g., a System on Chip (SoC) or a central processing unit (CPU)).
- a register 101 can be used to store the time to live parameter/requirement 109; the content of the register 101 can be updated to adjust the time to live requirement 109, i.e., how much time the memory sub-system 110 has to provide the data at the specified memory address to the processor 100.
- the processor 100 can have one or more registers to hold instructions, operands, and results.
- the processor 100 can further have one or more execution units (e.g., 103 ) to perform predefined operations defined in an instruction set.
- the memory controller 105 of the processor 100 can generate the load command 107 and transmit the load command to the memory sub-system 110 .
- the memory sub-system 110 retrieves data from one of the memory components (e.g., 117 , 119 , 111 ), and provides the data to the processor 100 over a memory bus 113 .
- the memory address in the load command can be initially mapped by the memory sub-system 110 to a low speed memory (e.g., NAND flash 111 ).
- the desired time to live 109 can be shorter than the time required to retrieve the data from the low speed memory (e.g., NAND flash 111 ).
- the processor 100 determines that the memory sub-system 110 has failed to make the data available within the time to live 109 of the processor.
- the processor 100 can abort execution of the instruction and/or the load command.
- the memory controller 105 can send, over the memory bus 113 , a signal to the memory sub-system 110 to abort execution of the command 107 .
- the memory sub-system 110 in response to receiving the signal to abort the command 107 from the processor 100 , can be configured to change hosting of the memory address in a lower speed memory (e.g., NAND Flash 111 ) to hosting of the memory address in a higher speed memory (e.g., DRAM 117 , NVRAM 119 ).
- the abort signal can cause the memory sub-system 110 to complete loading the data from the lower speed memory (e.g., NAND flash 111 ), and instead of providing the data to the memory controller 105 through the memory bus 113 , storing the loaded data into the higher speed memory (e.g., DRAM 117 , NVRAM 119 ) (e.g., to buffer the data in the higher speed memory, to cache the data in the higher speed memory, or to remap the memory address to the higher speed memory by swapping a page of memory addresses from the lower speed memory to the higher speed memory).
- the memory sub-system 110 can identify a desired latency for the item, select the higher speed component (e.g., DRAM 117 , NVRAM 119 ) based on the desired latency, and remap the memory address to the higher speed component (e.g., DRAM 117 , NVRAM 119 ).
- the memory sub-system 110 can select the higher speed component (e.g., DRAM 117 , NVRAM 119 ) based on the time gap between the receiving of the command 107 and the receiving of the signal to abort the command 107 , which is indicative of the current time to live 109 of the processor 100 .
- the higher speed component (e.g., DRAM 117 , NVRAM 119 ) can be selected to host or cache the memory address such that, after storing of the data item in the higher speed memory (e.g., DRAM 117 , NVRAM 119 ), when the memory sub-system 110 receives the command 107 resent by the processor 100 to load the item from the same memory address, the memory sub-system 110 can provide the item from the higher speed component within a time period shorter than the time gap between the previous receiving of the command 107 and the receiving of the signal to abort the previously sent command 107 .
- the processor 100 can free up the relevant resource (e.g., the memory controller 105 ) for the execution of other instructions.
- the memory controller 105 can be used to generate a second command for the memory sub-system 110 during the execution of another load instruction; and the memory sub-system 110 can receive the second command sent from the processor 100 to load a second item from a second memory address.
- the processor can resend the first command that was previously aborted; and the second command can be received and executed in the memory sub-system 110 between the transmitting of the signal to abort the first command and the resending of the first command.
- the processor 100 can optionally postpone the processing of the data/instruction at the memory address when the data/instruction is non-critical.
- the processor can reissue the load command 107 after a period of time, with the anticipation that the memory sub-system 110 is likely to make arrangements to make the data/instruction available according to the time to live 109 .
- the memory sub-system 110 can make the arrangements through buffering, caching, and/or changing a memory address map that maps the memory address to a physical memory address of a memory unit in the memory sub-system 110 .
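The remapping arrangement can be illustrated with a page-granularity address map. The `PAGE_SIZE` constant, the dict-based map, and the page-swap granularity are assumptions for illustration, not details from the disclosure:

```python
PAGE_SIZE = 4096          # assumed remapping granularity (4 KiB pages)

class AddressMap:
    """Page-granularity map from memory addresses to a hosting medium,
    sketching the remapping option described above."""
    def __init__(self):
        self.pages = {}                       # page number -> medium

    def medium_for(self, address):
        # Addresses with no explicit mapping default to the slow medium.
        return self.pages.get(address // PAGE_SIZE, "NAND")

    def remap(self, address, medium):
        # Swap the whole page containing `address` onto `medium`; the
        # data movement itself is elided in this sketch.
        self.pages[address // PAGE_SIZE] = medium
```

Remapping one address moves its entire page, so a neighboring address in the same page is served from the faster medium afterward as well.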
- FIGS. 2 and 3 show methods of implementing time to live for a processor to load data from a memory sub-system.
- the methods of FIGS. 2 and 3 can be performed in a system of FIG. 1 and, in general, by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
- a processor 100 can store a time to live parameter 109 specifying a time duration.
- the time to live parameter 109 can be stored in the register 101 in the processor 100 .
- a processor 100 can send a command 107 to a memory sub-system 110 to load an item from a memory address.
- the processor 100 can have registers to hold instructions, operands, and results.
- the processor 100 can have the execution unit 103 to perform predefined operations defined in an instruction set.
- the memory controller 105 can convert a logical address in the load instruction into a physical memory address to generate the load command 107 .
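The logical-to-physical conversion step can be sketched as follows; the page size and the dict-based page table are hypothetical stand-ins for the controller's translation hardware:

```python
PAGE_SHIFT = 12                        # assumed 4 KiB pages
page_table = {0x7: 0x1A3}              # hypothetical virtual page -> physical frame

def translate(logical_address):
    """Sketch of the logical-to-physical conversion that precedes
    generating the load command 107."""
    vpn = logical_address >> PAGE_SHIFT              # virtual page number
    offset = logical_address & ((1 << PAGE_SHIFT) - 1)  # offset within page
    return (page_table[vpn] << PAGE_SHIFT) | offset
```

The offset within the page is preserved; only the page number is substituted by the table lookup.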
- the memory controller 105 sends the load command 107 over a memory bus 113 to the memory sub-system 110, and waits for a response from the memory bus 113 according to a predefined communication protocol for the memory bus 113.
- the processor 100 can determine that the memory system 110 fails to provide, as a response to the command 107 , the item from the memory address to the processor within the time duration.
- the memory address can be mapped to a memory component (e.g., 117 , 119 , or 111 ) among the multiple memory components 117 to 111 of the memory system 110 . If the memory address of the data is currently mapped to the high-speed type memory device (e.g., DRAM 117 , NVRAM 119 ), the data can be provided to the processor 100 within the time duration. However, if the memory address of the data is currently in the low-speed type memory device (e.g., NAND Flash 111 ), the memory system 110 can fail to provide the data to the processor within the time duration.
- the processor 100 can terminate the processing of the command in the processor. For example, when the processor 100 determines that the data cannot be made available within the specified time, the processor 100 can terminate the operations.
- the processor 100 can transmit a signal to the memory system 110 to abort the command 107 .
- the processor 100 can free a resource (e.g., the memory controller) from the command 107 during a time period between the signal and the resending of the command.
- the processor 100 can perform one or more operations that are not associated with the command 107 using the above freed resource.
- the processor 100 can execute further instructions, including one or more instructions to load data items that are hosted in the fast memory (e.g., 117 or 119 ) of the memory sub-system 110 .
- the aborted command is a first command for retrieving data from a first memory address.
- the processor 100 can optionally send a second command to the memory system to load a second item from a second memory address that is different from the first memory address. In this way, the processor can process other operations (e.g., the second command) instead of having to wait for the completion of the load operation on the low speed memory (e.g., the first command).
- the processor 100 can resend the command to the memory system to load the item from the memory address after at least a predetermined period of time following the signal to abort the command.
- the predetermined period of time is configured to be longer than a time period for the memory system to remap the memory address from the slower memory component to the faster memory component.
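The resend-after-backoff behavior might look like the following sketch; `REMAP_TIME`, the stub memory, and the doubled backoff are illustrative assumptions:

```python
import time

REMAP_TIME = 0.002     # assumed upper bound on the sub-system's remap latency

class StubMemory:
    """Stand-in sub-system: loads miss the TTL until the abort signal
    has triggered a remap to faster memory."""
    def __init__(self):
        self.remapped = False

    def abort(self, address):
        self.remapped = True               # remap begins on the abort signal

    def load(self, address, ttl):
        return f"data@{address:#x}" if self.remapped else None

def resend_after_backoff(mem, address, ttl):
    """Abort, wait longer than the remap period, then resend the command."""
    mem.abort(address)
    time.sleep(2 * REMAP_TIME)             # predetermined period > remap time
    return mem.load(address, ttl)
```

Waiting longer than the remap period ensures the resent command finds the address already hosted on the faster component.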
- FIG. 4 illustrates an example computing system 400 in which time to live techniques can be implemented.
- the time to live requirement 109 of FIG. 1 can be imposed by the processor in the host system 420 on the time period between a memory sub-system 410 receiving a load command 107 and the memory sub-system 410 providing the data retrieved at the memory address specified in the load command 107 .
- a memory sub-system can also be referred to as a “memory device.”
- An example of a memory sub-system is a memory module that is connected to a central processing unit (CPU) via a memory bus.
- Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a nonvolatile dual in-line memory module (NVDIMM), etc.
- a memory sub-system can also be a data storage system that is connected to the central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network).
- Examples of such storage devices include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD).
- the memory sub-system is a hybrid memory/storage sub-system that provides both memory functions and storage functions.
- a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.
- the memory sub-system 410 can include media, such as media units/memory components 409 A to 409 N.
- the media units/memory components 409 A to 409 N can be volatile memory components, non-volatile memory components, or a combination of such.
- Each of the media units/memory components 409 A to 409 N can perform operations to store, record, program, write, or commit new data independent of the operations of other media units/memory components 409 A to 409 N.
- the media units/memory components 409 A to 409 N can be used in parallel in executing write commands.
- the memory sub-system is a storage system.
- An example of a storage system is a solid state drive (SSD).
- the memory sub-system 410 is a memory module. Examples of memory modules include a DIMM, an NVDIMM, and an NVDIMM-P. In some embodiments, the memory sub-system 410 is a hybrid memory/storage sub-system.
- the computing environment can include a host system 420 that uses the memory sub-system 410 . For example, the host system 420 can write data to the memory sub-system 410 and read data from the memory sub-system 410 .
- the host system 420 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or other computing device that includes a memory and a processing device.
- the host system 420 can include or be coupled to the memory sub-system 410 so that the host system 420 can read data from or write data to the memory sub-system 410 .
- the host system 420 can be coupled to the memory sub-system 410 via a physical host interface.
- “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.
- Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, etc.
- the physical host interface can be used to transmit data between the host system 420 and the memory sub-system 410 .
- the host system 420 can further utilize an NVM Express (NVMe) interface to access the memory components 409 A to 409 N when the memory sub-system 410 is coupled with the host system 420 by the PCIe interface.
- the physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 410 and the host system 420 .
- FIG. 4 illustrates a memory sub-system 410 as an example.
- the host system 420 can access multiple memory sub-systems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.
- the host system 420 includes a processing device 418 and a controller 416 .
- the processing device 418 of the host system 420 can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc.
- the controller 416 can be referred to as a memory controller, a memory management unit, and/or an initiator.
- the controller 416 controls the communications over a bus coupled between the host system 420 and the memory sub-system 410 .
- the controller 416 can send commands or requests to the memory sub-system 410 for desired access to memory components 409 A to 409 N.
- the controller 416 can further include interface circuitry to communicate with the memory sub-system 410 .
- the interface circuitry can convert responses received from memory sub-system 410 into information for the host system 420 .
- the controller 416 of the host system 420 can communicate with controller 415 of the memory sub-system 410 to perform operations such as reading data, writing data, or erasing data at the memory components 409 A to 409 N and other such operations.
- the controller 416 is integrated within the same package of the processing device 418 . In other instances, the controller 416 is separate from the package of the processing device 418 .
- the controller 416 and/or the processing device 418 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof.
- the controller 416 and/or the processing device 418 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
- the media units/memory components 409 A to 409 N can include any combination of the different types of non-volatile memory components and/or volatile memory components.
- An example of a non-volatile memory component is negative-and (NAND) type flash memory.
- Each of the memory components 409 A to 409 N can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)).
- a specific memory component can include both an SLC portion and an MLC portion of memory cells.
- Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 420 .
- the memory components 409 A to 409 N can be based on any other type of memory such as a volatile memory.
- the memory components 409 A to 409 N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, ferroelectric transistor random-access memory (FeTRAM), ferroelectric RAM (FeRAM), conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), nanowire-based non-volatile memory, memory that incorporates memristor technology, and a cross-point array of non-volatile memory cells.
- a cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the nonvolatile memory cell being previously erased. Furthermore, the memory cells of the memory components 409 A to 409 N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data.
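The contrast between erase-before-program flash and write-in-place cross-point memory can be modeled in a few lines; the class names and block size below are illustrative, not from the disclosure:

```python
class FlashBlock:
    """Flash-style block: a page must be erased (at block granularity)
    before it can be programmed again."""
    def __init__(self, pages=4):
        self.data = [None] * pages

    def program(self, page, value):
        if self.data[page] is not None:
            raise RuntimeError("erase required before re-program")
        self.data[page] = value

    def erase(self):
        # Erase operates on the whole block, not a single page.
        self.data = [None] * len(self.data)

class CrossPointCell:
    """Cross-point cell: a bulk-resistance change permits writing in
    place, with no prior erase."""
    def __init__(self):
        self.value = 0

    def write(self, value):
        self.value = value                 # no erase cycle needed
```

Overwriting a cross-point cell succeeds directly, while re-programming a flash page first requires a block-wide erase.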
- the controller 415 of the memory sub-system 410 can communicate with the memory components 409 A to 409 N to perform operations such as reading data, writing data, or erasing data at the memory components 409 A to 409 N and other such operations (e.g., in response to commands scheduled on a command bus by controller 416 ).
- the controller 415 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof.
- the controller 415 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
- the controller 415 can include a processing device 417 (processor) configured to execute instructions stored in local memory 419 .
- the local memory 419 of the controller 415 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 410 , including handling communications between the memory sub-system 410 and the host system 420 .
- the local memory 419 can include memory registers storing memory pointers, fetched data, etc.
- the local memory 419 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 410 in FIG. 4 has been illustrated as including the controller 415, in another embodiment a memory sub-system 410 may not include a controller 415, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).
- the controller 415 can receive commands or operations from the host system 420 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 409 A to 409 N.
- the controller 415 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 409 A to 409 N.
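One of the listed responsibilities, translation between logical and physical block addresses, can be sketched as follows. This is a hypothetical minimal model (the `AddressTranslator` name and the append-only allocation are assumptions for illustration, not the controller 415's actual firmware):

```python
class AddressTranslator:
    """Resolves host-visible logical block addresses (LBAs) to physical block
    addresses (PBAs); a real controller would persist this table and choose
    physical blocks with wear leveling in mind."""
    def __init__(self):
        self.l2p = {}          # logical block address -> physical block address
        self.next_free = 0     # naive append-only allocator

    def write(self, lba):
        # Remap on every write: media that cannot overwrite in place require
        # the new data to land in a fresh physical block.
        pba = self.next_free
        self.next_free += 1
        self.l2p[lba] = pba
        return pba

    def read(self, lba):
        return self.l2p[lba]   # KeyError models an unmapped address


xlate = AddressTranslator()
xlate.write(42)                # LBA 42 -> PBA 0
xlate.write(42)                # rewrite relocates: LBA 42 -> PBA 1
```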
- the controller 415 can further include host interface circuitry to communicate with the host system 420 via the physical host interface.
- the host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 409 A to 409 N as well as convert responses associated with the memory components 409 A to 409 N into information for the host system 420 .
- the memory sub-system 410 can also include additional circuitry or components that are not illustrated.
- the memory sub-system 410 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 415 and decode the address to access the memory components 409A to 409N.
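The row/column decoding mentioned above can be illustrated with a toy bit-slicing sketch. The widths are invented for illustration (a real decoder is combinational hardware sized to the actual array):

```python
ROW_BITS, COL_BITS = 4, 3            # a tiny 16x8 array, purely for illustration

def decode(addr):
    """Split a flat address from the controller into (row, column) coordinates."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)   # upper bits drive the row decoder
    col = addr & ((1 << COL_BITS) - 1)                 # lower bits drive the column decoder
    return row, col
```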
- FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed.
- At least some of the operations configured to implement the time to live requirement, and/or to implement a response according to the time to live requirement, can be implemented using instructions stored as a data transfer manager 513 .
- the computer system 500 can correspond to a host system (e.g., the host system 420 of FIG. 4 ) that includes, is coupled to, or utilizes a processor (e.g., the processor 502 of FIG. 5 ) or can be used to perform the operations of a data transfer manager 513 (e.g., to execute instructions to perform operations described with reference to FIGS. 1-4 ).
- the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
- the machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
- the machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- the example computer system 500 includes a processing device 502 , a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), etc.), and a data storage system 518 , which communicate with each other via a bus 530 (which can include multiple buses).
- Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein.
- the computer system 500 can further include a network interface device 508 to communicate over the network 520 .
- the data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein.
- the instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500 , the main memory 504 and the processing device 502 also constituting machine-readable storage media.
- the machine-readable storage medium 524 , data storage system 518 , and/or main memory 504 can correspond to the memory sub-system 410 of FIG. 4 .
- the instructions 526 include instructions to implement functionality corresponding to a data transfer manager 513 (e.g., to execute instructions to perform operations described with reference to FIGS. 1-4 ). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
- the present disclosure also relates to an apparatus for performing the operations herein.
- This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
- a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
- the present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure.
- a machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer).
- a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.
Description
- The present application is a continuation application of U.S. patent application Ser. No. 16/688,245 filed Nov. 19, 2019, the entire disclosures of which application are hereby incorporated herein by reference.
- At least some embodiments disclosed herein relate to processors and memory systems in general, and more particularly, but not limited to time to live for memory access by processors.
- A memory sub-system can include one or more memory components that store data. A memory sub-system can be a data storage system, such as a solid-state drive (SSD), or a hard disk drive (HDD). A memory sub-system can be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SODIMM), or a non-volatile dual in-line memory module (NVDIMM). The memory components can be, for example, non-volatile memory components and volatile memory components. Examples of memory components include memory integrated circuits. Some memory integrated circuits are volatile and require power to maintain stored data. Some memory integrated circuits are non-volatile and can retain stored data even when not powered. Examples of non-volatile memory include flash memory, Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), and Electronically Erasable Programmable Read-Only Memory (EEPROM). Examples of volatile memory include Dynamic Random-Access Memory (DRAM) and Static Random-Access Memory (SRAM). In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components.
- For example, a computer can include a host system and one or more memory sub-systems attached to the host system. The host system can have a central processing unit (CPU) in communication with the one or more memory sub-systems to store and/or retrieve data and instructions. Instructions for a computer can include operating systems, device drivers, and application programs. An operating system manages resources in the computer and provides common services for application programs, such as memory allocation and time sharing of the resources. A device driver operates or controls a particular type of device in the computer; the operating system uses the device driver to offer resources and/or services provided by that type of device. A central processing unit (CPU) of a computer system can run an operating system and device drivers to provide the services and/or resources to application programs. The central processing unit (CPU) can run an application program that uses the services and/or resources. For example, an application program can instruct the central processing unit (CPU) to store data in the memory components of a memory sub-system and retrieve data from the memory components.
- A host system can communicate with a memory sub-system in accordance with a pre-defined communication protocol, such as Non-Volatile Memory Host Controller Interface Specification (NVMHCI), also known as NVM Express (NVMe), which specifies the logical device interface protocol for accessing non-volatile memory via a Peripheral Component Interconnect Express (PCI Express or PCIe) bus. In accordance with the communication protocol, the host system can send commands of different types to the memory sub-system; and the memory sub-system can execute the commands and provide responses to the commands. Some commands instruct the memory sub-system to store data items at addresses specified in the commands, or to retrieve data items from addresses specified in the commands, such as read commands and write commands. Some commands manage the infrastructure in the memory sub-system and/or administrative tasks, such as commands to manage namespaces, commands to attach namespaces, commands to create input/output submission or completion queues, commands to delete input/output submission or completion queues, commands for firmware management, etc.
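The two command families described above, I/O commands that move data at specified addresses and administrative commands that manage the sub-system's infrastructure, can be sketched as a toy dispatcher. The opcodes and response shapes here are invented for illustration and are not NVMe's actual encoding:

```python
class MemorySubSystemModel:
    """Executes host commands and returns responses, one per command."""
    def __init__(self):
        self.store = {}              # address -> data item
        self.namespaces = set()

    def execute(self, op, **args):
        # I/O commands: store or retrieve data at addresses given in the command.
        if op == "write":
            self.store[args["addr"]] = args["data"]
            return {"status": "ok"}
        if op == "read":
            return {"status": "ok", "data": self.store.get(args["addr"])}
        # Administrative commands: manage the sub-system's infrastructure.
        if op == "create_namespace":
            self.namespaces.add(args["nsid"])
            return {"status": "ok"}
        return {"status": "invalid opcode"}


dev = MemorySubSystemModel()
dev.execute("create_namespace", nsid=1)     # admin command
dev.execute("write", addr=0x100, data=b"hello")  # I/O command
```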
- The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
-
FIG. 1 shows a system having a processor controlling time to live for accessing a memory sub-system. -
FIGS. 2 and 3 show methods of implementing time to live for a processor to load data from a memory sub-system. -
FIG. 4 illustrates an example computing system in which time to live techniques can be implemented. -
FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure can operate. - At least some aspects of the present disclosure are directed to time to live for processors to load data from memory devices.
- For example, a parameter can be stored in a processor to indicate the desired time to live for loading data from a memory system for the processor. After the processor sends a load command to the memory system to load an item from a memory address, the memory system may or may not be able to provide the data from the memory address to the processor within the desired time to live specified by the parameter, especially when the memory system has multiple memory components that have different latencies in memory access. If the processor determines that the memory system fails to provide the item from the memory address to the processor within the time duration specified by the time to live parameter, the processor can terminate its processing of the command and send a signal to the memory system to abort the command, instead of having to wait for the completion of the load operation on the low speed memory component.
- In some implementations, the signal to abort the command causes the memory system to adjust the data hosting of the memory address. For example, the memory address can be moved from the low speed memory component in the memory system to the high speed memory component. For example, the data item at the memory address specified in the load command can be moved from the low speed memory component to the high speed memory component; and the memory address can be remapped from the low speed memory component to the high speed memory component. For example, the data item at the memory address specified in the load command can be retrieved from the low speed memory component and cached in the high speed memory component. When the memory system has more than two tiers of memory components of different access speeds, the memory system can select the high speed memory component for hosting the memory address based on the time gap between the load command and the signal to abort the command, which is indicative of the desired time to live of the processor. The high speed memory component can be selected to meet the desired time to live of the processor. Following the signal to abort the command and after at least a predetermined period of time sufficient for the memory system to adjust the data hosting of the memory address, the processor can resend the command to the memory system to load the item from the memory address. Since the memory system has adjusted the data hosting of the memory address to meet the desired time to live of the processor, the memory system can now provide the data from the memory address to the processor within the desired time to live of the processor. Between the signal to abort the command and resending the command, the processor can free the resource associated with the command such that the freed resource can be used to perform other operations.
- For example, the memory system can have NAND flash, NVRAM, and DRAM that have different memory access speeds. If the memory system maps the memory address to a lower speed memory (e.g., NAND flash) and the processor aborts the load command in accordance with its time to live parameter, the memory system can relocate the item and the memory address to a higher speed memory (e.g., DRAM, NVRAM) from the lower speed memory (e.g., NAND flash) such that when the processor resends the load command, the memory system can provide the item from the memory address within the desired time to live of the processor.
- Optionally, between the signal to abort the command and the resending of the command, the processor can send other commands to the memory system to load other items from the memory system. Such memory load operations with a time to live requirement provide the processor with the flexibility to optionally skip, or postpone, the processing of certain non-critical data (e.g., temporarily) without having to wait for an excessive amount of time. Alternatively, when the processing of the requested data is required or desirable (e.g., with minimal delay), the processor can optionally relax the time to live parameter.
- The technique can improve the efficiency of resource usage when the processor accesses memories having different speeds.
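The processor-side decision described above can be sketched as a small simulation. All names, latencies, and addresses below are hypothetical illustrations of the idea, not the disclosed hardware: a load either completes within the time to live or is aborted so the processor can do other work.

```python
class TieredMemory:
    # Hypothetical access latency per tier, in arbitrary time units.
    LATENCY = {"dram": 1, "nvram": 3, "nand": 100}

    def __init__(self, mapping):
        self.mapping = mapping       # memory address -> tier currently hosting it

    def latency(self, addr):
        return self.LATENCY[self.mapping[addr]]


class Processor:
    def __init__(self, memory, ttl):
        self.memory = memory
        self.ttl = ttl               # analogue of a time to live register

    def load(self, addr):
        # Abort (freeing the memory controller for other work) if the
        # access cannot complete within the time to live.
        if self.memory.latency(addr) > self.ttl:
            return None, "abort"
        return f"item@{addr:#x}", "ok"


mem = TieredMemory({0x1000: "nand", 0x2000: "dram"})
cpu = Processor(mem, ttl=10)
# The NAND-backed address exceeds the TTL and aborts; the DRAM-backed one completes.
```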
- FIG. 1 shows a system having a register 101 storing a time to live parameter 109 in the processor 100 for loading data from a memory sub-system 110. In FIG. 1, the memory sub-system 110 has different types of memory, such as dynamic random access memory (DRAM) 117, non-volatile random access memory (NVRAM) 119, and/or NAND flash memory 111. The different types of memory in the memory sub-system 110 can be addressed using a load command 107 specifying a memory address. In some implementations, the time to live parameter/requirement 109 is stored/specified in a processor 100 (e.g., a System on Chip (SoC) or a central processing unit (CPU)). For example, a register 101 can be used to store the time to live parameter/requirement 109; and the content of the register 101 can be updated to adjust the time to live requirement 109, which specifies how much time the memory sub-system 110 has to provide the data at the specified memory address to the processor 100.
- For example, the processor 100 can have one or more registers to hold instructions, operands, and results. The processor 100 can further have one or more execution units (e.g., 103) to perform predefined operations defined in an instruction set. In response to execution of a load instruction, the memory controller 105 of the processor 100 can generate the load command 107 and transmit the load command to the memory sub-system 110. In response, the memory sub-system 110 retrieves data from one of the memory components (e.g., 117, 119, 111), and provides the data to the processor 100 over a memory bus 113.
- For example, the memory address in the load command can be initially mapped by the memory sub-system 110 to a low speed memory (e.g., NAND flash 111). The desired time to live 109 can be shorter than the time required to retrieve the data from the low speed memory (e.g., NAND flash 111). Thus, before the memory sub-system 110 can provide the data to the memory controller 105 over the memory bus 113, the processor 100 determines that the memory sub-system 110 has failed to make the data available within the time to live 109 of the processor. In response, the processor 100 can abort execution of the instruction and/or the load command. For example, the memory controller 105 can send, over the memory bus 113, a signal to the memory sub-system 110 to abort execution of the command 107.
- In some implementations, in response to receiving the signal to abort the command 107 from the processor 100, the memory sub-system 110 can be configured to change hosting of the memory address from a lower speed memory (e.g., NAND Flash 111) to a higher speed memory (e.g., DRAM 117, NVRAM 119). Preferably, the higher speed memory (e.g., DRAM 117, NVRAM 119) has a memory access latency shorter than the lower speed memory (e.g., NAND Flash 111) and can meet the time to live 109 as indicated by the time gap between the load command 107 and the signal to abort the command 107.
- For example, the abort signal can cause the memory sub-system 110 to complete loading the data from the lower speed memory (e.g., NAND flash 111) and, instead of providing the data to the memory controller 105 through the memory bus 113, store the loaded data into the higher speed memory (e.g., DRAM 117, NVRAM 119) (e.g., to buffer the data in the higher speed memory, to cache the data in the higher speed memory, or to remap the memory address to the higher speed memory by swapping a page of memory addresses from the lower speed memory to the higher speed memory).
- For example, based on the signal to abort the command 107, the memory sub-system 110 can identify a desired latency for the item, select the higher speed component (e.g., DRAM 117, NVRAM 119) based on the desired latency, and remap the memory address to the higher speed component (e.g., DRAM 117, NVRAM 119).
- In some implementations, the memory sub-system 110 can select the higher speed component (e.g., DRAM 117, NVRAM 119) based on the time gap between the receiving of the command 107 and the receiving of the signal to abort the command 107, which is indicative of the current time to live 109 of the processor 100.
- The higher speed component (e.g., DRAM 117, NVRAM 119) can be selected to host or cache the memory address such that, after storing the data item in the higher speed memory (e.g., DRAM 117, NVRAM 119), when the memory sub-system 110 receives the command 107 resent by the processor 100 to load the item from the same memory address, the memory sub-system 110 can provide the item from the higher speed component within a time period shorter than the time gap between the previous receiving of the command 107 and the receiving of the signal to abort the previously sent command 107.
- Once the load command 107 that takes longer than the time to live 109 of the processor 100 to be executed in the memory sub-system 110 is aborted, the processor 100 can free up the relevant resource (e.g., the memory controller 105) for the execution of other instructions. For example, the memory controller 105 can be used to generate a second command for the memory sub-system 110 during the execution of another load instruction; and the memory sub-system 110 can receive the second command sent from the processor 100 to load a second item from a second memory address. After the execution of the second command to provide the second item from the second memory address to the processor 100, the processor can resend the first command that was previously aborted; and the second command can be received and executed in the memory sub-system 110 between the transmitting of the signal to abort the first command and the resending of the first command.
- Optionally, when the memory sub-system 110 fails to provide the item from the memory address to the processor 100 within the time duration corresponding to the time to live 109, the processor 100 can postpone the processing of the data/instruction at the memory address when the data/instruction is non-critical. Thus, the processor can reissue the load command 107 after a period of time, with the anticipation that the memory sub-system 110 is likely to have made arrangements to make the data/instruction available according to the time to live 109. For example, the memory sub-system 110 can make the arrangements through buffering, caching, and/or changing a memory address map that maps the memory address to a physical memory address of a memory unit in the memory sub-system 110.
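The memory-side selection among more than two tiers, using the gap between receiving the load command and receiving the abort signal as an estimate of the processor's time to live, can be sketched as follows. The tier names, latencies, and the "slowest tier that still fits" policy are assumptions made for illustration:

```python
# (name, latency in arbitrary units), fastest first — invented values.
TIERS = [("dram", 1), ("nvram", 3), ("nand", 100)]

def pick_tier(abort_gap):
    """Pick the slowest tier that still beats the inferred time to live,
    so the fastest tier's capacity is not spent unnecessarily."""
    fits = [name for name, latency in TIERS if latency < abort_gap]
    return fits[-1] if fits else TIERS[0][0]   # fall back to the fastest tier

class PromotionPolicy:
    def __init__(self, mapping):
        self.mapping = mapping       # memory address -> tier currently hosting it

    def on_abort(self, addr, abort_gap):
        # Remap the aborted address so a resent load can meet the time to live.
        self.mapping[addr] = pick_tier(abort_gap)

policy = PromotionPolicy({0x1000: "nand"})
policy.on_abort(0x1000, abort_gap=5)   # a gap of 5 rules out NAND (latency 100)
```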
- FIGS. 2 and 3 show methods of implementing time to live for a processor to load data from a memory sub-system. For example, the methods of FIGS. 2 and 3 can be performed in a system of FIG. 1 and, in general, by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples; the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
- At block 201, a processor 100 can store a time to live parameter 109 specifying a time duration. For example, the time to live parameter 109 can be stored in the register 101 in the processor 100.
- At block 203, a processor 100 can send a command 107 to a memory sub-system 110 to load an item from a memory address. For example, the processor 100 can have registers to hold instructions, operands, and results. In some implementations, the processor 100 can have the execution unit 103 to perform predefined operations defined in an instruction set. When a load instruction in a register is executed, the memory controller 105 can convert a logical address in the load instruction into a physical memory address to generate the load command 107. The memory controller 105 sends the load command 107 over a memory bus 113 to the memory sub-system 110, and waits for a response from the memory bus 113 according to a predefined communication protocol for the memory bus 113.
- At block 205, the processor 100 can determine that the memory system 110 fails to provide, as a response to the command 107, the item from the memory address to the processor within the time duration. For example, the memory address can be mapped to a memory component (e.g., 117, 119, or 111) among the multiple memory components 117 to 111 of the memory system 110. If the memory address of the data is currently mapped to a high-speed memory device (e.g., DRAM 117, NVRAM 119), the data can be provided to the processor 100 within the time duration. However, if the memory address of the data is currently in a low-speed memory device (e.g., NAND Flash 111), the memory system 110 can fail to provide the data to the processor within the time duration.
- At block 207, when the processor 100 determines that providing the item identified via the memory address to the processor takes longer than the time duration specified in the parameter 109, the processor 100 can terminate the processing of the command in the processor. For example, when the processor 100 determines that the data cannot be made available within the specified time, the processor 100 can terminate the operations.
- After the processor 100 determines that providing the item identified via the memory address to the processor takes longer than the time duration, at block 301 of FIG. 3, the processor 100 can transmit a signal to the memory system 110 to abort the command 107.
- At block 303, the processor 100 can free a resource (e.g., the memory controller) from the command 107 during a time period between the signal and the resending of the command.
- At block 305, the processor 100 can perform one or more operations that are not associated with the command 107 using the freed resource. For example, the processor 100 can execute further instructions, including one or more instructions to load data items that are hosted in the fast memory (e.g., 117 or 119) of the memory sub-system 110.
- For example, the aborted command is a first command for retrieving data from a first memory address. At block 307, the processor 100 can optionally send a second command to the memory system to load a second item from a second memory address that is different from the first memory address. In this way, the processor can process other operations (e.g., the second command) instead of having to wait for the completion of the load operation on the low speed memory (e.g., the first command).
- At block 309, the processor 100 can resend the command to the memory system to load the item from the memory address after at least a predetermined period of time following the signal to abort the command. In some implementations, the predetermined period of time is configured to be longer than a time period for the memory system to remap the memory address from the first component to the second component.
- FIG. 4 illustrates an example computing system 400 in which time to live techniques can be implemented. For example, the time to live requirement 109 of FIG. 1 can be imposed in the processor in the host system 420 upon the time period between a memory sub-system 410 receiving a load command 107 and the memory sub-system 410 providing the data retrieved at the memory address specified in the load command 107.
- In general, a memory sub-system can also be referred to as a “memory device.” An example of a memory sub-system is a memory module that is connected to a central processing unit (CPU) via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc.
- Another example of a memory sub-system is a data storage system that is connected to the central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of such storage systems include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD).
- In some embodiments, the memory sub-system is a hybrid memory/storage sub-system that provides both memory functions and storage functions. In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.
- The
memory sub-system 410 can include media, such as media units/memory components 409A to 409N. In general, the media units/memory components 409A to 409N can be volatile memory components, non-volatile memory components, or a combination of such. Each of the media units/memory components 409A to 409N can perform operations to store, record, program, write, or commit new data independent of the operations of other media units/memory components 409A to 409N. Thus, the media units/memory components 409A to 409N can be used in parallel in executing write commands. In some embodiments, the memory sub-system is a storage system. An example of a storage system is a solid state drive (SSD). In some embodiments, thememory sub-system 410 is a memory module. Examples of a memory module includes a DIMM, NVDIMM, and NVDIMM-P. In some embodiments, thememory sub-system 410 is a hybrid memory/storage sub-system. In general, the computing environment can include ahost system 420 that uses thememory sub-system 410. For example, thehost system 420 can write data to thememory sub-system 410 and read data from thememory sub-system 410. - The
host system 420 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or other computing device that includes a memory and a processing device. The host system 420 can include or be coupled to the memory sub-system 410 so that the host system 420 can read data from or write data to the memory sub-system 410. The host system 420 can be coupled to the memory sub-system 410 via a physical host interface. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, etc. The physical host interface can be used to transmit data between the host system 420 and the memory sub-system 410. The host system 420 can further utilize an NVM Express (NVMe) interface to access the memory components 409A to 409N when the memory sub-system 410 is coupled with the host system 420 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 410 and the host system 420. FIG. 4 illustrates a memory sub-system 410 as an example. In general, the host system 420 can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. - The
host system 420 includes a processing device 418 and a controller 416. The processing device 418 of the host system 420 can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc. In some instances, the controller 416 can be referred to as a memory controller, a memory management unit, and/or an initiator. In one example, the controller 416 controls the communications over a bus coupled between the host system 420 and the memory sub-system 410. - In general, the
controller 416 can send commands or requests to the memory sub-system 410 for desired access to memory components 409A to 409N. The controller 416 can further include interface circuitry to communicate with the memory sub-system 410. The interface circuitry can convert responses received from the memory sub-system 410 into information for the host system 420. - The
controller 416 of the host system 420 can communicate with the controller 415 of the memory sub-system 410 to perform operations such as reading data, writing data, or erasing data at the memory components 409A to 409N and other such operations. In some instances, the controller 416 is integrated within the same package as the processing device 418. In other instances, the controller 416 is separate from the package of the processing device 418. The controller 416 and/or the processing device 418 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof. The controller 416 and/or the processing device 418 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. - In general, the media units/memory components 409A to 409N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of a non-volatile memory component is negative-and (NAND) type flash memory. Each of the memory components 409A to 409N can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a specific memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the
host system 420. Although non-volatile memory components such as NAND type flash memory are described, the memory components 409A to 409N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 409A to 409N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, ferroelectric random-access memory (FeTRAM), ferroelectric RAM (FeRAM), conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), nanowire-based non-volatile memory, memory that incorporates memristor technology, and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the nonvolatile memory cell being previously erased. Furthermore, the memory cells of the memory components 409A to 409N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data. - The
controller 415 of the memory sub-system 410 can communicate with the memory components 409A to 409N to perform operations such as reading data, writing data, or erasing data at the memory components 409A to 409N and other such operations (e.g., in response to commands scheduled on a command bus by the controller 416). The controller 415 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 415 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 415 can include a processing device 417 (processor) configured to execute instructions stored in local memory 419. In the illustrated example, the local memory 419 of the controller 415 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 410, including handling communications between the memory sub-system 410 and the host system 420. In some embodiments, the local memory 419 can include memory registers storing memory pointers, fetched data, etc. The local memory 419 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 410 in FIG. 4 has been illustrated as including the controller 415, in another embodiment of the present disclosure, a memory sub-system 410 may not include a controller 415, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). - In general, the
controller 415 can receive commands or operations from the host system 420 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 409A to 409N. The controller 415 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 409A to 409N. The controller 415 can further include host interface circuitry to communicate with the host system 420 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 409A to 409N, as well as convert responses associated with the memory components 409A to 409N into information for the host system 420. - The
memory sub-system 410 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 410 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 415 and decode the address to access the memory components 409A to 409N. -
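The logical-to-physical address translation attributed above to the controller 415 can be sketched with a flat mapping table; the `AddressTranslator` class, its naive block allocator, and the method names are assumptions for illustration, not the patent's design.

```python
class AddressTranslator:
    """Toy logical-to-physical (L2P) mapping kept by a sub-system controller."""

    def __init__(self):
        self._l2p = {}          # logical block address -> physical block address
        self._next_free = 0     # naive allocator for physical blocks

    def write(self, lba):
        """Map a logical block to a fresh physical block and return it."""
        self._l2p[lba] = self._next_free
        self._next_free += 1
        return self._l2p[lba]

    def translate(self, lba):
        """Resolve a logical block address for a read; None if never written."""
        return self._l2p.get(lba)

t = AddressTranslator()
pba = t.write(lba=42)
assert t.translate(42) == pba
assert t.translate(7) is None   # unmapped logical block
```

Remapping each write to a fresh physical block is also what makes controller-side wear leveling and garbage collection possible, since the controller is free to place data wherever the media is least worn.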
FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. - For example, at least some of the operations configured to implement the time to live requirement and/or configured to implement a response according to the time to live requirement can be implemented using instructions stored as a
data transfer manager 513. - In some embodiments, the
computer system 500 can correspond to a host system (e.g., the host system 420 of FIG. 4) that includes, is coupled to, or utilizes a processor (e.g., the processor 502 of FIG. 5), or can be used to perform the operations of a data transfer manager 513 (e.g., to execute instructions to perform operations described with reference to FIGS. 1-4). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. - The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- The
example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530 (which can include multiple buses). -
Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 can further include a network interface device 508 to communicate over the network 520. - The
data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, data storage system 518, and/or main memory 504 can correspond to the memory sub-system 410 of FIG. 4. - In one embodiment, the
instructions 526 include instructions to implement functionality corresponding to a data transfer manager 513 (e.g., to execute instructions to perform operations described with reference to FIGS. 1-4). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. - Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result.
- The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
- It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
- The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
- The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.
- The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.
- In this description, various functions and operations are described as being performed by or caused by computer instructions to simplify description. However, those skilled in the art will recognize what is meant by such expressions is that the functions result from execution of the computer instructions by one or more controllers or processors, such as a microprocessor. Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as using Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.
- In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/553,051 US20220107835A1 (en) | 2019-11-19 | 2021-12-16 | Time to Live for Memory Access by Processors |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/688,245 US11243804B2 (en) | 2019-11-19 | 2019-11-19 | Time to live for memory access by processors |
| US17/553,051 US20220107835A1 (en) | 2019-11-19 | 2021-12-16 | Time to Live for Memory Access by Processors |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/688,245 Continuation US11243804B2 (en) | 2019-11-19 | 2019-11-19 | Time to live for memory access by processors |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20220107835A1 true US20220107835A1 (en) | 2022-04-07 |
Family
ID=75909480
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/688,245 Active 2040-02-14 US11243804B2 (en) | 2019-11-19 | 2019-11-19 | Time to live for memory access by processors |
| US17/553,051 Pending US20220107835A1 (en) | 2019-11-19 | 2021-12-16 | Time to Live for Memory Access by Processors |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/688,245 Active 2040-02-14 US11243804B2 (en) | 2019-11-19 | 2019-11-19 | Time to live for memory access by processors |
Country Status (6)
| Country | Link |
|---|---|
| US (2) | US11243804B2 (en) |
| EP (1) | EP4062274A1 (en) |
| JP (1) | JP7445368B2 (en) |
| KR (1) | KR20220070539A (en) |
| CN (1) | CN114631083B (en) |
| WO (1) | WO2021101757A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11687282B2 (en) | 2019-11-19 | 2023-06-27 | Micron Technology, Inc. | Time to live for load commands |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102912504B1 (en) | 2022-06-10 | 2026-01-14 | 주식회사 엘지에너지솔루션 | Battery pack and vehicle comprising the same |
Citations (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5355475A (en) * | 1990-10-30 | 1994-10-11 | Hitachi, Ltd. | Method of relocating file and system therefor |
| JPH0991194A (en) * | 1995-09-27 | 1997-04-04 | Canon Inc | Arbitration system and arbitration method |
| US6052696A (en) * | 1998-04-27 | 2000-04-18 | International Business Machines Corporation | Adaptive time-based journal bundling |
| US6799283B1 (en) * | 1998-12-04 | 2004-09-28 | Matsushita Electric Industrial Co., Ltd. | Disk array device |
| US20050249060A1 (en) * | 2004-04-13 | 2005-11-10 | Funai Electric Co., Ltd. | Optical disk reading apparatus |
| US20060031565A1 (en) * | 2004-07-16 | 2006-02-09 | Sundar Iyer | High speed packet-buffering system |
| US20100329052A1 (en) * | 2009-06-26 | 2010-12-30 | Wei-Jen Chen | Word line defect detecting device and method thereof |
| US20110179414A1 (en) * | 2010-01-18 | 2011-07-21 | Vmware, Inc. | Configuring vm and io storage adapter vf for virtual target addressing during direct data access |
| US20140297971A1 (en) * | 2013-03-26 | 2014-10-02 | Fujitsu Limited | Control program of storage control device, control method of storage control device and storage control device |
| US20140340974A1 (en) * | 2013-05-16 | 2014-11-20 | Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd | Apparatus and method for writing data into storage of electronic device |
| US20160203046A1 (en) * | 2013-09-10 | 2016-07-14 | Kabushiki Kaisha Toshiba | Memory device, server device, and memory control method |
| US20170212840A1 (en) * | 2016-01-21 | 2017-07-27 | Qualcomm Incorporated | Providing scalable dynamic random access memory (dram) cache management using tag directory caches |
| US9823968B1 (en) * | 2015-08-21 | 2017-11-21 | Datadirect Networks, Inc. | Data storage system employing a variable redundancy distributed RAID controller with embedded RAID logic and method for data migration between high-performance computing architectures and data storage devices using the same |
| US20190303226A1 (en) * | 2018-03-27 | 2019-10-03 | Samsung Electronics Co., Ltd. | Semiconductor memory module and memory system including the same |
| US20200065240A1 (en) * | 2018-08-27 | 2020-02-27 | SK Hynix Inc. | Memory system |
| US20200133567A1 (en) * | 2018-10-24 | 2020-04-30 | Western Digital Technologies, Inc. | Bounded latency and command non service methods and apparatus |
Family Cites Families (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7093256B2 (en) | 2002-12-13 | 2006-08-15 | Equator Technologies, Inc. | Method and apparatus for scheduling real-time and non-real-time access to a shared resource |
| US7222224B2 (en) | 2004-05-21 | 2007-05-22 | Rambus Inc. | System and method for improving performance in computer memory systems supporting multiple memory access latencies |
| US20060174431A1 (en) | 2005-02-09 | 2006-08-10 | Dr. Fresh, Inc. | Electric toothbrush |
| US7694071B1 (en) | 2005-07-12 | 2010-04-06 | Seagate Technology Llc | Disk drives and methods allowing configurable zoning |
| US7626572B2 (en) * | 2006-06-15 | 2009-12-01 | Microsoft Corporation | Soap mobile electronic human interface device |
| US7707379B2 (en) | 2006-07-13 | 2010-04-27 | International Business Machines Corporation | Dynamic latency map for memory optimization |
| US7496711B2 (en) * | 2006-07-13 | 2009-02-24 | International Business Machines Corporation | Multi-level memory architecture with data prioritization |
| US8380951B1 (en) * | 2008-10-01 | 2013-02-19 | Symantec Corporation | Dynamically updating backup configuration information for a storage cluster |
| WO2010061588A1 (en) | 2008-11-28 | 2010-06-03 | パナソニック株式会社 | Memory control device, data processor, and data read method |
| US8321627B1 (en) | 2011-10-06 | 2012-11-27 | Google Inc. | Memory operation command latency management |
| US9740485B2 (en) | 2012-10-26 | 2017-08-22 | Micron Technology, Inc. | Apparatuses and methods for memory operations having variable latencies |
| US9754648B2 (en) | 2012-10-26 | 2017-09-05 | Micron Technology, Inc. | Apparatuses and methods for memory operations having variable latencies |
| US10691344B2 (en) | 2013-05-30 | 2020-06-23 | Hewlett Packard Enterprise Development Lp | Separate memory controllers to access data in memory |
| JP6210742B2 (en) * | 2013-06-10 | 2017-10-11 | オリンパス株式会社 | Data processing device and data transfer control device |
| CN104536910B (en) * | 2014-12-12 | 2017-12-12 | 成都德芯数字科技股份有限公司 | A kind of MPEG TS streams PID, which is remapped, realizes system and method |
| US9952850B2 (en) * | 2015-07-28 | 2018-04-24 | Datadirect Networks, Inc. | Automated firmware update with rollback in a data storage system |
| US10019174B2 (en) | 2015-10-27 | 2018-07-10 | Sandisk Technologies Llc | Read operation delay |
| US10162558B2 (en) * | 2015-10-30 | 2018-12-25 | Micron Technology, Inc. | Data transfer techniques for multiple devices on a shared bus |
| US10515671B2 (en) | 2016-09-22 | 2019-12-24 | Advanced Micro Devices, Inc. | Method and apparatus for reducing memory access latency |
| US10146444B2 (en) * | 2016-10-03 | 2018-12-04 | Samsung Electronics Co., Ltd. | Method for read latency bound in SSD storage systems |
| US10528268B2 (en) | 2017-09-12 | 2020-01-07 | Toshiba Memory Corporation | System and method for channel time management in solid state memory drives |
| US11289137B2 (en) * | 2017-11-16 | 2022-03-29 | Micron Technology, Inc. | Multi-port storage-class memory interface |
| JP2019175292A (en) * | 2018-03-29 | 2019-10-10 | 東芝メモリ株式会社 | Electronic device, computer system, and control method |
| US11366753B2 (en) | 2018-07-31 | 2022-06-21 | Marvell Asia Pte Ltd | Controlling performance of a solid state drive |
| US12366988B2 (en) | 2020-09-25 | 2025-07-22 | Intel Corporation | Apparatus, systems, articles of manufacture, and methods for data lifecycle management in an edge environment |
- 2019
  - 2019-11-19 US US16/688,245 patent/US11243804B2/en active Active
- 2020
  - 2020-11-10 EP EP20889008.7A patent/EP4062274A1/en not_active Withdrawn
  - 2020-11-10 JP JP2022528937A patent/JP7445368B2/en active Active
  - 2020-11-10 KR KR1020227015377A patent/KR20220070539A/en active Pending
  - 2020-11-10 CN CN202080076450.2A patent/CN114631083B/en active Active
  - 2020-11-10 WO PCT/US2020/059843 patent/WO2021101757A1/en not_active Ceased
- 2021
  - 2021-12-16 US US17/553,051 patent/US20220107835A1/en active Pending
Non-Patent Citations (2)
| Title |
|---|
| Matsui, JPH0991194A Translation, 04/04/1997, <https://www.j-platpat.inpit.go.jp/s0100>, pgs. 1-13 (Year: 1997) * |
| Matsui, Nobuaki, JPH0991194A Human Translation, 4/4/1997, Translated by Steven M. Spar (Year: 1997) * |
Also Published As
| Publication number | Publication date |
|---|---|
| JP7445368B2 (en) | 2024-03-07 |
| US11243804B2 (en) | 2022-02-08 |
| JP2023503027A (en) | 2023-01-26 |
| US20210149711A1 (en) | 2021-05-20 |
| KR20220070539A (en) | 2022-05-31 |
| EP4062274A1 (en) | 2022-09-28 |
| CN114631083A (en) | 2022-06-14 |
| CN114631083B (en) | 2024-05-17 |
| WO2021101757A1 (en) | 2021-05-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11704210B2 (en) | Custom error recovery in selected regions of a data storage device | |
| US20240264938A1 (en) | Address map caching for a memory system | |
| US11650755B2 (en) | Proactive return of write credits in a memory system | |
| US20250138752A1 (en) | Controller command scheduling in a memory system to increase command bus utilization | |
| US12461680B2 (en) | Operation based on consolidated memory region description data | |
| US11768613B2 (en) | Aggregation and virtualization of solid state drives | |
| WO2020033151A1 (en) | Throttle response signals from a memory system | |
| US20230161509A1 (en) | Dynamic selection of cores for processing responses | |
| US11687282B2 (en) | Time to live for load commands | |
| US20220107835A1 (en) | Time to Live for Memory Access by Processors | |
| US20250053344A1 (en) | Efficient command fetching in a memory sub-system | |
| US20230027877A1 (en) | Notifying memory system of host events via modulated reset signals | |
| US11914890B2 (en) | Trim value loading management in a memory sub-system | |
| WO2026030059A1 (en) | Sub block access via configuring a data size of logical block addressing in a memory sub-system |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: MICRON TECHNOLOGY, INC., IDAHO. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: ENO, JUSTIN M.; REEL/FRAME: 058533/0314. Effective date: 20191112 |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | ADVISORY ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION COUNTED, NOT YET MAILED |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |