
US20090019234A1 - Cache memory device and data processing method of the device - Google Patents


Info

Publication number
US20090019234A1
Authority
US
United States
Prior art keywords
cache memory
data
memory region
received
transmitting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/146,950
Inventor
Kwang Seok IM
Hye Young Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Assigned to SAMSUNG ELECTRONICS CO., LTD. reassignment SAMSUNG ELECTRONICS CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: IM, KWANG SEOK, KIM, HYE YOUNG
Publication of US20090019234A1 publication Critical patent/US20090019234A1/en
Priority to US14/561,470 priority Critical patent/US9262079B2/en
Priority to US15/007,584 priority patent/US10095436B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/064 Management of blocks
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0895 Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • G06F3/0608 Saving storage space on storage systems
    • G06F3/061 Improving I/O performance
    • G06F3/0656 Data buffering arrangements
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • G06F3/0688 Non-volatile semiconductor memory arrays
    • G06F2212/2022 Flash memory

Definitions

  • the present disclosure relates to a semiconductor device, and more particularly, to a cache memory device capable of improving the performance of writing/reading data between a host and a non-volatile memory device, and a data processing method of the cache memory device.
  • the data transmission speed of a system using a Serial Advanced Technology Attachment (SATA) oriented Solid State Disk (SSD) may be higher than in systems that use other memory devices, such as a NAND type Electrically Erasable and Programmable Read Only Memory (EEPROM) based non-volatile memory device. Accordingly, a system including the SATA oriented SSD may require a large-capacity buffer for smooth data transmission.
  • FIG. 1 is a block diagram of a conventional non-volatile memory system that includes a buffer.
  • the non-volatile memory system 10 includes a host 20 , a buffer 30 , and a non-volatile memory device 40 .
  • the buffer 30 in the non-volatile memory system 10 has a large capacity for storing data transmitted from the host 20 to the non-volatile memory device 40 , because the data processing speed of the host 20 is much faster than that of the non-volatile memory device 40 .
  • the buffer 30 temporarily stores data that is received from the host 20 and data from the non-volatile memory device 40 that is destined for the host 20 .
  • the buffer 30 may be embodied as a volatile memory device such as a Synchronous Dynamic Random Access Memory (SDRAM).
  • the non-volatile memory device 40 receives and stores data output from the buffer 30 .
  • the non-volatile memory device 40 includes a memory cell array 41 having non-volatile memory cells such as a NAND type EEPROM, and a page buffer 43 .
  • the memory cell array 41 exchanges data with the buffer 30 through the page buffer 43 .
  • the non-volatile memory system 10 is less efficient because the buffer 30 is only used for buffering data transmitted to the non-volatile memory device 40.
  • the page buffer 43 reads or writes data in the units of a page.
  • Each page may include n sectors, where n is a natural number (e.g., n may equal 8).
  • in addition, the non-volatile memory system 10 becomes less efficient when some of its four channels are not used.
  • a cache memory device capable of improving performance of writing/reading data between a host and a non-volatile memory device, a method of operating the cache memory device, and a system that includes the cache memory device.
  • An exemplary embodiment of the present invention includes a data processing method of a cache memory device.
  • the method includes: determining a type of data to be received and performing at least one of transmitting a head of received data to a first cache memory region, transmitting a body of the received data to a second cache memory region, and transmitting a tail of the received data to the first cache memory region, based on the determined type of data.
  • the determining includes receiving a logical block address value and a sector count value, calculating an offset based on the received logical block address value and a super page value, and determining the type of the data to be received based on the calculated offset and a ratio of the received sector count value to the super page value.
  • the performing may include, based on the calculated offset and the determined type of data, performing at least one of transmitting the head or the tail to the first cache memory region designated by a first pointer and transmitting the body to the second cache memory region designated by a second pointer.
  • the data processing method of the cache memory device may further include transmitting the body stored in the second cache memory region to an external non-volatile memory device through a channel.
  • the offset may be a remainder obtained by dividing the received logical block address value by the super page value.
  • the super page value may be obtained by multiplying a number of channels between the cache memory device and an external non-volatile memory device by a number of sectors, which may be stored in a page buffer in the external non-volatile memory device.
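The two formulas above can be illustrated with a minimal sketch, assuming 4 channels and an 8-sector page buffer as in the examples later in this document; all names are hypothetical, not taken from the patent:

```python
# Hypothetical sketch of the super page value and offset calculations.
# Assumes 4 channels and an 8-sector page buffer.

def super_page_value(num_channels=4, sectors_per_page_buffer=8):
    # Super page value = number of channels x number of sectors that can be
    # stored in one page buffer of the external non-volatile memory device.
    return num_channels * sectors_per_page_buffer

def calc_offset(lba, spv=32):
    # Offset = remainder of dividing the received LBA value by the
    # super page value.
    return lba % spv
```

Under these assumptions the super page value is 32, and LBA values 6, 38, and 32 yield offsets 6, 6, and 0, matching the worked examples elsewhere in this document.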
  • An exemplary embodiment of the present invention includes a data processing method of a cache memory device.
  • the method includes determining a data type of received data that indicates whether the received data includes a body, and transmitting the received data excluding the body to a first cache memory region or transmitting the received data including the body to a second cache memory region, based on the determined type of the received data.
  • the determining of the data type includes receiving a logical block address value and a sector count value, calculating an offset based on the received logical block address value and a super page value, and generating the data type based on the calculated offset and a ratio of the received sector count value to the super page value.
  • the data processing method of the cache memory may further include transmitting the data including the body stored in the second cache memory region to an external non-volatile memory device through a channel.
  • An exemplary embodiment of the present invention includes a cache memory device.
  • the cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block.
  • the control block determines the type of data to be received, and performs at least one of transmitting a head of the received data to the first cache memory region, transmitting a body of the received data to the second cache memory region, and transmitting a tail of the received data to the first cache memory region, based on the type of data to be received.
  • the control block may include an offset calculator, a determination unit, and a controller.
  • the offset calculator calculates an offset based on a logical block address value and a super page value.
  • the determination unit determines the type of the data to be received based on the calculated offset and a ratio of a sector count value to the super page value.
  • the controller, based on the offset calculated by the offset calculator and a determination result output from the determination unit, controls at least one of transmitting the head or the tail to the first cache memory region designated by a first pointer and transmitting the body to the second cache memory region designated by a second pointer.
  • An exemplary embodiment of the present invention includes a cache memory device.
  • the cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block.
  • the control block determines whether data to be received includes a body, and transmits the received data excluding the body to a first cache memory region or transmits the received data including the body to a second cache memory region based on the determination.
  • the control block may include an offset calculator, a determination unit, and a controller.
  • the offset calculator calculates an offset based on a logical block address value and a super page value.
  • the determination unit determines whether the data to be received includes a body based on the calculated offset and a ratio of a sector count value to the super page value.
  • the controller receives the data, and transmits the received data excluding the body to the first cache memory region designated by a first pointer or transmits the received data including the body to the second cache memory region designated by a second pointer based on a determination result output from the determination unit that indicates whether the data to be received includes the body.
  • An exemplary embodiment of the present invention includes a system, including a cache memory device, a non-volatile memory device, and a plurality of channels connected between the cache memory device and the non-volatile memory device.
  • the cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block.
  • the control block determines a type of data to be received and controls at least one of transmitting a head of the received data to the first cache memory region, transmitting a body of the received data to the second cache memory region, or transmitting a tail of the received data to the first cache memory region based on the type of the received data.
  • the control block transmits the body stored in the second cache memory region to the non-volatile memory device through at least one of the plurality of channels.
  • An exemplary embodiment of the present invention includes a system, including a cache memory device, a non-volatile memory device, and a plurality of channels connected between the cache memory device and the non-volatile memory device.
  • the cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block.
  • the control block determines whether data to be received includes a body, and transmits received data excluding the body to a first cache memory region or transmits the received data including the body to a second cache memory region based on a result of the determining.
  • the control block transmits the data including the body stored in the second cache memory region to the non-volatile memory device through at least one of the plurality of channels.
  • FIG. 1 is a block diagram of a conventional non-volatile memory system that includes a buffer;
  • FIG. 2 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention;
  • FIG. 3 is a schematic diagram that is used to explain a data classifying method according to an exemplary embodiment of the present invention;
  • FIG. 4 is a block diagram of a cache memory device including a control block illustrated in FIG. 2;
  • FIG. 5 is a flowchart showing a data processing method of the cache memory device according to an exemplary embodiment of the present invention;
  • FIG. 6 is a flowchart that is used to explain an operation of writing and reading data on a non-volatile memory device by using the cache memory device illustrated in FIG. 2;
  • FIG. 7 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention;
  • FIG. 8 is a block diagram of the cache memory device including a control block illustrated in FIG. 7;
  • FIG. 9 is a flowchart showing a data processing method of a cache memory device according to an exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention.
  • the non-volatile memory system 100 includes a host 110 , a cache memory device 120 , and a non-volatile memory device 150 .
  • the non-volatile memory system 100 may be, for example, a computer system, an audio system, a home automation system, or a mobile electronic device.
  • the host 110 and the cache memory device 120 may exchange data by using a SATA protocol.
  • a SATA oriented SSD includes the cache memory device 120 and the non-volatile memory device 150 .
  • the host 110 exchanges data with the non-volatile memory device 150 through the cache memory device 120 .
  • the host 110 outputs a logical block address (LBA) value and a sector count value to the cache memory device 120, and outputs write data DATA to the cache memory device 120.
  • the cache memory device 120 temporarily stores data transmitted between the host 110 and non-volatile memory devices 161 to 168 .
  • the cache memory device 120 includes a control block 130 and a memory 140 .
  • the control block 130 receives an LBA value and a sector count value, calculates an offset based on the received LBA value and a super page value, calculates a ratio of the received sector count value to the super page value, and determines a type (or the configuration) of data, which will be received, based on the calculated offset and the calculated ratio.
  • the types of data may be divided into seven different types.
  • the data types may include: (1) data including only a head, (2) data including only a body, (3) data including only a tail, (4) data including a head and a body, (5) data including a head and a tail, (6) data including a body and a tail, and (7) data including a head, a body, and a tail.
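The seven-way classification above can be sketched with a hypothetical helper (not the patent's implementation; it assumes the offset and ratio rules described in this document, with a super page value of 32):

```python
# Hypothetical sketch of the seven-way data type determination.
def determine_type(lba, sector_count, spv=32):
    offset = lba % spv
    # A head starts at a non-zero offset and runs at most to the next
    # super page boundary.
    head = min(sector_count, spv - offset) if offset else 0
    remaining = sector_count - head
    body = (remaining // spv) * spv   # whole super pages, if any
    tail = remaining % spv            # leftover smaller than a super page
    return tuple(name for name, size in
                 (("head", head), ("body", body), ("tail", tail)) if size)
```

Under these assumptions, (LBA 6, count 80) yields head, body, and tail; (LBA 0, count 64) yields only a body; and (LBA 32, count 8) yields only a tail, consistent with the examples given later in this document.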
  • the control block 130 can receive data DATA after determining the type of data to be received. Based on the calculated offset and the calculated ratio, the control block 130 transmits a head included in the received data DATA to a first cache memory region 141 designated by a first pointer Pointer 1 , transmits a body included in the received data DATA to a second cache memory region 143 designated by a second pointer Pointer 2 , or transmits a tail included in the received data DATA to a first cache memory region 141 designated by a first pointer Pointer 1 .
  • the memory 140 may be embodied as a volatile memory such as an SDRAM or a double data rate (DDR) SDRAM.
  • the memory 140 includes a first cache memory region 141 storing at least one of a head and a tail, and a second cache memory region 143 storing at least a body.
  • There are n channels 151 , 153 , 155 , and 157 between the cache memory device 120 and the non-volatile memory device 150 where n is a natural number (e.g., n may equal 4).
  • the cache memory device 120 and the non-volatile memory device 150 exchange data through at least one channel among the n channels 151 , 153 , 155 , and 157 .
  • a plurality of non-volatile memories 161 and 165 , 162 and 166 , 163 and 167 , and 164 and 168 are connected to each of the n channels 151 , 153 , 155 , and 157 .
  • the plurality of the non-volatile memories 161 and 165 , 162 and 166 , 163 and 167 , and 164 and 168 each include a memory cell array 11 and a page buffer 13 .
  • the memory cell array 11 includes a plurality of EEPROMs, and the plurality of EEPROMs may respectively be embodied as a Single Level Cell (SLC) or a Multi Level Cell (MLC).
  • the memory cell array 11 and the cache memory device 120 exchange data through a channel corresponding to a page buffer 13 .
  • each of the first cache memory region 141 and the second cache memory region 143 includes a plurality of unit memory regions, and each of the unit memory regions has a super page size.
  • a super page size according to an exemplary embodiment of the present invention is 32 sectors. With respect to a super page size of 32 sectors, “32” is denoted as the super page size value.
  • a super page size may be the same as a body size.
  • FIG. 3 is a schematic diagram that is used to explain a method of classifying data according to an exemplary embodiment of the present invention.
  • FIG. 4 is a block diagram of a cache memory device 120 including a control block 130 illustrated in FIG. 2.
  • FIG. 5 is a flowchart showing a data processing method of a cache memory device according to an exemplary embodiment of the present invention. The data processing method of the cache memory device 120 will be described with reference to FIGS. 2 , 3 , 4 , and 5 .
  • the control block 130 includes a setting unit 201 , an offset calculator 203 , a determination unit 205 , and a controller 207 .
  • the cache memory device 120 receives an LBA value (e.g., “6”) and a sector count value (e.g., “80”) output from the host 110 (S 10 of FIG. 5 ).
  • the offset calculator 203 and the determination unit 205 may form a determination block.
  • the determination unit 205 may determine, based on an offset (e.g., 6) calculated by an offset calculator 203 and a calculated ratio, whether data to be received includes a head, a body, and a tail.
  • each numbered region in FIG. 3 corresponds to a memory storage region where one sector can be stored.
  • a first cache memory region for storing at least one of a head and a tail need not be separated from a second cache memory region for storing a body in FIG. 3 , which is different from FIGS. 2 and 4 .
  • in this example, the offset calculated by the offset calculator 203 is 6.
  • Data corresponding to a sector count value “80” may then be sequentially stored by sector in regions marked by a figure of “6” to “85”.
  • sectors stored in regions marked by a figure of “6” to “31” may be defined as head (HEAD)
  • sectors stored in regions marked by a figure of “32” to “63” may be defined as body (BODY)
  • sectors stored in regions marked by a figure of “64” to “85” may be defined as tail (TAIL).
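The split described by the three bullets above can be sketched as follows (hypothetical names; ranges are half-open [start, end) sector intervals, assuming a super page value of 32):

```python
# Hypothetical sketch: split a request into head/body/tail sector ranges.
def split_request(lba, sector_count, spv=32):
    start, end = lba, lba + sector_count        # half-open [start, end)
    offset = start % spv
    # The head runs from the offset to the next super page boundary (or to
    # the end of the request, whichever comes first).
    head_end = min(end, start + spv - offset) if offset else start
    # The body covers whole super pages after the head.
    body_end = head_end + ((end - head_end) // spv) * spv
    return ((start, head_end),     # head  (empty if start == head_end)
            (head_end, body_end),  # body  (empty if head_end == body_end)
            (body_end, end))       # tail  (empty if body_end == end)
```

For LBA 6 and sector count 80 this gives head sectors 6 to 31, body sectors 32 to 63, and tail sectors 64 to 85, as described above.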
  • At least one of a head and a tail is stored in a first cache memory region 141 designated by a first pointer Pointer 1
  • at least one body is stored in a second cache memory region 143 designated by a second pointer Pointer 2 .
  • when a received LBA value is 0 and a received sector count value is 64, the determination unit 205, based on an offset (e.g., 0) calculated by the offset calculator 203 and a calculated ratio, may determine that the data to be received includes only two bodies.
  • in another exemplary embodiment of the present invention, when a received LBA value is 38 and a received sector count value is 2, the determination unit 205 may determine that the data to be received only includes a head based on an offset (e.g., 6) calculated by the offset calculator 203 and a calculated ratio. In a further exemplary embodiment of the present invention, when a received LBA value is 32 and a received sector count value is 8, the determination unit 205 may determine that the data to be received only includes a tail based on an offset (e.g., 0) calculated by the offset calculator 203 and a calculated ratio.
  • a head refers to a portion of received data that starts at a non-zero offset within a super page and has a size smaller than the super page size;
  • a body refers to a portion of received data that starts without an offset and has a size that is a multiple of the super page size;
  • a tail refers to a portion of received data that starts without an offset and has a size smaller than the super page size.
  • while receiving data corresponding to a sector count value (e.g., 80) output from the host 110, the controller 207 transmits a head to a first cache memory region 141 designated by a first pointer Pointer 1, transmits a body to a second cache memory region 143 designated by a second pointer Pointer 2, and transmits a tail to the first cache memory region 141 designated by the first pointer Pointer 1, based on an offset (e.g., 6) calculated by the offset calculator 203 and a determination result ITD of the determination unit 205 (S 30 of FIG. 5).
  • the determination result ITD may be based on the offset and a ratio of the received sector count value to a super page size value.
  • the controller 207 stores a first sector of the head of the input data DATA in a region marked by a figure of “6” of the first cache memory region 141 designated by the first pointer Pointer 1, based on the activated head flag and an offset output from the offset calculator 203.
  • the controller 207 changes a first pointer Pointer 1 to a second pointer Pointer 2 in response to the activated body flag. Accordingly, a first sector to a last sector of the body may be sequentially stored in regions marked by a figure of “32” to “63” of the second cache memory region 143 designated by the second pointer Pointer 2 . While the first sector of the body is stored in a region marked by a figure of “32” of the second cache memory region 143 , the body flag is deactivated and a tail flag is activated.
  • the controller 207 determines that a sector to be input next is a first sector of a tail in response to the activated tail flag.
  • the controller 207 changes a second pointer Pointer 2 to a first pointer Pointer 1 in response to the activated tail flag.
  • a first sector to a last sector of the tail may be sequentially stored in regions marked by a figure of “64” to “85” of the first cache memory region 141 designated by the first pointer Pointer 1.
  • a determination result ITD of the determination unit 205 may include a head flag, a body flag, and a tail flag.
  • the controller 207 may also generate a head flag, a body flag, and a tail flag based on the determination result ITD of the determination unit 205 .
  • the controller 207 may include a storage device such as a register storing a head flag, a body flag, and a tail flag.
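The flag-driven routing described above can be modeled as a small sketch (names are assumed, not the patent's; heads and tails go to the region behind Pointer 1, bodies to the region behind Pointer 2):

```python
# Hypothetical toy model of flag-based routing to the two cache regions.
def route_parts(parts):
    # parts: sequence of (kind, sector_count) with kind "head"/"body"/"tail".
    first_region, second_region = [], []
    for kind, sectors in parts:
        # The body flag switches the target to the second region; the head
        # and tail flags switch it back to the first region.
        target = second_region if kind == "body" else first_region
        target.append((kind, sectors))
    return first_region, second_region
```

For the running example (26-sector head, 32-sector body, 22-sector tail), the head and tail land in the first region and the body in the second.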
  • the controller 207 may transmit a body (e.g., 32 sectors) stored in the second cache memory region 143 to a non-volatile memory device 150 (S 40 of FIG. 5 ).
  • the control block 130 may control a timing to transmit the body (e.g., 32 sectors) stored in the second cache memory region 143 to the non-volatile memory device 150.
  • the control block 130 may divide the body (e.g., 32 sectors) stored in the second cache memory region 143 by the number of channels (e.g., 4), and transmit each divided 8-sector portion to the memories 161, 162, 163, and 164 through the channels 151, 153, 155, and 157, respectively.
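The channel division just described can be sketched as follows (hypothetical helper; it assumes the body size divides evenly by the channel count):

```python
# Hypothetical sketch: stripe a body evenly across the channels.
def stripe_body(body_sectors, num_channels=4):
    # Sectors per channel = whole body size / number of channels.
    per_channel = len(body_sectors) // num_channels
    return [body_sectors[i * per_channel:(i + 1) * per_channel]
            for i in range(num_channels)]
```

Under these assumptions a 32-sector body yields 8 sectors per channel, and the 64-sector body in the FIG. 6 example yields 16 sectors per channel.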
  • FIG. 6 shows a flowchart that may be used to explain an operation of writing and reading data to/from a non-volatile memory device using the cache memory device illustrated in FIG. 2 .
  • assuming the host 110 outputs, in order, a write command, an LBA value “0”, and a sector count value “64”; a write command, an LBA value “38”, and a sector count value “2”; and a read command, an LBA value “32”, and a sector count value “8”, the operation of the cache memory device 120 according to an exemplary embodiment of the present invention can be explained as follows, referring to FIGS. 2, 4, and 6.
  • the control block 130 receives a write command, an LBA value “0”, and a sector count value “64”.
  • the control block 130 calculates an offset (e.g., 0) and determines that data DATA to be received only includes two bodies based on the LBA value “0”, the sector count value “64”, and the super page size value “32”.
  • the control block 130 stores received data, i.e., two bodies including 64 sectors, in regions from 0 to 63 of a second cache memory region 143 designated by a second Pointer Pointer 2 based on an activated body flag (FIG. 6 ( a )).
  • the control block 130 transmits 16-sectors of the 64 sectors respectively to each channel 151 , 153 , 155 , and 157 ( FIG. 6( b )).
  • the amount of sectors transmitted to each channel 151 , 153 , 155 , and 157 is obtained by dividing a whole body size (e.g., 64-sectors) by the number of channels (e.g., 4).
  • the control block 130 receives a write command, an LBA value “38”, and a sector count value “2”.
  • the control block 130 calculates an offset (e.g., 6) and determines that data DATA to be received only includes a head based on the LBA value “38”, the sector count value “2”, and the super page size value “32”.
  • the control block 130 Based on the activated head flag and the offset (e.g., 6), the control block 130 stores a received head, (e.g., two sectors), respectively in a seventh memory region 38 ′ and an eighth memory region 39 ′ of a first cache memory region 141 designated by a first pointer Pointer 1 ( FIG. 6( c )).
  • a received head e.g., two sectors
  • sectors stored in the seventh memory region 38 ′ and the eighth memory region 39 ′ of the first cache memory region 141 are not transmitted to a non-volatile memory device 150 .
  • 8 sectors output from a memory cell array 11 are stored in a page buffer 13 of one of the memories 161 - 168 (e.g., memory 161 ) of the non-volatile memory device 150 ( FIG. 6( c )).
  • the control block 130 receives a read command, a LBA value “32”, and a sector count value “8”.
  • the control block 130 calculates an offset (e.g., 0) and determines that data to be read from the memory 161 of the non-volatile memory device 150 only includes only a tail, based on the LBA value “32”, the sector count value “8”, and a super page size value “32”.
  • the control block 130 may read sectors only stored in regions marked by a figure of “32” to “37” from the page buffer 13 to a first cache memory region 141 .
  • FIG. 7 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention.
  • FIG. 8 is a block diagram of a cache memory device including a control block illustrated in FIG. 7
  • FIG. 9 is a flowchart showing a data processing method of a cache memory device according to an exemplary embodiment of the present invention.
  • a process of a cache memory device 121 to transmit a first data excluding a body to a first cache memory region 141 ′ designated by a first pointer Pointer 1 or to transmit a second data including a body to a second cache memory region 143 ′ designated by a second pointer Pointer 2 may be explained as follows.
  • the cache memory device 121 receives a LBA value (e.g., 6) and a sector count value (e.g., 80) output from a host 110 (S 11 of FIG. 9 ).
  • Data DATA having the offset “6” and the sector count value “80” includes a head, a body, and a tail.
  • a controller 207 ′ transmits data DATA having the offset “6” and the sector count value “80” to the second cache memory region 143 ′ designated by a second pointer Pointer 2 (S 31 of FIG. 9 ). Data stored in the second cache memory region 143 ′ are transmitted to a non-volatile memory device 150 under a control of a control block 131 (S 41 of FIG. 9 ).
  • control block 121 when the control block 121 receives an LBA value (e.g., 38) and a sector count value (e.g., 2) from the host 110 , the control block 121 transmits data having an offset “6” and the sector count value “2” to the first cache memory region 141 ′ designated by a first pointer Pointer 1 (S 31 of FIG. 9 ).
  • LBA value e.g., 38
  • sector count value e.g., 2

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A cache memory device is provided. The cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block. The control block determines a type of data to be received. The control block also performs at least one of transmitting a head of received data to a first cache memory region, transmitting a body of the received data to a second cache memory region and transmitting a tail of the received data to the first cache memory region based on the type of the data to be received.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims priority under 35 U.S.C. §119 to Korean Patent Application No. 2007-0070369, filed on Jul. 13, 2007, the disclosure of which is incorporated by reference in its entirety herein.
  • BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • The present disclosure relates to a semiconductor device, and more particularly, to a cache memory device capable of improving the performance of writing/reading data between a host and a non-volatile memory device, and a data processing method of the cache memory device.
  • 2. Discussion of Related Art
  • In a system including a Serial Advanced Technology Attachment (SATA) oriented Solid State Disk (SSD), the data transmission speed of the host may be much higher than that of the memory devices it accesses, such as a NAND type Electrically Erasable and Programmable Read Only Memory (EEPROM) based non-volatile memory device. Accordingly, a system including the SATA oriented SSD may require a large-capacity buffer for smooth data transmission.
  • FIG. 1 is a block diagram of a conventional non-volatile memory system that includes a buffer. Referring to FIG. 1, the non-volatile memory system 10 includes a host 20, a buffer 30, and a non-volatile memory device 40.
  • The buffer 30 in the non-volatile memory system 10 has a large capacity for storing data transmitted from the host 20 to the non-volatile memory device 40, because the data processing speed of the host 20 is much faster than that of the non-volatile memory device 40. The buffer 30 temporarily stores data that is received from the host 20 and data from the non-volatile memory device 40 that is destined for the host 20. The buffer 30 may be embodied as a volatile memory device such as a Synchronous Dynamic Random Access Memory (SDRAM).
  • The non-volatile memory device 40 receives and stores data output from the buffer 30. The non-volatile memory device 40 includes a memory cell array 41 having non-volatile memory cells such as a NAND type EEPROM, and a page buffer 43. The memory cell array 41 exchanges data with the buffer 30 through the page buffer 43. The non-volatile memory system 10 is less efficient because the buffer 30 is only used for buffering data transmitted to the non-volatile memory device 40.
  • The page buffer 43 reads or writes data in units of a page. Each page may include n sectors, where n is a natural number (e.g., n=8). For example, when there are four channels between the buffer 30 and the non-volatile memory device 40 and data is transmitted from the buffer 30 to the non-volatile memory device 40 across the channels, pages having 32 sectors (=4 channels*8 sectors) may be required for the non-volatile memory system 10 to operate optimally.
  • However, when the size of data transmitted from the buffer 30 to the non-volatile memory device 40 is less than 32-sectors, the non-volatile memory system 10 becomes less efficient since some of the four channels may not be used.
  • Thus, there is a need for a cache memory device capable of improving performance of writing/reading data between a host and a non-volatile memory device, a method of operating the cache memory device, and a system that includes the cache memory device.
  • SUMMARY OF THE INVENTION
  • An exemplary embodiment of the present invention includes a data processing method of a cache memory device. The method includes: determining a type of data to be received and performing at least one of transmitting a head of received data to a first cache memory region, transmitting a body of the received data to a second cache memory region, and transmitting a tail of the received data to the first cache memory region based on the determined type of data.
  • The determining includes receiving a logical block address value and a sector count value, calculating an offset based on the received logical block address value and a super page value, and determining the type of the data to be received based on the calculated offset and a ratio of the received sector count value to the super page value.
  • The performing may include, based on the calculated offset and the determined type of data, performing at least one of transmitting the head or the tail to the first cache memory region designated by a first pointer and transmitting the body to the second cache memory region designated by a second pointer.
  • The data processing method of the cache memory device may further include transmitting the body stored in the second cache memory region to an external non-volatile memory device through a channel.
  • The offset may be a remainder obtained by dividing the received logical block address value by the super page value. The super page value may be obtained by multiplying a number of channels between the cache memory device and an external non-volatile memory device by a number of sectors, which may be stored in a page buffer in the external non-volatile memory device.
  • An exemplary embodiment of the present invention includes a data processing method of a cache memory device. The method includes determining a data type of received data that indicates whether the received data includes a body, and transmitting the received data excluding the body to a first cache memory region or transmitting the received data including the body to a second cache memory region based on the determined type of the received data.
  • The determining of the data type includes receiving a logical block address value and a sector count value, calculating an offset based on the received logical block address value and a super page value, and generating the data type based on the calculated offset and a ratio of the received sector count value to the super page value. The data processing method of the cache memory may further include transmitting the data including the body stored in the second cache memory region to an external non-volatile memory device through a channel.
  • An exemplary embodiment of the present invention includes a cache memory device. The cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block. The control block determines the type of data to be received, and performs at least one of transmitting a head of the received data to a first cache memory region, transmitting a body of the received data to a second memory region, and transmitting a tail of the received data to the first cache memory region based on the type of data to be received.
  • The control block may include an offset calculator, a determination unit, and a controller. The offset calculator calculates an offset based on a logical block address value and a super page value. The determination unit determines the type of the data to be received based on the calculated offset and a ratio of a sector count value to the super page value. The controller, based on the offset calculated by the offset calculator and a determination result output from the determination unit, controls at least one of transmitting the head or the tail to the first cache memory region designated by a first pointer and transmitting the body to the second cache memory region designated by the second pointer.
  • An exemplary embodiment of the present invention includes a cache memory device. The cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block. The control block determines whether data to be received includes a body, and transmits the received data excluding the body to a first cache memory region or transmits the received data including the body to a second cache memory region based on the determination.
  • The control block may include an offset calculator, a determination unit, and a controller. The offset calculator calculates an offset based on a logical block address value and a super page value. The determination unit determines whether the data to be received includes a body based on calculated offset and a ratio of a sector count value to the super page value. The controller receives the data, and transmits the received data excluding the body to the first cache memory region designated by a first pointer or transmits the received data including the body to the second cache memory region designated by a second pointer based on a determination result output from the determination unit that indicates whether the data to be received includes the body.
  • An exemplary embodiment of the present invention includes a system, including a cache memory device, a non-volatile memory device, and a plurality of channels connected between the cache memory device and the non-volatile memory device. The cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block.
  • The control block determines a type of data to be received and controls at least one of transmitting a head of the received data to the first cache memory region, transmitting a body of the received data to the second cache memory region, or transmitting a tail of the received data to the first cache memory region based on the type of the received data. The control block transmits the body stored in the second cache memory region to the non-volatile memory device through at least one of the plurality of channels.
  • An exemplary embodiment of the present invention includes a system, including a cache memory device, a non-volatile memory device, and a plurality of channels connected between the cache memory device and the non-volatile memory device. The cache memory device includes a memory including a first cache memory region and a second cache memory region, and a control block.
  • The control block determines whether data to be received includes a body, and transmits received data excluding the body to a first cache memory region or transmits the received data including the body to a second cache memory region based on a result of the determining. The control block transmits the data including the body stored in the second cache memory region to the non-volatile memory device through at least one of the plurality of channels.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will become more apparent when describing in detail exemplary embodiments thereof, when taken in conjunction with the accompanying drawings of which:
  • FIG. 1 is a block diagram of a conventional non-volatile memory system that includes a buffer;
  • FIG. 2 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention;
  • FIG. 3 is a schematic diagram that is used to explain a data classifying method according to an exemplary embodiment of the present invention;
  • FIG. 4 is a block diagram of a cache memory device including a control block illustrated in FIG. 2;
  • FIG. 5 is a flowchart showing a data processing method of the cache memory device according to an exemplary embodiment of the present invention;
  • FIG. 6 is a flowchart that is used to explain an operation of writing and reading data to/from a non-volatile memory device using the cache memory device illustrated in FIG. 2;
  • FIG. 7 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention;
  • FIG. 8 is a block diagram of the cache memory device including a control block illustrated in FIG. 7; and
  • FIG. 9 is a flowchart showing a data processing method of a cache memory device according to an exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the present invention are shown. Like reference numerals refer to like elements throughout.
  • FIG. 2 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention. Referring to FIG. 2, the non-volatile memory system 100 includes a host 110, a cache memory device 120, and a non-volatile memory device 150. The non-volatile memory system 100 may be, for example, a computer system, an audio system, home automation, or a mobile electronic device.
  • The host 110 and the cache memory device 120 may exchange data by using a SATA protocol. A SATA oriented SSD includes the cache memory device 120 and the non-volatile memory device 150. The host 110 exchanges data with the non-volatile memory device 150 through the cache memory device 120. The host 110 outputs a logical block address (LBA) value and a sector count value, as well as write data DATA, to the cache memory device 120.
  • The cache memory device 120 temporarily stores data transmitted between the host 110 and non-volatile memories 161 to 168. The cache memory device 120 includes a control block 130 and a memory 140. The control block 130 receives an LBA value and a sector count value, calculates an offset based on the received LBA value and a super page value, calculates a ratio of the received sector count value to the super page value, and determines a type (or configuration) of the data to be received based on the calculated offset and the calculated ratio.
  • The types of data may be divided into seven different types. For example, the data types may include: (1) data including only a head, (2) data including only a body, (3) data including only a tail, (4) data including a head and a body, (5) data including a head and a tail, (6) data including a body and a tail, and (7) data including a head, a body, and a tail.
  • The control block 130 can receive data DATA after determining the type of data to be received. Based on the calculated offset and the calculated ratio, the control block 130 transmits a head included in the received data DATA to a first cache memory region 141 designated by a first pointer Pointer1, transmits a body included in the received data DATA to a second cache memory region 143 designated by a second pointer Pointer2, or transmits a tail included in the received data DATA to a first cache memory region 141 designated by a first pointer Pointer1.
  • The memory 140 may be embodied as a volatile memory such as an SDRAM or a double data rate (DDR) SDRAM. The memory 140 includes a first cache memory region 141 storing at least one of a head and a tail, and a second cache memory region 143 storing at least a body. There are n channels 151, 153, 155, and 157 between the cache memory device 120 and the non-volatile memory device 150, where n is a natural number (e.g., n may equal 4). The cache memory device 120 and the non-volatile memory device 150 exchange data through at least one channel among the n channels 151, 153, 155, and 157.
  • A plurality of non-volatile memories 161 and 165, 162 and 166, 163 and 167, and 164 and 168 are connected to each of the n channels 151, 153, 155, and 157. The plurality of the non-volatile memories 161 and 165, 162 and 166, 163 and 167, and 164 and 168 respectively include a cell array 11 and a page buffer 13.
  • The memory cell array 11 includes a plurality of EEPROMs, and the plurality of EEPROMs may respectively be embodied as a Single Level Cell (SLC) or a Multi Level Cell (MLC).
  • The page buffer 13 may store m-sectors, where m is a natural number (e.g., m=8). For example, a sector may be k bytes, where k is a natural number (e.g., k=512 or 1024). The memory cell array 11 and the cache memory device 120 exchange data through a channel corresponding to a page buffer 13. The first cache memory region 141 or the second cache memory region 143 include a plurality of unit memory regions, and the plurality of unit memory regions respectively have a super page size.
  • For example, a super page size (=n*m) may be determined by multiplying the number of channels (e.g., n=4), connected between the cache memory device 120 and the non-volatile memory device 150, by the number of sectors (e.g., m=8), which may be stored in a page buffer 13 of one of the non-volatile memories 161-168. A super page size according to an exemplary embodiment of the present invention is 32 sectors. For a super page size of 32 sectors, "32" is denoted as the super page size value. A super page size may be the same as a body size.
  • FIG. 3 is a schematic diagram that is used to explain a method of classifying data according to an exemplary embodiment of the present invention. FIG. 4 is a block diagram of a cache memory device 120 including a control block 130 illustrated in FIG. 2, and FIG. 5 is a flowchart showing a data processing method of a cache memory device according to an exemplary embodiment of the present invention. The data processing method of the cache memory device 120 will be described with reference to FIGS. 2, 3, 4, and 5.
  • The control block 130 includes a setting unit 201, an offset calculator 203, a determination unit 205, and a controller 207. The offset calculator 203 and the determination unit 205 receive a super page size value (e.g., SPV=32) output from the setting unit 201. The setting unit 201 may be embodied as a data storage device as a register, and a super page size value (e.g., SPV=32) may be set through hardware or software.
  • The cache memory device 120 receives an LBA value (e.g., "6") and a sector count value (e.g., "80") output from the host 110 (S10 of FIG. 5). The offset calculator 203 of the control block 130 calculates a remainder (e.g., 6), that is, an offset, by dividing the received LBA value (e.g., 6) by a super page size value (e.g., SPV=32) (S20 of FIG. 5).
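  • The offset arithmetic of step S20 can be sketched as follows. This is an illustrative Python sketch only; the function names and constants are assumptions for illustration, not part of the embodiment:

```python
# Illustrative sketch (not part of the embodiment) of the offset and
# super-page-value arithmetic: the super page value is the number of
# channels multiplied by the number of sectors one page buffer holds,
# and the offset is the remainder of dividing the LBA value by it.

NUM_CHANNELS = 4             # e.g., channels 151, 153, 155, 157
SECTORS_PER_PAGE_BUFFER = 8  # e.g., sectors held by one page buffer 13

def super_page_value(num_channels: int, sectors_per_page_buffer: int) -> int:
    """Super page value = channel count * sectors per page buffer."""
    return num_channels * sectors_per_page_buffer

def offset(lba: int, spv: int) -> int:
    """Offset = remainder of dividing the LBA value by the super page value."""
    return lba % spv

SPV = super_page_value(NUM_CHANNELS, SECTORS_PER_PAGE_BUFFER)  # 4 * 8 = 32
```

  For the values above, an LBA value of 6 divided by a super page size value of 32 leaves a remainder, i.e., an offset, of 6.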
  • The determination unit 205 of the control block 130 calculates a ratio of the received sector count value (e.g., 80) to a super page size value (e.g., SPV=32), and determines a type of data DATA to be received based on the offset calculated by the offset calculator 203 and the calculated ratio (S20 of FIG. 5). The offset calculator 203 and the determination unit 205 may be included in a determination block.
  • For example, when a LBA value is 6 and a sector count value is 80, the determination unit 205 may determine, based on an offset (e.g., 6) calculated by an offset calculator 203 and a calculated ratio, whether data to be received includes a head, a body, and a tail.
  • Each number illustrated in FIG. 3 corresponds to a memory storage region where one sector can be stored. FIG. 3 serves only to define a head, a body, and a tail; accordingly, unlike in FIGS. 2 and 4, a first cache memory region for storing at least one of a head and a tail need not be separated from a second cache memory region for storing a body in FIG. 3.
  • When an LBA value is 6 and a sector count value is 80, an offset calculated by an offset calculator 203 is 6. Data corresponding to a sector count value “80” (i.e., data including 80 sectors) may then be sequentially stored by sector in regions marked by a figure of “6” to “85”. For example, sectors stored in regions marked by a figure of “6” to “31” may be defined as head (HEAD), sectors stored in regions marked by a figure of “32” to “63” may be defined as body (BODY), and sectors stored in regions marked by a figure of “64” to “85” may be defined as tail (TAIL).
  • In an exemplary embodiment of the present invention, at least one of a head and a tail is stored in a first cache memory region 141 designated by a first pointer Pointer1, and at least one body is stored in a second cache memory region 143 designated by a second pointer Pointer2.
  • In another exemplary embodiment of the present invention, when a received LBA value is 0 and a received sector count value is 64, the determination unit 205, based on an offset (e.g., 0) calculated by the offset calculator 203 and a calculated ratio, may determine that the data to be received only includes two bodies.
  • In another exemplary embodiment of the present invention, when a received LBA value is 38 and a received sector count value is 2, the determination unit 205 may determine that the data to be received only includes a head based on an offset (e.g., 6) calculated by the offset calculator 203 and a calculated ratio. In a further exemplary embodiment of the present invention, when a received LBA value is 32 and a received sector count value is 8, the determination unit 205 may determine that the data to be received only includes a tail based on an offset (e.g., 0) calculated by the offset calculator 203 and a calculated ratio.
  • For example, a head refers to received data that starts at a non-zero offset within a super page and extends at most up to the next super page boundary, a body refers to received data that starts at a super page boundary (i.e., without an offset) and whose size is a multiple of the super page size, and a tail refers to received data that starts at a super page boundary but whose size is smaller than the super page size.
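  • Under these definitions, the type determination can be sketched as follows. The function name classify and its tuple return form are illustrative assumptions; this is a sketch, not the claimed determination unit 205:

```python
# Hedged sketch of the head/body/tail determination: the head runs from a
# non-zero offset up to at most the next super page boundary, the body is
# the aligned whole-super-page middle, and the tail is the aligned
# remainder smaller than one super page.

SPV = 32  # example super page size value (4 channels * 8 sectors)

def classify(lba: int, sector_count: int, spv: int = SPV) -> tuple:
    """Return (head, body, tail) lengths in sectors for one transfer."""
    off = lba % spv
    head = min(sector_count, spv - off) if off else 0  # up to next boundary
    remaining = sector_count - head
    body = (remaining // spv) * spv  # whole super pages, no offset
    tail = remaining % spv           # aligned leftover, < one super page
    return head, body, tail
```

  For the examples in this description, classify(6, 80) yields a 26-sector head, a 32-sector body, and a 22-sector tail; classify(0, 64) yields only bodies; classify(38, 2) yields only a head; and classify(32, 8) yields only a tail.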
  • While receiving data corresponding to a sector count value, e.g., 80, output from the host 110, the controller 207 transmits a head to a first cache memory region 141 designated by a first pointer Pointer1, transmits a body to a second cache memory region 143 designated by a second pointer Pointer2, and transmits a tail to the first cache memory region 141 designated by a first pointer Pointer1 based on an offset (e.g., 6) calculated by the offset calculator 203 and a determination result ITD of the determination unit 205 (S30 of FIG. 5). The determination result ITD may be based on the offset and a ratio of the received sector count value to a super page size value.
  • For example, when an offset is present, a head flag is activated before a first sector of a head is input. Accordingly, the controller 207 stores a first sector of the head DATA input in a region marked by a figure of “6” of a first cache memory region 141 designated by a first pointer Pointer1 based on the activated head flag and an offset output from an offset calculator 203.
  • While a first sector of the head is stored in a region marked by a figure of “6” of a first cache memory region 141, the head flag is deactivated and a body flag is activated. While a last sector of the head is stored in a region marked by a figure of “31” of the first cache memory region 141, the controller 207 determines that a sector to be input next is a first sector of a body in response to the activated body flag.
  • Before the last sector of the head is completely stored and a first sector of the body is input, the controller 207 changes a first pointer Pointer1 to a second pointer Pointer2 in response to the activated body flag. Accordingly, a first sector to a last sector of the body may be sequentially stored in regions marked by a figure of “32” to “63” of the second cache memory region 143 designated by the second pointer Pointer2. While the first sector of the body is stored in a region marked by a figure of “32” of the second cache memory region 143, the body flag is deactivated and a tail flag is activated. While the last sector of the body is stored in a region marked by a figure of “63” of the second cache memory region 143, the controller 207 determines that a sector to be input next is a first sector of a tail in response to the activated tail flag.
  • Before the last sector of the body is completely stored and a first sector of the tail is input, the controller 207 changes a second pointer Pointer2 to a first pointer Pointer1 in response to the activated tail flag.
  • Accordingly, a first sector to a last sector of the tail may be sequentially stored in regions marked by a figure of "64" to "85" of the first cache memory region 141 designated by the first pointer Pointer1. For example, a determination result ITD of the determination unit 205 may include a head flag, a body flag, and a tail flag. The controller 207 may also generate a head flag, a body flag, and a tail flag based on the determination result ITD of the determination unit 205. In this case, the controller 207 may include a storage device, such as a register, storing the head flag, the body flag, and the tail flag.
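  • The pointer-switching behavior described above can be modeled, in simplified form, as follows. The list-based cache regions and the function name route_sectors are illustrative assumptions rather than the claimed controller 207:

```python
# Simplified model: head and tail sectors are routed to a first region
# (designated by Pointer1) and body sectors to a second region (Pointer2).

SPV = 32  # example super page size value

def route_sectors(lba: int, sector_count: int):
    """Return (region1, region2) lists of logical sector numbers."""
    off = lba % SPV
    head = min(sector_count, SPV - off) if off else 0
    body = ((sector_count - head) // SPV) * SPV
    region1, region2 = [], []  # first / second cache memory regions
    for i in range(sector_count):
        if head <= i < head + body:
            region2.append(lba + i)  # body sector -> Pointer2 region
        else:
            region1.append(lba + i)  # head or tail sector -> Pointer1 region
    return region1, region2
```

  For the LBA value 6 and sector count value 80 example, sectors 6 to 31 and 64 to 85 land in the first region, and sectors 32 to 63 land in the second region.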
  • After the first sector to the last sector of the body are completely stored in the second cache memory region 143, the controller 207 may transmit the body (e.g., 32 sectors) stored in the second cache memory region 143 to a non-volatile memory device 150 (S40 of FIG. 5). The control block 130 may control the timing of transmitting the body (e.g., 32 sectors) stored in the second cache memory region 143 to the non-volatile memory device 150.
  • For example, the control block 130 may divide the body (e.g., 32 sectors), which is stored in the second cache memory region 143, by the number of channels (e.g., 4), and transmit each divided group of 8 sectors to a respective memory 161, 162, 163, and 164 through a respective one of the plurality of channels 151, 153, 155, and 157.
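  • The division of a stored body across the channels can be sketched as follows. The even contiguous split (body size divided by channel count) follows the description above, while the function name stripe is an illustrative assumption:

```python
# Sketch: split a body into equal contiguous chunks, one per channel, so a
# 32-sector body over 4 channels yields 8 sectors per channel.

NUM_CHANNELS = 4  # e.g., channels 151, 153, 155, 157

def stripe(body_sectors: list, num_channels: int = NUM_CHANNELS) -> list:
    """Divide the body evenly among the channels."""
    per_channel = len(body_sectors) // num_channels
    return [body_sectors[i * per_channel:(i + 1) * per_channel]
            for i in range(num_channels)]
```

  With a 64-sector body (two bodies, as in FIG. 6(b)), the same split yields 16 sectors per channel.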
  • FIG. 6 shows a flowchart that may be used to explain an operation of writing and reading data to/from a non-volatile memory device using the cache memory device illustrated in FIG. 2. Referring to FIGS. 2, 4, and 6, when a host 110 outputs, in order, a write command with an LBA value "0" and a sector count value "64"; a write command with an LBA value "38" and a sector count value "2"; and a read command with an LBA value "32" and a sector count value "8", the operation of a cache memory device 120 according to an exemplary embodiment of the present invention can be explained as follows.
  • The control block 130 receives a write command, an LBA value “0”, and a sector count value “64”. The control block 130 calculates an offset (e.g., 0) and determines that data DATA to be received only includes two bodies based on the LBA value “0”, the sector count value “64”, and the super page size value “32”.
  • The control block 130 stores the received data, i.e., two bodies including 64 sectors, in regions from 0 to 63 of a second cache memory region 143 designated by a second pointer Pointer2 based on an activated body flag (FIG. 6(a)). The control block 130 transmits 16 sectors of the 64 sectors to each channel 151, 153, 155, and 157 (FIG. 6(b)). The number of sectors transmitted to each channel 151, 153, 155, and 157 is obtained by dividing the whole body size (e.g., 64 sectors) by the number of channels (e.g., 4).
  • The control block 130 receives a write command, a LBA value “38”, and a sector count value “2”. The control block 130 calculates an offset (e.g., 6) and determines that data DATA to be received only includes a head based on the LBA value “38”, the sector count value “2”, and the super page size value “32”.
  • Based on the activated head flag and the offset (e.g., 6), the control block 130 stores a received head (e.g., two sectors) in a seventh memory region 38′ and an eighth memory region 39′, respectively, of a first cache memory region 141 designated by a first pointer Pointer1 (FIG. 6(c)). Here, sectors stored in the seventh memory region 38′ and the eighth memory region 39′ of the first cache memory region 141 are not transmitted to a non-volatile memory device 150.
  • Under control of the control block 130, 8 sectors output from a memory cell array 11 are stored in a page buffer 13 of one of the memories 161-168 (e.g., memory 161) of the non-volatile memory device 150 (FIG. 6(c)).
  • The control block 130 receives a read command, a LBA value "32", and a sector count value "8". The control block 130 calculates an offset (e.g., 0) and determines that the data to be read from the memory 161 of the non-volatile memory device 150 includes only a tail, based on the LBA value "32", the sector count value "8", and a super page size value "32". Since the sectors corresponding to "38" and "39" are already stored in the first cache memory region 141, the control block 130 may read only the sectors stored in regions marked by a figure of "32" to "37" from the page buffer 13 into the first cache memory region 141.
  • FIG. 7 is a block diagram of a non-volatile memory system including a cache memory device according to an exemplary embodiment of the present invention. FIG. 8 is a block diagram of a cache memory device including a control block illustrated in FIG. 7, and FIG. 9 is a flowchart showing a data processing method of a cache memory device according to an exemplary embodiment of the present invention.
  • Referring to FIGS. 7, 8, and 9, the process by which a cache memory device 121 transmits first data excluding a body to a first cache memory region 141′ designated by a first pointer Pointer1, or transmits second data including a body to a second cache memory region 143′ designated by a second pointer Pointer2, may be explained as follows.
  • An offset calculator 203 and a determination unit 205 receive a super page size value (SPV=32) output from a setting unit 201. The cache memory device 121 receives an LBA value (e.g., 6) and a sector count value (e.g., 80) output from a host 110 (S11 of FIG. 9). The offset calculator 203 calculates the remainder (i.e., the offset) obtained by dividing the received LBA value (e.g., 6) by the super page size value (SPV=32) (S21 of FIG. 9).
  • The determination unit 205 calculates a ratio of the received sector count value (e.g., 80) to the super page size value (SPV=32), and determines whether the data DATA to be received includes a body based on the offset calculated by the offset calculator 203 and the calculated ratio (S21 of FIG. 9). Data DATA having the offset “6” and the sector count value “80” includes a head, a body, and a tail.
  • Accordingly, a controller 207′ transmits the data DATA having the offset “6” and the sector count value “80” to the second cache memory region 143′ designated by the second pointer Pointer2 (S31 of FIG. 9). Data stored in the second cache memory region 143′ are transmitted to a non-volatile memory device 150 under the control of a control block 131 (S41 of FIG. 9).
  • However, when the cache memory device 121 receives an LBA value (e.g., 38) and a sector count value (e.g., 2) from the host 110, the cache memory device 121 transmits data having an offset “6” and the sector count value “2” to the first cache memory region 141′ designated by the first pointer Pointer1 (S31 of FIG. 9).
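The routing decision of steps S21-S31 above can be sketched as a single check: if the determined data type includes a body, the data is sent to the region behind the second pointer, otherwise to the region behind the first pointer. The function and the string return values are illustrative assumptions, not names from the patent.

```python
# Hedged sketch of the FIG. 9 routing step (S21-S31): data including a
# body goes to the second cache memory region (Pointer2); data with only
# a head and/or tail goes to the first cache memory region (Pointer1).
def route(lba, sector_count, spv=32):
    offset = lba % spv
    head = min(sector_count, (spv - offset) % spv)
    body = ((sector_count - head) // spv) * spv   # whole super pages present?
    return "Pointer2" if body > 0 else "Pointer1"

print(route(6, 80))   # Pointer2: data includes a body
print(route(38, 2))   # Pointer1: head only
print(route(32, 8))   # Pointer1: tail only
```

This matches the two cases in the text: the 80-sector request is routed to the second cache memory region 143′, while the 2-sector request is routed to the first cache memory region 141′.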
  • A cache memory device according to at least one exemplary embodiment of the present invention includes a storage region for storing a body and another storage region for storing at least a head or a tail, and may improve performance of writing/reading data between a host and a non-volatile memory device by processing a body or data including a body separately.
  • A cache memory device according to at least one exemplary embodiment of the present invention may improve the performance of writing/reading data between a host and a non-volatile memory device by storing data including a body in a storage region of a memory and transmitting the stored data to a non-volatile memory device.
  • While the present invention has been shown and described with reference to exemplary embodiments thereof, it will be appreciated by those of ordinary skill in the art that changes may be made in these embodiments without departing from the spirit and scope of the present invention.

Claims (16)

1. A data processing method of a cache memory device comprising:
determining a type of data to be received; and
performing at least one of transmitting a head of received data to a first cache memory region, transmitting a body of the received data to a second cache memory region, and transmitting a tail of the received data to the first cache memory region based on the determined data type.
2. The method of claim 1, wherein the determining comprises:
receiving a logical block address value and a sector count value; and
calculating an offset based on the received logical block address value and a super page size value, and determining the type of the data to be received based on a calculated offset and a ratio of the received sector count value to the super page size value,
wherein the performing comprises:
transmitting the head or the tail to the first cache memory region designated by a first pointer or transmitting the body to the second cache memory region designated by a second pointer based on the calculated offset and the determined data type.
3. The method of claim 1, further comprising transmitting the body stored in the second cache memory region to an external non-volatile memory device through a channel.
4. The method of claim 2, wherein the offset is a remainder obtained by dividing the received logical block address value by the super page size value.
5. The method of claim 2, wherein the super page size value is obtained by multiplying a number of channels between the cache memory device and an external non-volatile memory device by a number of sectors, which is storable in a page buffer in the external non-volatile memory device.
6. A data processing method of a cache memory device comprising:
determining a data type of received data that indicates whether the received data includes a body; and
transmitting the received data excluding the body to a first cache memory region or transmitting the received data including the body to a second cache memory region based on the determined data type of the received data.
7. The method of claim 6, wherein determining a data type comprises:
receiving a logical block address value and a sector count value;
calculating an offset based on the received logical block address value and a super page size value; and
generating the data type based on the calculated offset and a ratio of the received sector count value to the super page size value.
8. The method of claim 6, further comprising transmitting the data including the body stored in the second cache memory region to an external non-volatile memory device through a channel.
9. A cache memory device comprising:
a memory including a first cache memory region and a second cache memory region; and
a control block determining a type of data to be received and performing at least one of transmitting a head of the received data to the first cache memory region, transmitting a body of the received data to the second cache memory region, and transmitting a tail of the received data to the first cache memory region based on the type of the data to be received.
10. The device of claim 9, wherein the control block comprises:
an offset calculator calculating an offset based on a logical block address value and a super page size value;
a determination unit determining the type of the data to be received based on the calculated offset and a ratio of a sector count value to the super page size value; and
a controller controlling, based on the offset calculated by the offset calculator and a determination result output from the determination unit, at least one of transmitting the head or the tail to the first cache memory region designated by a first pointer and transmitting the body to the second cache memory region designated by the second pointer.
11. A cache memory device comprising:
a memory including a first cache memory region and a second cache memory region; and
a control block determining whether data to be received includes a body, and based on the determination, transmitting received data excluding the body to the first cache memory region or transmitting the received data including the body to the second cache memory region.
12. The device of claim 11, wherein the control block comprises:
an offset calculator calculating an offset based on a logical block address value and a super page size value;
a determination unit determining if the data to be received includes a body based on the calculated offset and a ratio of a sector count value to the super page size value; and
a controller receiving the data, and transmitting the received data excluding the body to the first cache memory region designated by a first pointer or transmitting the received data including the body to the second cache memory region designated by a second pointer based on a determination result output from the determination unit that indicates whether the data to be received includes the body.
13. A system comprising:
a cache memory device;
a non-volatile memory device; and
a plurality of channels connected between the cache memory device and the non-volatile memory device, wherein the cache memory device comprises:
a memory including a first cache memory region and a second cache memory region; and
a control block determining a type of data to be received, and controlling at least one of transmitting a head of the received data to the first cache memory region, transmitting a body of the received data to the second cache memory region, and transmitting a tail of the received data to the first cache memory region based on a type of the received data,
wherein the control block transmits the body stored in the second cache memory region to the non-volatile memory device through at least one of the plurality of channels.
14. The system of claim 13, wherein the control block comprises:
an offset calculator calculating an offset based on a logical block address value and a super page size value;
a determination unit determining the type of the data to be received based on the calculated offset and a ratio of a sector count value to the super page size value; and
a controller controlling at least one of transmitting the head or the tail to the first cache memory region designated by a first pointer and transmitting the body to the second cache memory region designated by the second pointer based on the offset calculated by the offset calculator and a determination result output from the determination unit that indicates the determined type of the data.
15. A system comprising:
a cache memory device;
a non-volatile memory device; and
a plurality of channels connected between the cache memory device and the non-volatile memory device, wherein the cache memory device comprises:
a memory including a first cache memory region and a second cache memory region; and
a control block determining if data to be received includes a body, and transmitting received data excluding the body to the first cache memory region or transmitting the received data including the body to the second cache memory region based on a result of the determining,
wherein the control block transmits the data including the body stored in the second cache memory region to the non-volatile memory device through at least one of the plurality of channels.
16. The system of claim 15, wherein the control block comprises:
an offset calculator calculating an offset based on a logical block address value and a super page size value;
a determination unit determining if the data to be received includes a body based on the calculated offset and a ratio of a sector count value to the super page size value; and
a controller receiving the data, and transmitting the received data excluding the body to the first cache memory region designated by a first pointer or transmitting the received data including the body to the second cache memory region designated by a second pointer based on a result output from the determination unit.
US12/146,950 2007-07-13 2008-06-26 Cache memory device and data processing method of the device Abandoned US20090019234A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/561,470 US9262079B2 (en) 2007-07-13 2014-12-05 Cache memory device and data processing method of the device
US15/007,584 US10095436B2 (en) 2007-07-13 2016-01-27 Cache memory device and data processing method of the device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2007-0070369 2007-07-13
KR1020070070369A KR101431205B1 (en) 2007-07-13 2007-07-13 Cache memory device and data processing method of the device

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US14/561,470 Continuation US9262079B2 (en) 2007-07-13 2014-12-05 Cache memory device and data processing method of the device

Publications (1)

Publication Number Publication Date
US20090019234A1 true US20090019234A1 (en) 2009-01-15

Family

ID=40254087

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/146,950 Abandoned US20090019234A1 (en) 2007-07-13 2008-06-26 Cache memory device and data processing method of the device
US14/561,470 Active US9262079B2 (en) 2007-07-13 2014-12-05 Cache memory device and data processing method of the device
US15/007,584 Active 2028-07-09 US10095436B2 (en) 2007-07-13 2016-01-27 Cache memory device and data processing method of the device

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/561,470 Active US9262079B2 (en) 2007-07-13 2014-12-05 Cache memory device and data processing method of the device
US15/007,584 Active 2028-07-09 US10095436B2 (en) 2007-07-13 2016-01-27 Cache memory device and data processing method of the device

Country Status (3)

Country Link
US (3) US20090019234A1 (en)
KR (1) KR101431205B1 (en)
TW (1) TWI525430B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101431205B1 (en) 2007-07-13 2014-08-18 삼성전자주식회사 Cache memory device and data processing method of the device
KR101374065B1 (en) * 2012-05-23 2014-03-13 아주대학교산학협력단 Data Distinguish Method and Apparatus Using Algorithm for Chip-Level-Parallel Flash Memory
CN107015978B (en) * 2016-01-27 2020-07-07 阿里巴巴(中国)有限公司 Webpage resource processing method and device
CN107122136B (en) * 2017-04-25 2021-02-02 浙江宇视科技有限公司 Capacity obtaining method and device
CN111176582A (en) * 2019-12-31 2020-05-19 北京百度网讯科技有限公司 Matrix storage method, matrix access method, apparatus and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5651136A (en) * 1995-06-06 1997-07-22 International Business Machines Corporation System and method for increasing cache efficiency through optimized data allocation
US20050152188A1 (en) * 2004-01-09 2005-07-14 Ju Gi S. Page buffer for flash memory device
US7408834B2 (en) * 2004-03-08 2008-08-05 Sandisck Corporation Llp Flash controller cache architecture

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5574944A (en) * 1993-12-15 1996-11-12 Convex Computer Corporation System for accessing distributed memory by breaking each accepted access request into series of instructions by using sets of parameters defined as logical channel context
US5860091A (en) * 1996-06-28 1999-01-12 Symbios, Inc. Method and apparatus for efficient management of non-aligned I/O write request in high bandwidth raid applications
JPH11272551A (en) 1998-03-19 1999-10-08 Hitachi Ltd Cache memory flush control method and cache memory
US6687158B2 (en) 2001-12-21 2004-02-03 Fujitsu Limited Gapless programming for a NAND type flash memory
US6711635B1 (en) * 2002-09-30 2004-03-23 Western Digital Technologies, Inc. Disk drive employing thresholds for cache memory allocation
KR20060089108A (en) 2005-02-03 2006-08-08 엘지전자 주식회사 Cache buffer device using SDRAM
KR100939333B1 (en) 2005-09-29 2010-01-28 한국전자통신연구원 Method and apparatus for dividing and reconstructing data into arbitrary sizes using counters
US7660911B2 (en) * 2006-12-20 2010-02-09 Smart Modular Technologies, Inc. Block-based data striping to flash memory
KR101431205B1 (en) * 2007-07-13 2014-08-18 삼성전자주식회사 Cache memory device and data processing method of the device
US8924631B2 (en) * 2011-09-15 2014-12-30 Sandisk Technologies Inc. Method and system for random write unalignment handling

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110276757A1 (en) * 2009-02-03 2011-11-10 Hitachi, Ltd. Storage control device, and control method for cache memory
US20130007381A1 (en) * 2011-07-01 2013-01-03 Micron Technology, Inc. Unaligned data coalescing
US9898402B2 (en) * 2011-07-01 2018-02-20 Micron Technology, Inc. Unaligned data coalescing
US10191843B2 (en) 2011-07-01 2019-01-29 Micron Technology, Inc. Unaligned data coalescing
US10853238B2 (en) 2011-07-01 2020-12-01 Micron Technology, Inc. Unaligned data coalescing
CN104182701A (en) * 2014-08-15 2014-12-03 华为技术有限公司 Array control unit, array and data processing method
CN105589919A (en) * 2015-09-18 2016-05-18 广州市动景计算机科技有限公司 Method and device for processing webpage resource
US20170102880A1 (en) * 2015-10-13 2017-04-13 Axell Corporation Information Processing Apparatus And Method Of Processing Information
CN106649138A (en) * 2015-10-13 2017-05-10 株式会社艾库塞尔 Information processing apparatus and method of processing information
US10802712B2 (en) * 2015-10-13 2020-10-13 Axell Corporation Information processing apparatus and method of processing information
EP3610380A4 (en) * 2017-04-11 2021-01-06 Micron Technology, Inc. Memory protocol with programmable buffer and cache size

Also Published As

Publication number Publication date
KR20090006920A (en) 2009-01-16
US9262079B2 (en) 2016-02-16
TW200903250A (en) 2009-01-16
TWI525430B (en) 2016-03-11
US20150081962A1 (en) 2015-03-19
KR101431205B1 (en) 2014-08-18
US10095436B2 (en) 2018-10-09
US20160139814A1 (en) 2016-05-19

Similar Documents

Publication Publication Date Title
US9262079B2 (en) Cache memory device and data processing method of the device
US7076598B2 (en) Pipeline accessing method to a large block memory
US8874826B2 (en) Programming method and device for a buffer cache in a solid-state disk system
US8473811B2 (en) Multi-chip memory system and related data transfer method
TWI473116B (en) Multi-channel memory storage device and control method thereof
US8738842B2 (en) Solid state disk controller and data processing method thereof
US11126369B1 (en) Data storage with improved suspend resume performance
US10503433B2 (en) Memory management method, memory control circuit unit and memory storage device
US11658685B2 (en) Memory with multi-mode ECC engine
CN113468082B (en) Advanced CE Coding for Bus Multiplexer Grids for SSDs
US11775222B2 (en) Adaptive context metadata message for optimized two-chip performance
US20190236020A1 (en) Memory system and operating method thereof
US20190278704A1 (en) Memory system, operating method thereof and electronic apparatus
US8954662B2 (en) SSD controller, and method for operating an SSD controller
US11586379B2 (en) Memory system and method of operating the same
US20250053319A1 (en) Storage device for storing write data or reading read data and electronic device including the same
US20150254011A1 (en) Memory system, memory controller and control method of non-volatile memory
US12487782B2 (en) Raid controller, operating method of raid controller and storage device
US12235800B2 (en) Defrag levels to reduce data loss
US12153514B2 (en) Storage device, electronic device including the same, and operating method thereof
CN110010170A (en) The operating method and its storage system of storage device

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, DEMOCRATIC P

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IM, KWANG SEOK;KIM, HYE YOUNG;REEL/FRAME:021156/0062

Effective date: 20080609

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION