WO2007079358A2 - Method and system for accessing non-volatile storage devices - Google Patents
- Publication number
- WO2007079358A2 (PCT/US2006/062340)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- file
- interface
- data
- logical
- memory
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/1847—File system types specifically adapted to static storage, e.g. adapted to flash memory or SSD
Definitions
- the present invention relates generally to the operation of re-programmable non-volatile memory systems such as semiconductor flash memory, and more particularly, to accessing the flash memory device via plural interfaces.
- Conventional computer systems typically include several functional components. These components may include a central processing unit (CPU), main memory, input/output ("I/O") devices, and disk drives.
- main memory is coupled to the CPU via a system bus or a local memory bus.
- the main memory is used to provide the CPU access to data and program information that is stored in main memory at execution time.
- the main memory is composed of random access memory (RAM) circuits.
- a host system interfaces with flash mass storage devices (also referred to as “flash device”, “flash” or “flash card” interchangeably throughout this specification) via an interface.
- the memory cells in each such group are the minimum number of memory cells that are erasable together.
- Flash memory systems are most commonly provided in the form of a memory card or flash drive that is removably connected with a variety of hosts such as a personal computer, a camera or the like, but may also be embedded within such host systems.
- a host system maintains a file directory and allocates file data to logical clusters.
- a host system that uses a logical interface for reading/writing data from/to a flash memory device may be referred to as a legacy host system.
- the term host system in this context includes legacy flash memory card readers and digital cameras and the like.
- a host maintains a file system and allocates file data to logical clusters, where the cluster size is typically fixed.
- a flash device is divided into plural logical sectors and the host allocates space within clusters, each comprising a plurality of logical sectors.
- a cluster is a subdivision of logical addresses and a cluster map is designated as a file allocation table ("FAT") .
- the FAT is normally stored on the storage device itself.
- the host when writing data to the memory, the host typically assigns unique logical addresses to sectors, clusters or other units of data within a continuous virtual address space of the memory system.
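To make the cluster bookkeeping concrete, the following minimal C sketch shows how a host file system might convert a cluster number to the logical block address of its first sector. The cluster size, the data-area start and all identifiers here are illustrative assumptions, not values taken from this specification.

```c
#include <stdint.h>

#define SECTOR_SIZE         512  /* bytes per sector (typical)  */
#define SECTORS_PER_CLUSTER 8    /* assumed fixed cluster size  */

/* FAT-style volumes number data clusters starting at 2; the data
 * area begins at data_start_lba. Returns the LBA of the cluster's
 * first sector. */
static uint32_t cluster_to_lba(uint32_t cluster, uint32_t data_start_lba)
{
    return data_start_lba + (cluster - 2u) * SECTORS_PER_CLUSTER;
}
```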
- the host writes data to, and reads data from, addresses within the logical address space of the memory system.
- a controller within the memory system translates logical addresses received from the host into physical addresses within the memory array, where the data are actually stored, and then keeps track of these address translations.
- the data storage capacity of the memory system is at least as large as the amount of data that is addressable over the entire logical address space defined for the memory system.
- Other file storage systems (or formats) are being developed so that a host does not have to perform the file to logical address mapping. However, these new file systems may still have to be used with legacy host systems for reading/writing data. [0016] Therefore, there is a need for a method and system that allows a flash device to be accessed via a conventional logical interface or via these new formats where a host does not perform the file to logical mapping.
- a mass storage memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive data via a first interface and a second interface, and data received via the first interface and the second interface is accessible via the first interface and the second interface even if a file name for the data is not provided by a host system or before a write operation is complete.
- the first interface is a file based interface and the second interface is a logical interface.
- a mass storage memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive files of data individually via a first interface, identified by unique identifiers, and received file data is stored in one or more memory blocks and indexed based on the unique identifiers; wherein the controller assigns a plurality of logical block addresses to the received file data and updates file allocation table ("FAT") entries that are stored in blocks of memory cells such that file data received via the first interface is accessible via a second interface.
- a mass storage memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive data identified by a plurality of logical addresses via a first interface, which causes the data to be stored in one or more memory cells as a file and is accessible via a second interface even if a file name for the data is not provided by a host system.
- the first interface is a logical interface and the second interface is a file based interface.
- a mass storage memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive data identified by a plurality of logical addresses via a first interface, which causes the data to be stored in one or more memory cells as a file and is accessible via a second interface even if a file name for the data is not provided by a host system, wherein the controller assigns internal file names to the data and merges the internal file names to a single file name based on a file name after a file name is provided by the host system that sends the data via the first interface.
- the first interface is a logical interface and the second interface is a file based interface.
- a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells is provided. The method comprises receiving individual files of data identified by unique file identifiers, wherein the mass storage system receives the individual files of data via a first interface and stores the received files of data indexed by the unique file identifiers; allocating a plurality of logical block addresses to a received file data; and updating file allocation table ("FAT") entries in the plurality of memory cells, so that the received file data can be accessible via a second interface.
- a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells comprises receiving data identified by a plurality of logical addresses from the host system via a first interface, wherein the mass storage system receives the data; and identifying the data with file identifiers, so that the data can be accessible via a second interface, even if a file name for the data is not provided by the host system.
- a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells comprises receiving data identified by a plurality of logical addresses from the host system via a first interface, wherein the mass storage system receives the data; identifying the data with file identifiers, so that the data can be accessible via a second interface even if a file name is not provided by the host system; storing data as internal files having unique file names; and merging the internal files with unique file names into a single file after a host file name for the data is received.
- a method for transferring data between a host system and a re-programmable nonvolatile mass storage system having memory cells organized into blocks of memory cells comprises receiving data via a first interface and a second interface; and making data accessible via the first interface and the second interface, even if a file name is not provided by a host system or before a write operation is complete.
- Figure 1A shows a block diagram of a host system using a flash device
- Figure 1B shows a block diagram of a flash device controller, used according to one aspect of the present invention
- Figure 1C shows an example of physical memory organization for a flash memory system
- Figure 1D shows an expanded view of a portion of the physical memory of Figure 1C
- Figure 1E shows a further expanded view of a portion of the physical memory of Figure 1D
- Figure 1F shows a conventional logical address interface between a host and a re-programmable memory system
- Figure 1G shows a direct data file storage interface between a host and a re-programmable memory system, according to one aspect of the present invention
- Figure 1L shows, in a different manner than Figure 1G, a direct data file storage interface between a host and a re-programmable memory system, according to one aspect of the present invention
- Figure 1M shows a functional hierarchy of an example memory system
- Figure 2 shows a top-level logical block diagram of a system used by a flash device, according to one aspect of the present invention
- Figure 3A shows a block diagram of a flash memory device that is accessible via a file interface and a logical interface, according to one aspect of the present invention
- Figure 3B shows a data flow/address indexing scheme, according to one aspect of the present invention
- Figure 3C shows a top-level block diagram of a mass storage device, according to one aspect of the present invention
- Figure 3D shows a table with data accessibility rules for the mass storage device, according to one aspect of the present invention
- Figure 4A shows a DOS index table, according to one aspect of the present invention
- Figure 4B shows how a logical to physical table is populated, according to one aspect of the present invention.
- Figure 4C shows an example of a logical to physical table, according to one aspect of the present invention.
- Figure 4D illustrates how a logical to file table is populated, according to one aspect of the present invention
- Figure 4E shows an example of a logical to file table, according to one aspect of the present invention; [0047] Figure 4F illustrates how records of updated FAT entries are maintained, according to one aspect of the present invention
- Figure 5 shows an overall flow diagram for the mass storage device, according to one aspect of the present invention.
- Figure 6 shows a flow diagram for a logical write process, according to one aspect of the present invention.
- Figure 7 shows a flow diagram for the convert to file process, according to one aspect of the present invention
- Figure 8 shows a flow diagram for a convert to logical process, according to one aspect of the present invention
- Figures 9A and 9B show block diagrams of a file access system, according to yet another aspect of the present invention.
- Figure 10A shows an example of a table used by the system of Figures 9A and 9B, according to one aspect of the present invention
- Figure 10B shows an example of a file write process using internal file names, according to one aspect of the present invention
- Figures 11 and 12 show process flow diagrams for a write process using the system of Figures 9A and 9B, according to one aspect of the present invention.
- Figure 1A shows a block diagram of a typical host system 100 that includes a central processing unit ("CPU") 101 and random access main memory ("RAM") 103.
- CPU 101 stores process steps in RAM 103 and executes the stored process steps out of RAM 103.
- Read only memory ("ROM") 102 is provided to store invariant instruction sequences such as start-up instruction sequences or basic input/output operating system ("BIOS") sequences.
- Flash device (or card) 105 also provides non-volatile memory for CPU 101. Flash device 105 includes a controller module 106 (which may also be referred to as "memory system controller") and solid state memory modules 107-108 (shown as Memory Module #1 and Memory Module #N). Controller module 106 interfaces with host system 100 via a bus interface 104 or directly via system bus 101A or another peripheral bus (not shown).
- There are currently many different flash memory cards that are commercially available, examples being the CompactFlash (CF), the MultiMediaCard (MMC), Secure Digital (SD), miniSD, Memory Stick, SmartMedia and TransFlash cards. Although each of these cards has a unique mechanical and/or electrical interface according to its standardized specifications, the flash memory included in each is very similar. These cards are all available from SanDisk Corporation, assignee of the present application. SanDisk also provides a line of flash drives under its Cruzer trademark, which are hand held memory systems in small packages that have a Universal Serial Bus (USB) plug for connecting with a host by plugging into the host's USB receptacle. Each of these memory cards and flash drives includes controllers that interface with the host and control operation of the flash memory within them. [0063] Host systems that use such memory cards and flash drives are many and varied.
- a NAND architecture of the memory cell arrays 107-108 is currently preferred, although other architectures, such as NOR, can also be used instead. Examples of NAND flash memories and their operation as part of a memory system may be had by reference to United States patents nos.
- Figure 1B shows a block diagram of the internal architecture of controller module 106.
- Controller module 106 includes a microcontroller 109 that interfaces with various other components via interface logic 111.
- Memory 110 stores firmware and software instructions that are used by microcontroller 109 to control the operation of flash device 105.
- Memory 110 may be volatile re-programmable random access memory (“RAM”), a non-volatile memory that is not reprogrammable (“ROM”), a one-time programmable memory or a re-programmable flash electrically-erasable and programmable read-only memory (“EEPROM”) .
- FIG. 1C conceptually illustrates an organization of the flash memory cell array (107-108) that is used as an example in further descriptions below.
- Four planes or sub-arrays 131 - 134 of memory cells may be on a single integrated memory cell chip, on two chips (two of the planes on each chip) or on four separate chips. The specific arrangement is not important to the discussion below. Of course, other numbers of planes, such as 1, 2, 8, 16 or more may exist in a system.
- the planes are individually divided into blocks of memory cells shown in Figure 1C by rectangles, such as blocks 137, 138, 139 and 140, located in respective planes 131 - 134. There can be dozens or hundreds of blocks in each plane.
- a block of memory cells is the unit of erase, the smallest number of memory cells that are physically erasable together. For increased parallelism, however, the blocks are operated in larger metablock units.
- One block from each plane is logically linked together to form a metablock.
- the four blocks 137 - 140 are shown to form one metablock 141. All of the cells within a metablock are typically erased together.
- the blocks used to form a metablock need not be restricted to the same relative locations within their respective planes, as is shown in a second metablock 143 made up of blocks 145 - 148.
- the memory system can be operated with the ability to dynamically form metablocks of any or all of one, two or three blocks in different planes. This allows the size of the metablock to be more closely matched with the amount of data available for storage in one programming operation.
- the individual blocks are in turn divided for operational purposes into pages of memory cells, as illustrated in Figure ID.
- the memory cells of each of the blocks 131 - 134 are each divided into eight pages PO - P7. Alternatively, there may be 16, 32 or more pages of memory cells within each block.
- the page is the unit of data programming and reading within a block, containing the minimum amount of data that are programmed at one time.
- a page is formed of memory cells along a word line within a block.
- such pages within two or more blocks may be logically linked into metapages.
- a metapage 151 is illustrated in Figure ID, being formed of one physical page from each of the four blocks 131 - 134.
- the metapage 151, for example, includes the page P2 of each of the four blocks but the pages of a metapage need not necessarily have the same relative position within each of the blocks.
- the memory system can also be operated to form metapages of any or all of one, two or three pages in separate blocks in different planes. This allows the programming and reading operations to adaptively match the amount of data that may be conveniently handled in parallel and reduces the occasions when part of a metapage remains unprogrammed with data.
- a metapage formed of physical pages of multiple planes contains memory cells along word line rows of those multiple planes. Rather than programming all of the cells in one word line row at the same time, they are more commonly alternately programmed in two or more interleaved groups, each group storing a page of data (in a single block) or a metapage of data (across multiple blocks).
- a unit of peripheral circuits including data registers and a sense amplifier need not be provided for each bit line but rather are time-shared between adjacent bit lines. This economizes on the amount of substrate space required for the peripheral circuits and allows the memory cells to be packed with an increased density along the rows. Otherwise, it is preferable to simultaneously program every cell along a row in order to maximize the parallelism available from a given memory system.
- Figure 1E shows a logical data page of two sectors 153 and 155 of data of a page or metapage.
- Each sector usually contains a portion 157 of 512 bytes of user or system data being stored and another number of bytes 159 for overhead data related either to the data in the portion 157 or to the physical page or block in which it is stored.
- the number of bytes of overhead data is typically 16 bytes, making the total 528 bytes for each of the sectors 153 and 155.
- the overhead portion 159 may contain an ECC calculated from the data portion 157 during programming, its logical address, an experience count of the number of times the block has been erased and re-programmed, one or more control flags, operating voltage levels, and the like, plus an ECC calculated from such overhead data 159.
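As a rough illustration of the 528-byte sector just described, the struct below packs 512 data bytes with 16 overhead bytes. The exact split of the overhead area among ECC, address, experience count and flags is an assumption made for illustration only, not the layout disclosed here.

```c
#include <stdint.h>

/* One stored sector: 512 bytes of user/system data (portion 157)
 * plus 16 bytes of overhead (portion 159), 528 bytes in total. */
typedef struct {
    uint8_t  data[512];      /* user or system data               */
    uint8_t  data_ecc[8];    /* ECC calculated from the data      */
    uint32_t logical_addr;   /* logical address of the sector     */
    uint16_t erase_count;    /* block experience count            */
    uint8_t  flags;          /* control flags, voltage level bits */
    uint8_t  overhead_ecc;   /* ECC calculated from the overhead  */
} stored_sector_t;           /* 512 + 16 = 528 bytes              */
```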
- the overhead data 159, or a portion of it may be stored in different pages in other blocks.
- Figure 1F illustrates the most common interface between a host and a mass memory system (for example, a flash device).
- the host deals with data files generated or used by application software or firmware programs executed by the host.
- a word processing data file is an example, and a drawing file of computer aided design (CAD) software is another, found mainly in general computer hosts such as PCs, laptop computers and the like.
- a document in the pdf format is also such a file.
- a still digital video camera generates a data file for each picture that is stored on a memory card.
- a cellular telephone utilizes data from files on an internal memory card, such as a telephone directory.
- a PDA stores and uses several different files, such as an address file, a calendar file, and the like. In any such application, the memory card may also contain software that operates the host.
- a common logical interface between the host and the memory system is illustrated in Figure 1F.
- a continuous logical address space 161 is large enough to provide addresses for all the data that may be stored in the memory system.
- the host address space is typically divided into increments of clusters of data. Each cluster may be designed in a given host system to contain a number of sectors of data, somewhere between 4 and 64 sectors being typical .
- a standard sector contains 512 bytes of data.
- Files 1, 2 and 3 are shown in the example of Figure 1F to have been created.
- An application program running on the host system creates each file as an ordered set of data and identifies it by a unique name or other reference. Enough available logical address space not already allocated to other files is assigned by the host to File 1.
- File 1 is shown to have been assigned a contiguous range of available logical addresses. Ranges of addresses are also commonly allocated for specific purposes, such as a particular range for the host operating software, which are then avoided for storing data even if these addresses have not been utilized at the time the host is assigning logical addresses to the data.
- the host keeps track of the memory logical address space by maintaining a file allocation table (FAT), where the logical addresses the host assigns to the various host files are maintained.
- The FAT table is typically stored in the non-volatile memory, as well as in a host memory, and is frequently updated by the host as new files are stored, other files deleted, files modified and the like.
- the host de-allocates the logical addresses previously allocated to the deleted file by updating the FAT table to show that they are now available for use with other data files.
- the host is not concerned about the physical locations where the memory system controller chooses to store the files.
- the typical host only knows its logical address space and the logical addresses that it has allocated to its various files.
- the memory system, through a typical host/card interface, only knows the portions of the logical address space to which data have been written but does not know the logical addresses allocated to specific host files, or even the number of host files.
- the memory system controller 106 converts the logical addresses provided by the host for the storage or retrieval of data into unique physical addresses within the flash memory cell array where host data are stored.
- a block 163 represents a working table of these logical-to-physical address conversions, which is maintained by the memory system controller 106.
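A minimal sketch of such a logical-to-physical conversion table (163) follows; the entry layout, table size and all names are assumptions for illustration, not the structure disclosed in this specification.

```c
#include <stdint.h>

typedef struct {
    uint32_t phys_block;   /* metablock currently holding the sector */
    uint16_t phys_page;    /* page offset within that metablock      */
} l2p_entry_t;

#define NUM_LOGICAL_SECTORS (1u << 20)  /* assumed device capacity */

static l2p_entry_t l2p_table[NUM_LOGICAL_SECTORS];

/* The controller consults this table on reads and rewrites the
 * entry whenever the data for a logical sector moves. */
static l2p_entry_t *l2p_lookup(uint32_t lba)
{
    return &l2p_table[lba % NUM_LOGICAL_SECTORS];
}
```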
- the memory system controller 106 is programmed to store data files within the blocks and metablocks of a memory array 165 in a manner to maintain the performance of the system at a high level.
- Four planes or sub-arrays are used in this illustration. Data are preferably programmed and read with the maximum degree of parallelism that the system allows, across an entire metablock formed of a block from each of the planes.
- At least one metablock 167 is usually allocated as a reserved block for storing operating firmware and data used by the memory controller.
- Another metablock 169, or multiple metablocks, may be allocated for storage of host operating software, the host FAT table and the like. Most of the physical storage space remains for the storage of data files.
- the memory system controller 106 does not know, however, how the data received has been allocated by the host among its various file objects. All the memory controller 106 typically knows from interacting with the host is that data written by the host to specific logical addresses are stored in corresponding physical addresses as maintained by the controller's logical-to-physical address table 163.
- the memory controller 106 typically learns that data at a given logical address has been rendered obsolete by the host only when the host writes new data to their same logical address. Many blocks of the memory can therefore be storing such invalid data for a time. [0089]
- the sizes of blocks and metablocks are increasing in order to efficiently use the area of the integrated circuit memory chip. This results in a large proportion of individual data writes storing an amount of data that is less than the storage capacity of a metablock, and in many cases even less than that of a block. Since the memory system controller 106 normally directs new data to an erased pool metablock, this can result in portions of metablocks going unfilled.
- the new data are updates of some data stored in another metablock
- remaining valid metapages of data from that other metablock having logical addresses contiguous with those of the new data metapages are also desirably copied in logical address order into the new metablock.
- the old metablock may retain other valid data metapages. This results over time in data of certain metapages of an individual metablock being rendered obsolete and invalid, and replaced by new data with the same logical address being written to a different metablock.
- Direct Data File Storage ("DFS"): [0092]
- a direct data file storage ("DFS") methodology/system is disclosed in co-pending patent application serial number 11/060,249, filed on February 16, 2005, Attorney Docket Number SDK0380.US0, entitled "Direct Data File Storage in Flash Memories", and also in the other Direct Data File Storage Applications referenced above.
- In a DFS device, data is accessed by host system 100 on a file-by-file basis as described in the aforementioned patent application; that is, data is identified by a file identifier and an offset address within the file. No logical address space is defined for the device. Host system 100 does not allocate file data to logical clusters, and directory/index table information for files is generated by flash device 105. [0094] The host addresses each file by a unique file ID (or other unique reference) and offset addresses of units of data (such as bytes) within the file. This file address is given directly to the memory system controller 106, which then keeps its own table of where the data of each host file are physically stored.
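The contrast between the two addressing models can be sketched as the shape of the access calls. The names and signatures below are invented for illustration; they are not the interface defined by the referenced application.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t file_id_t;  /* unique file ID or file_handle */

/* DFS-style access: data is addressed by (file, offset),
 * never by a logical sector address. */
int dfs_write(file_id_t file, uint64_t offset, const void *buf, size_t len);
int dfs_read (file_id_t file, uint64_t offset, void *buf, size_t len);

/* A legacy logical interface, for comparison, addresses sectors. */
int logical_sector_write(uint32_t lba, uint32_t count, const void *buf);
int logical_sector_read (uint32_t lba, uint32_t count, void *buf);
```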
- This file-based interface is illustrated in Figure 1G, which should be compared with the logical address interface of Figure 1F.
- An identification of each of the Files 1, 2 and 3 and offsets of data within the files of Figure 1G are passed directly to the memory controller 106.
- This file address information is then translated by a memory controller function 173 into physical addresses of metablocks and metapages of the memory 165.
- The file-based interface is also illustrated by Figure 1L, which should be compared with the logical address interface of Figure 1H.
- the logical address space and host maintained FAT table of Figure 1H are not present in Figure 1L. Rather, data files generated by the host are identified to the memory system by file number and offsets of data within the file. The memory system then directly maps the files to the physical blocks of the memory cell array.
- the "Direct File Storage Back End System” communicates through a "Direct-File Interface” and a "File-Based Front-End System” with a host system over a file-based interface channel,
- Each host file is uniquely identified, such as by a file name.
- Data within a file are identified by an offset address within a linear address space that is unique to the file.
- While DFS devices will be used advantageously by host systems, legacy host systems will need to use a logical interface to read and write data files. Therefore, it is advantageous to have a direct file storage device accessible for read and write operations via dual interfaces, namely, a file interface and a conventional logical interface.
- Direct Data File Access [0101] Details of direct data file access, i.e., when flash device 105 operates as a direct data file storage device, are described in the aforementioned co-pending patent application.
- FIG. 2 shows a block diagram of an indexing scheme of a direct data file storage system used according to one aspect of the present invention.
- Host 100 provides a path, filename and offset 203A (<fileId> parameter) to flash device 105 via file interface (shown as 300 in Figure 3A).
- the path points to a file directory 203 that stores the directory information, for example, Directory A and B.
- the <fileID> parameter can be either a full pathname for the file, or some shorthand identifier for the file, and may be referenced as a file_handle.
- a file pathname is provided to the direct data file interface of Figure 1M in association with certain commands.
- the file pathname syntax may conform to the standard used by the DOS file system.
- the pathname describes a hierarchy of directories and a file within the lowest level of directory. Path segments may be delimited by "\". A path prefixed by "\" is relative to the root directory. A path not prefixed by "\" is relative to the current directory. A segment of ".." indicates the parent directory of the current directory.
- File directory 203 records file attribute information and a pointer to a first entry in a file index table 204 defining data groups for a file.
- File directory 203 and the file index table (may also be referred to as "FIT") 204 are generated by flash device 105.
- File index table 204 contains an entry for each valid data group within a file with contiguous file offset addresses. Entries for a data group include a file offset address and a physical address.
- Every file in a directory points to an entry in FIT 204 (for example, 203B points to 204D).
- FIT 204 includes an entry for every data group and each entry (for example, 204D) includes an offset value 204A, block value 204B and byte value 204C.
- the offset value 204A shows the offset address within the file corresponding to the start of a data group (for example, 205A).
- the block value 204B provides the actual physical address of the data block and the byte value 204C points to a byte where the data group begins in flash block 205B.
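As an illustrative sketch (field types and the helper are assumptions), a FIT entry mirroring the offset/block/byte fields above, and a lookup over a file's data-group entries sorted by file offset, could look like this:

```c
#include <stdint.h>

typedef struct {
    uint32_t file_offset;    /* offset value 204A: start of data group  */
    uint32_t phys_block;     /* block value 204B: physical block address */
    uint32_t byte_in_block;  /* byte value 204C: where the group begins  */
} fit_entry_t;

/* Given a file's entries sorted by file_offset, return the data
 * group containing the requested offset. */
static const fit_entry_t *fit_resolve(const fit_entry_t *e, int n,
                                      uint32_t offset)
{
    for (int i = n - 1; i >= 0; i--)
        if (e[i].file_offset <= offset)
            return &e[i];
    return 0;  /* offset lies before the first data group */
}
```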
- Dual Access Mode [0109]
- a mass storage system is provided that is accessible via a file interface and a logical interface for both read and write operations, i.e. as a DFS device and a logical device.
- Figure 3A shows a top-level functional block diagram of flash device 105 where it can be used both as a DFS device and a logical device.
- Figure 3B shows how data and indexing is handled by device 105 operating as a direct data file storage device or logical device.
- Figure 3C shows a top-level block diagram for device 105 used both as a direct data file storage device and logical storage device.
- Figure 3D shows a table of various data accessibility rules for device 105 when it is used as a DFS device and as a logical storage device.
- file storage manager 301 interfaces with file directory 203 and FIT 204 maintained in flash memory 107-108, as described above.
- File interface 300 and file storage manager 301 include the plural modules of a direct data file storage device as shown in Figures 1G, 1L and 1M described above and described in more detail in the aforementioned co-pending patent application.
- Data received via file interface 300 is mapped to physical memory (shown as 315, Figure 3B) .
- the data is organized and stored in the order the data is received.
- File data exists in flash memory 107/108 as variable length data groups (shown as 304, Figure 3B) , where a data group includes contiguous file offset addresses.
- FIT 204 indexes the location of the data groups as described above and shown as 315A.
- Logical interface 302 and a logical store manager module (“LSM”) 303 facilitate access to device 105 via a logical path 302A.
- Logical interface 302 interfaces with a host system to receive host commands/data.
- LSM 303 interfaces with file directory (shown as FDIR) 203, FIT 204, a logical to physical mapping table ("LPT") 308, a logical to file table ("LFT") 309, a DOS index table ("DOSIT") 310 and file storage manager 301, as described below with respect to Figure 3B.
- File data/logical data 304, DOS sectors 305, FDIR 203, FIT 204, LPT 308, LFT 309 and DOSIT 310 are information structures stored in memory cells 107/108. [0117] Referring to Figure 3B, for data via logical interface 302, the host system provides a logical block address with a sector count. The logical data when received by device 105 may not be associated with any particular file. Device 105 receives the logical data and maps that information to an actual physical memory location, shown as 313. The data is stored in memory cells 107/108 (shown as 304).
- LPT 308 indexes data that is written via logical interface 302 and is not associated with a file, at a given time.
- Figure 4B shows an example of how LPT 308 is built.
- LPT 308 includes an entry for plural LBA runs, shown as 1-4. Contiguous data received for each LBA run (1-4) is stored in flash blocks 1 and 2 in flash memory 107/108. Every logical run is handled in the order it is received.
- Figure 4C shows an example of LPT 308 format/fields that are used to index the data received from the host system via logical interface 302.
- LPT 308 maintains an index of the logical runs as associated with the logical block address for the first sector of the logical run, the length of the logical run, the physical block address and the physical sector address where the data is stored by flash device 105. LPT 308 identifies the physical locations for logical data runs with contiguous addresses.
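A minimal sketch of an LPT 308 entry with the four fields just listed, plus a lookup that finds the run containing a given LBA, follows; types and names are assumptions for illustration.

```c
#include <stdint.h>

typedef struct {
    uint32_t start_lba;    /* LBA of the first sector of the run   */
    uint32_t run_length;   /* number of sectors in the logical run */
    uint32_t phys_block;   /* physical block address               */
    uint32_t phys_sector;  /* physical sector address within block */
} lpt_entry_t;

/* Return the run containing the given LBA, or 0 if the LBA is not
 * indexed by LPT 308 (it may instead be indexed by LFT 309). */
static const lpt_entry_t *lpt_find(const lpt_entry_t *t, int n,
                                   uint32_t lba)
{
    for (int i = 0; i < n; i++)
        if (lba >= t[i].start_lba &&
            lba <  t[i].start_lba + t[i].run_length)
            return &t[i];
    return 0;
}
```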
- a logical block address is intended to identify a sector, while a logical block includes more than one sector and each sector is identified by a logical block address.
- When the logical address for data received from the host is lower than a corresponding end of a root directory, the data is designated as a directory sector or a FAT sector. This data is then stored by device 105 in a dedicated block 305.
- DOSIT 310 maintains an index for the stored DOS and FAT sectors.
- Figure 4A shows an example of DOSIT 310 that maintains information for every logical run received from the host system. DOSIT 310 includes the length of each logical run with the associated LBA, the physical block address and the physical sector address where the FAT sector is stored. [0123] If the logical address is higher than the end of the root directory, then data is written as a logical update block that is mapped to an equivalent block of logical addresses.
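The routing rule described above reduces to a single address comparison; this sketch uses assumed names for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sectors addressed below the end of the root directory are DOS
 * (directory or FAT) sectors: they go to dedicated block 305 and
 * are indexed by DOSIT 310. Higher addresses are ordinary logical
 * data written to a logical update block. */
static bool is_dos_sector(uint32_t lba, uint32_t root_dir_end_lba)
{
    return lba < root_dir_end_lba;
}
```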
- Logical data runs may be updated sequentially or chaotically and LPT 308 can handle both situations, as described below; hence separate indexing mechanisms are not needed. This is possible because an LPT 308 entry has the same format as a FIT 204 entry, except that the LPT 308 entry relates to a logical block address rather than a file offset address. Entries in LPT 308 define logical runs where the logical block addresses for plural sectors are sequential. However, multiple LPT 308 entries may be used to define a logical block. Address runs in a logical block may be out of order (i.e. chaotic) and LPT 308 can index them.
- Logical update blocks may exist concurrently and hence may need garbage collection operations.
- During garbage collection, data groups are copied from other flash blocks to complete a block. If the copied data group is indexed by FIT 204, then the FIT 204 entry is modified to reflect the new location. If a copied data group is indexed by LPT 308, then the copy operation itself is a part of a logical block consolidation.
- Logical to File table (“LFT") 309 maps a LBA run to a file indexed by FIT 204.
- Figure 4D illustrates how individual logical runs populate LFT 309.
- Figure 4E shows an example of LFT 309 layout and the entries used to associate each logical run with a file offset and a file identifier value (for example, a file handle). For each LBA run in the logical address space, LFT 309 identifies a file identifier and a file offset address. [0129] LFT 309 also allows a host system to access logical data via file interface 300. LFT 309 indexes logical data during the convert to logical operation, described below with respect to Figure 8.
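An LFT 309 entry carrying the associations just described can be sketched as the struct below; the field widths are illustrative assumptions.

```c
#include <stdint.h>

typedef struct {
    uint32_t start_lba;    /* first LBA of the logical run          */
    uint32_t run_length;   /* length of the run in sectors          */
    uint32_t file_id;      /* file identifier (e.g. a file handle)  */
    uint32_t file_offset;  /* offset of the run's data in the file  */
} lft_entry_t;
```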
- FIG. 5 shows an overall flow diagram for flash device 105. The process starts in step S500 and in step S502, flash device 105 is initialized, which includes initializing the memory controller 106, and executing boot code so that firmware is loaded into memory 110.
- In step S504, memory system 105 looks for a command from the host system.
- In step S506, memory controller 106 determines if the command is related to the file interface 300. If the command is related to the file interface 300, then in step S508, memory controller 106 interprets the command and, in step S510, executes direct data file storage functions.
- the aforementioned co-pending application provides a list of various commands that may be related to direct data file storage functions, including Read, Write, Insert, Update, Remove, Delete and Erase commands.
- In step S512, memory controller 106 interprets the command as a logical interface command received via logical interface 302.
- In step S514, memory controller 106 determines if the command is for a logical write operation. If the command is for a logical write operation, then in step S516, the logical write operation (described below with respect to Figure 6) is executed.
- In step S518, controller 106 determines if the pending command relates to a logical data read operation.
- In step S520, the data read operation is executed. Details of the read operation are provided in the patent application filed herewith, Serial Number 11/196,168, Filed on August 3, 2005, and Attorney Docket Number SDK621.00US, entitled "Method And System For Dual Mode Access For Storage Devices".
- In step S522, memory controller 106 determines if the command is related to any other function. Examples of other logical interface functions include reading device parameters ("Identify Drive" command), changing device state ("Idle" and "Standby" commands) and others.
- In step S524, memory controller 106 determines if the host interfaces, i.e., the logical and file interfaces, are idle. If they are idle, then in step S526, garbage collection is performed. Garbage collection may also be performed if an Idle command is received at step S504. If the host interfaces are not idle, then the process returns to step S504 and the memory system again looks for a pending host command.
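Condensed as a sketch, the dispatch of steps S504-S526 is a polling loop. Every type and function name below is an illustrative stand-in, not the patent's firmware.

```c
#include <stdbool.h>

typedef enum { OP_FILE, OP_LOGICAL_WRITE, OP_LOGICAL_READ, OP_OTHER } op_t;
typedef struct { op_t op; } cmd_t;

bool get_host_command(cmd_t *c);       /* S504: poll for a host command  */
void file_function(const cmd_t *c);    /* S508-S510: direct data file op */
void logical_write_op(const cmd_t *c); /* S516: Figure 6                 */
void logical_read_op(const cmd_t *c);  /* S520: logical read             */
void other_function(const cmd_t *c);   /* S522: e.g. "Identify Drive"    */
bool interfaces_idle(void);
void garbage_collect(void);            /* S526 */

void controller_main_loop(void)
{
    for (;;) {
        cmd_t cmd;
        if (get_host_command(&cmd)) {
            switch (cmd.op) {
            case OP_FILE:          file_function(&cmd);    break;
            case OP_LOGICAL_WRITE: logical_write_op(&cmd); break;
            case OP_LOGICAL_READ:  logical_read_op(&cmd);  break;
            default:               other_function(&cmd);   break;
            }
        } else if (interfaces_idle()) {
            garbage_collect();  /* also triggered by an Idle command */
        }
    }
}
```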
- Write Operation [0141] Figure 6 shows a flow diagram of process steps for a logical data write operation (S516, Figure 5) in flash device 105 that also functions as a direct data file storage device, in one aspect of the present invention.
- The process starts in step S600. In step S602, controller 106 determines if logical data has been received via logical interface 302. If logical data has not been received, then in step S616, controller 106 determines if a new command has been received. If a new command (for example, write, read or any other command) has been received from the host system, then in step S618, LPT 308 is updated. In step S620, the process returns to step S504 in Figure 5. If a new command is not received in step S616, then the process reverts back to step S602. [0142] If logical data was received in step S602, then in step S604, controller 106 determines if the LBA is related to a directory or DOS sector.
- In step S610, controller 106 determines if an "end of file" condition is present. If the condition is present, then in step S612, the process moves to a "convert to file" operation, described below with respect to Figure 7, and in step S614, the process returns to step S504, Figure 5. If in step S610 the end of file condition is not present, then the process reverts back to step S602.
- If in step S604 the logical data is not related to a directory or FAT sector, then in step S622, controller 106 determines if there is an entry for the LBA in LPT 308. If there is an entry, then in step S628, the operation is identified as an update operation, the block is identified as a modified block and the process moves to step S630.
- If in step S622 an entry for the LBA is not present in LPT 308, then in step S624, controller 106 determines if an entry is present in LFT 309. If the entry is present, then in step S626, memory controller 106 finds the entry in FIT 204, which provides a physical address associated with the LBA. If an entry is not found in step S624, then the process moves to step S638, described below. [0146] In step S630, controller 106 determines if the block (from S628) is a new update block. If yes, then in step S632, controller 106 determines if the oldest modified block is fully obsolete. If yes, then the oldest modified block is placed in the obsolete block queue for subsequent garbage collection, as described in the co-pending Direct Data File Storage Applications.
- If in step S632 the oldest modified block is not fully obsolete, then in step S636, the block is placed in a common block queue for garbage collection, as described in the aforementioned patent application (Reference to SDK0569).
- In step S638, controller 106 determines if the address for the LBA run is contiguous. If yes, then the data is stored in step S642. If the address is not contiguous, then LPT 308 is updated in step S640 and the data is stored in step S642. The process then reverts back to step S602. [0149] Convert to File Process Flow: [0150] As logical data is received via logical interface 302, LPT 308 entries are created.
- the convert to file operation is initiated by an "end of file” condition in a sequence received at logical interface 302.
- the end of file condition is generated by a host system after the host completes writing data.
- the end of file condition is a characteristic sequence of directory and FAT write operations. It is noteworthy that the "convert to file” operation may also be initiated by a specific host command at the logical interface.
- In step S700, controller 106 identifies the logical data with new file entries, updated files and deleted files.
- controller 106 determines if the host system has written new data, modified existing data or deleted any information. The content of a directory sector written by a host system is compared to a previous version, and this allows controller 106 to identify entries for any new file, any existing file that has been updated and any file that has been deleted.
- In step S702, controller 106 identifies FAT entries related to the entries that are identified in step S700. When a host writes a FAT sector, DOSIT 310 maintains a record of the FAT sectors as related to the LBA run.
- An extension table (DOSIT (ext)) 310A maintains a record of the updated FAT entries. This is shown in Figure 4F, where an original FAT sector entry is stored in DOSIT 310. After the FAT sector is updated, table 310A stores the previous entry value and the updated entry value. The DOSIT 310 maintains all the current and updated entries. [0154] In step S704, the LBA runs for data that has been written, updated or deleted are identified. [0155] In step S706, LPT 308 is scanned to determine if data for the LBA runs already exists. [0156] In step S708, after data is identified to be new or modified, entries are created in file directory 203, FIT 204 and LFT 309.
- In step S710, garbage collection queues are updated. Garbage collection needs are minimized if the host has not repeated data. Garbage collection is performed if data for an LBA run has been written more than once. Garbage collection also deletes logical data that is not associated with any file. Garbage collection is described in the co-pending Direct Data File Storage Applications.
- In step S712, entries for data runs identified in step S704 are removed from LPT 308.
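The convert to file flow of Figure 7 can be summarized as the outline below; all helper names are invented stand-ins for steps S700-S712.

```c
/* Illustrative outline of Figure 7 (S700-S712). */
void diff_directory_sectors(void); /* S700: new/updated/deleted files  */
void match_fat_entries(void);      /* S702: FAT entries via DOSIT 310A */
void identify_lba_runs(void);      /* S704: runs written/updated/gone  */
void scan_lpt(void);               /* S706: does run data exist?       */
void create_file_indexes(void);    /* S708: FDIR 203, FIT 204, LFT 309 */
void update_gc_queues(void);       /* S710: queue repeated/orphan data */
void remove_lpt_entries(void);     /* S712: drop converted runs        */

void convert_to_file(void)
{
    diff_directory_sectors();
    match_fat_entries();
    identify_lba_runs();
    scan_lpt();
    create_file_indexes();
    update_gc_queues();
    remove_lpt_entries();
}
```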
- Convert to Logical Process [0160] In one aspect of the present invention, data written via file interface 300 is accessible via logical interface 302. The convert to logical operation (shown as 311, Figure 3B) is performed to make that data accessible. The "convert to logical" operation creates FAT and directory entries in DOS sectors 305 and in LFT 309, so that data that is written via file interface 300 can be accessed via logical interface 302. This operation may be initiated after a "Close" command is received via file interface 300. The Close command signifies that a file write operation via file interface 300 is complete.
- In step S800, the convert to logical operation begins. The operation starts after a "close" command or a specific command to start the convert to logical operation is received by controller 106.
- In step S802, FIT 204 is scanned to determine the length of the file that will be made accessible via logical interface 302.
- In step S804, DOS sectors 305 are scanned to find sufficient logical address space that will be allocated to a particular file (or batch of files) for which the convert to logical operation is being performed.
- In step S806, an LBA run is allocated (or associated) for the file.
- In step S808, LFT 309 entries are written. The entries associate an LBA run and LBA length with a file identifier having a file offset value. The file identifier and offset information is obtained from FIT 204.
- In step S810, controller 106 defines cluster chains for the file.
- In step S812, the FAT entries in DOS sectors 305 are updated and, in step S814, the file directory entries for the file are read.
- In step S816, directory entries are written in DOS sectors 305.
- In step S818, the logical write pointer in the FAT is incremented so that future convert to logical operations can be tracked and accommodated.
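Similarly, the convert to logical flow of Figure 8 reduces to the outline below; the helper names and signatures are invented stand-ins for steps S802-S818.

```c
#include <stdint.h>

/* Illustrative outline of Figure 8 (S802-S818). */
uint32_t scan_fit_for_length(uint32_t file_id);               /* S802 */
uint32_t find_free_logical_space(uint32_t sectors);           /* S804 */
void write_lft_entries(uint32_t f, uint32_t lba, uint32_t n); /* S808 */
void define_cluster_chains(uint32_t lba, uint32_t n);         /* S810 */
void update_fat_entries(uint32_t lba, uint32_t n);            /* S812 */
void write_directory_entries(uint32_t file_id);          /* S814-S816 */
void advance_logical_write_pointer(uint32_t n);               /* S818 */

void convert_to_logical(uint32_t file_id)
{
    uint32_t len = scan_fit_for_length(file_id);
    uint32_t lba = find_free_logical_space(len);  /* allocate run: S806 */
    write_lft_entries(file_id, lba, len);
    define_cluster_chains(lba, len);
    update_fat_entries(lba, len);
    write_directory_entries(file_id);
    advance_logical_write_pointer(len);
}
```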
- controller 106 performs the foregoing convert to logical operation after a file is written via file interface 300.
- data written via a file interface is accessible via a logical interface.
- a flash device can operate with both a legacy host that does not support a file interface and a host system that supports a file interface.
- a flash device is provided that can be accessed via a logical interface or a file interface in real time, regardless of which interface is used to write data to the flash device.
- the term real-time in this context means that there is more than one FAT/directory update, instead of a single FAT/directory update at the end of a file write operation.
- controller 106 allocates available LBA space and updates FAT entries in memory cells 107/108.
- the FAT update may be performed substantially in real-time or after a file write operation. This allows data written via file interface 301 to be immediately available via logical interface 302.
- the LBA allocation and FAT update is performed by an embedded system (907, Figure 9A) with an output that is specific to the file storage back-end system.
- the embedded file system output is similar to the logical interface used by a host system.
- the embedded file system can be a software module that is similar to the host's LBA based file system.
- controller 106 identifies the data run as a file. This allows the data run to be accessible via the file interface 301 even if the file write operation has not been completed by the file system (i.e. FAT and directory write operations) .
- Any data written via the file interface or via the logical interface is uniquely identified by an LBA and a unique file identifier. This allows data to be accessible via both interfaces.
- Figures 9A and 9B provide block diagrams of yet other aspects of the present invention.
- a file dual index table ("FDIT") 908 is maintained in flash memory (107/108).
- FDIT 908 maintains an entry for every file name with an offset value and a corresponding LBA (allocated by memory controller 106). This allows access to files written via one or both of file interface 301 and logical interface 302, as described below.
- host 900 uses a direct data file interface 903 and host 901 uses a standard file system 904 to write data to flash 105 via file interface 301 and logical interface 302, respectively.
- direct data file interface 903 interfaces with application 902 and sends file access commands 906 to flash 105.
- the file access commands are received by file interface 301 and processed by controller 106.
- Files from host system 900 are shown as HFa, HFb...HFx.
- memory controller 106 places a call to the file access to logical converter 907 (also referred to as "converter 907") to register the received file (for example, HFa) with FDIT 908.
- Memory controller 106 analyzes FAT/directory area (shown as 305, Figure 3A) and allocates logical space to the file received via the file interface 301.
- Converter 907 then updates FDIT 908 so that the file written via file interface 301 can also be accessed via logical interface 302.
- Converter 907 after updating FDIT 908, generates file access commands (913, Figure 9A) that allow access to directory and FAT area.
- converter 907A (shown in Figure 9B) generates logical access command 913A that is then sent to converter 909.
- Converter 909 takes the logical commands 913A and converts them into file access command 915 that is sent to the file storage back-end system 910.
- file system 904 and converter 907A are identical and hence easier to implement.
- file system 904 interfaces with application 902.
- File system 904 receives file access commands (902A) from application 902 and then converts the file access commands 902A into logical access command 905.
- the logical access commands 905 are received by logical interface 302 and then processed by memory controller 106. An example is shown where Host File A is received by file system 904, which sends logical fragments (shown as LF0, LF1...LFx) to logical interface 302; the data is then saved as Host File A in memory cells 107/108, as described below in detail.
- File system 904 analyzes FAT information to see if free logical sectors are available and can be allocated to a particular file. Host 901 typically only knows its logical address space and the logical addresses that it has allocated to its various files. If free sectors/clusters are available, then logical space is allocated. Host 901 then sends logical fragments (shown as LF0, LF1...LFx) (logical access command 905) to flash 105.
- After flash 105 receives logical command 905, memory controller 106 updates directory and FAT information.
- the updated FAT and directory information 912 is sent to converter 907.
- converter 907 does not need to be updated every time, as it can instead access the FAT/directory stored in non-volatile memory 107/108 directly when a conversion is needed.
- Logical access command 911 is also sent to converter 909 that generates file access command 915 to store data in memory cells 107/108.
- Each logical data run is assigned an internal file name (i.e. internal to the flash system 105) by memory controller 106 (using converter 909 interfacing with FDIT 908) .
- more than one internal file name is used to identify and store the logical data runs.
- the internal file name can be based on various factors, for example, one or both of the StartLBA_Length and the LBA.
- the StartLBA_Length is based on the length of a logical data run and the start of an LBA, while the second file identifier ("ID") is based on the actual LBA.
- memory controller 106 keeps saving the logical data runs as individual internal files. The internal files are all merged into a single file when host 901 sends a host file name to flash 105. Memory controller 106 associates plural data runs with the host file name. Memory controller 106 retains the second file ID, i.e., the LBAs for the plural data runs. [0187] Once the host file name is associated with the various data runs, converter 909 updates FDIT 908 so that the LBA, logical sector address is associated with the host file name and file offset value. File access command 915 is sent to the file storage back-end system 910 that stores the data received via logical interface 302 in memory cells 107/108 (see Figure 3A) . This allows a file written via logical interface to be accessible via file interface 301.
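Matching the Figure 10B example, where a run starting at LBA 100 with length 200 becomes internal file "100_200", the naming step can be sketched as follows; the exact format string is an assumption for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Name an internal file from the StartLBA_Length of its run. */
static void internal_file_name(char *out, size_t outlen,
                               uint32_t start_lba, uint32_t run_length)
{
    snprintf(out, outlen, "%u_%u",
             (unsigned)start_lba, (unsigned)run_length);
}

/* Example: a run at LBA 100 of length 200 yields "100_200"; when the
 * host later supplies a file name, such internal files are merged
 * into a single file under that name. */
```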
- Figure 10B illustrates the file write process via logical interface 302 and the use of the internal files described above.
- Host 901's file space 1000 is shown as multiples of 512 bytes (minimum sector size), LBA space is shown as 1002 and data as stored is shown as 1004. It is noteworthy that the present invention is not limited to any particular sector size or data unit size.
- file system 904 allocates LBAs and generates logical commands (905) to write the file data (shown as 1006 and 1008 (LF0...LFx)).
- File storage system 105 then organizes the logical fragments into internal files.
- the internal files are shown as file 0, file 1 and so forth (1010) .
- a dual file ID table 1012 (same as FDIT 908) is maintained.
- the example in Figure 10B shows the StartLBA_Length (100_200) and the LBA ID (100, 200) as the file identifiers.
- file system 904 updates FAT and directory information through logical commands (shown as Host File A (1014)). Now the host file (Host File A) is associated with the logical fragments (shown as 1016).
- File storage system 105 then updates FAT/directory files and merges all the internal files and associates them to the host file ("A") (shown as 1018).
- the updated dual file ID table (1020) saves the host file name "A" with the associated LBA ID (in this example, 100, 200 and 400, 200) .
- In order to update an existing host file (for example, host file "A"), host file system 904 identifies the LBAs for the fragment (shown as 1022) and generates the logical commands (shown as 1024).
- the LBA is 400,100 for the update process.
- File storage system 105 then identifies the fragment's offset in the existing stored file "A" (200*sector size) and then writes the new fragment (shown as 1026). The file identifiers stay the same (1028) but the physical location of the file data, especially the new fragment, may change. [0195] It is noteworthy that the dual file ID tables shown in Figure 10B are a part of FDIT 908.
- FDIT 908 is stored in flash device 105 and is updated/maintained real time (i.e. more than once, instead of being updated only after a file write operation) by converter 907/907A and converter 909. This keeps both the file interface 301 and logical interface 302 synchronized.
- FDIT 908 fields are shown in Figure 10A and include the file name, file offset, logical sector address, LBA and logical data run number. For data written via file interface 301, LBAs are assigned and stored in FDIT 908. Data written via the logical interface is assigned the host file name and stored with an offset value. Hence data written via either interface can be accessed.
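An FDIT 908 record holding the fields listed above can be sketched as the struct below; the field widths are illustrative assumptions.

```c
#include <stdint.h>

typedef struct {
    char     file_name[64];    /* host file name or internal name    */
    uint32_t file_offset;      /* offset of the data within the file */
    uint32_t logical_sector;   /* logical sector address             */
    uint32_t lba;              /* logical block address of the run   */
    uint32_t run_number;       /* logical data run number            */
} fdit_entry_t;
```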
Logical Write Process Flow:
[0199] Figure 11 shows the overall process flow diagram for a write operation via logical interface 302 with respect to the system disclosed in Figures 9A and 9B. The process starts in step S1100, where host application 902 sends file access commands 902A to file system 904 to write data.
In step S1102, file system 904 analyzes the FAT/directory information for free logical sector/cluster space.
File system 904 then allocates free clusters/logical sectors and generates logical commands to write the file data.
File system 904 sends the logical fragments to flash 105 (1008, Figure 10B). The logical fragments are received by flash 105 via logical interface 302.
In step S1108, memory controller 106 updates the FAT/directory and organizes the logical fragments into internal files. An example of how the internal files are named and stored is provided above with respect to Figure 10B.
In step S1110, the internal files created during step S1108 are merged into a single internal file if a host file name (for example, A, 1014, Figure 10B) is available after host 901 writes to the FAT area, creating a new chain of clusters for a new host file. If the host file itself is logically fragmented, it can be de-fragmented during initialization or when it is being accessed via file interface 301.
Host 901 does not always create a new chain of clusters for a host file; instead, one of the following applies:
If host 901 writes to the FAT area so that it allocates clusters that were previously marked as unused, then the corresponding range of LBAs is not used for any write operations via file interface 301.
[0205] If host 901 writes to the FAT area and deletes some clusters that were previously marked as used, then the corresponding range of LBAs is made available for a write operation. The internal files associated with those LBAs can be deleted during garbage collection.
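A hedged C sketch of these two FAT-area cases; the handler and helper names are invented for illustration only:

```c
#include <stdint.h>

/* Illustrative stubs; the patent does not name these routines. */
static void reserve_lba_range(uint32_t lba, uint32_t n) { (void)lba; (void)n; }
static void release_lba_range(uint32_t lba, uint32_t n) { (void)lba; (void)n; }
static void queue_internal_files_for_gc(uint32_t lba, uint32_t n) { (void)lba; (void)n; }

/* Case 1: host marks previously unused clusters as used; keep the
 * range away from file-interface writes. */
void on_fat_clusters_allocated(uint32_t first_lba, uint32_t nsectors)
{
    reserve_lba_range(first_lba, nsectors);
}

/* Case 2: host frees previously used clusters; the range becomes
 * writable again and matching internal files can be garbage
 * collected later. */
void on_fat_clusters_deleted(uint32_t first_lba, uint32_t nsectors)
{
    release_lba_range(first_lba, nsectors);
    queue_internal_files_for_gc(first_lba, nsectors);
}
```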
Controller 106 identifies the file offset in an existing file and, in step S1114, the new fragments are stored.
The file ID stays the same, but the physical location of the file data may change, especially for the new data fragment.
In step S1116, FDIT 908 is updated so that the host file's LBAs are associated with a file name and offset; the file is hence accessible via file interface 301.
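The device-side portion of this flow (steps S1108, S1110 and S1116) can be outlined as follows. This is a simplified sketch with assumed helper names, not the patent's firmware:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative stubs for the device-side steps. */
static void store_as_internal_file(uint32_t lba, uint32_t n, const void *d)
{ (void)lba; (void)n; (void)d; }
static bool fat_write_created_new_cluster_chain(void) { return false; }
static void merge_internal_files_into_host_file(void) {}
static void bind_fdit_name_and_offsets(void) {}

/* Outline of the logical-interface write flow of Figure 11. */
void logical_write(uint32_t lba, uint32_t nsectors, const void *data)
{
    store_as_internal_file(lba, nsectors, data);   /* S1108 */
    if (fat_write_created_new_cluster_chain()) {   /* host named the file */
        merge_internal_files_into_host_file();     /* S1110 */
        bind_fdit_name_and_offsets();              /* S1116 */
    }
}
```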
File Interface Write:
[0209] Figure 12 shows a process flow diagram for writing via file interface 301 and using FDIT 908, converter 907 and converter 909 so that the file can be accessed via logical interface 302.
[0210] Turning in detail to Figure 12, in step S1200, host 900 issues a write command via direct data file interface 903.
The write command is a file access command (906) and not a logical command.
In step S1202, flash 105 manages the actual write operation in memory cells 107/108. Data is stored as variable-length data groups (304, Figure 3B). While data is being written, or after the data is written in flash memory, in step S1204, controller 106 triggers a call to converter 907 (shown as 912A).
In step S1206, converter 907/907A analyzes the FAT/directory area. Converter 907/907A can do this via logical commands 913A or via file access commands.
In step S1208, converter 907 allocates logical space for the file that is written via file interface 301.
In step S1210, the FAT and directory information is updated, either through file access commands or logical commands.
The file is registered with FDIT 908 so that the allocated LBAs are associated with the file name and offset. If file access commands are used (Figure 9A), then the process ends after step S1210.
[0215] If converter 907A used logical access commands 913, then converter 909 converts the "logical write" commands to file access commands 915.
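As a rough illustration of steps S1200 through S1210, the write path and converter 907's role might be sketched as below; every function name here is an assumption:

```c
#include <stdint.h>

/* Illustrative stubs for the steps of Figure 12. */
static void write_data_groups(const char *f, uint32_t off,
                              const void *d, uint32_t len)
{ (void)f; (void)off; (void)d; (void)len; }
static void analyze_fat_directory(void)           {}
static void allocate_logical_space(const char *f) { (void)f; }
static void update_fat_and_directory(void)        {}
static void register_with_fdit(const char *f)     { (void)f; }

/* Converter 907's role after a file-interface write. */
static void converter_907(const char *file)
{
    analyze_fat_directory();          /* S1206 */
    allocate_logical_space(file);     /* S1208 */
    update_fat_and_directory();       /* S1210 */
    register_with_fdit(file);         /* LBAs <-> name + offset */
}

/* File-interface write (S1200-S1204) followed by conversion. */
void file_write(const char *file, uint32_t offset,
                const void *data, uint32_t len)
{
    write_data_groups(file, offset, data, len);  /* S1202 */
    converter_907(file);                         /* triggered at S1204 */
}
```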
Thus, data written via a logical interface is accessible via a file interface, and data written via a file interface is accessible via a logical interface. Hence, the flash device can be used easily both with legacy host systems and with advanced host systems that support the direct data file storage format.
Abstract
A mass storage memory system is provided. The memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells, and a controller that is adapted to receive data via a first interface and a second interface; data received via the first interface and the second interface is accessible via the first interface and the second interface even if a file name for the data is not provided by a host system or before a write operation is complete. The first interface is a file based interface and the second interface is a logical interface.
Description
METHOD AND SYSTEM FOR ACCESSING NON-VOLATILE STORAGE DEVICES
INVENTOR(S): SERGEY A. GOROBETS, ALAN W. SINCLAIR
CROSS REFERENCE TO RELATED APPLICATIONS [0001] This patent application is related to the following co-pending patent applications, incorporated herein by reference in their entirety: [0002] Serial Number: 11/060,249; Filed on February 16, 2005; Attorney Docket Number SDK0380.US0, entitled "Direct Data File Storage in Flash Memories" with Alan W. Sinclair and Peter J. Smith as inventors; [0003] Serial Number: 11/060,174; Filed on February 16, 2005; Attorney Docket Number SDK0380.US1, entitled "Direct Data File Programming and Deletion in Flash Memories", with Alan W. Sinclair and Peter J. Smith as inventors ;
[0004] Serial Number: 11/060,248; Filed on February 16, 2005; Attorney Docket Number SDK0380.US2, entitled "Direct Data File Storage Implementation Techniques in Flash Memories", with Alan W. Sinclair as inventor;
[0005] Provisional patent application filed by Alan W. Sinclair and Barry Wright concurrently herewith, entitled "Direct Data File Storage in Flash Memories";
[0006] Serial Number 11/196,168, Filed on August 3, 2005, entitled "Method And System For Dual Mode Access For Storage Devices"; and
[0007] Serial Number 11/314,842, Filed on December 21, 2005, entitled "Dual Mode Access For Non-Volatile Storage Devices" (the foregoing hereinafter collectively referenced as the "Direct Data File Storage Applications").
1. Field of the Invention
[0008] The present invention relates generally to the operation of re-programmable non-volatile memory systems such as semiconductor flash memory, and more particularly, to accessing the flash memory device via plural interfaces.
2. Background
[0009] Conventional computer systems typically include several functional components. These components may include a central processing unit (CPU), main memory, input/output ("I/O") devices, and disk drives. In conventional systems, the main memory is coupled to the CPU via a system bus or a local memory bus. The main memory is used to provide the CPU access to data and program information that is stored in main memory at execution time. Typically, the main memory is composed of random access memory (RAM) circuits. A computer
system with the CPU and main memory is often referred to as a host system.
[0010] A host system interfaces with flash mass storage devices (also referred to as "flash device", "flash" or "flash card" interchangeably throughout this specification) via an interface. In an early generation of commercial flash memory systems, a rectangular array of memory cells was divided into a large number of groups of cells that each stored the amount of data of a standard disk drive sector, namely 512 bytes. An additional amount of data, such as 16 bytes, is also usually included in each group to store an error correction code (ECC) and possibly other overhead data relating to the user data and to the memory cell group in which it is stored. The memory cells in each such group are the minimum number of memory cells that are erasable together. That is, the erase unit is effectively the number of memory cells that store one data sector and any overhead data that is included. Examples of this type of memory system are described in United States patents nos. 5,602,987 and 6,426,833. It is a characteristic of flash memory that the memory cells need to be erased prior to re-programming them with data.
[0011] Flash memory systems are most commonly provided in the form of a memory card or flash drive that is
removably connected with a variety of hosts such as a personal computer, a camera or the like, but may also be embedded within such host systems.
[0012] Typically, a host system maintains a file directory and allocates file data to logical clusters. A host system that uses a logical interface for reading/writing data from/to a flash memory device may be referred to as a legacy host system. The term host system in this context includes legacy flash memory card readers, digital cameras and the like.
[0013] In conventional systems, a host maintains a file system and allocates file data to logical clusters, where the cluster size is typically fixed. A flash device is divided into plural logical sectors and the host allocates space within clusters comprising a plurality of logical sectors. A cluster is a subdivision of logical addresses and a cluster map is designated as a file allocation table ("FAT"). The FAT is normally stored on the storage device itself.
[0014] In conventional systems, when writing data to the memory, the host typically assigns unique logical addresses to sectors, clusters or other units of data within a continuous virtual address space of the memory system. Like a disk operating system (DOS), the host writes data to, and reads data from, addresses within the logical address space of the memory system. A
controller within the memory system translates logical addresses received from the host into physical addresses within the memory array, where the data are actually stored, and then keeps track of these address translations. The data storage capacity of the memory system is at least as large as the amount of data that is addressable over the entire logical address space defined for the memory system.
[0015] Other file storage systems (or formats) are being developed so that a host does not have to perform the file-to-logical address mapping. However, these new file systems may still have to be used with legacy host systems for reading/writing data.
[0016] Therefore, there is a need for a method and system that allows a flash device to be accessed via a conventional logical interface or via these new formats where a host does not perform the file-to-logical mapping.
SUMMARY OF THE INVENTION
[0017] In one aspect, a mass storage memory system is provided. The memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells, and a controller that is adapted to receive data via a first interface and a second interface; data received via the first interface and the second interface is accessible via
the first interface and the second interface even if a file name for the data is not provided by a host system or before a write operation is complete. The first interface is a file based interface and the second interface is a logical interface.
[0018] In one aspect, a mass storage memory system is provided. The memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells, and a controller that is adapted to receive files of data individually via a first interface, identified by unique identifiers; received file data is stored in one or more memory blocks and indexed based on the unique identifiers. The controller assigns a plurality of logical block addresses to the received file data and updates file allocation table ("FAT") entries that are stored in blocks of memory cells, such that file data received via the first interface is accessible via a second interface. The first interface is a file based interface and the second interface is a logical interface.
[0019] In another aspect, a mass storage memory system is provided. The memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells, and a controller that is adapted to receive data identified by a plurality of logical addresses via a first interface, which causes the data to be stored in one or more memory cells as a file that is accessible via a second interface even if a file name for the data is not provided by a host system. In this aspect, the first interface is a logical interface and the second interface is a file based interface.
[0020] In yet another aspect, a mass storage memory system is provided. The memory system includes re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells, and a controller that is adapted to receive data identified by a plurality of logical addresses via a first interface, which causes the data to be stored in one or more memory cells as a file that is accessible via a second interface even if a file name for the data is not provided by a host system; the controller assigns internal file names to the data and merges the internal files into a single file after a file name is provided by the host system that sends the data via the first interface. In this aspect, the first interface is a logical interface and the second interface is a file based interface.
[0021] In another aspect, a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells
organized into blocks of memory cells is provided. The method comprises receiving individual files of data identified by unique file identifiers, wherein the mass storage system receives the individual files of data via a first interface and stores the received files of data indexed by the unique file identifiers; allocating a plurality of logical block addresses to the received file data; and updating file allocation table ("FAT") entries in the plurality of memory cells, so that the received file data can be accessible via a second interface.
[0022] In yet another aspect, a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells is provided. The method comprises receiving data identified by a plurality of logical addresses from the host system via a first interface, wherein the mass storage system receives the data; and identifying the data with file identifiers, so that the data can be accessible via a second interface, even if a file name for the data is not provided by the host system.
[0023] In another aspect, a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells is provided. The method comprises receiving data identified by a plurality of logical addresses from the host system via a first interface, wherein the mass storage system receives the data; identifying the data with file identifiers, so that the data can be accessible via a second interface even if a file name is not provided by the host system; storing the data as internal files having unique file names; and merging the internal files with unique file names into a single file after a host file name for the data is received.
[0024] In yet another aspect, a method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells is provided. The method comprises receiving data via a first interface and a second interface; and making the data accessible via the first interface and the second interface, even if a file name is not provided by a host system or before a write operation is complete.
[0025] This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof in connection with the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The foregoing features and other features of the present invention will now be described with reference to the drawings of a preferred embodiment. In the drawings, the same components have the same reference numerals. The illustrated embodiment is intended to illustrate, but not to limit the invention. The drawings include the following Figures:
[0027] Figure 1A shows a block diagram of a host system using a flash device;
[0028] Figure 1B shows a block diagram of a flash device controller, used according to one aspect of the present invention;
[0029] Figure 1C shows an example of physical memory organization for a flash memory system;
[0030] Figure 1D shows an expanded view of a portion of the physical memory of Figure 1C;
[0031] Figure 1E shows a further expanded view of a portion of the physical memory of Figure 1D;
[0032] Figure 1F shows a conventional logical address interface between a host and a re-programmable memory system;
[0033] Figure 1G shows a direct data file storage interface between a host and a re-programmable memory system, according to one aspect of the present invention;
[0034] Figure 1H shows, in a different manner than Figure 1F, a conventional logical address interface between a host and a re-programmable memory system;
[0035] Figure 1L shows, in a different manner than Figure 1G, a direct data file storage interface between a host and a re-programmable memory system, according to one aspect of the present invention;
[0036] Figure 1M shows a functional hierarchy of an example memory system;
[0037] Figure 2 shows a top-level logical block diagram of a system used by a flash device, according to one aspect of the present invention;
[0038] Figure 3A shows a block diagram of a flash memory device that is accessible via a file interface and a logical interface, according to one aspect of the present invention;
[0039] Figure 3B shows a data flow/address indexing scheme, according to one aspect of the present invention;
[0040] Figure 3C shows a top-level block diagram of a mass storage device, according to one aspect of the present invention;
[0041] Figure 3D shows a table with data accessibility rules for the mass storage device, according to one aspect of the present invention;
[0042] Figure 4A shows a DOS index table, according to one aspect of the present invention;
[0043] Figure 4B shows how a logical to physical table is populated, according to one aspect of the present invention;
[0044] Figure 4C shows an example of a logical to physical table, according to one aspect of the present invention;
[0045] Figure 4D illustrates how a logical to file table is populated, according to one aspect of the present invention;
[0046] Figure 4E shows an example of a logical to file table, according to one aspect of the present invention;
[0047] Figure 4F illustrates how records of updated FAT entries are maintained, according to one aspect of the present invention;
[0048] Figure 5 shows an overall flow diagram for the mass storage device, according to one aspect of the present invention;
[0049] Figure 6 shows a flow diagram for a logical write process, according to one aspect of the present invention;
[0050] Figure 7 shows a flow diagram for the convert to file process, according to one aspect of the present invention;
[0051] Figure 8 shows a flow diagram for a convert to logical process, according to one aspect of the present invention;
[0052] Figures 9A and 9B show block diagrams of a file access system, according to yet another aspect of the present invention;
[0053] Figure 10A shows an example of a table used by the system of Figures 9A and 9B, according to one aspect of the present invention;
[0054] Figure 10B shows an example of a file write process using internal file names, according to one aspect of the present invention;
[0055] Figures 11 and 12 show process flow diagrams for a write process using the system of Figures 9A and 9B, according to one aspect of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0056] To facilitate an understanding of the preferred embodiment, the general architecture and operation of a host system/flash device will be described. The specific architecture and operation of the preferred embodiment will then be described with reference to the general architecture.
[0057] Host System/Flash Controller:
[0058] Figure 1A shows a block diagram of a typical host system 100 that includes a central processing unit ("CPU") (or microprocessor) 101 connected to a system bus 101A. Random access main memory ("RAM") 103 is also coupled to system bus 101A and provides CPU 101 with access to memory storage. When executing program instructions, CPU 101 stores those process steps in RAM 103 and executes the stored process steps out of RAM 103.
[0059] Read only memory ("ROM") 102 is provided to store invariant instruction sequences such as start-up instruction sequences or basic input/output operating system (BIOS) sequences.
[0060] Input/Output ("I/O") devices 102A, for example, a keyboard, a pointing device ("mouse"), a monitor, a modem and the like are also provided.
[0061] Flash device (or card) 105 also provides non-volatile memory for CPU 101. Flash device 105 includes a controller module 106 (may also be referred to as "memory system controller") and solid state memory modules 107-108 (shown as Memory Module #1 and Memory Module #N). Controller module 106 interfaces with host system 100 via a bus interface 104 or directly via system bus 101A or another peripheral bus (not shown).
[0062] There are currently many different flash memory cards that are commercially available, examples being the CompactFlash (CF), the MultiMediaCard (MMC), Secure Digital (SD), miniSD, Memory Stick, SmartMedia and TransFlash cards. Although each of these cards has a unique mechanical and/or electrical interface according to its standardized specifications, the flash memory included in each is very similar. These cards are all available from SanDisk Corporation, assignee of the present application. SanDisk also provides a line of flash drives under its Cruzer trademark, which are hand held memory systems in small packages that have a Universal Serial Bus (USB) plug for connecting with a host by plugging into the host's USB receptacle. Each of these memory cards and flash drives includes controllers that interface with the host and control operation of the flash memory within them.
[0063] Host systems that use such memory cards and flash drives are many and varied. They include personal computers (PCs), laptop and other portable computers, cellular telephones, personal digital assistants (PDAs), digital still cameras, digital movie cameras and portable audio players. The host typically includes a built-in receptacle for one or more types of memory cards or flash drives, but some require adapters into which a memory card is plugged.
[0064] A NAND architecture of the memory cell arrays 107-108 is currently preferred, although other architectures, such as NOR, can also be used instead. Examples of NAND flash memories and their operation as part of a memory system may be had by reference to
United States patents nos. 5,570,315, 5,774,397, 6,046,935, 6,373,746, 6,456,528, 6,522,530, 6,771,536 and 6,781,877, and United States patent application publication no. 2003/0147278.
[0065] It is noteworthy that the adaptive aspects of the present invention are not limited to a flash device 105, and can be used for any non-volatile mass storage system.
[0066] Figure 1B shows a block diagram of the internal architecture of controller module 106. Controller module 106 includes a microcontroller 109 that interfaces with various other components via interface logic 111. Memory 110 stores firmware and software instructions that are used by microcontroller 109 to control the operation of flash device 105. Memory 110 may be volatile re-programmable random access memory ("RAM"), a non-volatile memory that is not re-programmable ("ROM"), a one-time programmable memory or a re-programmable flash electrically-erasable and programmable read-only memory ("EEPROM").
[0067] A host interface 113 interfaces with host system 100, while a flash interface 112 interfaces with memory modules 107-108. [0068] Figure 1C conceptually illustrates an organization of the flash memory cell array (107-108) that is used as an example in further descriptions
below. Four planes or sub-arrays 131 - 134 of memory cells may be on a single integrated memory cell chip, on two chips (two of the planes on each chip) or on four separate chips. The specific arrangement is not important to the discussion below. Of course, other numbers of planes, such as 1, 2, 8, 16 or more may exist in a system. The planes are individually divided into blocks of memory cells shown in Figure 1C by rectangles, such as blocks 137, 138, 139 and 140, located in respective planes 131 - 134. There can be dozens or hundreds of blocks in each plane.
[0069] A block of memory cells is the unit of erase, the smallest number of memory cells that are physically erasable together. For increased parallelism, however, the blocks are operated in larger metablock units. One block from each plane is logically linked together to form a metablock. The four blocks 137 - 140 are shown to form one metablock 141. All of the cells within a metablock are typically erased together. The blocks used to form a metablock need not be restricted to the same relative locations within their respective planes, as is shown in a second metablock 143 made up of blocks 145 - 148.
[0070] Although it is usually preferable to extend the metablocks across all of the planes, for high system performance, the memory system can be operated with the
ability to dynamically form metablocks of any or all of one, two or three blocks in different planes. This allows the size of the metablock to be more closely matched with the amount of data available for storage in one programming operation.
[0071] The individual blocks are in turn divided for operational purposes into pages of memory cells, as illustrated in Figure 1D. The memory cells of each of the blocks 131 - 134, for example, are each divided into eight pages P0 - P7. Alternatively, there may be 16, 32 or more pages of memory cells within each block. The page is the unit of data programming and reading within a block, containing the minimum amount of data that are programmed at one time.
[0072] In the NAND architecture, a page is formed of memory cells along a word line within a block. However, in order to increase the memory system operational parallelism, such pages within two or more blocks may be logically linked into metapages. A metapage 151 is illustrated in Figure 1D, being formed of one physical page from each of the four blocks 131 - 134. The metapage 151, for example, includes the page P2 of each of the four blocks, but the pages of a metapage need not necessarily have the same relative position within each of the blocks.
[0073] Although it is preferable to program and read the maximum amount of data in parallel across all four planes, for high system performance, the memory system can also be operated to form metapages of any or all of one, two or three pages in separate blocks in different planes. This allows the programming and reading operations to adaptively match the amount of data that may be conveniently handled in parallel and reduces the occasions when part of a metapage remains unprogrammed with data.
[0074] A metapage formed of physical pages of multiple planes, as illustrated in Figure 1D, contains memory cells along word line rows of those multiple planes. Rather than programming all of the cells in one word line row at the same time, they are more commonly alternately programmed in two or more interleaved groups, each group storing a page of data (in a single block) or a metapage of data (across multiple blocks). By programming alternate memory cells at one time, a unit of peripheral circuits including data registers and a sense amplifier need not be provided for each bit line but rather is time-shared between adjacent bit lines. This economizes on the amount of substrate space required for the peripheral circuits and allows the memory cells to be packed with an increased density along the rows. Otherwise, it is preferable to
simultaneously program every cell along a row in order to maximize the parallelism available from a given memory system.
[0075] Figure 1E shows a logical data page of two sectors 153 and 155 of data of a page or metapage.
Each sector usually contains a portion 157 of 512 bytes of user or system data being stored and another number of bytes 159 for overhead data related either to the data in the portion 157 or to the physical page or block in which it is stored. The number of bytes of overhead data is typically 16 bytes, making the total 528 bytes for each of the sectors 153 and 155. The overhead portion 159 may contain an ECC calculated from the data portion 157 during programming, its logical address, an experience count of the number of times the block has been erased and re-programmed, one or more control flags, operating voltage levels, and the like, plus an ECC calculated from such overhead data 159. Alternatively, the overhead data 159, or a portion of it, may be stored in different pages in other blocks. [0076] As the parallelism of memories increases, data storage capacity of the metablock increases and the size of the data page and metapage also increase as a result. The data page may then contain more than two sectors of data. With two sectors in a data page, and two data pages per metapage, there are four sectors in
a metapage. Each metapage thus stores 2048 bytes of data. This is a high degree of parallelism, and can be increased even further as the number of memory cells in the rows is increased. For this reason, the width of flash memories is being extended in order to increase the amount of data in a page and a metapage.
[0077] The physically small re-programmable non-volatile memory cards and flash drives identified above are commercially available with data storage capacity of 512 megabytes (MB), 1 gigabyte (GB), 2 GB and 4 GB, and may go higher.
[0078] Figure 1F illustrates the most common interface between a host and a mass memory system (for example, a flash device). The host deals with data files generated or used by application software or firmware programs executed by the host. A word processing data file is an example, and a drawing file of computer aided design (CAD) software is another, found mainly in general computer hosts such as PCs, laptop computers and the like. A document in the pdf format is also such a file. A still digital video camera generates a data file for each picture that is stored on a memory card. A cellular telephone utilizes data from files on an internal memory card, such as a telephone directory. A PDA stores and uses several different files, such as an address file, a calendar file, and the like. In any
such application, the memory card may also contain software that operates the host.
[0079] A common logical interface between the host and the memory system is illustrated in Figure 1F. A continuous logical address space 161 is large enough to provide addresses for all the data that may be stored in the memory system. The host address space is typically divided into increments of clusters of data. Each cluster may be designed in a given host system to contain a number of sectors of data, somewhere between 4 and 64 sectors being typical. A standard sector contains 512 bytes of data.
[0080] Three Files 1, 2 and 3 are shown in the example of Figure 1F to have been created. An application program running on the host system creates each file as an ordered set of data and identifies it by a unique name or other reference. Enough available logical address space not already allocated to other files is assigned by the host to File 1. File 1 is shown to have been assigned a contiguous range of available logical addresses. Ranges of addresses are also commonly allocated for specific purposes, such as a particular range for the host operating software, which are then avoided for storing data even if these addresses have not been utilized at the time the host is assigning logical addresses to the data.
[0081] When a File 2 is later created by the host, the host similarly assigns two different ranges of contiguous addresses within the logical address space 161, as shown in Figure 1F. A file need not be assigned contiguous logical addresses but rather can be fragments of addresses in between address ranges already allocated to other files. This example then shows that yet another File 3 created by the host is allocated other portions of the host address space not previously allocated to the Files 1 and 2 and other data.
[0082] The host keeps track of the memory logical address space by maintaining a file allocation table (FAT), where the logical addresses the host assigns to the various host files are maintained. The FAT table is typically stored in the non-volatile memory, as well as in a host memory, and is frequently updated by the host as new files are stored, other files deleted, files modified and the like. When a host file is deleted, for example, the host then de-allocates the logical addresses previously allocated to the deleted file by updating the FAT table to show that they are now available for use with other data files.
[0083] The host is not concerned about the physical locations where the memory system controller chooses to store the files. The typical host only knows its
logical address space and the logical addresses that it has allocated to its various files. The memory system, on the other hand, through a typical host/card interface, only knows the portions of the logical address space to which data have been written but does not know the logical addresses allocated to specific host files, or even the number of host files. The memory system controller 106 converts the logical addresses provided by the host for the storage or retrieval of data into unique physical addresses within the flash memory cell array where host data are stored. A block 163 represents a working table of these logical-to-physical address conversions, which is maintained by the memory system controller 106.
[0084] The memory system controller 106 is programmed to store data files within the blocks and metablocks of a memory array 165 in a manner to maintain the performance of the system at a high level. Four planes or sub-arrays are used in this illustration. Data are preferably programmed and read with the maximum degree of parallelism that the system allows, across an entire metablock formed of a block from each of the planes. At least one metablock 167 is usually allocated as a reserved block for storing operating firmware and data used by the memory controller. Another metablock 169, or multiple metablocks, may be allocated for storage of
host operating software, the host FAT table and the like. Most of the physical storage space remains for the storage of data files.
[0085] The memory system controller 106 does not know, however, how the data received has been allocated by the host among its various file objects. All the memory controller 106 typically knows from interacting with the host is that data written by the host to specific logical addresses are stored in corresponding physical addresses as maintained by the controller's logical-to-physical address table 163.
[0086] In a typical memory system, a few extra blocks of storage capacity are provided beyond what is necessary to store the amount of data within the address space 161. One or more of these extra blocks may be provided as redundant blocks for substitution for other blocks that may become defective during the lifetime of the memory. The logical grouping of blocks contained within individual metablocks may usually be changed for various reasons, including the substitution of a redundant block for a defective block originally assigned to the metablock. One or more additional blocks, such as metablock 171, are typically maintained in an erased block pool.
[0087] When the host writes data to the memory system, the controller 106 converts the logical addresses
assigned by the host to physical addresses within a metablock in the erased block pool. Other metablocks not being used to store data within the logical address space 161 are then erased and designated as erased pool blocks for use during a subsequent data write operation.
[0088] Data stored at specific host logical addresses are frequently overwritten by new data as the original stored data become obsolete. The memory system controller 106, in response, writes the new data in an erased block and then changes the logical-to-physical address table for those logical addresses to identify the new physical block to which the data at those logical addresses are stored. The blocks containing the original data at those logical addresses are then erased and made available for the storage of new data. Such erasure often must take place before a current data write operation may be completed if there is not enough storage capacity in the pre-erased blocks from the erase block pool at the start of writing. This can adversely impact the system data programming speed. The memory controller 106 typically learns that data at a given logical address has been rendered obsolete by the host only when the host writes new data to their same logical address. Many blocks of the memory can therefore be storing such invalid data for a time.
[0089] The sizes of blocks and metablocks are increasing in order to efficiently use the area of the integrated circuit memory chip. This results in a large proportion of individual data writes storing an amount of data that is less than the storage capacity of a metablock, and in many cases even less than that of a block. Since the memory system controller 106 normally directs new data to an erased pool metablock, this can result in portions of metablocks going unfilled. If the new data are updates of some data stored in another metablock, remaining valid metapages of data from that other metablock having logical addresses contiguous with those of the new data metapages are also desirably copied in logical address order into the new metablock. The old metablock may retain other valid data metapages. This results over time in data of certain metapages of an individual metablock being rendered obsolete and invalid, and replaced by new data with the same logical address being written to a different metablock.
[0090] In order to maintain enough physical memory space to store data over the entire logical address space 161, such data are periodically compacted or consolidated (garbage collection). It is also desirable to maintain sectors of data within the metablocks in the same order as their logical addresses as much as practical, since this makes reading data in contiguous logical addresses more efficient. So data compaction and garbage collection are typically performed with this additional goal. Some aspects of managing a memory when receiving partial block data updates and the use of metablocks are described in United States patent no. 6,763,424.
[0091] Direct Data File Storage ("DFS"):
[0092] A direct data file storage ("DFS") methodology/system is disclosed in co-pending patent application Serial Number 11/060,249; Filed on February 16, 2005; Attorney Docket Number SDK0380.US0, entitled "Direct Data File Storage in Flash Memories", and also in the other Direct Data File Storage Applications referenced above.
[0093] In a DFS device, data is accessed by host system 100 on a file-by-file basis as described in the aforementioned patent application; that is, data is identified by a file identifier and an offset address within the file. No logical address space is defined for the device. Host system 100 does not allocate file data to logical clusters, and directory/index table information for files is generated by flash device 105.
[0094] The host addresses each file by a unique file ID (or other unique reference) and offset addresses of units of data (such as bytes) within the file. This
file address is given directly to the memory system controller 106, which then keeps its own table of where the data of each host file are physically stored.
[0095] This file-based interface is illustrated in Figure 1G, which should be compared with the logical address interface of Figure 1F. An identification of each of the Files 1, 2 and 3 and offsets of data within the files of Figure 1G are passed directly to the memory controller 106. This address information is then translated by a memory controller function 173 into physical addresses of metablocks and metapages of the memory 165.
[0096] The file-based interface is also illustrated by Figure 1L, which should be compared with the logical address interface of Figure 1H. The logical address space and host-maintained FAT table of Figure 1H are not present in Figure 1L. Rather, data files generated by the host are identified to the memory system by file number and offsets of data within the file. The memory system then directly maps the files to the physical blocks of the memory cell array.
[0097] With reference to Figure 1M, functional layers of an example mass storage system being described herein are illustrated. The "Direct File Storage Back End System" communicates through a "Direct-File Interface" and a "File-Based Front-End System" with a host system over a file-based interface channel. Each host file is uniquely identified, such as by a file name. Data within a file are identified by an offset address within a linear address space that is unique to the file.
[0098] Although DFS devices will be used advantageously by host systems, legacy host systems will need to use a logical interface to read and write data files. Therefore, it is advantageous to have a direct file storage device accessible for read and write operations via dual interfaces, namely, a file interface and a conventional logical interface.
[0100] Direct Data File Access:
[0101] Details of direct data file access, i.e., when flash device 105 operates as a direct data file storage device, are described in the aforementioned co-pending patent application.
[0102] Figure 2 shows a block diagram of an indexing scheme of a direct data file storage system used according to one aspect of the present invention. Host 100 provides a path, filename and offset 203A (<fileId> parameter) to flash device 105 via the file interface (shown as 300 in Figure 3A). The path points to a file directory 203 that stores the directory information, for example, Directory A and B.
[0103] The <fileId> parameter can be either a full pathname for the file, or some shorthand identifier for the file, and may be referenced as a file_handle. A file pathname is provided to the direct data file interface of Figure 1M in association with certain commands. This allows a fully explicit entry to be created in the file directory to be accessed when an existing file is opened.
[0104] The file pathname syntax may conform to the standard used by the DOS file system. The pathname describes a hierarchy of directories and a file within the lowest level of directory. Path segments may be delimited by "\". A path prefixed by "\" is relative to the root directory. A path not prefixed by "\" is relative to the current directory. A segment of ".." indicates the parent directory of the current directory.
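These pathname rules can be illustrated with a small, self-contained C program; the classification strings are ours, not the patent's:

```c
#include <stdio.h>
#include <string.h>

/* Classify a DOS-style pathname per the rules above: "\"-prefixed
 * paths are relative to the root directory, others to the current
 * directory, and a leading ".." names the parent directory. */
static const char *classify(const char *path)
{
    if (path[0] == '\\')             return "relative to root";
    if (strncmp(path, "..", 2) == 0) return "relative to parent";
    return "relative to current directory";
}

int main(void)
{
    printf("%s\n", classify("\\dir1\\file.dat")); /* root     */
    printf("%s\n", classify("dir2\\file.dat"));   /* current  */
    printf("%s\n", classify("..\\file.dat"));     /* parent   */
    return 0;
}
```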
[0105] File directory 203 records file attribute information and a pointer to a first entry in a file index table 204 defining data groups for a file. File directory 203 and the file index table (may also be referred to as "FIT") 204 are generated by flash device 105. [0106] File index table 204 contains an entry for each valid data group within a file with contiguous
file offset addresses. Entries for a data group include a file offset address and a physical address.
[0107] Every file in a directory points to an entry in FIT 204 (for example, 203B points to 204D). FIT 204 includes an entry for every data group and each entry (for example, 204D) includes an offset value 204A, a block value 204B and a byte value 204C. The offset value 204A shows the offset address within the file corresponding to the start of a data group (for example, 205A). The block value 204B provides the actual physical address of the data block and the byte value 204C points to the byte where the data group begins in flash block 205B.
[0108] Dual Access Mode:
[0109] In one aspect, a mass storage system is provided that is accessible via a file interface and a logical interface for both read and write operations, i.e., as a DFS device and a logical device. Figure 3A, as described below, shows a top-level functional block diagram of flash device 105 where it can be used both as a DFS device and a logical device. Figure 3B, as described below, shows how data and indexing are handled by device 105 operating as a direct data file storage device or logical device.
[0110] Figure 3C shows a top-level block diagram for device 105 used both as a direct data file storage device and logical storage device. Figure 3D shows a
table of various data accessibility rules for device 105 when it is used as a DFS device and as a logical storage device.
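Returning to the FIT 204 entry format described above (offset 204A, block 204B, byte 204C), a minimal lookup sketch might look like the following; the struct layout is an assumption, entries are assumed sorted by file offset, and data-group lengths are ignored for brevity:

```c
#include <stdint.h>

/* One FIT 204 entry per data group, as described for Figure 2. */
struct fit_entry {
    uint32_t file_offset;  /* 204A: offset within the file       */
    uint32_t block;        /* 204B: physical block address       */
    uint32_t byte;         /* 204C: first byte of the data group */
};

/* Resolve a file offset to a physical (block, byte) location. */
int fit_lookup(const struct fit_entry *fit, int n, uint32_t off,
               uint32_t *block, uint32_t *byte)
{
    for (int i = n - 1; i >= 0; i--) {
        if (off >= fit[i].file_offset) {
            *block = fit[i].block;
            *byte  = fit[i].byte + (off - fit[i].file_offset);
            return 0;
        }
    }
    return -1;  /* offset beyond the indexed data groups */
}
```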
[0111] As shown in Figure 3C, when data is written via file interface 300 (shown as (A)), it is immediately accessible for a read operation via file interface 300 (shown as (B)). Similarly, when data is written via logical interface 302 (shown as (C)), it is immediately accessible for a read operation via logical interface 302 (shown as (D)). Also, DOS and FAT sector data is accessible immediately after it is written (shown as (E) and (F)).
[0112] When data is written via file interface 300 (shown as (A)), it is accessible via logical interface 302 (shown as (D)) after a "convert to logical" operation, as described below. Also, DOS and FAT information relating to data written via file interface 300 (shown as (A)) is accessible after the convert to logical operation.
[0113] When data is written via logical interface 302 (shown as (C)), it is accessible via file interface 300 (shown as (B)) after a "convert to file" operation that is described below.
[0114] To operate as a DFS device, file interface 300 and a file storage manager 301 are used by device 105. File storage manager 301 interfaces with file
directory 203 and FIT 204 maintained in flash memory 107-108, as described above. File interface 300 and file storage manager 301 include the plural modules of a direct data file storage device as shown in Figures 1G, 1L and 1M described above and described in more detail in the aforementioned co-pending patent application.
[0115] Data received via file interface 300 is mapped to physical memory (shown as 315, Figure 3B). The data is organized and stored in the order the data is received. File data exists in flash memory 107/108 as variable-length data groups (shown as 304, Figure 3B), where a data group includes contiguous file offset addresses. FIT 204 indexes the location of the data groups as described above and shown as 315A.
[0116] Logical interface 302 and a logical store manager module ("LSM") 303 facilitate access to device 105 via a logical path 302A. Logical interface 302 interfaces with a host system to receive host commands/data. LSM 303 interfaces with the file directory (shown as FDIR) 203, FIT 204, a logical to physical mapping table ("LPT") 308, a logical to file table ("LFT") 309, a DOS index table ("DOSIT") 310 and file storage manager 301, as described below with respect to Figure 3B. File data/logical data 304, DOS sectors 305, FDIR 203, FIT 204, LPT 308, LFT 309 and DOSIT 310 are information structures stored in memory cells 107/108.
[0117] Referring to Figure 3B, for data received via logical interface 302, the host system provides a logical block address with a sector count. The logical data, when received by device 105, may not be associated with any particular file. Device 105 receives the logical data and maps that information to an actual physical memory location, shown as 313. The data is stored in memory cells 107/108 (shown as 304).
[0118] LPT 308 indexes data that is written via logical interface 302 and is not associated with a file at a given time. Figure 4B shows an example of how LPT 308 is built. LPT 308 includes an entry for plural LBA runs, shown as 1-4. Contiguous data received for each LBA run (1-4) is stored in flash blocks 1 and 2 in flash memory 107/108. Every logical run is handled in the order it is received.
[0119] Figure 4C shows an example of the LPT 308 format/fields that are used to index the data received from the host system via logical interface 302. LPT 308 maintains an index of the logical runs as associated with the logical block address for the first sector of the logical run, the length of the logical run, the physical block address and the physical sector address where the data is stored by flash device 105. LPT 308 identifies the physical locations for logical data runs with contiguous addresses.
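A minimal C sketch of an LPT 308 record per Figure 4C and the corresponding LBA-to-physical translation; the field names are assumptions:

```c
#include <stdint.h>

/* One LPT 308 row per logical run, per the Figure 4C description. */
struct lpt_entry {
    uint32_t start_lba;   /* LBA of the run's first sector */
    uint32_t length;      /* run length in sectors         */
    uint32_t phys_block;  /* physical block address        */
    uint32_t phys_sector; /* sector within that block      */
};

/* Translate an LBA to its physical (block, sector) location. */
int lpt_lookup(const struct lpt_entry *lpt, int n, uint32_t lba,
               uint32_t *block, uint32_t *sector)
{
    for (int i = 0; i < n; i++) {
        if (lba >= lpt[i].start_lba &&
            lba <  lpt[i].start_lba + lpt[i].length) {
            *block  = lpt[i].phys_block;
            *sector = lpt[i].phys_sector + (lba - lpt[i].start_lba);
            return 0;
        }
    }
    return -1;  /* not indexed by LPT (may be file data via LFT) */
}
```

Note that a chaotically updated logical block simply produces several lpt_entry rows for the same block, which is why no separate indexing mechanism is needed.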
[0120] It is noteworthy that throughout this specification, logical block address (LBA) is intended to identify a sector, while a logical block includes more than one sector and each sector is identified by a logical block address.
[0121] When the logical address for data received from the host is lower than the corresponding end of the root directory, the data is designated as a directory sector or a FAT sector. This data is then stored by device 105 in a dedicated block 305. DOSIT 310 maintains an index for the stored DOS and FAT sectors.
[0122] Figure 4A shows an example of DOSIT 310, which maintains information for every logical run received from the host system. DOSIT 310 includes the length of each logical run with the associated LBA, the physical block address and the physical sector address where the FAT sector is stored.
[0123] If the logical address is higher than the end of the root directory, then the data is written as a logical update block that is mapped to an equivalent block of logical addresses.
[0124] Logical data runs may be updated sequentially or chaotically, and LPT 308 can handle both situations, as described below; hence separate indexing mechanisms
are not needed. This is possible because an LPT 308 entry has the same format as a FIT 204 entry, except that the LPT 308 entry relates to a logical block address rather than a file offset address. Entries in LPT 308 define logical runs where the logical block addresses for plural sectors are sequential. However, multiple LPT 308 entries may be used to define a logical block. Address runs in a logical block may be out of order (i.e., chaotic) and LPT 308 can index the out-of-order logical block addresses.
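The routing rule of paragraphs [0121]-[0123] above reduces to a simple address comparison; a hedged sketch with invented helper names:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative stubs. */
static void store_dos_sector(uint32_t lba, const void *d)
{ (void)lba; (void)d; }
static void write_logical_update_block(uint32_t lba, const void *d)
{ (void)lba; (void)d; }

/* Addresses below the end of the root directory are directory/FAT
 * ("DOS") sectors; all higher addresses are ordinary file data. */
static bool is_dos_sector(uint32_t lba, uint32_t root_dir_end)
{
    return lba < root_dir_end;
}

void route_logical_sector(uint32_t lba, const void *data,
                          uint32_t root_dir_end)
{
    if (is_dos_sector(lba, root_dir_end))
        store_dos_sector(lba, data);           /* block 305, via DOSIT 310 */
    else
        write_logical_update_block(lba, data); /* indexed by LPT 308 */
}
```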
[0125] Logical update blocks may exist concurrently and hence may need garbage collection operations. During garbage collection, data groups are copied from other flash blocks to complete a block. If the copied data group is indexed by FIT 204, then the FIT 204 entry is modified to reflect the new location. If a copied data group is indexed by LPT 308, then the copy operation itself is a part of a logical block consolidation.
[0126] As stated earlier, data that is indexed by FIT 204 can also be accessed via the logical interface 302. The logical to file table ("LFT") 309 maps an LBA run to a file indexed by FIT 204.
[0127] Figure 4D illustrates how individual logical runs are associated with file offset values to populate LFT 309. In Figure 4D, the logical run (shown as Run 1) is associated with File 1, having an offset value shown as offset 1. Logical run 2 is also associated with File 1. Logical Run 3 is associated with File 2.
[0128] Figure 4E shows an example of the LFT 309 layout and the entries used to associate each logical run with a file offset and a file identifier value (for example, a file handle). For each LBA run in the logical address space, LFT 309 identifies a file identifier and a file offset address.
[0129] LFT 309 also allows a host system to access logical data via file interface 300. LFT 309 indexes logical data during the convert to file operation, described below with respect to Figure 8.
[0130] Overall Device 105 Process Flow:
[0131] Figure 5 shows an overall flow diagram for flash device 105. The process starts in step S500 and in step S502, flash device 105 is initialized, which includes initializing the memory controller 106 and executing boot code so that firmware is loaded into memory 110.
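An LFT 309 record per the Figure 4E description might be sketched as follows; field names are assumed and the offset here is counted in sectors:

```c
#include <stdint.h>

/* One LFT 309 row per LBA run: the run maps to a file identifier
 * (handle) and an offset within that file. */
struct lft_entry {
    uint32_t start_lba;   /* first LBA of the run              */
    uint32_t length;      /* run length in sectors             */
    uint32_t file_handle; /* file identifier                   */
    uint32_t file_offset; /* offset of the run within the file */
};

/* Find the file location backing a given LBA. */
int lft_lookup(const struct lft_entry *lft, int n, uint32_t lba,
               uint32_t *handle, uint32_t *offset)
{
    for (int i = 0; i < n; i++) {
        if (lba >= lft[i].start_lba &&
            lba <  lft[i].start_lba + lft[i].length) {
            *handle = lft[i].file_handle;
            *offset = lft[i].file_offset + (lba - lft[i].start_lba);
            return 0;
        }
    }
    return -1;  /* run not associated with a file */
}
```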
[0132] In step S504, memory system 105 looks for a command from the host system.
[0133] If a host command is pending, then in step S506, memory controller 106 determines if the command is related to the file interface 300. If the command is related to the file interface 300, then in step S508, memory controller 106 interprets the command and, in step S510, executes direct data file storage functions. The aforementioned co-pending application provides a list of various commands that may be related to direct data file storage functions, including Read, Write, Insert, Update, Remove, Delete and Erase commands.
[0134] If the host command is not related to file interface 300 in step S506, then in step S512, memory controller 106 interprets the command as a logical interface command received via logical interface 302.
[0135] In step S514, memory controller 106 determines if the command is for a logical write operation. If the command is for a logical write operation, then in step S516, the logical write operation (described below with respect to Figure 6) is executed.
[0136] If the command is not for a logical write operation, then in step S518, controller 106 determines if the pending command relates to a logical data read operation.
[0137] If the command is for a data read operation, then in step S520, the data read operation is executed. Details of the read operation are provided in the patent application filed herewith, Serial Number 11/196,168, Filed on August 3, 2005, Attorney Docket Number SDK621.00US, entitled "Method And System For Dual Mode Access For Storage Devices".
[0138] If the command is not related to the logical read operation, then in step S522, memory controller 106 determines if the command is related to any other function. Examples of other logical interface functions include reading device parameters ("Identify Drive" command), changing device state ("Idle" and "Standby" commands) and others.
[0139] Returning to step S504, if a command is not pending, then in step S524, memory controller 106 determines if the host interfaces, i.e., the logical and file interfaces, are idle. If they are idle, then in step S526, garbage collection is performed. Garbage collection may also be performed if an Idle command is received at step S504. If the host interfaces are not idle, then the process returns to step S504 and the memory again looks for a pending host command.
[0140] Write Operation:
[0141] Figure 6 shows a flow diagram of process steps for a logical data write operation (S516, Figure 5) in flash device 105 that also functions as a direct data file storage device, in one aspect of the present invention. The process starts in step S600 and in step S602, controller 106 determines if logical data has been received via logical interface 302. If logical
data has not been received, then in step S616, controller 106 determines if a new command has been received. If a new command (for example, write, read or any other command) has been received from the host system, then in step S618, LPT 308 is updated. In step S620, the process returns to step S504 in Figure 5. If a new command is not received in step S616, then the process reverts back to step S602.
[0142] If logical data was received in step S602, then in step S604, controller 106 determines if the LBA is related to a directory or DOS sector. If the logical address of the data is lower than the "end" of the root directory, then it is designated as a directory or FAT sector. If the logical data is related to a DOS sector, then the logical data is stored in a DOS sector 305 in step S606, and in step S608, the DOSIT 310 is updated.
[0143] In step S610, controller 106 determines if an "end of file" condition is present. If the condition is present, then in step S612, the process moves to a "convert to file" operation described below with respect to Figure 7, and in step S614, the process returns to step S504, Figure 5. If in step S610 the end of file condition is not present, then the process reverts back to step S602.
[0144] If in step S604 the logical data is not related to a directory or FAT sector, then in step
S622, controller 106 determines if there is an entry for the LBA in LPT 308. If there is an entry, then in step S628 the operation is identified as an update operation, the block is identified as a modified block, and the process moves to step S630.
[0145] If in step S622 an entry for the LBA is not present in LPT 308, then in step S624, controller 106 determines if an entry is present in LFT 309. If the entry is present, then in step S626, memory controller 106 finds the entry in FIT 204, which provides the physical address associated with the LBA. If an entry is not found in step S624, then the process moves to step S638, described below. [0146] In step S630, controller 106 determines if the block (from S628) is a new update block. If yes, then in step S632, controller 106 determines if the oldest modified block is fully obsolete. If yes, then the oldest modified block is placed in the obsolete block queue for subsequent garbage collection, as described in the co-pending Direct Data File Storage Applications.
[0147] If in step S632 the oldest modified block is not fully obsolete, then in step S636, the block is placed in a common block queue for garbage collection, as described in the aforementioned patent application (reference SDK0569).
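A minimal sketch of the update-block bookkeeping in steps S630 through S636 follows. The queue containers, the limit on open update blocks and the fully_obsolete test are assumptions for illustration; the actual policies are described in the co-pending Direct Data File Storage Applications.

```python
from collections import deque

MAX_UPDATE_BLOCKS = 2           # assumed limit on concurrently open update blocks
update_blocks: deque = deque()  # modified blocks, oldest on the left
obsolete_queue: list = []       # fully obsolete blocks awaiting erase (S634)
common_queue: list = []         # partially valid blocks for common-block GC (S636)

def note_new_update_block(block_id: int, fully_obsolete) -> None:
    """On a new update block (S630), retire the oldest one (S632-S636)."""
    update_blocks.append(block_id)
    if len(update_blocks) > MAX_UPDATE_BLOCKS:
        oldest = update_blocks.popleft()
        if fully_obsolete(oldest):          # S632: every sector superseded?
            obsolete_queue.append(oldest)   # S634
        else:
            common_queue.append(oldest)     # S636

for blk in (10, 11, 12):
    note_new_update_block(blk, fully_obsolete=lambda b: b == 10)
print(obsolete_queue, common_queue)  # [10] []
```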
[0148] In step S638, controller 106 determines if the address for the LBA run is contiguous. If yes, then the data is stored in step S642. If the address is not contiguous, then LPT 308 is updated in step S640 and the data is stored in step S642. The process then reverts back to step S602. [0149] Convert to File Process Flow: [0150] As logical data is received via logical interface 302, LPT 308 entries are created. As stated earlier, when a host sends data via logical interface 302, the data is not associated with a file. After one or more logical data runs, entries in LPT 308 are indexed so that the logical data is accessible via file interface 300. This occurs during the convert to file operation. The convert to file operation 312, as shown in Figure 3A and described below with respect to Figure 7, converts logical-to-physical indexing information in LPT 308 to file directory 203, FIT 204 and LFT 309, so that logical data can be indexed as a file and becomes accessible via file interface 300.
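The index structures involved in this conversion can be sketched as follows. The field layouts and the Python containers are assumptions; only the roles of the tables (LPT 308, FIT 204, LFT 309 and file directory 203) come from the text, and the physical addresses are placeholder values.

```python
from dataclasses import dataclass

@dataclass
class LPTEntry:            # LPT 308: logical-to-physical index for raw runs
    lba: int
    length: int            # in sectors
    physical_addr: int

lpt: list = [LPTEntry(100, 200, 0x5000), LPTEntry(400, 200, 0x9000)]
fit: list = []             # FIT 204: (file_id, file offset, physical address)
lft: dict = {}             # LFT 309: start LBA -> (file_id, file offset)
file_directory: dict = {}  # file directory 203: file_id -> length in sectors

def convert_to_file(file_id: str) -> None:
    """Re-index LPT runs so the data is reachable via file interface 300."""
    offset = 0
    for run in lpt:
        fit.append((file_id, offset, run.physical_addr))
        lft[run.lba] = (file_id, offset)
        offset += run.length
    lpt.clear()                        # S712: LPT entries are removed
    file_directory[file_id] = offset

convert_to_file("fileA")
print(file_directory)  # {'fileA': 400}
```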
[0151] The convert to file operation is initiated by an "end of file" condition in a sequence received at logical interface 302. The end of file condition is generated by a host system after the host completes writing data. The end of file condition is a characteristic sequence of directory and FAT write
operations. It is noteworthy that the "convert to file" operation may also be initiated by a specific host command at the logical interface.
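As a rough illustration, the trigger could be modeled as below. The exact characteristic sequence is not specified here, so this window-based detector and the token names are assumptions rather than the disclosed detection method.

```python
# Heuristic sketch only: treat a directory write closely followed by a FAT
# write (or an explicit host command) as the end-of-file condition.
def end_of_file_detected(recent_writes: list) -> bool:
    if "CONVERT_TO_FILE_CMD" in recent_writes:   # explicit host command
        return True
    return recent_writes[-2:] == ["DIRECTORY_WRITE", "FAT_WRITE"]

print(end_of_file_detected(["DATA", "DIRECTORY_WRITE", "FAT_WRITE"]))  # True
```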
[0152] Turning in detail to Figure 7, the process starts in step S700. In step S700, controller 106 identifies the logical data associated with new file entries, updated files and deleted files. During step S700, controller 106 determines if the host system has written new data, modified existing data or deleted any information. The content of a directory sector written by a host system is compared to a previous version, which allows controller 106 to identify entries for any new file, any existing file that has been updated and any file that has been deleted. [0153] In step S702, controller 106 identifies FAT entries related to the entries that are identified in step S700. When a host writes a FAT sector, DOSIT 310 maintains a record of the FAT sectors as related to the LBA run. An extension table (DOSIT (ext)) 310A maintains a record of the updated FAT entries. This is shown in Figure 4F, where an original FAT sector entry is stored in DOSIT 310. After the FAT sector is updated, table 310A stores the previous entry value and the updated entry value. DOSIT 310 maintains all the current and updated entries.
[0154] In step S704, the LBA runs for data that has been written, updated or deleted are identified. [0155] In step S706, LPT 308 is scanned to determine if data for the LBA runs already exists. [0156] In step S708, after data is identified as new or modified, entries are created in file directory 203, FIT 204 and LFT 309.
[0157] In step S710, garbage collection queues are updated. Garbage collection needs are minimized if the host has not repeated data. Garbage collection is performed if data for an LBA run has been written more than once. Garbage collection also deletes logical data that is not associated with any file. Garbage collection is described in the co-pending Direct Data File Storage Applications.
[0158] In step S712, entries for data runs identified in step S704 are removed from LPT 308. [0159] Convert to Logical Process: [0160] In one aspect of the present invention, data written via file interface 300 is accessible via logical interface 302. The convert to logical operation (shown as 311, Figure 3B) is performed to make that data accessible. The "convert to logical" operation creates FAT and directory entries in DOS sectors 305 and in LFT 309, so that data that is written via file interface 300 can be accessed via logical interface
302. This operation may be initiated after a "Close" command is received via file interface 300. The Close command signifies that a file write operation via file interface 300 is complete. This operation may also be initiated by a specific command (for example, "convert to logical") from the host. The specific command allows a host that has written via file interface 300 to control file access via logical interface 302. [0161] Figure 8 shows a process flow diagram for performing the convert to logical operation. In step S800, the convert to logical operation begins. The operation starts after a "close" command or a specific command to start the convert to logical operation is received by controller 106. [0162] In step S802, FIT 204 is scanned to determine the length of the file that will be made accessible via logical interface 302. In step S804, DOS sectors 305 are scanned to find sufficient logical address space to allocate to a particular file (or batch of files) for which the convert to logical operation is being performed.
[0163] In step S806, an LBA run is allocated (or associated) for the file. In step S808, LFT 309 entries are written. The entries associate an LBA run and LBA length with a file identifier and a file offset
value. The file identifier and offset information is obtained from FIT 204.
[0164] In step S810, controller 106 defines cluster chains for the file. [0165] In step S812, the FAT entries in DOS sectors 305 are updated and in step S814, the file directory entries for the file are read. In step S816, directory entries are written in DOS sectors 305. In step S818, the logical write pointer in the FAT is incremented so that future convert to logical operations can be tracked and accommodated.
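A condensed sketch of the Figure 8 sequence (S800 through S818) follows, assuming a fixed one-sector cluster and simplified stand-ins for FIT 204, LFT 309 and the DOS sector structures; none of these container shapes are the on-flash formats.

```python
fit = {"fileA": 300}                 # FIT 204: file -> length in sectors (assumed)
fat: dict = {}                       # cluster -> next cluster (0 = end of chain)
lft: dict = {}                       # LFT 309: start LBA -> (file, file offset)
directory: dict = {}                 # DOS sector 305 directory: file -> start LBA
logical_write_pointer = 1000         # next free LBA; S818 advances this

def convert_to_logical(file_id: str) -> None:
    global logical_write_pointer
    length = fit[file_id]                         # S802: file length from FIT
    start = logical_write_pointer                 # S804/S806: allocate LBA run
    lft[start] = (file_id, 0)                     # S808: LFT entry, offset 0
    for c in range(start, start + length - 1):    # S810: build cluster chain
        fat[c] = c + 1
    fat[start + length - 1] = 0                   # S812: terminate the chain
    directory[file_id] = start                    # S814/S816: directory entry
    logical_write_pointer += length               # S818: advance write pointer

convert_to_logical("fileA")
print(directory)  # {'fileA': 1000}
```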
[0166] It is noteworthy that controller 106 performs the foregoing convert to logical operation after a file is written via file interface 300. [0167] In one aspect of the present invention, data written via a file interface is accessible via a logical interface. Hence, a flash device can operate both with a legacy host that does not support a file interface and with a host system that supports a file interface.
[0168] In another aspect of the present invention, data written via a logical interface is accessible via a file interface. Hence, the flash device can be used easily with legacy host systems and advanced host systems that support the direct data file storage format.
[0169] Real Time Dual Interface Access: [0170] In one aspect of the present invention, a flash device is provided that can be accessed via a logical interface or a file interface in real time, regardless of which interface is used to write data to the flash device. The term real-time in this context means that there is more than one FAT/directory update, instead of a single FAT/directory update at the end of a file write operation. In one aspect of the present invention, there are one or more FAT/directory updates.
[0171] If a host system writes data via file interface 301, then controller 106 allocates available LBA space and updates FAT entries in memory cells 107/108. The FAT update may be performed substantially in real time or after a file write operation. This allows data written via file interface 301 to be immediately available via logical interface 302. [0172] The LBA allocation and FAT update is performed by an embedded file system (907, Figure 9A) with an output that is specific to the file storage back-end system. In another aspect, the embedded file system output is similar to the logical interface used by a host system. Hence, the embedded file system can be a software module that is similar to the host's LBA based file system.
[0173] When file data is written via logical interface 302, controller 106 identifies the data run as a file. This allows the data run to be accessible via file interface 301 even if the file write operation (i.e. the FAT and directory write operations) has not been completed by the file system. [0174] Any data written via the file interface or via the logical interface (whether identified as a file by the host or not) is uniquely identified by an LBA and a unique file identifier. This allows data to be accessible from both interfaces.
[0175] Figures 9A and 9B provide block diagrams of yet other aspects of the present invention. A file dual index table ("FDIT") 908 is maintained in flash memory (107/108). FDIT 908 maintains an entry for every file name with an offset value and a corresponding LBA (allocated by memory controller 106). This allows access to files written via one or both of file interface 301 and logical interface 302, as described below.
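One plausible shape for an FDIT record, using the fields later enumerated for Figure 10A, is sketched below; the Python types and the list container are assumptions, not the stored format.

```python
from dataclasses import dataclass

@dataclass
class FDITEntry:
    file_name: str          # host or internal file name
    file_offset: int        # offset of this run within the file
    logical_sector: int     # logical sector address
    lba: int                # logical block address assigned by controller 106
    run_number: int         # logical data run number

fdit: list = []
# A file written via file interface 301 gets LBAs assigned by the controller:
fdit.append(FDITEntry("HFa", 0, 0, 100, 0))
# A run written via logical interface 302 later gets the host file name:
fdit.append(FDITEntry("Host File A", 0, 100, 100, 0))
```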
[0176] Turning in detail to Figure 9A, host 900 uses a direct data file interface 903 and host 901 uses a standard file system 904 to write data to flash 105 via file interface 301 and logical interface 302, respectively.
[0177] In host 900, direct data file interface 903 interfaces with application 902 and sends file access commands 906 to flash 105. The file access commands are received by file interface 301 and processed by controller 106. Files from host system 900 are shown as HFa, HFb...HFx.
[0178] To write data in flash device 105 via file interface 301, host 900 sends a file name and an offset value (906) to flash 105. Data is then stored by the file storage back-end system 910 as variable data groups (shown as HFa, HFb...HFx). When data is received via file interface 301, memory controller 106 places a call to the file access to logical converter 907 (also referred to as "converter 907") to register the received file (for example, HFa) with FDIT 908.
[0179] Memory controller 106 analyzes the FAT/directory area (shown as 305, Figure 3A) and allocates logical space to the file received via file interface 301. Converter 907 then updates FDIT 908 so that the file written via file interface 301 can also be accessed via logical interface 302.
[0180] Converter 907, after updating FDIT 908, generates file access commands (913, Figure 9A) that allow access to the directory and FAT area. Alternatively, converter 907A (shown in Figure 9B) generates a logical access command 913A that is then sent to converter 909.
Converter 909 takes the logical commands 913A and converts them into file access commands 915 that are sent to the file storage back-end system 910. One advantage of the second approach is that file systems 904 and 907A are identical and hence easier to implement.
[0181] In host 901, file system 904 interfaces with application 902. File system 904 receives file access commands (902A) from application 902 and converts the file access commands 902A into logical access commands 905. The logical access commands 905 are received by logical interface 302 and processed by memory controller 106. An example is shown where Host File A is received by file system 904, which sends logical fragments (shown as LF0, LF1...LFx) to logical interface 302; the fragments are then saved as Host File A in memory cells 107/108, as described below in detail. [0182] To write data via logical interface 302, application 902 sends file access commands 902A to file system 904. File system 904 analyzes FAT information to see if free logical sectors are available and can be allocated to a particular file. Host 901 typically only knows its logical address space and the logical addresses that it has allocated to its various files. If free sectors/clusters are available, then logical space is allocated. Host 901 then sends logical
fragments (shown as LF0, LF1...LFx) (logical access commands 905) to flash 105.
[0183] After flash 105 receives logical command 905, memory controller 106 updates directory and FAT information. The updated FAT and directory information 912 is sent to converter 907. In another aspect of the present invention, converter 907 does not need to be updated every time; it can instead access the FAT/directory information stored in non-volatile memory 107/108 directly when a conversion is needed.
[0184] Logical access command 911 is also sent to converter 909, which generates file access command 915 to store data in memory cells 107/108. [0185] Each logical data run is assigned an internal file name (i.e. internal to the flash system 105) by memory controller 106 (using converter 909 interfacing with FDIT 908). In one aspect, more than one internal file name is used to identify and store the logical data runs. The internal file names can be based on various factors, for example, one or both of the StartLBA_Length and the LBA. The StartLBA_Length is based on the length of a logical data run and the start LBA, while the second file identifier ("ID") is based on the actual LBA.
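The two identifier forms can be illustrated as follows; the exact string formats are assumptions chosen to match the Figure 10B example.

```python
def internal_names(start_lba: int, length: int) -> tuple:
    """Return (StartLBA_Length name, LBA-based ID) for a logical data run."""
    return f"{start_lba}_{length}", f"{start_lba},{length}"

print(internal_names(100, 200))   # ('100_200', '100,200'), as in Figure 10B
```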
[0186] As host 901 continues to send logical data runs, memory controller 106 keeps saving the logical data runs as individual internal files. The internal files are all merged into a single file when host 901 sends a host file name to flash 105. Memory controller 106 associates the plural data runs with the host file name. Memory controller 106 retains the second file IDs, i.e., the LBAs for the plural data runs. [0187] Once the host file name is associated with the various data runs, converter 909 updates FDIT 908 so that the LBA and logical sector address are associated with the host file name and file offset value. File access command 915 is sent to the file storage back-end system 910, which stores the data received via logical interface 302 in memory cells 107/108 (see Figure 3A). This allows a file written via the logical interface to be accessible via file interface 301.
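A toy version of the merge step, using the run values from the Figure 10B example below; the dict-based table is an assumption standing in for FDIT 908.

```python
internal_files = {            # internal name -> list of (start LBA, length) runs
    "100_200": [(100, 200)],
    "400_200": [(400, 200)],
}

def merge_into_host_file(host_name: str, names: list) -> dict:
    """Collapse per-run internal files into one entry keyed by the host name;
    the LBA-based run IDs (the runs themselves) are retained, per the text."""
    runs = [r for n in names for r in internal_files.pop(n)]
    return {host_name: runs}

print(merge_into_host_file("A", ["100_200", "400_200"]))
# {'A': [(100, 200), (400, 200)]} -- matches the Figure 10B example
```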
[0188] Figure 10B illustrates the file write process via logical interface 302 and the use of the internal files described above. Host 901's file space 1000 is shown as multiples of 512 bytes (the minimum sector size), LBA space is shown as 1002 and data as stored is shown as 1004. It is noteworthy that the present invention is not limited to any particular sector size or data unit size.
[0189] When the host writes a new file before the FAT/directory update, file system 904 allocates LBAs and generates logical commands (905) to write the file data (shown as 1006 and 1008 (LF0...LFx)). File storage system 105 then organizes the logical fragments into internal files. The internal files are shown as file 0, file 1 and so forth (1010). A dual file ID table 1012 (same as FDIT 908) is maintained. The example in Figure 10B shows the StartLBA_Length (100_200) and the LBA ID (100, 200) as the file identifiers.
[0190] After all the logical fragments are stored, file system 904 updates FAT and directory information through logical commands (shown as Host File A (1014)). Now the host file (Host File A) is associated with the logical fragments (shown as 1016).
[0191] File storage system 105 then updates the FAT/directory files, merges all the internal files and associates them with the host file ("A") (shown as 1018). [0192] The updated dual file ID table (1020) saves the host file name "A" with the associated LBA IDs (in this example, 100, 200 and 400, 200).
[0193] In order to update an existing host file (for example, host file "A"), host file system 904 identifies the LBAs for the fragment (shown as 1022) and generates the logical commands (shown as 1024). In this example, the LBA run is 400,100 for the update process.
[0194] File storage system 105 then identifies the fragment's offset in the existing stored file "A" (200 * sector size) and then writes the new fragment (shown as 1026). The file identifiers stay the same (1028), but the physical location of the file data, especially the new fragment, may change. [0195] It is noteworthy that the dual file ID tables shown in Figure 10B are a part of FDIT 908.
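The offset identification in [0194] can be sketched as a walk over the file's ordered LBA runs; the 512-byte sector size is the value assumed throughout the example, and the helper name is hypothetical.

```python
SECTOR = 512   # assumed minimum sector size from the Figure 10B example

def fragment_offset(runs: list, lba: int) -> int:
    """runs: ordered (start_lba, length) pairs; returns byte offset in file."""
    offset_sectors = 0
    for start, length in runs:
        if start <= lba < start + length:
            return (offset_sectors + (lba - start)) * SECTOR
        offset_sectors += length
    raise ValueError("LBA not part of this file")

# File "A" occupies runs (100,200) and (400,200); an update at LBA 400
# therefore lands at offset 200 * sector size, as stated in the text.
print(fragment_offset([(100, 200), (400, 200)], 400))  # 102400 = 200 * 512
```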
[0196] FDIT 908 is stored in flash device 105 and is updated/maintained in real time (i.e. more than once, instead of being updated only after a file write operation) by converter 907/907A and converter 909. This keeps file interface 301 and logical interface 302 synchronized.
[0197] The FDIT 908 fields are shown in Figure 10A and include the file name, file offset, logical sector address, LBA and logical data run number. For data written via file interface 301, LBAs are assigned and stored in FDIT 908. Data written via the logical interface is assigned the host file name and stored with an offset value. Hence, data written via either interface can be accessed. [0198] Logical Write Process Flow:
[0199] Figure 11 shows the overall process flow diagram for a write operation via logical interface 302 with respect to the system disclosed in Figures 9A and 9B. The process starts in step S1100, where host application 902 sends file access commands 902A to file system 904 to write data.
[0200] In step S1102, file system 904 analyzes FAT/directory information for free logical sector/cluster space. In step S1104, file system 904 allocates free clusters/logical sectors and generates logical commands to write the file data. In step S1106, the file system sends logical fragments to flash 105 (1008, Figure 10B). The logical fragments are received by flash 105 via logical interface 302. [0201] In step S1108, memory controller 106 updates the FAT/directory and organizes the logical fragments into internal files. An example of how the internal files are named and stored is provided above with respect to Figure 10B. [0202] In step S1110, the internal files created during step S1108 are merged into a single internal file if a host file name (for example, A, 1014, Figure 10B) is available after host 901 writes to the FAT area, creating a new chain of clusters for a new host file. If the host file itself is logically fragmented, it can
be de-fragmented during initialization or when it is being accessed via file interface 301.
[0203] Host 901 does not always create a new chain of clusters for a host file and instead performs the following:
[0204] If host 901 writes to the FAT area so that it allocates clusters that were previously marked as unused, then the corresponding range of LBAs is not used for any write operations via file interface 301. [0205] If host 901 writes to the FAT area and deletes some clusters that were previously marked as used, then the corresponding range of LBAs is made available for a write operation. The internal files associated with those LBAs can be deleted during garbage collection.
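A simplified illustration of how such FAT-area writes could be interpreted follows. The diff_fat helper is hypothetical, and the FAT encoding here (0 = unused, nonzero = used) is a deliberate simplification of real FAT16/FAT32 encodings.

```python
def diff_fat(old: dict, new: dict) -> tuple:
    """Return (newly_allocated, newly_deleted) cluster numbers."""
    allocated = [c for c in new if new[c] and not old.get(c)]
    deleted = [c for c in old if old[c] and not new.get(c, 0)]
    return allocated, deleted

alloc, freed = diff_fat({5: 6, 6: 0}, {5: 6, 6: 0, 7: 0xFFFF})
# Cluster 7 was newly allocated: its LBA range is reserved against
# file-interface writes. Freed clusters would instead be released and
# their internal files queued for garbage collection.
print(alloc, freed)   # [7] []
```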
[0206] If host 901 is updating an existing file, then controller 106 identifies the file offset in the existing file and, in step S1114, the new fragments are stored. The file ID stays the same, but the physical location of the file data may change, especially for the new data fragment.
[0207] In step S1116, FDIT 908 is updated so that the host file's LBA is associated with a file name and offset and hence is accessible via file interface 301. [0208] File Interface Write:
[0209] Figure 12 shows a process flow diagram for writing via file interface 301 and using FDIT 908, converter 907 and converter 909 so that the file can be accessed via logical interface 302. [0210] Turning in detail to Figure 12, in step
S1200, host 900 issues a write command via direct data file interface 903. The write command is a file access command (906) and not a logical command. [0211] In step S1202, flash 105 manages the actual write operation in memory cells 107/108. Data is stored as variable length data groups (304, Figure 3B). While data is being written, or after the data is written in flash memory, in step S1204, controller 106 triggers a call to converter 907 (shown as 912A). [0212] In step S1206, converter 907/907A analyzes the FAT/directory area. Converter 907/907A can do this via logical commands 913A or via file access commands
913.
[0213] Based on the analysis, in step S1208, converter 907 allocates logical space for the file that was written via file interface 301.
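Step S1208 can be sketched as a scan of the FAT for a sufficiently long run of free clusters; the cluster size and the simple first-fit policy are assumptions for illustration, not the disclosed allocation strategy.

```python
SECTORS_PER_CLUSTER = 8   # assumed cluster size

def allocate_logical_space(fat: list, sectors_needed: int) -> range:
    """Return a range of free cluster numbers; 0 marks a free FAT entry."""
    need = -(-sectors_needed // SECTORS_PER_CLUSTER)   # ceiling division
    run_start, run_len = None, 0
    for cluster, entry in enumerate(fat):
        if entry == 0:
            run_start = cluster if run_len == 0 else run_start
            run_len += 1
            if run_len == need:
                return range(run_start, run_start + need)
        else:
            run_len = 0
    raise MemoryError("no contiguous free clusters")

fat_table = [1, 0, 0, 0, 0, 9, 0, 0]            # clusters 1-4 and 6-7 free
print(allocate_logical_space(fat_table, 24))    # range(1, 4): three clusters
```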
[0214] In step S1210, FAT and directory information is updated, either through file access commands or logical commands. The file is registered with FDIT 908 so that the allocated LBAs are associated with the file
name and offset. If the file access commands are used (Figure 9A), then the process ends after step S1210. [0215] If converter 907A used logical access commands 913A, then converter 909 converts the "logical write" commands to file access commands 915.
[0216] In another aspect of the present invention, data written via a logical interface is accessible via a file interface. Hence, the flash device can be used easily with legacy host systems and advanced host systems that support the direct data file storage format.
[0217] Although the present invention has been described with reference to specific embodiments, these embodiments are illustrative only and not limiting. Many other applications and embodiments of the present invention will be apparent in light of this disclosure and the following claims.
Claims
1. A mass storage memory system, comprising: re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive files of data individually via a first interface, identified by unique identifiers, and received file data is stored in one or more memory blocks and indexed based on the unique identifiers; wherein the controller assigns a plurality of logical block addresses to the received file data and updates file allocation table ("FAT") entries that are stored in the blocks of memory cells such that the file data received via the first interface is accessible via a second interface.
2. The memory system of Claim 1, wherein logical block address assignment and FAT update is performed by an embedded file system with a file output interface that is specific to a file storage back-end system.
3. The memory system of Claim 1, wherein logical block address allocation and FAT update is performed by an embedded file system that is similar to a host system's logical block address based file system.
4. The memory system of Claim 2, wherein the embedded file system is a file to logical converter.
5. The memory system of Claim 1, wherein the first interface is a file based interface.
6. The memory system of Claim 1, wherein the second interface is a logical interface.
7. The memory system of Claim 1, wherein data is stored as files in the plurality of non-volatile memory cells.
8. A mass storage memory system, comprising: re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive data identified by a plurality of logical addresses via a first interface which causes the data to be stored in one or more memory cells as a file and is accessible via a second interface even if a file name for the data is not provided by a host system.
9. The memory system of Claim 8, wherein the controller assigns internal file names to the data and merges the internal file names to a single file name based on a file name that is provided by the host system that sends the data via the first interface.
10. The memory system of Claim 9, wherein the internal file names are based on logical block addresses for the data .
11. The memory system of Claim 8, wherein the first interface is a logical interface.
12. The memory system of Claim 8, wherein the second interface is a file based interface.
13. A mass storage memory system, comprising: re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive data identified by a plurality of logical addresses via a first interface which causes the data to be stored in one or more memory cells as a file and is accessible via a second interface even if a file name for the data is not provided by a host system, wherein the controller assigns internal file names to the data and merges the internal file names to a single file name after a file name is provided by the host system that sends the data via the first interface.
14. The memory system of Claim 13, wherein the first interface is a logical interface.
15. The memory system of Claim 13, wherein the second interface is a file based interface.
16. The memory system of Claim 13, wherein the internal file names are based on logical block addresses for the data.
17. The memory system of Claim 13, wherein data is stored as files in the plurality of non-volatile memory cells.
18. A mass storage memory system, comprising: re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive data via one or both of a first interface and a second interface, and data received via the first interface and the second interface is accessible via the first interface and the second interface even if a file name for the data is not provided by a host system or before a write operation is complete.
19. The memory device of Claim 18, wherein data received via the first interface is file data identified by unique identifiers and is stored in one or more blocks of memory cells.
20. The memory system of Claim 19, wherein the controller assigns a plurality of logical block addresses to the received file data and updates file allocation table ("FAT") entries that are stored in blocks of memory cells such that the file data received via the first interface is accessible via the second interface.
21. The memory system of Claim 18, wherein the data received via the second interface is identified by a plurality of logical addresses and the controller causes the data to be stored in one or more memory cells as a file.
22. The memory system of Claim 20, wherein logical block address assignment and FAT update is performed by an embedded file system with a file output interface specific to a file storage back-end system.
23. The memory system of Claim 20, wherein logical block address allocation and FAT update is performed by an embedded file system that is similar to a host system's logical block address based file system.
24. The memory system of Claim 23, wherein the embedded file system is a file to logical converter.
25. The memory system of Claim 18, wherein the controller assigns internal file names to the data received via the second interface and merges the internal file names to a single file name based on a file name provided by the host system that sends the data via the second interface.
26. The memory system of Claim 25, wherein the internal file names are based on logical block addresses for the data received via the second interface.
27. The memory system of Claim 18, wherein the data is stored as files in the plurality of non-volatile memory cells.
28. The memory system of Claim 18, wherein the first interface is a file based interface.
29. The memory system of Claim 18, wherein the second interface is a logical interface.
30. A mass storage memory system, comprising: re-programmable non-volatile memory cells arranged in a plurality of blocks of memory cells; and a controller that is adapted to receive files of data individually via a first interface, identified by unique identifiers, and the received file data is stored in one or more blocks of memory cells and the controller assigns a plurality of logical block addresses to the received file data and updates file allocation table ("FAT") entries that are stored in the blocks of memory cells, wherein the FAT update and logical block address assignment is performed substantially in real time, and the file data received via the first interface is accessible via a second interface.
31. The memory system of Claim 30, wherein the FAT update and logical block address allocation is performed after a write operation is completed via the first interface, instead of being performed substantially in real time.
32. The memory system of Claim 30, wherein logical block address allocation and FAT update is performed by an embedded file system with a file output interface that is specific to a file storage back-end system.
33. The memory system of Claim 30, wherein logical block address allocation and FAT update is performed by an embedded file system that is similar to a host system's logical block address based file system.
34. The memory system of Claim 33, wherein the embedded file system is a file to logical converter.
35. The memory system of Claim 30, wherein the controller is adapted to receive data identified by a plurality of logical addresses via the second interface which causes the data to be stored in one or more memory cells as a file and is accessible via the first interface even if a file name is not provided by the host system.
36. The memory system of Claim 35, wherein the controller assigns internal file names to the data received via the second interface and merges the internal file names to a single file name based on a file name that is provided by a host system that sends the data.
37. The memory system of Claim 35, wherein indexing information for data written via one or both of the first interface and the second interface is stored in the memory cells and is continuously updated, keeping the first interface and the second interface synchronized.
38. The memory system of Claim 30, wherein any data written via the first interface or the second interface is uniquely identified by a logical block address and a file identifier; and is accessible via the first interface or the second interface.
39. The memory system of Claim 30, wherein a directory and FAT entries are created by a file system maintained by the memory system for a file written via the first interface .
40. The memory system of Claim 30, wherein the file system maintained by the memory system is similar to a file system maintained by a host system that sends data via the second interface.
41. The memory system of Claim 30, wherein the first interface is a file based interface.
42. The memory system of Claim 30, wherein the second interface is a logical interface.
43. The memory system of Claim 30, wherein data is stored as files in the plurality of non-volatile memory cells.
44. A method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells, comprising: receiving individual files of data identified by unique file identifiers, wherein the mass storage system receives the individual files of data via a first interface and stores the received files of data indexed by the unique file identifiers; allocating a plurality of logical block addresses to the received file data; and updating file allocation table ("FAT") entries in the plurality of memory cells, so that the received file data is accessible via a second interface.
45. The method of Claim 44, wherein the first interface is a file interface and the second interface is a logical interface and the data is stored as files in the non-volatile memory cells.
46. The method of Claim 44, wherein the mass storage system performs the allocation by maintaining an index table in the memory cells where the file data is registered, and the index table is updated when the file data is written, modified and deleted.
47. The method of Claim 44, wherein the FAT entries update and logical block address allocation is performed in real time.
48. The method of Claim 44, wherein the FAT entries update and logical block address allocation is performed in real time.
49. The method of Claim 44, wherein the logical block address allocation and FAT update is performed by an embedded file system with a file output interface that is specific to a file storage back-end system.
50. The method of Claim 44, wherein logical block address allocation and FAT update is performed by an embedded file system that is similar to a host system's logical block address based file system.
51. A method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells, comprising: receiving data identified by a plurality of logical addresses from the host system via a first interface, wherein the mass storage system receives the data; and identifying the data with file identifiers, so that the data can be accessible via a second interface, even if a file name for the data is not provided by the host system.
52. The method of Claim 51, wherein the data is stored as internal files with unique file names.
53. The method of Claim 52, wherein the internal files with unique file names are merged into a single file after a host file name for the data is received.
54. The method of Claim 52, wherein a controller assigns the unique file names.
55. The method of Claim 52, wherein the unique file name is based on a logical block address for the data.
56. The method of Claim 51, wherein data is stored in non-volatile memory cells as files.
57. The method of Claim 51, wherein the first interface is a logical interface.
58. The method of Claim 51, wherein the second interface is a file interface.
59. A method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells, comprising: receiving data identified by a plurality of logical addresses from the host system via a first interface, wherein the mass storage system receives the data; identifying the data with file identifiers, so that the data can be accessible via a second interface even if a file name is not provided by the host system; storing the data as internal files having unique file names; and merging the internal files with unique file names into a single file after a host file name for the data is received.
60. The method of Claim 59, wherein the data are registered with an index table that is maintained in the memory cells.
61. The method of Claim 59, wherein a controller allocates the unique file names.
62. The method of Claim 59, wherein the unique file names are based on a logical block address for the data .
63. The method of Claim 59, wherein the data is stored in non-volatile memory cells as files.
64. The method of Claim 59, wherein the first interface is a logical interface.
65. The method of Claim 59, wherein the second interface is a file interface.
66. A method for transferring data between a host system and a re-programmable non-volatile mass storage system having memory cells organized into blocks of memory cells, comprising: receiving data via one or both of a first interface and a second interface; and making data accessible via the first interface and the second interface, even if a file name is not provided by a host system, or before a write operation is complete.
67. The method of Claim 66, wherein individual files of data identified by unique file identifiers are received by the mass storage system via the first interface and the mass storage system allocates a plurality of logical block addresses to the received file data; and updates file allocation table ("FAT") entries in the plurality of memory cells, so that the received file data can be accessible via the second interface.
68. The method of Claim 66, wherein the data is stored in a plurality of non-volatile memory cells as files.
69. The method of Claim 67, wherein the FAT entries update and logical block address allocation is performed in real time; or after a write operation.
70. The method of Claim 67, wherein the logical block address allocation and FAT update is performed by an embedded file system with a file output interface that is specific to a file storage back-end system.
71. The method of Claim 67, wherein logical block address allocation and FAT update is performed by an embedded file system that is similar to a host system's logical block address based file system.
72. The method of Claim 66, wherein data received via the second interface is identified by a plurality of logical addresses; and the mass storage system identifies the data received via the second interface with file identifiers, so that the data can be accessible via the first interface.
73. The method of Claim 72, wherein the data identified by a plurality of logical addresses are registered with an index table that is maintained in the memory cells and stored as internal files having unique file names.
74. The method of Claim 73, wherein the internal files with unique file names are merged into a single file after a host file name for the data is received.
75. The method of Claim 73, wherein the index table is updated such that data identified by a plurality of logical addresses are associated with unique file identifiers.
76. The method of Claim 73, wherein the unique file names are based on a logical block address for the data.
77. The method of Claim 66, wherein the first interface is a file interface.
78. The method of Claim 66, wherein the second interface is a logical interface.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/313,567 | 2005-12-21 | ||
US11/313,633 | 2005-12-21 | ||
US11/313,567 US7747837B2 (en) | 2005-12-21 | 2005-12-21 | Method and system for accessing non-volatile storage devices |
US11/313,633 US7769978B2 (en) | 2005-12-21 | 2005-12-21 | Method and system for accessing non-volatile storage devices |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2007079358A2 true WO2007079358A2 (en) | 2007-07-12 |
WO2007079358A3 WO2007079358A3 (en) | 2008-01-03 |
Family
ID=38228926
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2006/062340 WO2007079358A2 (en) | 2005-12-21 | 2006-12-19 | Method and system for accessing non-volatile storage devices |
Country Status (2)
Country | Link |
---|---|
TW (1) | TWI339338B (en) |
WO (1) | WO2007079358A2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9237186B2 (en) | 2009-07-15 | 2016-01-12 | Aten International Co., Ltd. | Virtual media with folder-mount function and graphical user interface for mounting one or more files or folders |
US9235583B2 (en) | 2009-07-15 | 2016-01-12 | Aten International Co., Ltd. | Virtual media with folder-mount function |
US8615594B2 (en) * | 2009-07-15 | 2013-12-24 | Aten International Co., Ltd. | Virtual media with folder-mount function |
CN105745627B (en) * | 2013-08-14 | 2019-03-15 | 西部数据技术公司 | Address translation for non-volatile memory storage devices |
US10763752B1 (en) | 2019-06-25 | 2020-09-01 | Chengdu Monolithic Power Systems Co., Ltd. | Zero-voltage-switching flyback converter |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6424975B1 (en) * | 2000-01-07 | 2002-07-23 | Trg Products, Inc. | FAT file system in palm OS computer |
KR100453053B1 (en) * | 2002-06-10 | 2004-10-15 | 삼성전자주식회사 | Flash memory file system |
2006
- 2006-12-19 WO PCT/US2006/062340 patent/WO2007079358A2/en active Application Filing
- 2006-12-20 TW TW095148079A patent/TWI339338B/en not_active IP Right Cessation
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9104315B2 (en) | 2005-02-04 | 2015-08-11 | Sandisk Technologies Inc. | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US10055147B2 (en) | 2005-02-04 | 2018-08-21 | Sandisk Technologies Llc | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US10126959B2 (en) | 2005-02-04 | 2018-11-13 | Sandisk Technologies Llc | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US7627733B2 (en) | 2005-08-03 | 2009-12-01 | Sandisk Corporation | Method and system for dual mode access for storage devices |
US7747837B2 (en) | 2005-12-21 | 2010-06-29 | Sandisk Corporation | Method and system for accessing non-volatile storage devices |
US7769978B2 (en) | 2005-12-21 | 2010-08-03 | Sandisk Corporation | Method and system for accessing non-volatile storage devices |
US7793068B2 (en) | 2005-12-21 | 2010-09-07 | Sandisk Corporation | Dual mode access for non-volatile storage devices |
US8209516B2 (en) | 2005-12-21 | 2012-06-26 | Sandisk Technologies Inc. | Method and system for dual mode access for storage devices |
Also Published As
Publication number | Publication date |
---|---|
TWI339338B (en) | 2011-03-21 |
TW200732918A (en) | 2007-09-01 |
WO2007079358A3 (en) | 2008-01-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7747837B2 (en) | Method and system for accessing non-volatile storage devices | |
US7793068B2 (en) | Dual mode access for non-volatile storage devices | |
US7769978B2 (en) | Method and system for accessing non-volatile storage devices | |
KR101369996B1 (en) | Method and system for dual mode access for storage devices | |
US7877540B2 (en) | Logically-addressed file storage methods | |
CN101233480B (en) | Reprogrammable non-volatile memory systems with indexing of directly stored data files | |
US7949845B2 (en) | Indexing of file data in reprogrammable non-volatile memories that directly store data files | |
US8713283B2 (en) | Method of interfacing a host operating through a logical address space with a direct file storage medium | |
US20150363131A1 (en) | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage | |
WO2007079358A2 (en) | Method and system for accessing non-volatile storage devices | |
US20070033374A1 (en) | Reprogrammable Non-Volatile Memory Systems With Indexing of Directly Stored Data Files | |
US20090164745A1 (en) | System and Method for Controlling an Amount of Unprogrammed Capacity in Memory Blocks of a Mass Storage System | |
US20080307156A1 (en) | System For Interfacing A Host Operating Through A Logical Address Space With A Direct File Storage Medium | |
KR101464199B1 (en) | Method for using direct data file system with continuous logical address space interface | |
WO2007070763A2 (en) | Logically-addressed file storage | |
US20070136553A1 (en) | Logically-addressed file storage systems | |
KR20090108695A (en) | How to manage the LAN interface in a direct data file memory system | |
WO2008082999A2 (en) | Configuration of host lba interface with flash memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 06849120; Country of ref document: EP; Kind code of ref document: A2 |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 06849120; Country of ref document: EP; Kind code of ref document: A2 |