
HK1094259B - Hard disk drive power reduction module


Info

Publication number: HK1094259B
Application number: HK07100731.7A
Authority: HK (Hong Kong)
Prior art keywords: data, lpdd, hpdd, low power, control module
Other languages: Chinese (zh)
Other versions: HK1094259A1 (en)
Inventor: S. Sutardja
Original assignee: Marvell World Trade Ltd.
Priority claimed from: U.S. Application No. 10/865,368 (now U.S. Patent No. 7,634,615 B2)
Application filed by Marvell World Trade Ltd.
Publication of HK1094259A1
Publication of HK1094259B

Description

Hard disk drive power reduction module
The present application is a divisional application of Chinese patent application No. 200510070913.1, entitled "Adaptive Storage System", filed on May 17, 2005.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to U.S. patent application No. 10/779,544, entitled "Computer with Low-Power Secondary Processor and Secondary Display", filed on February 13, 2004, and to U.S. patent application No. 10/856,368, entitled "Low-Power Computer with Main and Auxiliary Processors", both of which are incorporated herein by reference in their entirety.
Technical Field
The present application relates to data storage systems, and more particularly to low power data storage systems.
Background
Notebook computers are powered using both line power and battery power. The processor, graphics processor, memory and display of a notebook computer consume a significant amount of power during operation. One significant limitation of notebook computers relates to the amount of time the notebook computer can operate using a battery without charging the battery. The relatively high power consumption of notebook computers typically corresponds to a relatively short battery life.
Referring now to FIG. 1A, an example computer architecture 4 is shown that includes a processor 6 and memory 7, such as cache memory. The processor 6 communicates with an input/output (I/O) interface 8. Volatile memory 9, such as Random Access Memory (RAM)10 and/or other suitable electronic data storage, is also in communication with the interface 8. The graphics processor 11 and memory 12, such as cache memory, improve the speed and performance of graphics processing.
One or more I/O devices such as a keyboard 13 and a pointing device 14 (such as a mouse and/or other suitable device) communicate with the interface 8. A High Power Disk Drive (HPDD)15, such as a hard disk drive having one or more platters with a diameter greater than 1.8 inches, provides permanent memory, stores data, and communicates with the interface 8. The HPDD15 typically consumes a relatively large amount of power during operation. When operating on batteries, frequent use of the HPDD15 will greatly shorten battery life. The computer architecture 4 also includes a display 16, audio output devices 17 such as audio speakers, and/or other input/output devices generally indicated at 18.
Referring now to FIG. 1B, an example computer architecture 20 includes a processing chipset 22 and an input/output chipset 24. For example, the computer architecture may be a north bridge/south bridge architecture (with the processing chipset corresponding to the north bridge chipset and the input/output chipset corresponding to the south bridge chipset) or other similar architecture. The processing chipset 22 communicates with the processor 25 and the graphics processor 26 via a system bus 27. The processing chipset 22 controls interaction with volatile memory 28 (such as external DRAM or other memory), a Peripheral Component Interconnect (PCI) bus 30, and/or a level 2 cache 32. Level 1 caches 33 and 34 may be associated with processor 25 and/or graphics processor 26, respectively. In an alternative embodiment, an Accelerated Graphics Port (AGP) (not shown) communicates with the processing chipset 22 instead of the graphics processor 26, and/or it communicates with the processing chipset 22 in addition to communicating with the graphics processor 26. The processing chipset 22 is typically, but not necessarily, implemented using a plurality of chips. The PCI slots interface with the PCI bus 30.
The I/O chipset 24 manages the basic forms of input/output (I/O). The I/O chipset 24 communicates with a Universal Serial Bus (USB) 40, an audio device 41, a keyboard (KBD) and/or pointing device 42, and a basic input/output system (BIOS) 43 via an Industry Standard Architecture (ISA) bus 44. Unlike the processing chipset 22, the I/O chipset 24 is typically (but not necessarily) implemented using a single chip that is connected to the PCI bus 30. An HPDD 50, such as a hard disk drive, also communicates with the I/O chipset 24. The HPDD 50 stores a fully functional operating system (OS), such as a Windows®-based, Linux-based, or MAC®-based OS, that is executed by the processor 25.
Disclosure of Invention
In accordance with the present invention, a disk drive system for a computer having high power and low power modes includes a Low Power Disk Drive (LPDD) and a High Power Disk Drive (HPDD). The control module includes a Least Used Block (LUB) module that identifies a LUB in the LPDD. The control module selectively transmits the LUB to the HPDD during the low power mode when at least one of a data storage request and a data retrieval request is received.
In other features, during a storage request to write data, the control module transfers the write data to the LPDD if there is sufficient space on the LPDD for the write data. If there is not enough space on the LPDD for the write data, the control module powers the HPDD and transfers the LUB from the LPDD to the HPDD and the write data to the LPDD.
In still other features, the control module includes an adaptive storage module that determines whether write data is likely to be used before the LUB when there is insufficient space on the LPDD to write the data. If the write data is likely to be used after the LUB, the control module stores the write data onto the HPDD. If the write data is likely to be used before the LUB, the control module powers the HPDD and transfers the LUB from the LPDD to the HPDD and the write data to the LPDD.
In still other features, during a data retrieval request for read data, the control module retrieves the read data from the LPDD if the read data is stored in the LPDD. The control module includes an adaptive storage module that determines whether the read data is likely to be used only once when the read data is not located on the LPDD. If the read data is likely to be used only once, the control module retrieves the read data from the HPDD. If the adaptive storage module determines that multiple uses of the read data are likely, the control module transfers the read data from the HPDD to the LPDD if there is sufficient space on the LPDD for the read data, and otherwise transfers the LUB from the LPDD to the HPDD and then transfers the read data from the HPDD to the LPDD.
In still other features, the control module transfers the read data from the HPDD to the LPDD if there is sufficient space on the LPDD for the read data. If there is insufficient space on the LPDD for the read data, the control module transfers the LUB from the LPDD to the HPDD and the read data from the HPDD to the LPDD. If the read data is not located on the LPDD, the control module retrieves the read data from the HPDD.
In still other features, the HPDD includes one or more platters, wherein the one or more platters have a diameter greater than 1.8 inches. The LPDD includes one or more platters, wherein the one or more platters have a diameter less than or equal to 1.8 inches.
In accordance with the present invention, a disk drive system for a computer having high power and low power modes includes a Low Power Disk Drive (LPDD) and a High Power Disk Drive (HPDD). The control module communicates with the LPDD and the HPDD. During a storage request for write data in the low power mode, the control module determines whether there is sufficient space on the LPDD for the write data and, if so, it transfers the write data to the LPDD.
In other features, the control module stores the write data on the HPDD if sufficient space is not available on the LPDD. The control module further includes a LPDD maintenance module that transfers data files from the LPDD to the HPDD during the high power mode to increase available disk space on the LPDD. The LPDD maintenance module transfers data files based on at least one of age, size, and likelihood of future use in the low power mode. The HPDD includes one or more platters having a diameter greater than 1.8 inches. The LPDD includes one or more platters having a diameter less than or equal to 1.8 inches.
In accordance with the present invention, a data storage system for a computer including high power and low power modes includes a Low Power (LP) persistent memory and a High Power (HP) persistent memory. The cache control module communicates with the LP and HP persistent memories and includes an adaptive storage module. When write data is to be written to one of the LP and HP persistent memories, the adaptive storage module generates an adaptive storage decision that selects one of the LP and HP persistent memories.
In other features, the adaptive storage decision is based on at least one of: a power mode associated with a previous use of the write data, a size of the write data, a date of last use of the write data, and a manual override status of the write data. The LP persistent memory includes at least one of flash memory and a Low Power Disk Drive (LPDD). The LPDD includes one or more platters, wherein the one or more platters have a diameter less than or equal to 1.8 inches. The HP persistent memory includes a hard disk drive having one or more platters, wherein the one or more platters have a diameter greater than 1.8 inches.
In accordance with the present invention, a data storage system for a computer including high power and low power modes includes a Low Power (LP) persistent memory and a High Power (HP) persistent memory. The cache control module communicates with the LP and HP persistent memories and includes a drive power reduction module. When read data comprising a sequential access data file is read from the HP persistent memory during the low power mode, the drive power reduction module calculates a burst period for transferring segments of the read data from the HP persistent memory to the LP persistent memory.
In other features, the drive power reduction module selects the burst period to reduce power consumption during readout of the read data during the low power mode. The LP persistent memory includes at least one of flash memory and a Low Power Disk Drive (LPDD). The LPDD includes one or more platters, wherein the one or more platters have a diameter less than or equal to 1.8 inches. The HP persistent memory includes a High Power Disk Drive (HPDD). The HPDD includes one or more platters, wherein the one or more platters have a diameter greater than 1.8 inches. The burst period is based on at least one of: a spin-up time of the LPDD, a spin-up time of the HPDD, power consumption of the LPDD, power consumption of the HPDD, a read length of the read data, and a capacity of the LPDD.
A multi-disk drive system according to the present invention includes a High Power Disk Drive (HPDD) including one or more platters, wherein the one or more platters have a diameter greater than 1.8 inches, and a Low Power Disk Drive (LPDD) including one or more platters, wherein the one or more platters have a diameter less than or equal to 1.8 inches. The drive control module centrally controls data access to the LPDD and HPDD.
A Redundant Array of Independent Disks (RAID) system in accordance with the present invention includes a first disk array including X High Power Disk Drives (HPDDs), wherein X is greater than or equal to 2. The second disk array includes Y Low Power Disk Drives (LPDD) where Y is greater than or equal to 1. The array management module is in communication with the first and second disk arrays and utilizes the second disk array to cache data to and/or from the first disk array.
Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Drawings
The present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:
FIGS. 1A and 1B illustrate an exemplary computer architecture in accordance with the prior art;
FIG. 2A illustrates a first exemplary computer architecture according to the present invention having a primary processor, a primary graphics processor and primary volatile memory operating during a high power mode and a secondary processor and a secondary graphics processor in communication with the primary processor, operating during a low power mode and utilizing the primary volatile memory during the low power mode;
FIG. 2B illustrates a second exemplary computer architecture according to the present invention that is similar to FIG. 2A and that includes secondary volatile memory connected to the secondary processor and/or the secondary graphics processor;
FIG. 2C illustrates a third exemplary computer architecture according to the present invention that is similar to FIG. 2A and that includes embedded volatile memory associated with the secondary processor and/or the secondary graphics processor;
FIG. 3A illustrates a fourth exemplary computer architecture according to the present invention, the computer having a primary processor, a primary graphics processor and primary volatile memory that operate during a high power mode and a secondary processor and a secondary graphics processor that communicate with a processing chipset, that operate during a low power mode and that utilize the primary volatile memory during the low power mode;
FIG. 3B illustrates a fifth exemplary computer architecture according to the present invention that is similar to FIG. 3A and that includes a secondary volatile memory connected to the secondary processor and/or the secondary graphics processor;
FIG. 3C illustrates a sixth exemplary computer architecture according to the present invention that is similar to FIG. 3A and that includes embedded volatile memory associated with the secondary processor and/or the secondary graphics processor;
FIG. 4A illustrates a seventh exemplary computer architecture according to the present invention, the computer having a secondary processor and a secondary graphics processor in communication with an input/output chipset, operating during a low power mode and utilizing a primary volatile memory during the low power mode;
FIG. 4B illustrates an eighth exemplary computer architecture according to the present invention that is similar to FIG. 4A and that includes secondary volatile memory connected to the secondary processor and/or the secondary graphics processor;
FIG. 4C illustrates a ninth exemplary computer architecture according to the present invention that is similar to FIG. 4A and that includes embedded volatile memory associated with the secondary processor and/or the secondary graphics processor;
FIG. 5 illustrates a cache hierarchy for the computer architecture of FIGS. 2A-4C in accordance with the present invention;
FIG. 6 is a functional block diagram of a drive control module, which includes a Least Used Block (LUB) module, and manages the storage and transfer of data between a Low Power Disk Drive (LPDD) and a High Power Disk Drive (HPDD);
FIG. 7A is a flowchart illustrating steps performed by the drive control module of FIG. 6;
FIG. 7B is a flowchart illustrating alternative steps performed by the drive control module of FIG. 6;
FIGS. 7C and 7D are flow charts illustrating alternative steps performed by the drive control module of FIG. 6;
FIG. 8A illustrates a cache control module that includes an adaptive storage control module and that controls storage and transfer of data between the LPDD and HPDD;
FIG. 8B illustrates an operating system that includes an adaptive storage control module and that controls storage and transfer of data between the LPDD and HPDD;
FIG. 8C illustrates a host control module that includes an adaptive storage control module and that controls storage and transfer of data between the LPDD and HPDD;
FIG. 9 illustrates steps performed by the adaptive storage control module of FIGS. 8A-8C;
FIG. 10 is an exemplary table illustrating one method of determining the likelihood that a program or file will be used during a low power mode;
FIG. 11A illustrates a cache control module including a disk drive power reduction module;
FIG. 11B illustrates an operating system including a disk drive power reduction module;
FIG. 11C illustrates a host control module including a disk drive power reduction module;
FIG. 12 illustrates steps performed by the disk drive power reduction module of FIGS. 11A-11C;
FIG. 13 illustrates a multi-disk drive system including a High Power Disk Drive (HPDD) and a Low Power Disk Drive (LPDD);
FIGS. 14-17 illustrate other exemplary embodiments of the multi-disk drive system of FIG. 13;
FIG. 18 illustrates the use of low power persistent memory, such as flash memory or a Low Power Disk Drive (LPDD), for increasing virtual storage of a computer;
FIGS. 19 and 20 illustrate steps performed by an operating system to allocate and use the virtual storage of FIG. 18;
FIG. 21 is a functional block diagram of a Redundant Array of Independent Disks (RAID) system according to the prior art;
FIG. 22A is a functional block diagram of an exemplary RAID system according to the present invention with a disk array including X HPDD and a disk array including Y LPDD;
FIG. 22B is a functional block diagram of the RAID system of FIG. 22A, wherein X and Y are equal to Z;
FIG. 23A is a functional block diagram of another exemplary RAID system according to the present invention having a disk array including Y LPDD that communicate with a disk array including X HPDD;
FIG. 23B is a functional block diagram of the RAID system of FIG. 23A wherein X and Y are equal to Z;
FIG. 24A is a functional block diagram of yet another exemplary RAID system according to the present invention having a disk array including X HPDD that communicate with a disk array including Y LPDD;
FIG. 24B is a functional block diagram of the RAID system of FIG. 24A, wherein X and Y are equal to Z;
FIG. 25 is a functional block diagram of a Network Attached Storage (NAS) system according to the prior art; and
FIG. 26 is a functional block diagram of a Network Attached Storage (NAS) system according to the present invention including the RAID system of FIG. 22A, FIG. 22B, FIG. 23A, FIG. 23B, FIG. 24A and/or 24B and/or the multi-drive system according to FIGS. 6-17.
Detailed Description
The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. As used herein, the term module and/or device refers to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
As used herein, the term "high power mode" refers to active operation of the primary processor and/or the primary graphics processor of the host device. The term "low power mode" refers to a low power hibernate mode, an off mode, and/or a non-responsive mode of the primary processor and/or the primary graphics processor while the secondary processor and the secondary graphics processor are operational. The term "off mode" refers to the condition in which both the primary and secondary processors are off.
The term "low power disk drive" or LPDD refers to disk drives and/or microdrives that have one or more platters with a diameter less than or equal to 1.8 inches. The term "high power disk drive" or HPDD refers to hard disk drives having one or more platters with a diameter greater than 1.8 inches. LPDD typically has low storage capacity and consumes less power than HPDD. The LPDD also rotates faster than the HPDD. For example, LPDD may achieve rotation speeds of 10000-.
A computer architecture in accordance with the present invention includes a primary processor, a primary graphics processor, and a primary memory (as described in conjunction with fig. 1A and 1B), which operate during a high power mode. The secondary processor and the secondary graphics processor operate during the low power mode. The secondary processor and the secondary graphics processor may be connected to various components of the computer, as described below. The secondary processor and the secondary graphics processor may use the primary volatile memory during the low power mode. Alternatively, a secondary volatile memory, such as a DRAM and/or an embedded secondary volatile memory, such as an embedded DRAM, may be used, as will be described below.
When operating in the high power mode, the primary processor and the primary graphics processor consume relatively high power. The primary processor and the primary graphics processor execute a fully functional operating system (OS) that requires a relatively large amount of external memory. The primary processor and the primary graphics processor support high performance operation, including complex computations and advanced graphics. The fully functional operating system may be a Windows®-based OS, a Linux-based OS, a MAC®-based OS, or the like. The fully functional operating system is stored in the HPDD 15 and/or 50.
The secondary processor and the secondary graphics processor consume less power than the primary processor and the primary graphics processor when operating during the low power mode. The secondary processor and the secondary graphics processor execute a limited function operating system that requires a relatively small amount of external volatile memory. The secondary processor and the secondary graphics processor may also use the same operating system as the primary processor; for example, a pared down version of the fully functional operating system may be used. The secondary processor and the secondary graphics processor support lower performance operations, lower computation rates, and less advanced graphics. For example, the limited function operating system may be Windows CE® or any other suitable limited function operating system. The limited function operating system is preferably stored in persistent memory such as flash memory and/or the LPDD. In a preferred embodiment, the fully functional and limited function operating systems share a common data format to reduce complexity.
The primary processor and/or the primary graphics processor preferably comprise transistors that are fabricated using a fabrication process having relatively small feature sizes. In one embodiment, these transistors are fabricated using advanced CMOS fabrication processes. Transistors used in the primary processor and/or the primary graphics processor have relatively high standby leakage (standby leakage), relatively short channels, and are sized for high speed. The primary processor and the primary graphics processor preferably primarily utilize dynamic logic. In other words, they cannot be turned off. The transistors are switched at a duty cycle of less than about 20%, and preferably less than about 10%, although other duty cycles may be used.
In contrast, the secondary processor and/or the secondary graphics processor preferably include transistors that are fabricated using a fabrication process having larger feature sizes than the process used for the primary processor and/or the primary graphics processor. In one embodiment, these transistors are fabricated using a conventional CMOS fabrication process. Transistors used in the secondary processor and/or the secondary graphics processor have relatively low standby leakage, relatively long channels, and are sized for low power consumption. The secondary processor and the secondary graphics processor preferably utilize primarily static logic rather than dynamic logic. The transistors are switched at a duty cycle greater than 80%, and preferably greater than 90%, although other duty cycles may be used.
When operating in the high power mode, the primary processor and the primary graphics processor consume relatively high power. When operating in the low power mode, the secondary processor and the secondary graphics processor consume less power. In the low power mode, however, the computer architecture supports fewer features, reduced computation, and less complex graphics than when operating in the high power mode. As the skilled person will appreciate, there are many ways of implementing a computer architecture in accordance with the present invention; FIGS. 2A-4C are merely exemplary and not limiting.
Referring now to FIG. 2A, a first exemplary computer architecture 60 is shown. During the high power mode, the primary processor 6, the volatile memory 9, and the primary graphics processor 11 communicate with the interface 8 and support complex data and graphics processing. During the low power mode, the secondary processor 62 and the secondary graphics processor 64 communicate with the interface 8 and support less complex data and graphics processing. During the low power and/or high power modes, optional persistent memory 65, such as LPDD 66 and/or flash memory 68, communicates with the interface 8 and provides low power persistent storage of data. The HPDD 15 provides high power/capacity persistent memory. The persistent memory 65 and/or the HPDD 15 are used to store the limited function operating system and/or other data and files during the low power mode.
In this embodiment, the secondary processor 62 and the secondary graphics processor 64 utilize the volatile memory 9 (or primary memory) when operating in the low power mode. Therefore, during the low power mode, at least a portion of the interface 8 is powered to support communication with the primary memory and/or between components that are powered during the low power mode. For example, the keyboard 13, the pointing device 14, and the primary display 16 may be powered and used during the low power mode. In all of the embodiments described in connection with FIGS. 2A-4C, a secondary display (such as a monochrome display) and/or secondary input/output devices with reduced functionality may also be provided and used during the low power mode.
Referring now to FIG. 2B, a second exemplary computer architecture 70 that is similar to the architecture in FIG. 2A is shown. In this embodiment, the secondary processor 62 and the secondary graphics processor 64 communicate with secondary volatile memory 74 and/or 76. The secondary volatile memory 74 and 76 may be DRAM or other suitable memory. During the low power mode, the secondary processor 62 and the secondary graphics processor 64 utilize the secondary volatile memory 74 and/or 76, respectively, in addition to and/or instead of the primary volatile memory 9 shown and described in FIG. 2A.
Referring now to FIG. 2C, a third exemplary computer architecture 80 that is similar to FIG. 2A is shown. The secondary processor 62 and/or the secondary graphics processor 64 include embedded volatile memory 84 and 86, respectively. During the low power mode, the secondary processor 62 and the secondary graphics processor 64 utilize embedded volatile memory 84 and/or 86, respectively, in addition to and/or in place of the primary volatile memory. In one embodiment, the embedded volatile memory 84 and 86 is embedded DRAM (eDRAM), although other types of embedded volatile memory may be used.
Referring now to FIG. 3A, a fourth exemplary computer architecture 100 according to the present invention is shown. During the high power mode, the primary processor 25, the primary graphics processor 26, and the primary volatile memory 28 communicate with the processing chipset 22 and support complex data and graphics processing. The secondary processor 104 and the secondary graphics processor 108 support less complex data and graphics processing when the computer is in the low power mode. In this embodiment, the secondary processor 104 and the secondary graphics processor 108 utilize the primary volatile memory 28 when operating in the low power mode. Thus, during the low power mode, the processing chipset 22 may be fully powered and/or partially powered to facilitate communication therebetween. During the low power mode, the HPDD 50 may be powered to provide high power persistent memory. Low power persistent memory 109 (LPDD 110 and/or flash memory 112) is connected to the processing chipset 22, the I/O chipset 24, or elsewhere, and stores the limited function operating system for the low power mode.
The processing chipset 22 may be fully powered and/or partially powered to support operation of the HPDD50, LPDD110, and/or other components used during the low power mode. For example, during the low power mode, the keyboard and/or pointing device 42 and the main display may be used.
Referring now to FIG. 3B, a fifth exemplary computer architecture 150 that is similar to FIG. 3A is shown. The secondary volatile memories 154 and 158 are connected to the secondary processor 104 and/or the secondary graphics processor 108, respectively. During the low power mode, the secondary processor 104 and the secondary graphics processor 108 utilize the secondary volatile memory 154 and 158, respectively, instead of the primary volatile memory 28 and/or also utilize the primary volatile memory 28. The processing chipset 22 and primary volatile memory 28 may be shut down during the low power mode if desired. The secondary volatile memory 154 and 158 may be DRAM or other suitable memory.
Referring now to FIG. 3C, a sixth exemplary computer architecture 170 similar to FIG. 3A is shown. The secondary processor 104 and/or the secondary graphics processor 108 include embedded memory 174 and 176, respectively. During the low power mode, the secondary processor 104 and the secondary graphics processor 108 utilize embedded memory 174 and 176, respectively, instead of the primary volatile memory 28 and/or also utilize the primary volatile memory 28. In one embodiment, embedded volatile memory 174 and 176 is embedded DRAM (eDRAM), although other types of embedded memory may be used.
Referring now to FIG. 4A, a seventh exemplary computer architecture 190 according to the present invention is shown. During the low power mode, the secondary processor 104 and the secondary graphics processor 108 communicate with the I/O chipset 24 and utilize the primary volatile memory 28 as volatile memory. The processing chipset 22 is still fully powered and/or partially powered to allow access to the primary volatile memory 28 during the low power mode.
Referring now to FIG. 4B, an eighth exemplary computer architecture 200 that is similar to FIG. 4A is shown. The secondary volatile memories 154 and 158 are connected to the secondary processor 104 and the secondary graphics processor 108, respectively, and are used instead of and/or in addition to the primary volatile memory 28 during the low power mode. During the low power mode, the processing chipset 22 and the primary volatile memory 28 can be shut down.
Referring now to FIG. 4C, a ninth exemplary computer architecture 210 similar to FIG. 4A is shown. Embedded volatile memories 174 and 176 are provided for the secondary processor 104 and/or the secondary graphics processor 108, respectively, in addition to and/or instead of the primary volatile memory 28. In this embodiment, the processing chipset 22 and the primary volatile memory 28 can be shut down during the low power mode.
Referring now to FIG. 5, a cache hierarchy 250 for the computer architectures illustrated in FIGS. 2A-4C is shown. HP nonvolatile memory, such as the HPDD 50, is located at the lowest level 254 of the cache hierarchy 250. Level 254 is not used during the low power mode if the HPDD 50 is disabled, and is used if the HPDD 50 is enabled during the low power mode. LP nonvolatile memory, such as the LPDD 110 and/or the flash memory 112, is located at the next level 258 of the cache hierarchy 250. External volatile memory, such as primary volatile memory, secondary volatile memory, and/or secondary embedded memory, is the next level 262 of the cache hierarchy 250, depending on the configuration. The level 2 or secondary cache comprises the next level 266 of the cache hierarchy 250. The level 1 cache is the next level 268 of the cache hierarchy 250. The CPU (primary or secondary) is the last level 270 of the cache hierarchy. Similar hierarchies are used by the primary and secondary graphics processors.
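For illustration only, the lookup order implied by the cache hierarchy 250 can be sketched in Python. The level names and example contents below are hypothetical; the sketch merely shows that a request is satisfied by the fastest level holding the data, falling back toward the HPDD at level 254.

# Hypothetical sketch of the FIG. 5 hierarchy as an ordered lookup, fastest first.
HIERARCHY = [
    "L1 cache",                  # level 268
    "L2 / secondary cache",      # level 266
    "external volatile memory",  # level 262 (primary, secondary, or embedded)
    "LP nonvolatile memory",     # level 258 (flash and/or LPDD)
    "HP nonvolatile memory",     # level 254 (HPDD, if enabled in low power mode)
]

def find_level(block_id, contents):
    # Return the first (fastest) level whose contents hold block_id.
    for level in HIERARCHY:
        if block_id in contents.get(level, set()):
            return level
    return None  # not resident anywhere in the hierarchy

# Example: a block resident only in LP nonvolatile memory is served from level 258.
contents = {"LP nonvolatile memory": {"movie.seg1"}}
assert find_level("movie.seg1", contents) == "LP nonvolatile memory"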
The computer architecture according to the present invention provides a low power mode that supports less complex processing and graphics. As a result, the power consumption of the computer can be significantly reduced. For notebook computer applications, battery life is extended.
Referring now to FIG. 6, a drive control module 300 or host control module for a multi-disk drive system includes a Least Used Block (LUB) module 304, an adaptive storage module 306, and/or a LPDD maintenance module 308. Based in part on the LUB information, the drive control module 300 controls storage and data transfer between a High Power Disk Drive (HPDD) 310, such as a hard disk drive, and a Low Power Disk Drive (LPDD) 312, such as a microdrive. By managing the storage and transfer of data between the HPDD and the LPDD during the high and low power modes, the drive control module 300 reduces power consumption.
The least used block module 304 tracks the least used blocks of data in the LPDD 312. During the low power mode, the least used block module 304 identifies the least used blocks of data (such as files and/or programs) in the LPDD 312 so that they can be replaced when needed. Certain data blocks or files may be exempted from least used block monitoring, such as files associated only with the limited function operating system, blocks that are manually set to be stored in the LPDD 312, and/or other files and programs that operate only in the low power mode. Other criteria may be used to select the data blocks to be overwritten, as will be described below.
During the low power mode, the adaptive storage module 306 determines whether write data is more likely to be used before the least used block during a data storage request. The adaptive storage module 306 also determines whether read data may only be used once during a data retrieval request during the low power mode. During the high power mode and/or other circumstances, the LPDD maintenance module 308 transfers old data from the LPDD to the HPDD as will be described below.
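As a minimal sketch, the least used block bookkeeping can be modeled with a least-recently-used approximation and an exempt set for files (such as limited function operating system files) that must remain on the LPDD. The class and method names below are illustrative, not taken from the patent.

from collections import OrderedDict

class LeastUsedBlockTracker:
    # Sketch of the LUB module 304: tracks block usage in the LPDD and reports
    # the least used, non-exempt block as the eviction candidate.
    def __init__(self, exempt=None):
        self._order = OrderedDict()       # oldest (least recently used) first
        self._exempt = set(exempt or [])  # e.g. limited function OS files

    def touch(self, block_id):
        # Record a use of block_id, making it the most recently used.
        self._order.pop(block_id, None)
        self._order[block_id] = True

    def least_used(self):
        # Return the least used block that may be transferred to the HPDD.
        for block_id in self._order:      # iterates oldest to newest
            if block_id not in self._exempt:
                return block_id
        return None

lub = LeastUsedBlockTracker(exempt={"limited_os.img"})
for block in ("limited_os.img", "old_report.doc", "video.seg"):
    lub.touch(block)
assert lub.least_used() == "old_report.doc"  # the OS image is never evicted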
Referring now to FIG. 7A, steps performed by the drive control module 300 are shown. Control begins in step 320. In step 324, the drive control module 300 determines whether a data storage request exists. If step 324 is true, the drive control module 300 determines whether sufficient space is available on the LPDD 312 in step 328. If not, the drive control module 300 powers the HPDD 310 in step 330. In step 334, the drive control module 300 transfers the least used data block to the HPDD 310. In step 336, the drive control module 300 determines whether sufficient space is available on the LPDD 312. If not, control loops to step 334. Otherwise, the drive control module 300 proceeds to step 340 and turns off the HPDD 310. In step 344, the data to be stored (e.g., from the host) is transferred to the LPDD 312.
If step 324 is false, the drive control module 300 proceeds to step 350 and determines whether a data retrieval request exists. If not, control returns to step 324. Otherwise, control continues with step 354 and determines whether the data is located in the LPDD 312. If step 354 is true, the drive control module 300 retrieves the data from the LPDD 312 in step 356 and continues with step 324. Otherwise, the drive control module 300 powers the HPDD 310 in step 360. In step 364, the drive control module 300 determines whether there is sufficient space available on the LPDD 312 for the requested data. If not, the drive control module 300 transfers the least used data block to the HPDD 310 in step 366 and continues with step 364. When step 364 is true, the drive control module 300 transfers the data to the LPDD 312 and retrieves the data from the LPDD 312 in step 368. In step 370, control turns off the HPDD 310 when the data transfer to the LPDD 312 is complete.
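The two paths of FIG. 7A can be summarized in the following Python sketch. The Drive class, the one-unit block size, and the oldest_first stand-in for the least used block policy are illustrative assumptions; the step numbers in the comments refer to FIG. 7A.

class Drive:
    # Toy disk model: a capacity in blocks, a set of stored block ids, and a
    # power flag. Purely illustrative.
    def __init__(self, capacity):
        self.capacity, self.blocks, self.on = capacity, set(), False

    def free_space(self):
        return self.capacity - len(self.blocks)

    def evict(self, block):
        self.blocks.discard(block)
        return block

def store(lpdd, hpdd, block, least_used):
    # Storage path, steps 324-344.
    if lpdd.free_space() < 1:                    # step 328
        hpdd.on = True                           # step 330: power the HPDD
        while lpdd.free_space() < 1:             # steps 334-336
            hpdd.blocks.add(lpdd.evict(least_used(lpdd)))
        hpdd.on = False                          # step 340: turn off the HPDD
    lpdd.blocks.add(block)                       # step 344

def retrieve(lpdd, hpdd, block, least_used):
    # Retrieval path, steps 350-370.
    if block in lpdd.blocks:                     # step 354
        return block                             # step 356: read from the LPDD
    hpdd.on = True                               # step 360
    while lpdd.free_space() < 1:                 # steps 364-366
        hpdd.blocks.add(lpdd.evict(least_used(lpdd)))
    lpdd.blocks.add(block)                       # step 368: copy HPDD -> LPDD
    hpdd.on = False                              # step 370
    return block

lpdd, hpdd = Drive(capacity=2), Drive(capacity=100)
oldest_first = lambda d: sorted(d.blocks)[0]     # stand-in LUB policy
for b in ("a", "b", "c"):
    store(lpdd, hpdd, b, oldest_first)
assert "a" in hpdd.blocks and lpdd.blocks == {"b", "c"}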
Referring now to FIG. 7B, a modified method similar to that shown in FIG. 7A includes one or more adaptive steps performed by the adaptive storage module 306. When there is insufficient space available on the LPDD in step 328, control determines in step 372 whether the data to be stored is likely to be used before the least used block identified by the least used block module. If step 372 is false, the drive control module 300 stores the data on the HPDD in step 374 and control continues with step 324. By doing so, the power consumed in transferring the least used block to the HPDD is saved. If step 372 is true, control continues with step 330 as described above with respect to FIG. 7A.
During a data retrieval request, when step 354 is false, control continues with step 376 and determines whether the data is likely to be used only once. If step 376 is true, the drive control module 300 retrieves the data from the HPDD in step 378 and continues with step 324. By doing so, the power consumed in transferring the data to the LPDD is saved. If step 376 is false, control continues with step 360. As can be appreciated, if the data is likely to be used only once, the data need not be moved to the LPDD, although the power consumption of the HPDD cannot be avoided.
Referring now to FIG. 7C, a simplified form of control can also be performed during low power operation. Maintenance steps can also be performed during the high power and/or low power modes (using the LPDD maintenance module 308). When sufficient space is available on the LPDD in step 328, the data is transferred to the LPDD in step 344 and control returns to step 324. Otherwise, when step 328 is false, the data is stored on the HPDD in step 380 and control returns to step 324. As can be appreciated, the method illustrated in FIG. 7C uses the LPDD when capacity is available and uses the HPDD when LPDD capacity is not available. The skilled person will understand that hybrid approaches using various combinations of the steps of FIGS. 7A-7D may also be utilized.
In FIG. 7D, the drive control module 300 performs maintenance steps to delete unused or under-utilized files stored on the LPDD when returning to the high power mode and/or at other times. The maintenance steps may also be performed periodically during use, in the low power mode, upon the occurrence of an event such as a disk full event, and/or in other circumstances. Control begins in step 390. In step 392, control determines whether the high power mode is in use. If not, control loops back to step 392. If step 392 is true, control determines in step 394 whether the last mode was the low power mode. If not, control returns to step 392. If step 394 is true, control performs maintenance in step 396, such as moving aged or low-use files from the LPDD to the HPDD. Adaptive decisions may also be made as to which files are likely to be used in the future, for example using the criteria described above and the criteria described below in connection with FIGS. 8A-10.
Referring now to FIGS. 8A, 8B, and 8C, storage control systems 400-1, 400-2, and 400-3 are shown. In FIG. 8A, the storage control system 400-1 includes a cache control module 410 having an adaptive storage control module 414. The adaptive storage control module 414 monitors the use of files and/or programs to determine whether they are likely to be used in the low power mode or the high power mode. The cache control module 410 communicates with one or more data buses 416, which in turn communicate with volatile memory 422, such as L1 cache, L2 cache, and/or volatile RAM such as DRAM or other suitable volatile electronic data storage. The buses 416 also communicate with low power persistent storage 424 (such as flash memory and/or a LPDD) and/or high power persistent storage such as the HPDD 426. In FIG. 8B, a fully functional and/or limited function operating system 430 is shown to include the adaptive storage control module 414. Suitable interfaces and/or controllers (not shown) are located between the data bus and the HPDD and/or the LPDD.
In FIG. 8C, a host control module 440 includes the adaptive storage control module 414. The host control module 440 communicates with the LPDD 424' and the hard disk drive 426'. The host control module 440 may be a drive control module, an Integrated Drive Electronics (IDE) controller, an ATA controller, a serial ATA (SATA) controller, or another controller.
Referring now to FIG. 9, steps performed by the storage control systems of FIGS. 8A-8C are shown. Control begins in step 460. In step 462, control determines whether there is a request to store data in persistent storage. If not, control loops back to step 462. Otherwise, the adaptive storage control module 414 determines in step 464 whether the data is likely to be used in the low power mode. If step 464 is false, the data is stored in the HPDD in step 468. If step 464 is true, the data is stored in the low power persistent memory 444 in step 474.
Referring now to FIG. 10, a method of determining whether a data block is likely to be used in the low power mode is shown. A table 490 includes a data block descriptor field 492, a low power counter field 493, a high power counter field 494, a size field 495, a last use field 496, and/or a manual override field 497. The counter field 493 or 494 is incremented when a particular program or file is used in the low power mode or the high power mode, respectively. When a program or file must be stored to persistent storage, the table 490 is accessed. The evaluation may be performed using a threshold percentage and/or count value. For example, if a file or program is used more than 80% of the time in the low power mode, the file may be stored in low power persistent memory, such as flash memory and/or the microdrive. If the threshold is not met, the file or program is stored in high power persistent storage.
As can be appreciated, the counters may be reset periodically, after a predetermined number of samples (in other words, a rolling window may be provided), and/or using any other criteria. Further, the likelihood decision may be weighted and/or modified based on the size field 495. In other words, as the file size increases, the required threshold may be increased because of the limited capacity of the LPDD.
The likelihood of a usage decision may be further modified based on the time since the file was last used, as recorded by the last usage field 496. A threshold date and/or time since last use may be used as a factor in the likelihood decision. Although a table is shown in FIG. 10, one or more fields that are used may be stored in other locations and/or in other data structures. An algorithm and/or weighted sampling of two or more fields may be used.
The manual override field 497 allows a user and/or the operating system to manually override the likelihood-of-use decision. For example, the manual override field may allow an L status for default storage in the LPDD, an H status for default storage in the HPDD, and/or an A status for automatic storage decisions (as described above). Other manual override categories may be defined. In addition to the above criteria, the current power level of the computer may be used to adjust the decision. The skilled person will appreciate that there are other methods of determining the likelihood that a file or program will be used in the high power or low power mode, and such methods fall within the scope of the principles of the present invention.
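The fields of the table 490 can feed the decision as in the sketch below. The 80% threshold comes from the example above, and the L/H/A codes follow the manual override description; the size and staleness adjustments are illustrative assumptions about how the size field 495 and the last use field 496 might weight the decision.

from dataclasses import dataclass

@dataclass
class UsageRecord:
    # One row of the FIG. 10 table; the comments give the field numerals.
    descriptor: str             # field 492
    low_power_count: int = 0    # field 493
    high_power_count: int = 0   # field 494
    size_mb: float = 0.0        # field 495
    days_since_use: int = 0     # field 496
    manual_override: str = "A"  # field 497: "L", "H", or "A"

def store_on_lpdd(rec, base_threshold=0.80, large_mb=500, stale_days=90):
    # Decide whether a file or program belongs in low power persistent memory.
    if rec.manual_override == "L":
        return True                        # forced onto the LPDD
    if rec.manual_override == "H":
        return False                       # forced onto the HPDD
    total = rec.low_power_count + rec.high_power_count
    if total == 0 or rec.days_since_use > stale_days:
        return False                       # unused or stale: keep on the HPDD
    threshold = base_threshold
    if rec.size_mb > large_mb:             # larger files need a stronger case
        threshold = 0.90
    return rec.low_power_count / total >= threshold

rec = UsageRecord("mail_client", low_power_count=9, high_power_count=1)
assert store_on_lpdd(rec)  # used 90% of the time in the low power mode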
Referring now to FIGS. 11A, 11B, and 11C, drive power reduction systems 500-1, 500-2, and 500-3 (collectively 500) are shown. The drive power reduction system 500 transfers segments of large sequential-access files, such as but not limited to audio and/or video files, to low power persistent storage in periodic or other bursts. In FIG. 11A, the drive power reduction system 500-1 includes a cache control module 520 having a drive power reduction control module 522. The cache control module 520 communicates with one or more data buses 526, which in turn communicate with volatile memory 530 (such as L1 cache, L2 cache, and/or volatile RAM such as DRAM or other volatile electronic data storage), low power persistent storage 534 (such as flash memory and/or a LPDD), and an HPDD 538. In FIG. 11B, the drive power reduction system 500-2 includes a fully functional and/or limited function operating system 542 with the drive power reduction control module 522. Suitable interfaces and/or controllers (not shown) are located between the data bus and the HPDD and/or the LPDD.
In FIG. 11C, the drive power reduction system 500-3 includes a host control module 560 having the drive power reduction control module 522. The host control module 560 communicates with one or more data buses 564, which communicate with the LPDD 534' and the hard disk drive 538'. The host control module 560 may be a drive control module, an Integrated Drive Electronics (IDE) controller, an ATA controller, a serial ATA (SATA) controller, and/or another controller or interface.
Referring now to FIG. 12, steps performed by the drive power reduction system 500 of FIGS. 11A-11C are shown. Control begins in step 582. In step 584, control determines whether the system is in the low power mode. If not, control loops back to step 584. If step 584 is true, control continues with step 586 and determines whether a large data block access is being requested from the HPDD. If not, control loops back to step 584. If step 586 is true, control continues with step 590 and determines whether the data block is accessed sequentially. If not, control loops back to step 584. If step 590 is true, control continues with step 594 and determines the readout length. In step 598, control determines a burst period and frequency for transferring the data from the high power persistent memory to the low power persistent memory.
In one embodiment, the burst period and frequency are optimized to reduce power consumption. The burst period and frequency are preferably based on the spin-up times of the HPDD and/or the LPDD, the capacity of the persistent memory, the readout rate (playback rate), the spin-up and steady-state power consumption of the HPDD and/or the LPDD, and/or the readout length of the sequential data blocks.
For example, the high power persistent memory is an HPDD that consumes 1-2 watts during operation, has a spin-up time of 4-10 seconds, and has a capacity generally greater than 20 Gb. The low power persistent memory is a microdrive that consumes 0.3-0.5 watts during operation, has a spin-up time of 1-3 seconds, and has a capacity of 1-6 Gb. As can be appreciated, the foregoing performance values and/or capacities will vary for other implementations. The HPDD may have a transfer rate of 1 Gb/s to the microdrive, while the readout rate may be 10 Mb/s (e.g., for video files). As can be appreciated, the burst period multiplied by the transfer rate of the HPDD should not exceed the capacity of the microdrive, and the time between bursts should be greater than the spin-up time plus the burst period. Within these parameters, the power consumption of the system can be optimized. In the low power mode, operating the HPDD to play an entire video, such as a movie, would consume considerable power. Using the method described above, power consumption can be greatly reduced by selectively transferring data from the HPDD to the LPDD at a very high rate (e.g., 100 times the readout rate) in multiple bursts at fixed intervals and turning the HPDD off between bursts. Power savings of greater than 50% can readily be achieved.
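The arithmetic behind such a schedule can be made explicit. The sketch below uses midpoints of the ranges quoted above as defaults (these defaults are assumptions, not values fixed by the text) and checks the two constraints just stated: the burst must fit within the microdrive capacity, and the interval between bursts must exceed the spin-up time plus the burst period.

def burst_schedule(lpdd_capacity_gb, hpdd_rate_gbps=1.0, playback_mbps=10.0,
                   hpdd_spinup_s=5.0, hpdd_watts=1.5, lpdd_watts=0.4):
    # Sketch of the burst computation of step 598. Units: Gb, Mb, seconds, watts.
    burst_s = lpdd_capacity_gb / hpdd_rate_gbps        # fill the microdrive
    buffered_gb = burst_s * hpdd_rate_gbps             # never exceeds capacity
    interval_s = buffered_gb * 1000.0 / playback_mbps  # playback drains the buffer
    assert interval_s > hpdd_spinup_s + burst_s        # schedule is feasible

    hpdd_duty = (hpdd_spinup_s + burst_s) / interval_s # fraction of time HPDD is on
    avg_watts = lpdd_watts + hpdd_watts * hpdd_duty
    saving = 1.0 - avg_watts / hpdd_watts              # vs. running the HPDD alone
    return burst_s, interval_s, saving

burst, interval, saving = burst_schedule(lpdd_capacity_gb=4.0)
print(f"burst {burst:.0f} s every {interval:.0f} s, power saving {saving:.0%}")
# -> burst 4 s every 400 s, saving roughly 70%, consistent with the greater
#    than 50% savings noted above.

With a 4 Gb buffer, the HPDD is on for roughly 9 of every 400 seconds, which is where most of the saving comes from.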
Referring now to FIG. 13, a multi-disk drive system 640 in accordance with the present invention is shown to include a drive control module 650, one or more LPDD 644, and one or more HPDD 648. The drive control module 650 communicates with a host device via a host control module 651. To the host, the multi-disk drive system 640 efficiently operates the LPDD 644 and the HPDD 648 as a unitary disk drive, which reduces complexity, improves performance, and reduces power consumption, as will be described below. The host control module 651 can be an IDE, ATA, SATA, and/or other control module or interface.
Referring now to FIG. 14, in one implementation the drive control module 650 includes a Hard Disk Controller (HDC) 653 that is used to control one or both of the HPDD and/or the LPDD. The buffer 656 stores data associated with control of the HPDD and/or the LPDD and/or proactively buffers data to or from the HPDD and/or the LPDD to increase data transfer rates by optimizing data block sizes. The processor 657 performs processing related to the operation of the HPDD and/or the LPDD.
The HPDD 648 includes one or more platters 652 having a magnetic coating that stores magnetic fields. The platters 652 are rotated by a spindle motor, shown schematically at 654. Typically, the spindle motor 654 rotates the platters 652 at a fixed speed during read/write operations. One or more read/write arms 658 move relative to the platters 652 to read data from and/or write data to the platters 652. Because the HPDD 648 has larger platters than the LPDD, the spindle motor 654 requires more power to spin up the HPDD and to maintain the HPDD at high speed. Typically, the spin-up time of the HPDD is also longer.
A read/write device 659 is located near the distal end of each read/write arm 658. The read/write device 659 includes a write element, such as an inductor, that generates a magnetic field. The read/write device 659 also includes a read element (such as a magneto-resistive (MR) element) that senses the magnetic fields on the platter 652. A preamp circuit 660 amplifies analog read/write signals.
When reading data, the preamp circuit 660 amplifies low level signals from the read element and outputs the amplified signal to the read/write channel device. When writing data, a write current is generated that flows through the write element of the read/write device 659 and is switched to produce a magnetic field having positive and negative levels. The positive and negative levels are stored on the platter 652 and are used to represent data. The LPDD 644 also includes one or more platters 662, a spindle motor 664, one or more read/write arms 668, a read/write device 669, and a preamp circuit 670.
The HDC 653 communicates with the host control module 651 and with a first spindle/Voice Coil Motor (VCM) driver 672, a first read/write channel circuit 674, a second spindle/VCM driver 676, and a second read/write channel circuit 678. The host control module 651 and the drive control module 650 can be implemented by a system on a chip (SOC) 684. As can be appreciated, the spindle/VCM drivers 672 and 676 and/or the read/write channel circuits 674 and 678 can be combined. The spindle/VCM drivers 672 and 676 control the spindle motors 654 and 664, which rotate the platters 652 and 662, respectively. The spindle/VCM drivers 672 and 676 also generate control signals that position the read/write arms 658 and 668, respectively, for example using a voice coil actuator, a stepper motor, or any other suitable actuator.
Referring now to FIGS. 15-17, other variations of a multi-disk drive system are shown. In FIG. 15, the drive control module 650 may include a direct interface 680 to provide external connection to one or more LPDD. In one embodiment, the direct interface is a Peripheral Component Interconnect (PCI) bus, a PCI express (PCIX) bus, and/or any other suitable bus or interface.
In FIG. 16, the host control module 651 communicates with both the LPDD 644 and the HPDD 648. A low power drive control module 650LP and a high power drive control module 650HP communicate directly with the host control module. One or both of the LP and/or HP drive control modules can be implemented as an SOC.
In FIG. 17, an example LPDD 682 is shown to include an interface 690 that supports communication with the direct interface 680. As set forth above, the interfaces 680 and 690 may be a Peripheral Component Interconnect (PCI) bus, a PCI express (PCIX) bus, and/or any other suitable bus or interface. The LPDD 682 includes an HDC 692, a buffer 694, and/or a processor 696. The LPDD 682 also includes the spindle/VCM driver 676, the read/write channel circuit 678, the platters 662, the spindle motor 664, the read/write arms 668, the read/write device 669, and the preamp 670, as described above. Alternatively, the HDC 653, the buffer 656, and the processor 657 can be combined and used for both drives. Similarly, the spindle/VCM drivers and read channel circuits can optionally be combined. In the embodiments of FIGS. 13-17, active buffering of the LPDD is used to improve performance. For example, the buffers are used to optimize the data block sizes for the optimum speed of the host data bus.
In conventional computer systems, the paging file is a hidden file on the HPDD or HP nonvolatile memory that is used by the operating system to hold portions of program and/or data files that do not fit into the volatile memory of the computer. The paging file and physical memory, or RAM, define the virtual memory of the computer. The operating system transfers data from the paging file to memory and returns data from volatile memory to the paging file as needed to make room for new data. Paging files are also referred to as swap files.
Referring now to FIGS. 18-20, the present invention utilizes LP nonvolatile memory, such as a LPDD and/or flash memory, to increase the virtual storage of a computer system. In FIG. 18, an operating system 700 allows a user to define virtual memory 702. During operation, the operating system 700 addresses the virtual memory 702 via one or more buses 704. The virtual memory 702 includes both volatile memory 708 and LP nonvolatile memory 710, such as flash memory and/or a LPDD.
Referring now to FIG. 19, the operating system allows the user to allocate some or all of LP nonvolatile memory 710 as paged memory to add virtual storage. Control begins in step 720. In step 724, the operating system determines whether additional paged storage is requested. If not, control loops back to step 724. Otherwise, the operating system allocates a portion of the LP nonvolatile memory for the paging file to add virtual storage at step 728.
In FIG. 20, the operating system utilizes the additional LP nonvolatile memory as paging memory. Control begins in step 740. In step 744, control determines whether the operating system is requesting a data write operation. If so, control continues to step 748 and determines whether the capacity of the volatile memory is exceeded. If not, then a write operation is performed using volatile memory at step 750. If step 748 is true, then the data is stored in the paging file of the LP nonvolatile memory in step 754. If step 744 is false, control continues to step 760 and determines whether a data read operation is requested. If false, control loops back to step 744. Otherwise, control determines whether the address corresponds to a RAM address in step 764. If step 764 is true, control reads data from volatile memory in step 766 and continues with step 744. If step 764 is false, control reads data from the paging file in the LP nonvolatile memory in step 770, and control continues with step 744.
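The read/write routing of FIG. 20 amounts to a two-tier store. The following sketch (class and field names are illustrative) spills writes to a paging file in LP nonvolatile memory once the RAM capacity is exceeded, and serves each read from RAM when the address is resident there.

class VirtualMemory:
    # Sketch of FIGS. 18-20: volatile RAM backed by a paging file kept in LP
    # nonvolatile memory (flash or LPDD).
    def __init__(self, ram_capacity):
        self.ram_capacity = ram_capacity
        self.ram = {}          # address -> data (volatile memory 708)
        self.paging_file = {}  # address -> data (LP nonvolatile memory 710)

    def write(self, addr, data):
        # FIG. 20, steps 744-754: use RAM until its capacity is exceeded,
        # then spill to the paging file in LP nonvolatile memory.
        if addr in self.ram or len(self.ram) < self.ram_capacity:
            self.ram[addr] = data            # step 750
        else:
            self.paging_file[addr] = data    # step 754

    def read(self, addr):
        # FIG. 20, steps 760-770: RAM addresses are served from RAM,
        # everything else from the paging file.
        if addr in self.ram:                 # step 764
            return self.ram[addr]            # step 766
        return self.paging_file[addr]        # step 770

vm = VirtualMemory(ram_capacity=2)
for addr in range(3):
    vm.write(addr, f"page{addr}")
assert vm.read(2) == "page2" and 2 in vm.paging_file  # spilled to LP memory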
As can be appreciated, increasing the size of the virtual memory using LP nonvolatile memory, such as flash memory and/or a LPDD, will increase the performance of the computer as compared to systems that use the HPDD for the paging file. In addition, the paging file consumes less power than in systems using the HPDD. Because of its larger platters, the HPDD requires additional spin-up time, which increases data access time as compared to flash memory, which has no spin-up latency, and to the LPDD, which has a shorter spin-up time and lower power consumption.
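A rough back-of-the-envelope calculation makes the comparison concrete. The drive parameters below are hypothetical values chosen only for illustration; the patent does not specify them.

# Hypothetical drive parameters (illustrative only, not from the patent).
HPDD = {"spin_up_s": 5.0, "active_w": 8.0}   # larger platters, slower spin-up
LPDD = {"spin_up_s": 1.0, "active_w": 1.5}   # 1.8" or smaller platters
FLASH = {"spin_up_s": 0.0, "active_w": 0.5}  # no moving parts

def paging_energy_joules(device, transfer_s):
    """Approximate energy for one paging access that requires a spin-up."""
    return device["active_w"] * (device["spin_up_s"] + transfer_s)

for name, dev in (("HPDD", HPDD), ("LPDD", LPDD), ("flash", FLASH)):
    print(name, paging_energy_joules(dev, transfer_s=2.0))
# HPDD 56.0 J, LPDD 4.5 J, flash 1.0 J -- and the HPDD adds 5 s of latency.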
Referring now to FIG. 21, a Redundant Array of Independent Disks (RAID) system 800 is shown that includes one or more servers and/or clients 804 in communication with a disk array 808. The one or more servers and/or clients 804 include a disk array controller 812 and/or an array management module 814. The disk array controller 812 and/or the array management module 814 receive data and perform logical-to-physical address mapping of the data onto the disk array 808. The disk array 808 typically includes a plurality of HPDDs 816.
The multiple HPDDs 816 provide fault tolerance (redundancy) and/or increased data access rates. The RAID system 800 provides a way of accessing multiple individual HPDDs as if the disk array 808 were a single large hard disk drive. The disk array 808 may provide total data storage ranging from several hundred gigabytes to tens or hundreds of terabytes. Data is stored on the multiple HPDDs 816 in various ways to reduce the risk of losing all of the data if one drive fails and to improve data access times.
The methods of storing data on the HPDDs 816 are collectively referred to as RAID levels. There are various RAID levels, including RAID level 0 or disk striping. In a RAID level 0 system, data is written in blocks across multiple drives, which allows one drive to write or read a block of data while another drive seeks the next block. The advantages of disk striping include higher access rates and full utilization of the array capacity. The disadvantage is that there is no fault tolerance: if one drive fails, the entire contents of the array become inaccessible.
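The striping just described reduces to a fixed mapping from logical block numbers to (drive, stripe) positions. A minimal sketch, assuming equal-size blocks and a fixed number of drives (the function name is ours, not the patent's):

def raid0_location(block_index, num_drives):
    """Map a logical block number to (drive, stripe) in a RAID 0 array."""
    return block_index % num_drives, block_index // num_drives

# Logical blocks 0..5 on a 3-drive array land on drives 0, 1, 2, 0, 1, 2,
# so consecutive blocks can be transferred and sought in parallel.
for b in range(6):
    print(b, raid0_location(b, 3))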
RAID level 1, or disk mirroring, provides redundancy by writing all data twice, once to each of two drives. If one drive fails, the other contains an exact copy of the data, and the RAID system can switch to the mirror drive with no lapse in user access. The disadvantages include no increase in data access speed and higher cost, since twice the number of drives (2N) is required. However, RAID level 1 provides the best protection of the data, because the array management software simply directs all application requests to the surviving HPDDs when one of the HPDDs fails.
RAID level 3 stripes data across multiple drives, with one additional drive dedicated to parity for error correction/recovery. RAID level 5 provides striping as well as parity for error recovery. In RAID level 5, the parity blocks are distributed among the drives of the array, which provides a more balanced access load across the drives. If one drive fails, the parity information is used to recover the data. The disadvantage is a relatively slow write cycle (two reads and two writes are required for each block written). The array capacity is N-1, and a minimum of 3 drives is required.
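The parity used in RAID levels 3 and 5 is a bytewise XOR over the data blocks of a stripe; if any single block is lost, XOR-ing the survivors with the parity block reproduces it. A minimal sketch, assuming equal-length blocks:

from functools import reduce

def parity(blocks):
    """Bytewise XOR parity over the blocks of one stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def recover_missing(surviving_blocks, parity_block):
    """Rebuild the single missing block of a stripe from the survivors."""
    return parity(surviving_blocks + [parity_block])

stripe = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]   # three data blocks
p = parity(stripe)
# Lose the middle block, then recover it from the other two plus parity.
assert recover_missing([stripe[0], stripe[2]], p) == stripe[1]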
RAID level 0+1 combines striping and mirroring without parity. The advantages are fast data access (as in RAID level 0) and single-drive fault tolerance (as in RAID level 1). RAID level 0+1 still requires twice the number of disks (as in RAID level 1). As can be appreciated, there may be other RAID levels and/or methods of storing data on the array 808.
Referring now to FIGS. 22A and 22B, a RAID system 834-1 in accordance with the present invention includes a disk array 836 comprising X HPDDs and a disk array 838 comprising Y LPDDs. One or more clients and/or servers 840 include a disk array controller 842 and/or an array management module 844. Although separate devices 842 and 844 are shown, these devices can be integrated if desired. As can be appreciated, X is greater than or equal to 2 and Y is greater than or equal to 1. X can be greater than Y, less than Y, and/or equal to Y. For example, FIG. 22B shows a RAID system 834-1' where X = Y = Z.
Referring now to FIGS. 23A, 23B, 24A, and 24B, RAID systems 834-2 and 834-3 are shown. In FIG. 23A, the LPDD disk array 838 communicates with the servers/clients 840, and the HPDD disk array 836 communicates with the LPDD disk array 838. The RAID system 834-2 may include a management bypass path that selectively circumvents the LPDD disk array 838. As can be appreciated, X is greater than or equal to 2 and Y is greater than or equal to 1. X can be greater than Y, less than Y, and/or equal to Y. For example, FIG. 23B shows a RAID system 834-2' where X = Y = Z. In FIG. 24A, the HPDD disk array 836 communicates with the servers/clients 840, and the LPDD disk array 838 communicates with the HPDD disk array 836. The RAID system 834-3 may include a management bypass path, represented by dashed line 846, that selectively circumvents the LPDD disk array 838. As can be appreciated, X is greater than or equal to 2 and Y is greater than or equal to 1. X can be greater than Y, less than Y, and/or equal to Y. For example, FIG. 24B shows a RAID system 834-3' where X = Y = Z. The caching policies used in FIGS. 23A-24B may include write-through and/or write-back, as sketched below.
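A minimal sketch of the write-back policy for the arrangements of FIGS. 23A and 24A follows. The class, the dictionary-backed arrays, and the flush threshold are illustrative assumptions, not the patent's implementation; a write-through variant would simply update the HPDD array on every write as well.

class WriteBackTier:
    """LPDD array caching in front of an HPDD array (FIGS. 23A/24A)."""

    def __init__(self, lpdd_array, hpdd_array, flush_threshold=64):
        self.lpdd = lpdd_array       # dict-like, low power
        self.hpdd = hpdd_array       # dict-like, high power
        self.dirty = set()           # blocks not yet written to the HPDDs
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        # Write-back: the HPDD array is not touched on the write path.
        self.lpdd[block] = data
        self.dirty.add(block)
        if len(self.dirty) >= self.flush_threshold:
            self.flush()             # batch writes so the HPDDs spin up once

    def read(self, block):
        if block in self.lpdd:       # hit: served at LP power
            return self.lpdd[block]
        return self.hpdd[block]      # miss: the HPDD array must be on

    def flush(self):
        for block in self.dirty:
            self.hpdd[block] = self.lpdd[block]
        self.dirty.clear()

Batching dirty blocks before flushing lets the HPDD disk array spin up once per batch rather than once per write, which is the source of the power savings described below.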
The array management module 844 and/or the disk array controller 842 utilizes the LPDD disk array 838 to reduce the power consumption of the HPDD disk array 836. Typically, the HPDD disk array 808 in the conventional RAID system of FIG. 21 remains powered on at all times during operation to support the required data access times. As can be appreciated, the HPDD disk array 808 consumes a relatively large amount of power. Furthermore, because large amounts of data are stored in the HPDD disk array 808, the platters of the HPDDs are typically as large as possible, which requires higher capacity spindle motors and increases data access times, since the read/write arms must travel farther on average.
In accordance with the present invention, the techniques described above in conjunction with FIGS. 6-17 are selectively employed in the RAID system 834 as shown in FIG. 22B to reduce power consumption and data access times. Although not shown in FIGS. 22A and 23A-24B, the other RAID systems in accordance with the present invention may also use these techniques. In other words, the LUB module 304, the adaptive storage module 306, and/or the LPDD maintenance module described in FIGS. 6 and 7A-7D are selectively implemented by the disk array controller 842 and/or the array management module 844 to selectively store data on the LPDD disk array 838 to reduce power consumption and data access times. The adaptive storage control module 414 described in FIGS. 8A-8C, 9, and 10 may also be selectively implemented by the disk array controller 842 and/or the array management module 844 to reduce power consumption and data access times. The drive power reduction module 522 described in FIGS. 11A-11C and 12 may also be implemented by the disk array controller 842 and/or the array management module 844 to reduce power consumption and data access times. In addition, the multi-drive systems and/or direct interfaces shown in FIGS. 13-17 may be implemented with one or more of the HPDDs in the HPDD disk array 836 to increase functionality and to reduce power consumption and access times.
Referring now to FIG. 25, a Network Attached Storage (NAS) system 850 according to the prior art is shown to include storage devices 854, storage requesters 858, a file server 862, and a communication system 866. The storage devices 854 typically include disk drives, RAID systems, tape drives, tape libraries, optical drives, jukeboxes, and any other storage devices to be shared. The storage devices 854 are preferably, but not necessarily, object-oriented devices. The storage devices 854 may include an I/O interface for data storage and retrieval by the requesters 858. The requesters 858 typically include servers and/or clients that share and/or directly access the storage devices 854.
The file server 862 performs management and security functions such as request authentication and resource allocation. The storage devices 854 depend on the file server 862 for management direction, while the requesters 858 are relieved of storage management to the extent that the file server 862 assumes that responsibility. In smaller systems, a dedicated file server may not be needed; in that case, a requester may take on the responsibility for overseeing the operation of the NAS system 850. As such, both the file server 862 and the requester 858 are shown to include management modules 870 and 872, respectively, although either or both of the management modules may be provided. The communication system 866 is the physical infrastructure through which the components of the NAS system 850 communicate. It preferably has properties of both networks and channels: it is able to connect all of the components in the network, and it has the low latency typically found in a channel.
When the NAS system 850 is powered up, the storage devices 854 identify themselves either to each other or to a common point of reference, such as the file server 862, one or more of the requesters 858, and/or the communication system 866. The communication system 866 typically offers network management techniques to perform this, which are accessible by connecting to the medium associated with the communication system. The storage devices 854 and the requesters 858 log onto the medium. Any component wishing to determine the operating configuration can use the medium services to identify all of the other components. From the file server 862, the requesters 858 learn of the existence of the storage devices 854 that they may access, and when a storage device 854 needs to locate another device or to invoke a management service such as backup, it knows where to go. Similarly, the file server 862 can learn of the existence of the storage devices 854 from the medium services. Depending on the security of a particular installation, a requester may be denied access to certain devices. From the set of accessible storage devices, the requester can then identify the files, databases, and free space available to it.
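The logon and discovery sequence just described can be pictured as a registry maintained at a common reference point. The sketch below is a schematic illustration only; the class and method names are hypothetical, and the real exchange rides on the communication system 866.

class FileServer:
    """Tracks storage devices and grants requesters access (FIG. 25)."""

    def __init__(self):
        self.devices = {}     # device name -> attributes (e.g. RAID level)
        self.acl = {}         # requester -> set of accessible devices

    def register_device(self, name, attributes):
        # Storage devices identify themselves at power-up.
        self.devices[name] = attributes

    def logon(self, requester, allowed_devices):
        # Security policy may deny a requester access to certain devices.
        self.acl[requester] = set(allowed_devices) & set(self.devices)

    def accessible(self, requester):
        return {d: self.devices[d] for d in self.acl.get(requester, ())}

server = FileServer()
server.register_device("array0", {"raid": 5})
server.register_device("tape0", {"type": "tape library"})
server.logon("client1", ["array0"])
print(server.accessible("client1"))   # {'array0': {'raid': 5}}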
At the same time, each NAS component can identify to the file server 862 any special considerations of which it should be aware. Any device-level service attributes can be communicated once to the file server 862, and all of the other components can then learn of them from the file server 862. For example, a requester may wish to be informed of the introduction of additional storage after startup, which can be triggered by an attribute that is set when the requester logs onto the file server 862. The file server 862 can do this automatically whenever new storage devices are added to the configuration, including conveying important characteristics such as whether the new device is RAID 5, mirrored, and so on.
When a requester must open a file, it may be able to go directly to the storage devices 854, or it may have to go to the file server to obtain permission and location information. The extent to which the file server 862 controls access to storage is a function of the security requirements of the installation.
Referring now to FIG. 26, a Network Attached Storage (NAS) system 900 in accordance with the present invention is shown to include storage devices 904, requesters 908, file servers 912, and a communication system 916. The storage devices 904 include the RAID system 834 and/or the multi-disk drive system 930 described above in FIGS. 6-19. The storage devices 904 may also include disk drives, RAID systems, tape drives, tape libraries, optical drives, jukeboxes, and/or any other storage devices to be shared, as described above. As can be appreciated, using the improved RAID systems and/or the multi-disk drive system 930 will reduce the power consumption and data access times of the NAS system 900.
Those skilled in the art can now appreciate from the foregoing description that the broad principles of the present invention can be implemented in a variety of forms. Therefore, while this invention has been described in connection with particular examples thereof, the true scope of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the specification and the following claims, and the scope of the invention should, therefore, be determined by the appended claims.

Claims (10)

1. A data storage system for a computer having high power and low power modes, the data storage system comprising:
a low power (LP) nonvolatile memory;
a high power (HP) nonvolatile memory; and
a drive power reduction module in communication with said low power and high power nonvolatile memories, wherein, when data is read from said HP nonvolatile memory during said low power mode and said read data comprises a sequential access data file, said drive power reduction module calculates a burst period for transferring segments of said read data from said HP nonvolatile memory to said LP nonvolatile memory.
2. The data storage system of claim 1, wherein the drive power reduction module selects the burst period to reduce power consumption during reading of the read data during the low power mode.
3. The data storage system of claim 1, wherein the LP nonvolatile memory comprises at least one of a flash memory and a Low Power Disk Drive (LPDD).
4. The data storage system of claim 3, wherein the LPDD comprises one or more platters having a diameter less than or equal to 1.8 inches.
5. The data storage system of claim 3, wherein the HP nonvolatile memory comprises a high power disk drive (HPDD).
6. The data storage system of claim 5, wherein the HPDD comprises one or more platters, wherein the one or more platters have a diameter greater than 1.8 inches.
7. The data storage system of claim 5, wherein the burst period is based on at least one of: a spin-up time of the LPDD, a spin-up time of the HPDD, a power consumption of the LPDD, a power consumption of the HPDD, a read length of the read data, and a capacity of the LPDD.
8. The data storage system of claim 1, further comprising: a cache control module comprising the drive power reduction module.
9. The data storage system of claim 1, further comprising: a host control module comprising the drive power reduction module.
10. The data storage system of claim 1, further comprising: an operating system comprising the drive power reduction module.
HK07100731.7A 2004-06-10 2007-01-19 Hard disk drive power reduction module HK1094259B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/865,368 2004-06-10
US10/865,368 US7634615B2 (en) 2004-06-10 2004-06-10 Adaptive storage system

Publications (2)

Publication Number Publication Date
HK1094259A1 (en) 2007-03-23
HK1094259B true HK1094259B (en) 2009-06-19

Similar Documents

Publication Publication Date Title
EP1605453B1 (en) Adaptive storage system
EP1605361B1 (en) Cache hierarchy
EP2049968B1 (en) Adaptive storage system including hard disk drive with flash interface
JP2009536767A (en) Adaptive storage system including hard disk drive with flash interface
HK1094259B (en) Hard disk drive power reduction module