US20130007755A1 - Methods, computer systems, and physical computer storage media for managing resources of a storage server - Google Patents
- Publication number
- US20130007755A1 (application US13/172,648)
- Authority
- US
- United States
- Prior art keywords
- request
- computer
- resources
- priority level
- storage server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
Definitions
- a physical computer storage medium comprising a computer program product for managing resources of a storage server is also provided.
- the storage medium includes computer code for receiving a first input/output (IO) request, computer code for dynamically assigning a first priority level to the first IO request, the first IO request having a performance level for an application residing on a host in communication with the storage server, and computer code for throttling a second IO request of a second priority level, when the performance level of the first IO request does not meet or exceed a first target, to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, the second priority level being different than the first priority level.
- FIG. 1 is a pictorial representation of an example distributed data processing system, according to an embodiment
- FIG. 2 is a block diagram of an example data processing system, according to an embodiment.
- FIG. 3 is a flow diagram of a method of managing resources of a storage server, according to an embodiment.
- the illustrated embodiments below provide systems and methods for managing a storage server that improve overall system performance.
- the systems and methods allow dynamic adjustment of storage classes of high and low priority IO requests, based on analyses of performance feedback data associated with those IO requests.
- the method includes receiving a first input/output (IO) request, where the first IO request is associated with a performance level and a first priority level for an application residing on a host in communication with the storage server.
- a second IO request of a second priority level is throttled, when the performance level of the first IO request does not meet or exceed a first target to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, where the second priority level is different than the first priority level.
- FIGS. 1-2 example diagrams of data processing environments are provided in which illustrative embodiments of the present invention may be implemented. It should be appreciated that FIGS. 1-2 are only examples and are not intended to assert or imply any limitation with regard to the environments in which aspects or embodiments of the present invention may be implemented. Many modifications to the depicted environments may be made without departing from the spirit and scope of the present invention.
- FIG. 1 depicts a pictorial representation of an example distributed data processing system in which aspects of the illustrative embodiments may be implemented.
- Distributed data processing system 100 may include a network of computers in which aspects of the illustrative embodiments may be implemented.
- the distributed data processing system 100 contains at least one network 102 , which is the medium used to provide communication links between various devices and computers connected together within distributed data processing system 100 .
- the network 102 may include connections, such as wire, wireless communication links, or fiber optic cables.
- host/server 104 and host/server 106 are connected to network 102 along with storage server 108 .
- the host/servers 104 , 106 are application servers and include storage controllers 109 , 111 that are configured to control storage and access of data stored on the storage server 108 .
- the host/servers 104 , 106 are configured to provide input/output (“IO”) requests to the storage server 108 .
- the host/servers 104 , 106 assign priority levels and performance levels to the IO requests.
- the priority level of an IO request can be a high priority, a medium priority, or a low priority.
- one IO request can have a higher or lower priority level than another IO request.
- the performance level can be set at a target and can be measured numerically or qualitatively.
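The priority levels and target comparison described above can be sketched as a small data model. This is purely an illustration and not part of the patent; the class and field names are assumptions.

```python
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    # The embodiment describes high, medium, and low priority levels.
    LOW = 0
    MEDIUM = 1
    HIGH = 2

@dataclass
class IORequest:
    # Hypothetical IO request record; field names are illustrative.
    priority: Priority
    performance_level: float  # measured performance (e.g., observed IOPS)
    target: float             # performance target set for the host application

    def meets_target(self) -> bool:
        # The embodiment asks whether performance "meets or exceeds" a target.
        return self.performance_level >= self.target
```

Because `Priority` is an `IntEnum`, one request's priority can be compared directly against another's, matching the notion that one IO request can have a higher or lower priority level than another.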
- Storage server 108 may include a storage unit and can comprise any storage system. Examples of storage server 108 include an advanced storage device, such as a DS8000 dual node controller, or a file server, such as a network attached storage (NAS) device. Although two host/servers 104 , 106 are shown, more or fewer can be included in other embodiments. Distributed data processing system 100 may include additional servers and other devices not shown.
- distributed data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another.
- At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages.
- the distributed data processing system 100 may also be implemented to include a number of different types of networks, such as for example, an intranet, a local area network (LAN), a wide area network (WAN), or the like.
- FIG. 1 is intended as an example, not as an architectural limitation for different embodiments of the present invention, and therefore, the particular elements shown in FIG. 1 should not be considered limiting with regard to the environments in which the illustrative embodiments of the present invention may be implemented.
- Data processing system 200 is an example of a computer, such as host/server 104 , 106 in FIG. 1 , in which computer usable code or instructions implementing the processes for illustrative embodiments of the present invention may be located.
- Data processing system 200 includes a controller 209 comprising a processor 206 , main memory 208 and, alternatively, a graphics processor 210 .
- the controller 209 supplies commands to run database and/or backup applications to the system 200 .
- the data processing system 200 employs a hub architecture including north bridge and memory controller hub (NB/MCH) 202 and south bridge and input/output (I/O) controller hub (SB/ICH) 204 .
- Processor 206 , main memory 208 , and graphics processor 210 are connected to NB/MCH 202 .
- Graphics processor 210 may be connected to NB/MCH 202 through an accelerated graphics port (AGP).
- local area network (LAN) adapter 212 connects to SB/ICH 204 .
- Audio adapter 216 , keyboard and mouse adapter 220 , modem 222 , read only memory (ROM) 224 , hard disk drive (HDD) 226 , CD-ROM drive 230 , universal serial bus (USB) ports and other communication ports 232 , and PCI/PCIe devices 234 connect to SB/ICH 204 through bus 238 and bus 240 .
- PCI/PCIe devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. PCI uses a card bus controller, while PCIe does not.
- ROM 224 may be, for example, a flash basic input/output system (BIOS).
- HDD 226 and CD-ROM drive 230 connect to SB/ICH 204 through bus 240 .
- HDD 226 and CD-ROM drive 230 may use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface.
- Super I/O (SIO) device 236 may be connected to SB/ICH 204 .
- An operating system runs on processor 206 .
- the operating system coordinates and provides control of various components within the data processing system 200 in FIG. 2 .
- the operating system may be a commercially available operating system such as Microsoft® Windows® XP (Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both).
- An object-oriented programming system, such as the Java™ programming system, may run in conjunction with the operating system and provides calls to the operating system from Java™ programs or applications executing on data processing system 200 (Java is a trademark of Sun Microsystems, Inc. in the United States, other countries, or both).
- data processing system 200 may be, for example, an IBM® eServer™ System p® computer system, running the Advanced Interactive Executive (AIX®) operating system or the LINUX® operating system (eServer, System p, and AIX are trademarks of International Business Machines Corporation in the United States, other countries, or both, while LINUX is a trademark of Linus Torvalds in the United States, other countries, or both).
- Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors in processor 206 . Alternatively, a single processor system may be employed.
- the data processing system 200 may be comprised of one or more System p servers with a network of host adapters to communicate over the network 102 in FIG. 1 , and a network of RAID adapters to communicate to a plethora of storage devices.
- Computer code for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as HDD 226 , and may be loaded into main memory 208 for execution by processor 206 .
- the processes for illustrative embodiments of the present invention may be performed by processor 206 using computer usable program code, which may be located in a memory such as, for example, main memory 208 , ROM 224 , or in one or more peripheral devices 226 and 230 , for example.
- a bus system such as bus 238 or bus 240 as shown in FIG. 2 , may be comprised of one or more buses.
- the bus system may be implemented using any type of communication fabric or architecture that provides for a transfer of data between different components or devices attached to the fabric or architecture.
- a communication unit such as modem 222 or network adapter 212 of FIG. 2 , may include one or more devices used to transmit and receive data.
- a memory may be, for example, main memory 208 , ROM 224 , or a cache such as found in NB/MCH 202 in FIG. 2 .
- FIG. 3 is a flow diagram of a method 300 of managing resources of a storage server, according to an embodiment.
- the storage server (e.g., storage server 108 ) may receive an input/output (“IO”) request from a host/server (e.g., host/server 104 , 106 ), block 302 .
- IO requests are continuously provided during operation of the system.
- IO requests may be provided before or after the IO request referred to in block 302 .
- IO requests provided before the IO request referred to in block 302 may be referred to as “previously-submitted IO requests,” and those IO requests provided after the IO request referred to in block 302 may be referred to as “subsequent IO requests.”
- a previously-submitted IO request may be referred to as a “first IO request,” and the IO request referred to in block 304 may be referred to as a “second IO request.”
- the subsequent IO request can be referred to as a “third IO request.” It will be appreciated that the ordinal numbers referencing the IO requests are used to illustrate when the IO requests occur relative to each other.
- the determination is made by comparing the performance of the high priority IO request with the performance of a low priority IO request.
- the determination is made by comparing the performance of the high priority IO request with previous attempts to perform the high priority IO request.
- historical data related to the performance of the high priority IO request and/or low priority IO request is analyzed. For example, analysis of the historical data includes determining whether the performance level of the IO request has met or exceeded a target in previous runs of the IO request.
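The historical-data check described above can be sketched as a simple predicate. This is an illustration only; the function name, the shape of the history, and the fraction-of-runs threshold are all assumptions not found in the patent.

```python
def met_target_historically(history, target, min_fraction=1.0):
    """Return True if the request's performance level met or exceeded
    `target` in previous runs. `history` is a list of measured performance
    levels from earlier runs of the IO request; requiring only a fraction
    of runs to pass is an added assumption for illustration."""
    if not history:
        return False  # no historical data: conservatively treat the target as unmet
    met = sum(1 for level in history if level >= target)
    return met / len(history) >= min_fraction
```

A qualitative target such as “satisfactory”/“unsatisfactory” could be handled the same way by mapping each outcome to a boolean before the comparison.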
- the target can be a numerical value, in an embodiment.
- the target can be a qualitative value, such as “satisfactory” or “unsatisfactory.” In other embodiments, other targets are employed.
- the storage server concurrently waits to perform the high priority IO request, while the performance level of the high priority IO request does not meet or exceed the target. In another embodiment, the storage server concurrently waits to perform the high priority IO request, while the performance level of the high priority IO request compared to the performance level of the low priority IO request does not meet or exceed the target. After a subsequent low priority IO request is identified for throttling, resources are reallocated to performing the high priority IO request by the storage server.
- the throttled low priority IO request is not performed, in an embodiment.
- the throttled low priority IO request is performed by applying limited resources. Consequently, the high priority IO request is performed using re-allocated resources in proportion to the limited resources directed towards performance of the low priority IO request. In any case, the high priority IO request is performed, block 310 .
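The proportional re-allocation described above can be sketched as a linear split of a resource budget. The function name, parameters, and the linear model are illustrative assumptions, not the patent's implementation.

```python
def reallocate(total, low_share, throttle_factor):
    """Split a resource budget between a throttled low-priority request and a
    high-priority request. `low_share` is the amount previously designated for
    the low-priority request; `throttle_factor` in [0, 1] is the fraction of
    that share the throttled request keeps."""
    limited_low = low_share * throttle_factor   # limited resources still applied to low priority
    freed = low_share - limited_low             # portion re-allocated to the high-priority request
    high_share = (total - low_share) + freed    # high priority gains the freed portion
    return high_share, limited_low
```

A `throttle_factor` of 0 corresponds to the embodiment in which the throttled low-priority request is not performed at all; any positive factor corresponds to performing it with limited resources.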
- the method continues to block 316 to determine whether a decision had been made from a previous cycle to throttle low priority IO requests. If so, performance of the low priority IO request is delayed so that the resources can be allocated for the performance of the previously-submitted high priority IO request, block 318 . For example, when the performance level of the high priority IO request does not meet or exceed a target or when the performance level of the high priority IO request compared to the performance level of the low priority IO request does not meet or exceed a target, at least a portion of a predetermined amount of the resources previously designated for performing the previously-submitted low priority IO request is re-allocated to performing the high priority IO request.
- the low priority IO request is then performed, block 320 . If at block 316 , a decision had not been made to throttle low priority IO request, the method continues to block 320 and the low priority IO request is performed.
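The branch through blocks 316-320 described above can be sketched as a small control-flow helper. The callbacks and argument names are hypothetical; this is a reading of the flow diagram, not the patent's code.

```python
def handle_low_priority(request, throttle_decided, delay, perform):
    """Sketch of blocks 316-320 of FIG. 3: if a previous cycle decided to
    throttle, the low-priority request is delayed first; either way, it is
    eventually performed. `delay` and `perform` are hypothetical callbacks."""
    if throttle_decided:   # block 316: was a throttling decision made in a previous cycle?
        delay(request)     # block 318: delay so resources go to the high-priority request
    perform(request)       # block 320: the low-priority request is performed
```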
- FIGS. 1-2 may vary depending on the implementation.
- Other internal hardware or peripheral devices such as flash memory, equivalent non-volatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in FIGS. 1-2 .
- although a distributed system is depicted, a single system alternatively can be employed. In such an embodiment, some of the hardware (such as the additional server) may not be included.
- the processes of the illustrative embodiments may be applied to a multiprocessor data processing system, other than the SMP system mentioned previously, without departing from the spirit and scope of the present invention.
- data processing system 200 may take the form of any of a number of different data processing systems including host computing devices, server computing devices, a tablet computer, laptop computer, telephone or other communication device, a personal digital assistant (PDA), or the like.
- data processing system 200 may be a portable computing device which is configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data, for example.
- data processing system 200 may be any known or later developed data processing system without architectural limitation.
- aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer readable program code embodied thereon.
- the computer-readable medium may be a computer-readable signal medium or a physical computer-readable storage medium.
- a physical computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, crystal, polymer, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- Examples of a physical computer-readable storage medium include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, RAM, ROM, an EPROM, a Flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible medium that can contain, or store a program or data for use by or in connection with an instruction execution system, apparatus, or device.
- Computer code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
- Computer code for carrying out operations for aspects of the present invention may be written in any static language, such as the “C” programming language or other similar programming language.
- the computer code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, or communication system, including, but not limited to, a local area network (LAN) or a wide area network (WAN), Converged Network, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flow diagram and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flow diagram and/or block diagram block or blocks.
- each block in the flow diagrams or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flow diagrams, and combinations of blocks in the block diagrams and/or flow diagram can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Abstract
For managing a storage server with improved overall system performance, a first input/output (I/O) request is received. A first priority level is dynamically assigned to the first I/O request, the first I/O request associated with a performance level for an application residing on a host in communication with the storage server. A second I/O request of a second priority level is throttled to allow at least a portion of a predetermined amount of resources previously designated for performing the second I/O request to be re-allocated to performing the first I/O request. The second priority level is different than the first priority level.
Description
- 1. Field of the Invention
- The present invention relates in general to storage servers, and more particularly, to methods, computer systems, and physical computer storage media for managing resources of storage servers.
- 2. Description of the Related Art
- Data processing systems can have many nodes that are geographically dispersed. In such case, the systems can be managed in a distributed manner by logically separating each environment into a series of loosely connected managed regions. Each environment can have a management server for managing local resources, while a management system including management servers coordinate activities across the system to permit remote site management and operation. In this way, local resources within one region can be exported for the use of other regions. The management system can be governed by a service level agreement providing rules for managing the many systems.
- In order to fulfill quality-of-service guarantees delineated by the service level agreement within a network management system, performance measurements may be required along various network routes throughout the system. In particular, the management system measures resource consumption while an application is running. Measurements are taken along particular routes and metrics and descriptions relating to operations performed consuming bandwidth are accumulated.
- Different applications may have different quality-of-service requirements delineated by the service level agreement. For instance, some applications may require a faster response time and/or higher input/output throughput than other applications. In other cases, an application may require larger bandwidth or larger storage capacity than another application. In the past, lower priority input/output (IO) requests were throttled based on static mapping between storage requirements and storage service classes of an application. As a result, in some instances, lower priority IO requests would be neglected in favor of high priority IO requests, and the system would become overloaded by the volume of high priority IO requests in a queue. Consequently, system operation was not as efficient as desired, and low priority requests would be only minimally fulfilled. To optimize overall performance of the system, improved methods and systems for managing storage media are needed.
- One improved method dynamically assigns priorities to IO requests based on the performance level and importance of the host application to which the IO requests belong, and throttles lower priority IO requests based on 1) the performance level of the high priority IO requests compared with the performance level of the lower priority requests, and 2) the performance level of the high priority IO requests compared with their performance target, which is defined within the storage class to which the high priority IO requests are currently mapped. In an embodiment, by way of example only, a first IO request is received. Then, a first priority level is dynamically assigned to the first IO request, where the first IO request is associated with a performance level for an application residing on a host in communication with the storage server. A second IO request of a second priority level is throttled, when the performance level of the first IO request does not meet or exceed a first target, to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, where the second priority level is different than the first priority level.
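The two comparisons above can be illustrated with a short sketch. This is not code from the embodiment; the function name, signature, and the reduction of each comparison to a boolean are assumptions made for illustration only:

```python
# Illustrative only: names and thresholds are assumptions, not part of
# the described embodiment.

def should_throttle_low_priority(high_perf: float, low_perf: float,
                                 high_target: float) -> bool:
    """Decide whether lower priority IO should be throttled, based on
    1) high-priority performance relative to low-priority performance, and
    2) high-priority performance relative to its storage-class target."""
    lags_low_priority = high_perf < low_perf   # comparison 1)
    misses_target = high_perf < high_target    # comparison 2)
    return lags_low_priority or misses_target
```

Under this reading, a high-priority stream that is below its storage-class target, or that is underperforming the lower priority stream, triggers throttling of the lower priority IO.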
- In another embodiment, by way of example only, a computer system is provided. The computer system includes a host and a storage server. The host is configured to provide input/output (IO) requests. The storage server is in communication with the host and is configured to receive the IO requests. The storage server includes a processor configured to receive a first input/output (IO) request, to dynamically assign a first priority level to the first IO request, the first IO request having a performance level, and to throttle a second IO request of a second priority level, when the performance level of the first IO request does not meet or exceed a first target, to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, the second priority level being different than the first priority level.
- In still another embodiment, by way of example only, a physical computer storage medium comprising a computer program product for managing resources of a storage server is provided. The storage medium includes computer code for receiving a first input/output (IO) request, computer code for dynamically assigning a first priority level to the first IO request, the first IO request having a performance level for an application residing on a host in communication with the storage server, and computer code for throttling a second IO request of a second priority level, when the performance level of the first IO request does not meet or exceed a first target, to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, the second priority level being different than the first priority level.
- In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
-
FIG. 1 is a pictorial representation of an example distributed data processing system, according to an embodiment; -
FIG. 2 is a block diagram of an example data processing system, according to an embodiment; and -
FIG. 3 is a flow diagram of a method of managing resources of a storage server, according to an embodiment. - The illustrated embodiments below provide systems and methods for managing a storage server with improved overall system performance. The systems and methods allow dynamic adjustment of the storage classes of high and low priority IO requests, based on analyses of performance feedback data associated with those IO requests. Generally, the method includes receiving a first input/output (IO) request, where the first IO request is associated with a performance level and a first priority level for an application residing on a host in communication with the storage server. A second IO request of a second priority level is throttled, when the performance level of the first IO request does not meet or exceed a first target, to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, where the second priority level is different than the first priority level.
- With reference now to the figures and in particular with reference to
FIGS. 1-2 , example diagrams of data processing environments are provided in which illustrative embodiments of the present invention may be implemented. It should be appreciated that FIGS. 1-2 are only examples and are not intended to assert or imply any limitation with regard to the environments in which aspects or embodiments of the present invention may be implemented. Many modifications to the depicted environments may be made without departing from the spirit and scope of the present invention. - With reference now to the figures,
FIG. 1 depicts a pictorial representation of an example distributed data processing system in which aspects of the illustrative embodiments may be implemented. Distributed data processing system 100 may include a network of computers in which aspects of the illustrative embodiments may be implemented. The distributed data processing system 100 contains at least one network 102, which is the medium used to provide communication links between various devices and computers connected together within distributed data processing system 100. The network 102 may include connections, such as wire, wireless communication links, or fiber optic cables. - In the depicted example, host/
server 104 and host/server 106 are connected to network 102 along with storage server 108. One or both of the host/servers 104, 106 may provide IO requests to the storage server 108. In this regard, the host/servers 104, 106 may run applications that use resources of the storage server 108. In an embodiment, the host/servers 104, 106 may take any of a variety of forms. -
Storage server 108 may include a storage unit and can comprise any storage system. Examples of storage server 108 may include an advanced storage device, such as a DS8000 dual node controller, or a file server, such as a network attached storage (NAS) device. Although two host/servers 104, 106 are shown, distributed data processing system 100 may include additional servers and other devices not shown. - In the depicted example, distributed
data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages. Of course, the distributed data processing system 100 may also be implemented to include a number of different types of networks, such as, for example, an intranet, a local area network (LAN), a wide area network (WAN), or the like. The illustrative embodiments are also particularly well suited for implementation with networks, such as SANs, where the wires and switches utilize Fibre Channel, iSCSI, FCoCEE, or the like technologies. As stated above, FIG. 1 is intended as an example, not as an architectural limitation for different embodiments of the present invention, and therefore, the particular elements shown in FIG. 1 should not be considered limiting with regard to the environments in which the illustrative embodiments of the present invention may be implemented. - With reference now to
FIG. 2 , a block diagram of an example data processing system is shown in which aspects of the illustrative embodiments may be implemented. Data processing system 200 is an example of a computer, such as host/server 104, 106 in FIG. 1 , in which computer usable code or instructions implementing the processes for illustrative embodiments of the present invention may be located. -
Data processing system 200 includes a controller 209 comprising a processor 206, main memory 208 and, alternatively, a graphics processor 210. The controller 209 supplies commands to run database and/or backup applications to the system 200. In the depicted embodiment, the data processing system 200 employs a hub architecture including north bridge and memory controller hub (NB/MCH) 202 and south bridge and input/output (I/O) controller hub (SB/ICH) 204. Processor 206, main memory 208, and graphics processor 210 are connected to NB/MCH 202. Graphics processor 210 may be connected to NB/MCH 202 through an accelerated graphics port (AGP). - In the depicted example, local area network (LAN)
adapter 212 connects to SB/ICH 204. Audio adapter 216, keyboard and mouse adapter 220, modem 222, read only memory (ROM) 224, hard disk drive (HDD) 226, CD-ROM drive 230, universal serial bus (USB) ports and other communication ports 232, and PCI/PCIe devices 234 connect to SB/ICH 204 through bus 238 and bus 240. PCI/PCIe devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. PCI uses a card bus controller, while PCIe does not. ROM 224 may be, for example, a flash basic input/output system (BIOS). -
HDD 226 and CD-ROM drive 230 connect to SB/ICH 204 through bus 240. HDD 226 and CD-ROM drive 230 may use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface. Super I/O (SIO) device 236 may be connected to SB/ICH 204. - An operating system runs on
processor 206. The operating system coordinates and provides control of various components within the data processing system 200 in FIG. 2 . As a host, the operating system may be a commercially available operating system such as Microsoft® Windows® XP (Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both). An object-oriented programming system, such as the Java™ programming system, may run in conjunction with the operating system and provides calls to the operating system from Java™ programs or applications executing on data processing system 200 (Java is a trademark of Sun Microsystems, Inc. in the United States, other countries, or both). - As a server,
data processing system 200 may be, for example, an IBM® eServer™ System p® computer system, running the Advanced Interactive Executive (AIX®) operating system or the LINUX® operating system (eServer, System p, and AIX are trademarks of International Business Machines Corporation in the United States, other countries, or both, while LINUX is a trademark of Linus Torvalds in the United States, other countries, or both). Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors in processor 206. Alternatively, a single processor system may be employed. Moreover, in one illustrative embodiment, the data processing system 200 may be comprised of one or more System p servers with a network of host adapters to communicate over the network 102 in FIG. 1 , and a network of RAID adapters to communicate with a plethora of storage devices. - Computer code for the operating system, the object-oriented programming system, and applications or programs (such as backup applications or database applications) are located on storage devices, such as
HDD 226, and may be loaded into main memory 208 for execution by processor 206. The processes for illustrative embodiments of the present invention may be performed by processor 206 using computer usable program code, which may be located in a memory such as, for example, main memory 208, ROM 224, or in one or more peripheral devices 226 and 230, for example. - A bus system, such as
bus 238 or bus 240 as shown in FIG. 2 , may be comprised of one or more buses. Of course, the bus system may be implemented using any type of communication fabric or architecture that provides for a transfer of data between different components or devices attached to the fabric or architecture. A communication unit, such as modem 222 or network adapter 212 of FIG. 2 , may include one or more devices used to transmit and receive data. A memory may be, for example, main memory 208, ROM 224, or a cache such as found in NB/MCH 202 in FIG. 2 . -
FIG. 3 is a flow diagram of a method 300 of managing resources of a storage server, according to an embodiment. During operation of a data processing system (e.g., system 100 of FIG. 1 ), the storage server (e.g., storage server 108) may receive an input/output (IO) request, block 302. For example, a host/server (e.g., host/server 104, 106) provides the IO request that is received by the storage server (e.g., storage server 108). It will be appreciated that IO requests are continuously provided during operation of the system. Thus, IO requests may be provided before or after the IO request referred to in block 302. IO requests provided before the IO request referred to in block 302 may be referred to as “previously-submitted IO requests,” and those IO requests provided after the IO request referred to in block 302 may be referred to as “subsequent IO requests.” A previously-submitted IO request may be referred to as a “first IO request,” and the IO request referred to in block 304 may be referred to as a “second IO request.” Additionally, the subsequent IO request can be referred to as a “third IO request.” It will be appreciated that the ordinal numbers referencing the IO requests are used to illustrate when the IO requests occur relative to each other. - At
block 304, a determination is made as to whether the IO request has a high priority or a low priority. For example, a calculation is made, based on a quality of service agreement, as to the priority of the IO request to dynamically assign a priority level to the IO request. As noted above, the IO request can be pre-assigned an initial priority level. After the calculation is made, the priority level can change to another priority level. In any case, when the storage server receives the IO request, the priority level of the IO request is provided to the storage server. In an example, the IO request may have a priority level that is higher or lower than a previously-submitted IO request. In another example, the IO request may have a priority level that is higher or lower than a subsequent IO request. - If the priority level is high, a determination is made as to whether the IO request is being performed sufficiently, block 306. In an embodiment, the determination is made by comparing the performance of the high priority IO request with the performance of a low priority IO request. In another embodiment, the determination is made by comparing the performance of the high priority IO request with previous attempts to perform the high priority IO request. In any case, to make such a determination, historical data related to the performance of the high priority IO request and/or low priority IO request is analyzed. For example, analysis of the historical data includes determining whether the performance level of the IO request has met or exceeded a target in previous runs of the IO request. The target can be a numerical value, in an embodiment. In another embodiment, the target can be a qualitative value, such as “satisfactory” or “unsatisfactory.” In other embodiments, other targets are employed.
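As a sketch of the sufficiency check at block 306, one might keep a rolling window of measured performance for an IO class and compare its recent average against the target. The window size, the averaging rule, and all names here are assumptions made for illustration, not details of the embodiment:

```python
from collections import deque

class PerformanceHistory:
    """Rolling record of performance samples for one IO request class,
    standing in for the 'historical data' analyzed at block 306."""

    def __init__(self, window: int = 8):
        # Only the most recent `window` samples are retained.
        self.samples = deque(maxlen=window)

    def record(self, value: float) -> None:
        self.samples.append(value)

    def meets_target(self, target: float) -> bool:
        # Sufficient when the recent average meets or exceeds the target;
        # with no samples yet, assume no shortfall has been observed.
        if not self.samples:
            return True
        return sum(self.samples) / len(self.samples) >= target
```

A qualitative target (e.g. “satisfactory”) could be handled the same way by mapping each qualitative grade to a number before recording it.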
- If the IO request is not being performed sufficiently, a decision is made to throttle a low priority IO request and a calculation is made as to how long the low priority IO request should be delayed, block 308. In an embodiment, the storage server concurrently waits to perform the high priority IO request, while the performance level of the high priority IO request does not meet or exceed the target. In another embodiment, the storage server concurrently waits to perform the high priority IO request, while the performance level of the high priority IO request compared to the performance level of the low priority IO request does not meet or exceed the target. After a subsequent low priority IO request is identified for throttling, resources are reallocated to performing the high priority IO request by the storage server. Specifically, at least a portion of the resources that would have been used for performing the low priority IO request are reallocated towards the high priority IO request of
block 308. When a determination is made to throttle the low priority IO request, the throttled low priority IO request is not performed, in an embodiment. In another embodiment, the throttled low priority IO request is performed by applying limited resources. Consequently, the high priority IO request is performed using re-allocated resources in proportion to the limited resources directed towards performance of the low priority IO request. In any case, the high priority IO request is performed, block 310. - Returning to block 306, if the high priority IO request is being performed sufficiently, a determination is made as to whether resources can be allocated to perform the throttled low priority IO request, block 312. If so, a calculation is made as to how much the delay of the throttled low priority IO request should be shortened or whether the delay should be canceled for the next performance cycle, block 314. Once calculated, the storage server then reallocates the resources to the performance of a subsequent low priority IO request. The high priority IO request is performed, block 310. In an embodiment in which a determination is made that no resources can be allocated to perform the throttled low priority IO request, the system continues to block 310 to perform the high priority IO request.
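The two adjustments described above — lengthening the low-priority delay when the high priority IO request falls short (block 308), shortening or canceling it when resources free up (block 314), and re-allocating resources in proportion to the limited share the throttled request still receives — might be sketched as follows. The step size, cap, and fraction are assumed values, not parameters from the embodiment:

```python
DELAY_STEP = 0.05  # seconds added or removed per cycle (assumed value)
DELAY_CAP = 0.50   # upper bound on the throttle delay (assumed value)

def adjust_delay(delay: float, high_sufficient: bool, spare: bool) -> float:
    """Per-cycle update of the delay applied to low priority IO requests."""
    if not high_sufficient:
        # Block 308: lengthen the delay so resources shift to high priority.
        return min(delay + DELAY_STEP, DELAY_CAP)
    if spare:
        # Block 314: shorten (or cancel) the delay for the next cycle.
        return max(delay - DELAY_STEP, 0.0)
    return delay

def split_resources(low_share: float, keep_fraction: float):
    """Limit the throttled request to a fraction of its designated resource
    share and re-allocate the remainder to the high priority request."""
    limited_low = low_share * keep_fraction
    extra_high = low_share - limited_low
    return extra_high, limited_low
```

With `keep_fraction=0.0` the throttled request is not performed at all, matching the first embodiment above; any positive fraction matches the second, in which the low priority IO request runs with limited resources.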
- At
block 304, if the IO request is a low priority request, the method continues to block 316 to determine whether a decision had been made in a previous cycle to throttle low priority IO requests. If so, performance of the low priority IO request is delayed so that the resources can be allocated for the performance of the previously-submitted high priority IO request, block 318. For example, when the performance level of the high priority IO request does not meet or exceed a target, or when the performance level of the high priority IO request compared to the performance level of the low priority IO request does not meet or exceed a target, at least a portion of a predetermined amount of the resources previously designated for performing the previously-submitted low priority IO request is re-allocated to performing the high priority IO request. After the high priority IO request is performed, the low priority IO request is then performed, block 320. If, at block 316, a decision had not been made to throttle low priority IO requests, the method continues to block 320 and the low priority IO request is performed. - Those of ordinary skill in the art will appreciate that the hardware in
FIGS. 1-2 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash memory, equivalent non-volatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in FIGS. 1-2 . In addition, although a distributed system is depicted, a single system alternatively can be employed. In such an embodiment, some of the hardware (such as the additional server) may not be included. Also, the processes of the illustrative embodiments may be applied to a multiprocessor data processing system, other than the SMP system mentioned previously, without departing from the spirit and scope of the present invention. - Moreover, the
data processing system 200 may take the form of any of a number of different data processing systems including host computing devices, server computing devices, a tablet computer, laptop computer, telephone or other communication device, a personal digital assistant (PDA), or the like. In some illustrative examples, data processing system 200 may be a portable computing device which is configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data, for example. Essentially, data processing system 200 may be any known or later developed data processing system without architectural limitation. - As will be appreciated by one of ordinary skill in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer readable program code embodied thereon.
- Any combination of one or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a physical computer-readable storage medium. A physical computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, crystal, polymer, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Examples of a physical computer-readable storage medium include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, RAM, ROM, an EPROM, a Flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain, or store a program or data for use by or in connection with an instruction execution system, apparatus, or device.
- Computer code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing. Computer code for carrying out operations for aspects of the present invention may be written in any static language, such as the “C” programming language or other similar programming language. The computer code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, or communication system, including, but not limited to, a local area network (LAN) or a wide area network (WAN), Converged Network, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Aspects of the present invention are described above with reference to flow diagrams and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flow diagrams and/or block diagrams, and combinations of blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flow diagram and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flow diagram and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flow diagram and/or block diagram block or blocks.
- The flow diagrams and block diagrams in the above figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flow diagrams or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flow diagrams, and combinations of blocks in the block diagrams and/or flow diagram, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Claims (10)
1-11. (canceled)
12. A computer system comprising:
a host configured to provide input/output (IO) requests; and
a storage server in communication with the host and configured to receive the IO requests, the storage server including a processor configured to receive a first input/output (IO) request, to dynamically assign a first priority level to the first IO request, the first IO request associated with a performance level for an application residing on a host in communication with the storage server, and to throttle a second IO request of a second priority level to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, the second priority level being different than the first priority level.
13. The computer system of claim 12 , wherein the processor is further configured to wait to perform the first IO request, when the performance level of the first IO request does not meet or exceed the first target, and to perform the first IO request after receiving the re-allocated resources.
14. The computer system of claim 12 , wherein the processor is further configured to re-allocate resources to perform the first IO request in proportion to a limited portion of the predetermined amount of resources.
15. The computer system of claim 12 , wherein the processor is further configured to perform the second IO request, when the performance level of the first IO request meets or exceeds a second target.
16. A physical computer storage medium comprising a computer program product for managing resources of a storage server, the physical computer storage medium comprising:
computer code for receiving a first input/output (IO) request;
computer code for dynamically assigning a first priority level to the first IO request, the first IO request associated with a performance level for an application residing on a host in communication with the storage server; and
computer code for throttling a second IO request of a second priority level to allow at least a portion of a predetermined amount of resources previously designated for performing the second IO request to be re-allocated to performing the first IO request, the second priority level being different than the first priority level.
17. The physical computer storage medium of claim 16 , further comprising:
computer code for waiting to perform the first IO request, when the performance level of the first IO request does not meet or exceed the first target; and
computer code for performing the first IO request after receiving the re-allocated resources.
18. The physical computer storage medium of claim 16 , wherein the computer code for throttling the second IO request includes computer code for performing the second IO request with a limited portion of the predetermined amount of resources.
19. The physical computer storage medium of claim 18 , wherein the computer code for throttling the second IO request includes computer code for re-allocating resources to perform the first IO request in proportion to the limited portion of the predetermined amount of resources used for performing the second IO request.
20. (canceled)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/172,648 US20130007755A1 (en) | 2011-06-29 | 2011-06-29 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
US13/534,125 US8881165B2 (en) | 2011-06-29 | 2012-06-27 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
CN201210219029.XA CN103106043B (en) | 2011-06-29 | 2012-06-28 | For method and the computer system of the resource of management storage server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/172,648 US20130007755A1 (en) | 2011-06-29 | 2011-06-29 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/534,125 Continuation US8881165B2 (en) | 2011-06-29 | 2012-06-27 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130007755A1 true US20130007755A1 (en) | 2013-01-03 |
Family
ID=47392084
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/172,648 Abandoned US20130007755A1 (en) | 2011-06-29 | 2011-06-29 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
US13/534,125 Active 2032-02-24 US8881165B2 (en) | 2011-06-29 | 2012-06-27 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/534,125 Active 2032-02-24 US8881165B2 (en) | 2011-06-29 | 2012-06-27 | Methods, computer systems, and physical computer storage media for managing resources of a storage server |
Country Status (2)
Country | Link |
---|---|
US (2) | US20130007755A1 (en) |
CN (1) | CN103106043B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140173113A1 (en) * | 2012-12-19 | 2014-06-19 | Symantec Corporation | Providing Optimized Quality of Service to Prioritized Virtual Machines and Applications Based on Quality of Shared Resources |
CN104168260A (en) * | 2013-05-15 | 2014-11-26 | 约翰内斯·海德汉博士有限公司 | Method for transferring data between a position measuring device and an associated processing unit |
US20160092380A1 (en) * | 2014-09-30 | 2016-03-31 | Emc Corporation | Leveling io |
US20180109641A1 (en) * | 2015-09-29 | 2018-04-19 | Huawei Technologies Co., Ltd. | Data Processing Method and Apparatus, Server, and Controller |
US20180131633A1 (en) * | 2016-11-08 | 2018-05-10 | Alibaba Group Holding Limited | Capacity management of cabinet-scale resource pools |
US20240393950A1 (en) * | 2023-05-24 | 2024-11-28 | Western Digital Technologies, Inc. | Disaggregated memory management |
US12360905B2 (en) * | 2022-01-24 | 2025-07-15 | Robert Bosch Gmbh | Computer-implemented method for managing cache utilization |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103765969B (en) * | 2011-10-03 | 2018-04-10 | 太阳专利信托公司 | Terminal, base station, and communication method |
US9354813B1 (en) * | 2012-12-28 | 2016-05-31 | Emc Corporation | Data storage system modeling |
CN103761051B (en) * | 2013-12-17 | 2016-05-18 | 北京同有飞骥科技股份有限公司 | Concurrent-stream write performance optimization method based on multi-input/output of persistent data |
CA2882446A1 (en) | 2014-02-21 | 2015-08-21 | Coho Data, Inc. | Methods, systems and devices for parallel network interface data structures with differential data storage service capabilities |
GB2523568B (en) * | 2014-02-27 | 2018-04-18 | Canon Kk | Method for processing requests and server device processing requests |
GB2528318A (en) * | 2014-07-18 | 2016-01-20 | Ibm | Measuring delay |
US9733987B2 (en) * | 2015-02-20 | 2017-08-15 | Intel Corporation | Techniques to dynamically allocate resources of configurable computing resources |
US10102031B2 (en) * | 2015-05-29 | 2018-10-16 | Qualcomm Incorporated | Bandwidth/resource management for multithreaded processors |
US9588913B2 (en) | 2015-06-29 | 2017-03-07 | International Business Machines Corporation | Management of allocation for alias devices |
CN106168911A (en) * | 2016-06-30 | 2016-11-30 | 联想(北京)有限公司 | Information processing method and device |
US10599340B1 (en) * | 2017-07-13 | 2020-03-24 | EMC IP Holding LLC | Policy driven IO scheduler to improve read IO performance in hybrid storage systems |
US10592123B1 (en) * | 2017-07-13 | 2020-03-17 | EMC IP Holding Company LLC | Policy driven IO scheduler to improve write IO performance in hybrid storage systems |
US10509739B1 (en) | 2017-07-13 | 2019-12-17 | EMC IP Holding Company LLC | Optimized read IO for mix read/write scenario by chunking write IOs |
US10719245B1 (en) | 2017-07-13 | 2020-07-21 | EMC IP Holding Company LLC | Transactional IO scheduler for storage systems with multiple storage devices |
CN109726005B (en) * | 2017-10-27 | 2023-02-28 | EMC IP Holding Company LLC | Method, server system and computer readable medium for managing resources |
US10613896B2 (en) | 2017-12-18 | 2020-04-07 | International Business Machines Corporation | Prioritizing I/O operations |
US10432798B1 (en) * | 2018-05-25 | 2019-10-01 | At&T Intellectual Property I, L.P. | System, method, and apparatus for service grouping of users to different speed tiers for wireless communication |
US10419943B1 (en) | 2018-06-15 | 2019-09-17 | At&T Intellectual Property I, L.P. | Overlay of millimeter wave (mmWave) on citizens broadband radio service (CBRS) for next generation fixed wireless (NGFW) deployment |
US10798537B2 (en) | 2018-07-09 | 2020-10-06 | At&T Intellectual Property I, L.P. | Next generation fixed wireless qualification tool for speed-tier based subscription |
US11360544B2 (en) * | 2018-10-03 | 2022-06-14 | Google Llc | Power management systems and methods for a wearable computing device |
US11314558B2 (en) | 2019-07-23 | 2022-04-26 | Netapp, Inc. | Methods for dynamic throttling to satisfy minimum throughput service level objectives and devices thereof |
CN112445569B (en) * | 2019-09-02 | 2023-01-17 | 阿里巴巴集团控股有限公司 | Deployment method and apparatus, electronic device, and storage medium |
US11915043B2 (en) * | 2020-01-31 | 2024-02-27 | Rubrik, Inc. | Scheduler for handling IO requests of different priorities |
US11757788B2 (en) * | 2020-03-09 | 2023-09-12 | Nippon Telegraph And Telephone Corporation | Signal transfer system, signal transfer device, signal transfer method and signal transfer program |
US20230135477A1 (en) * | 2020-03-09 | 2023-05-04 | Nippon Telegraph And Telephone Corporation | Signal transfer system, signal transfer device, signal transfer method and signal transfer program |
US12079101B2 (en) * | 2022-04-22 | 2024-09-03 | EMC IP Holding Company, LLC | System and method for modeling and forecasting input/output (IO) performance using adaptable machine learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070226435A1 (en) * | 2004-10-29 | 2007-09-27 | Miller Wayne E | Methods and systems of managing I/O operations in data storage systems |
US7752622B1 (en) * | 2005-05-13 | 2010-07-06 | Oracle America, Inc. | Method and apparatus for flexible job pre-emption |
US20110083121A1 (en) * | 2009-10-02 | 2011-04-07 | Gm Global Technology Operations, Inc. | Method and System for Automatic Test-Case Generation for Distributed Embedded Systems |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040225736A1 (en) | 2003-05-06 | 2004-11-11 | Raphael Roger C. | Method and apparatus for providing a dynamic quality of service for serving file/block I/O |
US7519725B2 (en) | 2003-05-23 | 2009-04-14 | International Business Machines Corporation | System and method for utilizing informed throttling to guarantee quality of service to I/O streams |
US8031603B1 (en) * | 2005-06-30 | 2011-10-04 | Cisco Technology, Inc. | Technique for reducing resources allocated to an existing reservation in a data network |
US7757013B1 (en) | 2006-10-20 | 2010-07-13 | Emc Corporation | Techniques for controlling data storage system performance |
- 2011
  - 2011-06-29 US US13/172,648 patent/US20130007755A1/en not_active Abandoned
- 2012
  - 2012-06-27 US US13/534,125 patent/US8881165B2/en active Active
  - 2012-06-28 CN CN201210219029.XA patent/CN103106043B/en active Active
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11599374B2 (en) | 2012-12-19 | 2023-03-07 | Veritas Technologies Llc | System and method for providing preferential I/O treatment to devices that host a critical virtual machine |
US9515899B2 (en) * | 2012-12-19 | 2016-12-06 | Veritas Technologies Llc | Providing optimized quality of service to prioritized virtual machines and applications based on quality of shared resources |
US20140173113A1 (en) * | 2012-12-19 | 2014-06-19 | Symantec Corporation | Providing Optimized Quality of Service to Prioritized Virtual Machines and Applications Based on Quality of Shared Resources |
CN104168260A (en) * | 2013-05-15 | 2014-11-26 | 约翰内斯·海德汉博士有限公司 | Method for transferring data between a position measuring device and an associated processing unit |
US20160092380A1 (en) * | 2014-09-30 | 2016-03-31 | Emc Corporation | Leveling io |
US10585823B2 (en) * | 2014-09-30 | 2020-03-10 | EMC IP Holding Company LLC | Leveling IO |
US20180109641A1 (en) * | 2015-09-29 | 2018-04-19 | Huawei Technologies Co., Ltd. | Data Processing Method and Apparatus, Server, and Controller |
US10708378B2 (en) * | 2015-09-29 | 2020-07-07 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, server, and controller |
US11102322B2 (en) * | 2015-09-29 | 2021-08-24 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, server, and controller |
US20180131633A1 (en) * | 2016-11-08 | 2018-05-10 | Alibaba Group Holding Limited | Capacity management of cabinet-scale resource pools |
US12360905B2 (en) * | 2022-01-24 | 2025-07-15 | Robert Bosch Gmbh | Computer-implemented method for managing cache utilization |
US20240393950A1 (en) * | 2023-05-24 | 2024-11-28 | Western Digital Technologies, Inc. | Disaggregated memory management |
US12321602B2 (en) * | 2023-05-24 | 2025-06-03 | Western Digital Technologies, Inc. | Disaggregated memory management |
Also Published As
Publication number | Publication date |
---|---|
CN103106043A (en) | 2013-05-15 |
US20130007757A1 (en) | 2013-01-03 |
US8881165B2 (en) | 2014-11-04 |
CN103106043B (en) | 2015-10-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8881165B2 (en) | Methods, computer systems, and physical computer storage media for managing resources of a storage server | |
US10896086B2 (en) | Maximizing use of storage in a data replication environment | |
US10895947B2 (en) | System-wide topology and performance monitoring GUI tool with per-partition views | |
US8713572B2 (en) | Methods, systems, and physical computer storage media for processing a plurality of input/output request jobs | |
US8782656B2 (en) | Analysis of operator graph and dynamic reallocation of a resource to improve performance | |
US10025503B2 (en) | Autonomous dynamic optimization of platform resources | |
US20150172204A1 (en) | Dynamically Change Cloud Environment Configurations Based on Moving Workloads | |
US8838916B2 (en) | Hybrid data storage management taking into account input/output (I/O) priority | |
US20120096457A1 (en) | System, method and computer program product for preprovisioning virtual machines | |
US10915368B2 (en) | Data processing | |
US8661221B2 (en) | Leasing fragmented storage between processes | |
US8677050B2 (en) | System, method and computer program product for extending a cache using processor registers | |
US20110153971A1 (en) | Data Processing System Memory Allocation | |
US7743140B2 (en) | Binding processes in a non-uniform memory access system | |
US9218219B2 (en) | Managing virtual functions of an input/output adapter | |
US10447800B2 (en) | Network cache deduplication analytics based compute cluster load balancer | |
US11269527B2 (en) | Remote data storage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAMBLISS, DAVID D.;LU, LEI;SHERMAN, WILLIAM G.;AND OTHERS;SIGNING DATES FROM 20110628 TO 20110629;REEL/FRAME:026597/0083 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |