US10762021B2 - Information processing system, and control method of information processing system - Google Patents
- Publication number
- US10762021B2 US16/086,297 US201616086297A
- Authority
- US
- United States
- Prior art keywords
- logical path
- slus
- logical
- pair
- hosts
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4027—Coupling between buses using bus bridges
- G06F13/4031—Coupling between buses using bus bridges with arbitration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3027—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a bus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3055—Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/10—Program control for peripheral devices
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0653—Monitoring storage devices or systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2064—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring while ensuring consistency
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2069—Management of state, configuration or failover
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
Definitions
- the present invention relates to an information processing system, and a control method of the information processing system.
- Patent Literature 1 proposes the Conglomerate LUN Structure, in which plural logical volumes of a storage device are held together into plural logical groups (LUN Conglomerates); a logical path is set to a representative logical volume (ALU: Administrative Logical Unit, called a PE (Protocol Endpoint) in Patent Literature 1) of each logical group; an I/O (Input/Output) command issued from a host computer includes an identifier of a logical volume (SLU: Subsidiary Logical Unit, called a vvol (virtual volume) in Patent Literature 1) other than the ALU to which the logical path belongs in each logical group; and the storage device delivers the relevant I/O processing to the SLU specified by the received command.
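The delivery mechanism attributed above to Patent Literature 1 can be sketched as follows. All class, field, and command names here are illustrative assumptions, not identifiers taken from the patent; the point is only that the host holds a logical path to the ALU alone, while each I/O command names the target SLU.

```python
# Sketch of Conglomerate LUN Structure I/O dispatch (illustrative names).
# The host has a logical path only to the ALU; each I/O command carries
# the identifier of the target SLU, and the ALU delivers the I/O to it.

class Slu:
    """Subsidiary Logical Unit: a data volume with no logical path of its own."""
    def __init__(self, slu_id):
        self.slu_id = slu_id
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)

class Alu:
    """Administrative Logical Unit: the path endpoint that routes I/O to bound SLUs."""
    def __init__(self):
        self.bound = {}          # slu_id -> Slu

    def bind(self, slu):
        self.bound[slu.slu_id] = slu

    def dispatch(self, command):
        # The command names the SLU; the ALU only delivers, it stores no data itself.
        slu = self.bound[command["slu_id"]]
        if command["op"] == "write":
            slu.write(command["lba"], command["data"])
            return "ok"
        return slu.read(command["lba"])

alu = Alu()
alu.bind(Slu("vvol-01"))
alu.dispatch({"op": "write", "slu_id": "vvol-01", "lba": 0, "data": b"vm-image"})
print(alu.dispatch({"op": "read", "slu_id": "vvol-01", "lba": 0}))  # b'vm-image'
```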
- Patent Literature 2 discloses a technology in which a control device that controls the logical resources of a storage system includes: a virtual resource preparation unit that prepares virtual resources that are virtual logical resources; a real resource assignment unit that assigns real resources, which are substantial logical resources, to the above-described prepared virtual resources; and a data duplication control unit that duplicates data in a logical volume as duplicate data in a logical volume in a different storage system. Furthermore, in this technology, the identification of the virtual resource of the duplication source logical volume and that of the duplication destination logical volume are set the same, so that a higher-level host computer recognizes the two logical volumes as one logical volume. Therefore, Patent Literature 2 also discloses in this technology that the control device that controls the logical resources of a storage system makes an I/O command transmitted from the host computer to a logical volume in the storage system transferred to a logical volume in another storage systems.
- Patent Literature 1 U.S. Pat. No. 8,775,774
- Patent Literature 2 WO 2015/162634
- In order to enhance the availability of logical volumes assigned to a virtual server, a storage cluster configuration in which the logical volumes assigned to the virtual server are duplicated in different storage devices can be built according to Patent Literature 2.
- According to Patent Literature 2, if one storage device is in the state of copy formation or in a suspended state, an I/O transfer is executed to another storage device that is in a normal state in the storage cluster configuration.
- Because this I/O transfer is executed via a network between the two storage devices, it incurs a larger delay than a normal I/O.
- According to Patent Literature 1, in the case where a virtual server uses plural logical volumes, the logical path control unit included in the host computer uses the logical path to the relevant ALU; therefore, if plural SLUs with various response delays coexist, this causes the processing delay or stoppage of the virtual server, or of the OS and applications on the virtual server.
- The object of the present invention is to provide an information processing system in which the delay or stoppage of processing performed by a computer that uses logical volumes can be suppressed.
- An information processing system is connected to one or plural hosts and a second information processing system, and includes a processor.
- the second information processing system manages a second ALU connected to the one or plural hosts via a second logical path, and plural second SLUs that receive I/O requests from the one or plural hosts via the second logical path.
- The processor manages a first ALU connected to the one or plural hosts via a first logical path and plural first SLUs that receive I/O requests from the one or plural hosts via the first logical path, and builds up a first group including the plural first SLUs.
- At least one first SLU of the plural first SLUs and at least one second SLU of the plural second SLUs compose an HA pair, and the HA pair is provided to the one or plural hosts as one volume.
- the processor evaluates the state of the first logical path on the basis of the pair state of the first SLU that composes the HA pair included in the first group so that priorities with which the one or plural hosts issue I/Os to the first logical path can be determined.
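The evaluation above can be sketched as follows. The path-state names follow those used later in this description ("ACTIVE/OPTIMIZED", "ACTIVE/NON-OPTIMIZED", "STANDBY"); the pair-state names and the aggregation rule are illustrative assumptions based on the copy-formation and suspended states mentioned earlier, not a definitive reading of the patent's evaluation logic.

```python
# Sketch: evaluate a logical path from the HA pair states of the first SLUs
# in the group reached through that path (aggregation rule is an assumption).

def evaluate_logical_path(pair_states):
    """pair_states: list of HA pair states ('PAIR', 'COPY', 'SUSPENDED')
    for the SLUs in the group served by this logical path."""
    if pair_states and all(s == "PAIR" for s in pair_states):
        # Every SLU can serve I/O locally: the path is optimal.
        return "ACTIVE/OPTIMIZED"
    if any(s in ("COPY", "SUSPENDED") for s in pair_states):
        # Some I/Os would be transferred to the paired device, adding delay,
        # so the host should deprioritize (but may still use) this path.
        return "ACTIVE/NON-OPTIMIZED"
    return "STANDBY"

print(evaluate_logical_path(["PAIR", "PAIR"]))   # ACTIVE/OPTIMIZED
print(evaluate_logical_path(["PAIR", "COPY"]))   # ACTIVE/NON-OPTIMIZED
```

With this, the hosts can rank logical paths per group rather than per ALU, which is the point of building the first group from SLUs with a common pair state.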
- I/Os can consistently be issued to logical paths without I/O delays or retries in virtual servers, an OS, or the applications on a host computer, so that the delays or stoppages of processing performed on the computer using the logical volumes of storage devices can be suppressed.
- FIG. 1 is a diagram showing an example of a configuration of a storage system.
- FIG. 2 is a diagram showing an example of a configuration of a storage device.
- FIG. 3 is a diagram showing an example of a configuration of a host computer.
- FIG. 4 is a diagram showing an example of a configuration of a control unit and software included in the host computer.
- FIG. 5 is a diagram showing an example of a configuration of a management computer.
- FIG. 6 is a diagram showing an example of a configuration of a virtual server management table included in the host computer.
- FIG. 7 is a diagram showing an example of a configuration of a logical path management table included in the host computer.
- FIG. 8 is a diagram showing an example of a configuration of a bound management table included in the host computer.
- FIG. 9 is a diagram showing examples of configurations of LUN management tables included in the storage device.
- FIG. 10 is a diagram showing an example of a configuration of a logical volume management table included in the storage device.
- FIG. 11 is a diagram showing an example of a configuration of a logical volume management table included in the storage device.
- FIG. 12 is a diagram showing examples of configurations of consistency group management tables included in the storage device.
- FIG. 13 is a diagram showing examples of configurations of bound management tables included in the storage device.
- FIG. 14 is a diagram showing an example of virtual server creation processing.
- FIG. 15 is a diagram showing an example of a flowchart of logical resource creation processing that is included in the storage device and needed at the time of the virtual server creation.
- FIG. 16 is a diagram showing an example of storage resource operation processing in the configuration of a storage cluster.
- FIG. 17 is a diagram showing an example of a flowchart of logical path evaluation processing included in the storage device.
- FIG. 18 is a diagram showing an example of a flowchart of binding processing included in the storage device.
- FIG. 19 is a diagram showing an example of a flowchart of unbinding processing included in the storage device.
- FIG. 20 is a diagram showing an example of a configuration of a storage system.
- FIG. 21 is a diagram showing examples of configurations of consistency group management tables included in a storage device.
- FIG. 22 is a diagram showing an example of a flowchart of logical resource creation processing included in the storage device.
- FIG. 23 is a diagram showing an example of a configuration of a storage system.
- FIG. 24 is a diagram showing examples of configurations of virtual port management tables included in a host computer.
- FIG. 25 is a diagram showing examples of configurations of host group management tables included in a storage device.
- FIG. 26 is a diagram showing an example of a flowchart of LUN creation in a storage device management program.
- FIG. 27 is a diagram showing an example of a flowchart of logical path evaluation processing included in the storage device.
- FIG. 28 is a diagram showing an example of a flowchart of logical path evaluation processing included in the host computer.
- Because the pieces of processing performed through the processor's executing the programs are substantially equal to pieces of processing performed by dedicated hardware, part or all of the pieces of processing can be realized by dedicated hardware. Therefore, although several things are each represented as a "***unit" in this specification, part or all of each "***unit" can be realized through a processor's executing a program, or by dedicated hardware.
- programs can be installed from a program delivery server or from a storage medium that is computer-readable.
- In a storage system in which the logical volumes in a storage device that are dealt with by one or plural virtual servers (hereinafter referred to as VMs (Virtual Machines)) on a host computer are held together into one logical group, under the logical volume (hereinafter also referred to as SLU) assignment configuration proposed by the Conglomerate LUN Structure disclosed in Patent Literature 1 and in the storage cluster disclosed in Patent Literature 2, logical paths to ALUs are evaluated for the respective logical groups, and a logical path to which an I/O is issued is selected on the basis of the evaluation result.
- In order to enhance the availability of logical volumes assigned to a virtual server, a storage cluster configuration in which the logical volumes assigned to the virtual server are duplicated in different storage devices can be built according to Patent Literature 2. Furthermore, according to Patent Literature 1, one or plural logical volumes can be assigned per virtual server. According to Patent Literature 2, if one storage device assigned to a virtual server is in the state of copy formation or in a suspended state, an I/O transfer is executed to another storage device that is in a normal state in the storage cluster configuration.
- Because this I/O transfer is executed via a network between the two storage devices, it incurs a larger delay than a normal I/O. Therefore, in the case where a virtual server uses plural logical volumes, logical volumes that use I/O transfers and logical volumes that do not mixedly exist, which leads to the processing delay or stoppage of the virtual server or of the OS and applications on the virtual server.
- The logical path control unit included in a host computer targets logical paths to ALUs.
- When plural virtual servers have an ALU in common, if an I/O issuance to a logical volume of one virtual server is delayed due to its I/O transfer, or a retry of the I/O issuance is needed, the logical path control unit of the host computer regards the logical path to the ALU as abnormal, and degrades or suppresses the priority of I/O issuance. As a result, even if the logical volumes of other virtual servers that have the ALU in common are in a normal state, I/Os to those logical volumes cannot be issued, which leads to the stoppage of the other virtual servers.
- I/Os can consistently be issued to logical paths without I/O delays or retries in virtual servers, an OS, or applications on a host computer, so that the delays or stoppages of processing performed on the computer using the logical volumes of storage devices are suppressed.
- FIG. 1 is a diagram showing an example of a configuration of the storage system according to the first embodiment.
- A host computer 1000 includes HBAs (Host Bus Adapters) 1200, which are connection I/Fs between the hypervisor 1110, which expands a virtual machine, and the SANs 5000, and alternate path software 1120 that deals with Raw Devices (not shown) obtained from the HBAs 1200.
- Data dealt with by a VM expanded by the hypervisor 1110 is stored in logical volumes 2500 to which the SLU attribute of the storage devices 2000 is given (hereinafter referred to as SLUs). The Conglomerate LUN Structure is applied to access to an SLU 2400: an HBA 1200 recognizes an ALU via the SAN 5000 and creates a Raw Device (not shown); the alternate path software performs logical path control per Raw Device; and the hypervisor includes a data input/output unit conforming to the Conglomerate LUN Structure, recognizes the SLU assigned to the Raw Device, that is to say, to the ALU (hereinafter, this assignment will be referred to as binding), and assigns the SLU to the virtual machine as a virtual disk.
- This binding processing of the SLU is performed for VM image writing at the time of VM creation, and every time the power-on of a VM is executed.
- the release of the assignment of the SLU to the ALU (hereinafter, referred to as unbinding) is executed at the time of the completion of VM creation, or every time the power-off of a VM is executed.
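The binding and unbinding lifecycle described above can be sketched as follows. The event names and the `Binder` class are illustrative assumptions introduced only to show when a binding exists; they are not identifiers from the embodiment.

```python
# Sketch of binding/unbinding an SLU to an ALU around VM lifecycle events
# (event names and class names are illustrative assumptions).

class Binder:
    def __init__(self):
        self.bindings = set()   # set of (alu_id, slu_id) pairs

    def bind(self, alu_id, slu_id):
        self.bindings.add((alu_id, slu_id))

    def unbind(self, alu_id, slu_id):
        self.bindings.discard((alu_id, slu_id))

    def on_vm_event(self, event, alu_id, slu_id):
        # Binding is performed for VM image writing at creation time and at
        # every power-on; unbinding at creation completion and every power-off.
        if event in ("create", "power-on"):
            self.bind(alu_id, slu_id)
        elif event in ("create-complete", "power-off"):
            self.unbind(alu_id, slu_id)

b = Binder()
b.on_vm_event("power-on", "alu-0", "slu-3")
print(("alu-0", "slu-3") in b.bindings)   # True
b.on_vm_event("power-off", "alu-0", "slu-3")
print(("alu-0", "slu-3") in b.bindings)   # False
```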
- A storage device 2000 includes: a logic division unit that classifies logical volumes into plural logical partitions (hereinafter referred to as CTGs: Consistency Groups 2500); an I/O control unit (data input/output control unit) that conforms to the Conglomerate LUN Structure and transfers an I/O corresponding to a logical volume to another storage device; and a data duplication control unit that duplicates data, which is stored in a logical volume, in a logical volume in another storage device 2000 via a SAN (Storage Area Network) 5000.
- a logical volume is provided to the host computer 1000 from a logical port 2110 via SAN 5000 a or SAN 5000 b.
- A kind of storage cluster system disclosed in Patent Literature 2 is formed in such a way that, by giving the same virtual resource identifier to the logical volumes that compose a pair, the plural logical paths on SAN 5000 a and SAN 5000 b to those logical volumes are recognized by the host computer 1000 as if they were alternate paths to one logical volume.
- The storage device 2000 can receive an I/O issued by the host computer 1000 to any of the volumes composing a pair having an HA (High Availability) characteristic (referred to as an HA pair).
- Even if one volume of an HA pair becomes unavailable, I/Os issued from the host computer 1000 can be continuously accepted by the other volume.
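The HA pair behavior just described can be sketched as follows. The class and device names are illustrative assumptions; the sketch only shows the essentials: both devices accept I/O for the same virtual identifier, and when one device fails, the host's alternate-path behavior continues I/O on the other.

```python
# Sketch of an HA pair presented to the host as one volume: two storage
# devices share a virtual resource identifier, and either can accept I/O
# (names are illustrative assumptions).

class StorageDevice:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def accept_io(self, virtual_id, op):
        if not self.healthy:
            raise IOError(f"{self.name} is down")
        return f"{self.name} handled {op} for {virtual_id}"

class HaPairVolume:
    """One volume from the host's view, backed by a pair of devices
    that were given the same virtual resource identifier."""
    def __init__(self, virtual_id, primary, secondary):
        self.virtual_id = virtual_id
        self.devices = [primary, secondary]

    def issue(self, op):
        for dev in self.devices:          # alternate-path behavior
            try:
                return dev.accept_io(self.virtual_id, op)
            except IOError:
                continue
        raise IOError("no path available")

a, b = StorageDevice("storage-a"), StorageDevice("storage-b")
vol = HaPairVolume("vvol-7", a, b)
print(vol.issue("read"))    # storage-a handled read for vvol-7
a.healthy = False
print(vol.issue("read"))    # storage-b handled read for vvol-7
```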
- the storage devices 2000 a and 2000 b provide various configuration management tables included in the storage devices 2000 and management APIs (Application Program Interfaces) provided by configuration control units to a management computer 3000 that is connected to the storage devices 2000 a and 2000 b via a management network 6000 typified by a LAN (Local Area Network).
- the management computer 3000 provides the various management tables of the storage devices 2000 included in the management computer 3000 and the management APIs provided by the configuration control units to the host computer 1000 and a management computer 4000 that are connected to the management computer 3000 via the management network 6000 .
- a storage manager accesses storage management software 3110 via an input/output unit included in the management computer 3000 , and executes the creation, deletion, and configuration change of storage resources such as ALUs
- a virtual machine manager accesses virtual machine management software 4110 via an input/output unit included in the management computer 4000 , and executes the creation, deletion, and configuration change of virtual machines.
- the virtual machine management software 4110 transmits a virtual machine management operation instruction to the host computer 1000 connected to the management network 6000 in accordance with an instruction issued by the virtual machine manager, and transmits instructions regarding the creation, deletion, and configuration change of a logical volume to the storage management software 3110 included in the management computer 3000 .
- The storage devices 2000 a and 2000 b include logical path evaluation units that evaluate the appropriateness of I/O processing per LUN, and define the state in which I/O processing cannot be performed as "STANDBY", the state in which I/O processing can be optimally performed as "ACTIVE/OPTIMIZED", and the state in which I/O processing can be performed but there is a possibility of a retry of the I/O processing being requested as "ACTIVE/NON-OPTIMIZED".
- The storage devices 2000 a and 2000 b respond to logical path evaluation value notification requests that are issued from the alternate path software 1120 on the host computer 1000 via the SANs 5000, and the alternate path software 1120, which has received the responses, issues I/Os preferentially to logical paths evaluated as "ACTIVE/OPTIMIZED" on the basis of the response values.
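The host-side selection can be sketched as follows. The priority ordering over the three states matches the definitions above; the function and path names are illustrative assumptions rather than the alternate path software's actual interface.

```python
# Sketch of alternate-path selection on the host: prefer ACTIVE/OPTIMIZED,
# fall back to ACTIVE/NON-OPTIMIZED, never issue to STANDBY paths
# (names are illustrative assumptions).

PRIORITY = {"ACTIVE/OPTIMIZED": 0, "ACTIVE/NON-OPTIMIZED": 1}

def choose_path(path_states):
    """path_states: dict of path name -> state reported by the storage device."""
    usable = {p: s for p, s in path_states.items() if s in PRIORITY}
    if not usable:
        raise IOError("no usable logical path")
    # Pick the path whose reported state has the best (lowest) priority.
    return min(usable, key=lambda p: PRIORITY[usable[p]])

states = {"path-to-storage-a": "ACTIVE/NON-OPTIMIZED",
          "path-to-storage-b": "ACTIVE/OPTIMIZED"}
print(choose_path(states))  # path-to-storage-b
```

This per-group ranking is what lets I/Os keep flowing on an optimal path even while the paired device is mid-copy or suspended.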
- the storage device 2000 is an example of an information processing system such as a storage system, and therefore the present invention is not limited to configurations shown by the accompanying drawings.
- a storage system according to the present invention can be a storage system based on software installed on a server.
- the storage system according to the present invention can be a storage system realized by software installed on a virtual machine running on a cloud.
- FIG. 2 shows the configuration of the storage device 2000 .
- the storage device 2000 includes: at least one FEPK (Front End Package) 2100 that is a host I/F unit; at least one MPPK (Microprocessor Package) 2200 that is a control unit; at least one CMPK (Cache Memory Package) 2300 that is a common memory unit; at least one BEPK (Backend Package) 2400 that is a disk I/F unit; an internal network 2500 ; at least one HDD (Hard Disk Drive) 2700 ; and a management server 2600 .
- Instead of the at least one HDD 2700, at least one other storage device, such as at least one SSD (Solid State Drive), can be used.
- the internal network 2500 connects the at least one FEPK 2100 , the at least one MPPK 2200 , the at least one CMPK 2300 , the at least one BEPK 2400 , and the management server 2600 to one another.
- Respective MPs 2210 in the at least one MPPK 2200 are configured to be able to communicate with the at least one FEPK 2100 , the at least one CMPK 2300 , and the at least one BEPK 2400 using the internal network 2500 .
- the at least one FEPK 2100 includes plural logical ports 2110 each of which can be a host I/F.
- a logical port 2110 is connected to an external network 5000 such as the SAN 5000 , and performs protocol control when an I/O request issued by the host computer 1000 and read/write target data are transmitted or received between the logical port 2110 and the host computer 1000 .
- the at least one BEPK 2400 includes plural disk I/Fs 2410 .
- a disk I/F 2410 is connected to the at least one HDD 2700 via, for example, a cable, and at the same time the disk I/F 2410 is connected to the internal network 2500 , so that the disk I/F 2410 plays a role of a mediator for the transmission/reception processing of read/write target data between the internal network 2500 and the at least one HDD 2700 .
- the at least one CMPK 2300 includes a cache memory 2310 for data and a memory 2320 for control information.
- the cache memory 2310 and the memory 2320 for control can be volatile memories, for example, DRAMs (Dynamic Random Access Memories).
- the cache memory 2310 temporarily stores (caches) data to be written in the at least one HDD 2700 , or temporarily stores (caches) data read from the at least one HDD 2700 .
- the memory 2320 for control information stores information necessary for control, for example, configuration information for ALUs 2120 , SLUs 2400 , CTGs 2500 , and the like that are logical volumes.
- the at least one MPPK 2200 includes plural MPs (Micro Processors) 2210 , an LM (Local Memory) 2220 and a bus 2230 that connects the plural MPs 2210 and the LM 2220 .
- An MP 2210 is a processor used for a computer or the like, and plays a role of a logic division unit, an I/O control unit, a configuration control unit, or the like when the MP 2210 operates in accordance with programs stored in the LM 2220 .
- the LM 2220 stores part of control information for I/O control that is stored in the memory 2320 for control.
- The management server 2600 is a computer including management applications. The management server 2600 transmits operation requests from the management computer 3000 to a control program that has been loaded from the memory 2320 for control into the LM 2220 and is executed by an MP 2210.
- the management server 2600 can also include various programs included in the management computer 3000 .
- the memory 2320 for control stores information that logic division units and I/O control units deal with.
- The storage device management program 3110 in the management computer 3000 can obtain information stored in the memory 2320 for control via the management server 2600.
- A logic division unit distributes the logical volumes (also referred to as LDEVs: Logical Devices) that are provided by the at least one BEPK 2400 and used as logical storage areas among the CTGs 2500, gives an identifier that is unique within each CTG 2500 to the configuration information of a logical volume 2210 registered in each CTG 2500, and stores the configuration information in the memory 2320 for control as a CTG management table T 6000.
- FIG. 3 shows the configuration of the host computer 1000 .
- the host computer 1000 is a computer including: a processor 1400 ; a memory 1200 ; HBAs (Host Bus Adapters) 1300 ; an output unit 1600 ; an input unit 1700 ; a management port 1100 that is a network I/F; and the like, and the host computer is, for example, a personal computer, a workstation, a mainframe, or the like.
- the processor 1400 integratedly controls the entirety of the host computer 1000, and executes a virtual machine management program 1210, a storage management program 1220, a hypervisor 1230, and an alternate path software program that are stored in the memory 1200.
- the processor 1400 issues a read access request or a write access request to the storage device 2000 as an access request by executing the hypervisor 1230.
- the memory 1200 stores the virtual machine management program 1210, the storage management program 1220, and the hypervisor 1230.
- the memory 1200 is not only used for storing such programs and the like, but also used as a working memory for the processor 1400 .
- the storage management program 1220 can transmit operation requests for the logical volumes 2210, such as SLU assignment or the release of SLU assignment issued from a job application 1210 or an OS, to the storage device management program 3110 included in the management computer 3000 via the management network 6000.
- the storage management program 1220 can transmit the operation request of a logical volume to a configuration control unit included in the storage device 2000 via the SAN 5000 .
- An HBA 1300 performs protocol control at the time of communicating with the storage device 2000 . Because the HBA 1300 performs the protocol control, the transmission and reception of data and commands between the host computer 1000 and the storage device 2000 are performed in accordance with, for example, the fiber channel protocol.
- the input unit 1700 includes, for example, a keyboard, a switch, a pointing device, a microphone, and the like.
- the output unit 1600 includes a monitor display, a speaker, and the like.
- FIG. 4 shows the logical block diagram of the host computer 1000 .
- the hypervisor 1230 includes: a virtual machine control unit 1231 that expands the virtual machine; a disk control unit 1232 that forms virtual disks from SLUs obtained from RAW DEVICEs; and an alternate path control unit 1233 that controls logical paths through which the logical volumes of the storage device 2000 are accessed.
- instead of the alternate path control unit 1233, the logical paths can be controlled by the alternate path software 1120 outside the hypervisor, as shown in FIG. 1.
- the alternate path control unit 1233 can be replaced with the alternate path software 1120, and vice versa.
- the RAW DEVICEs 1310 correspond to the ALUs 2120 in the storage devices 2000, and they are the devices through which the host computer 1000 accesses the ALUs 2120.
- the alternate path control unit 1233 controls logical paths extended to the RAW DEVICEs 1310 .
- the disk control unit 1232 forms one virtual disk corresponding to one SLU 2400 of a storage device 2000 that is accessible via a RAW DEVICE 1310 . Accesses to SLUs 2400 in the host computer 1000 are concentrated on the RAW DEVICEs 1310 .
- an access to an SLU 2400 is executed after the alternate path control unit 1233 specifies the target SLU 2400 via a RAW DEVICE associated with the target SLU 2400 .
- the RAW DEVICEs 1310 are used as devices that reduce overhead by executing I/O processing without temporarily copying data into a page cache (not shown) when the hypervisor unit 1400 is accessed.
- the volume identifier of the target SLU that an HBA 1300 obtains has to be globally unique, and the volume identifier is formed as a combination of the serial number (product number) of the relevant storage device 2000 and the local volume identifier of the target SLU in the storage device 2000 .
- the storage device 2000 has a function of sending back the identifier of the relevant ALU in response to an inquiry about the volume identifier.
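- The identifier composition described above can be sketched as follows; this is an illustrative assumption (the function name, separator, and example values are not from the patent), not the patented implementation.

```python
# Hedged sketch of the globally unique volume identifier described above:
# the serial number (product number) of the storage device combined with
# the local volume identifier of the target SLU. The "." separator, the
# function name, and the example values are illustrative assumptions.

def make_global_volume_id(serial_number: str, local_volume_id: str) -> str:
    """Combine a storage device serial number with a local volume ID."""
    return f"{serial_number}.{local_volume_id}"

print(make_global_volume_id("410025", "01:01"))  # 410025.01:01
```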
- FIG. 5 shows the configuration of the management computer.
- a processor 3600 integratedly controls the entirety of the management computer 3000 , and transmits various configuration management operation requests regarding the storage devices 2000 and the host computer 1000 to the storage devices 2000 and the host computer 1000 via a network I/F 3400 and the management network 6000 by executing the storage device management program 3110 and a computer management program (not shown) that are loaded in a memory 3100 .
- the memory 3100 also stores control information that is used by the storage device management program 3110 and the host computer management program 3120.
- the storage manager inputs an operation request via an input unit 3700 such as a keyboard or a mouse, and can obtain an execution result via an output unit 3600 such as a display or a speaker.
- a storage medium 3300 such as an HDD or an SSD stores the execution logs of the storage device management program 3110 , the host computer management program 3120 , and various control programs.
- the configuration of the management computer 4000 is similar to that of the management computer 3000 shown in FIG. 5. A virtual machine management program 4110 is loaded in a memory 4100, and the management computer 4000 transmits configuration management operation requests regarding a virtual machine 1900 and the storage devices 2000 to the storage devices 2000 and the host computer 1000 via the management network 6000.
- FIG. 6 is a diagram showing an example of a configuration of a VM management table T 1000 included in the host computer 1000 .
- the VM management table T 1000 is referred to when the host computer 1000 executes the hypervisor 1230 and the virtual machine management program 1210 , and can be accessed from various programs included in the management computers 3000 and 4000 via the management network 6000 using an API provided by the host computer 1000 .
- the VM management table T 1000 includes: the column VM ID T 1010 that registers the identifier of the VM 1900 identifiable in the storage system; the column VOL ID T 1020 that registers the identifiers of SLUs 2400 assigned to the VM 1900; the column VOL SIZE T 1030 that registers the capacities of the relevant SLUs 2400; the column PROFILE T 1040 that registers a profile regarding the VM 1900; and the column STATUS T 1050 that registers the status of the VM 1900.
- a profile regarding a VM means the characteristic information of the VM that is specified by a VM administrator when the VM is created.
- the characteristic information of the VM can be defined on the virtual machine management software included in the management computer 4000 , and the VM administrator can select one or more pieces of characteristic information from the predefined characteristic information when the VM is created.
- the SLUs 2400 assigned to the VM 1900 become a pair of logical volumes having an HA (High Availability) characteristic.
- FIG. 7 is a diagram showing an example of a configuration of a logical path management table T 2000 included in the host computer 1000 .
- the logical path management table T 2000 is referred to when processing is performed by the alternate path software 1120 on the host computer 1000, the alternate path control unit 1233 included in the hypervisor 1230, or the storage management program 1220.
- the logical path management table T 2000 can be accessed from various programs included in the management computers 3000 and 4000 via the management network 6000 using the API provided by the host computer 1000 .
- the logical path management table T 2000 includes: the column ALU ID T 2010 that registers the identifier of an ALU 2120 that composes a RAW DEVICE 1310 obtained from an HBA 1300; the column INITIATOR WWN T 2020 that registers the WWNs (World Wide Names) of HBAs 1300 that recognize the ALU 2120; the column TARGET WWN T 2030 that registers the WWNs of logical ports 2110 in the relevant storage device 2000 from which the ALU 2120 can be obtained by the relevant HBA 1300; the column LUN T 2040 that registers the LUNs (Logical Unit Numbers) that belong, under the TARGET WWNs, to the ALU 2120 registered in the column T 2010; and the column STATUS T 2050 that registers the statuses of the LUNs.
- the alternate path software 1120 or the alternate path control unit 1233 and the storage management program 1220 issue a LUN scanning request (REPORT LUN) to each HBA 1300, and each HBA 1300 transmits back the response values obtained from the storage device 2000.
- the logical path management table T 2000 is updated on the basis of the response values.
- the response values are registered in the column STATUS T 2050. For example, a value in the column STATUS T 4040 of the after-mentioned LUN management table T 4000, which registers the status of a LUN, is obtained by the alternate path control unit 1233, and the obtained value is stored in the column STATUS T 2050 for registering the status of the LUN.
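- The status refresh described above can be sketched as follows, assuming dictionary-based table rows; the field names and the lookup key are hypothetical, and the sketch only illustrates copying a LUN status from the T 4000 lookup into the column STATUS T 2050.

```python
# Hedged sketch: refreshing the column STATUS T2050 of the logical path
# management table from LUN statuses (column STATUS T4040 of the LUN
# management table). Dictionary rows and field names are assumptions.

def refresh_path_statuses(t2000_rows, t4000_status_by_lun):
    """t4000_status_by_lun is assumed to map (target WWN, LUN) to the
    LUN status obtained from the storage device."""
    for row in t2000_rows:
        key = (row["TARGET_WWN"], row["LUN"])
        if key in t4000_status_by_lun:
            row["STATUS"] = t4000_status_by_lun[key]
    return t2000_rows

rows = [{"ALU_ID": "00:00", "TARGET_WWN": "50:00:00:01", "LUN": 0,
         "STATUS": "UNKNOWN"}]
refresh_path_statuses(rows, {("50:00:00:01", 0): "ACTIVE/OPTIMIZED"})
print(rows[0]["STATUS"])  # ACTIVE/OPTIMIZED
```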
- FIG. 8 is a diagram showing an example of a configuration of a bound management table T 3000 included in the host computer 1000 .
- the bound management table T 3000 manages the associations between an ALU and SLUs bound to the ALU.
- the bound management table T 3000 is referred to when processing is performed by the disk control unit 1232 included in the hypervisor 1230, the storage management program 1220, or the virtual machine management program 1210, and can be accessed from various programs included in the management computers 3000 and 4000 via the management network 6000 using the API provided by the host computer 1000.
- the bound management table T 3000 includes the column ALU ID T 3010 that registers the identifier of an ALU 2120 registered in the logical path management table T 2000 and the column BOUND VOL ID T 3020 that registers the identifiers of SLUs 2400 bound to the ALU 2120 .
- FIG. 9 is a diagram showing examples of configurations of LUN management tables T 4000 included in the storage device 2000 .
- a LUN management table T 4000 manages LUNs provided by the storage device 2000 .
- the storage device 2000 refers to the LUN management table T 4000 when the storage device 2000 performs logical path processing or logical path evaluation processing, and the management computer 3000 connected to the management network 6000 can also refer to the LUN management table T 4000 using a management API provided by a storage management server 2500 on the storage device 2000.
- the LUN management table T 4000 includes: the column PORT ID T 4010 that registers the identifier of a logical port 2110; the column PORT WWN T 4020 that registers the WWN of the relevant logical port 2110; the column LUN T 4030 that registers the identifier of a LUN; the column STATUS T 4040 that registers the status of the LUN; and the column LDEV ID T 4050 that registers the identifier of a logical volume that composes the LUN.
- a LUN is assigned to a logical volume associated with a logical port 2110 as the identifier of the logical volume.
- the after-mentioned LDEV ID identifies a logical volume uniquely in the storage device 2000, whereas a LUN identifies a logical volume uniquely among the logical volumes associated with a logical port 2110.
- FIG. 10 and FIG. 11 are diagrams showing examples of configurations of logical volume management tables T 5000 included in the storage devices 2000.
- FIG. 10 shows a logical volume management table T 5000 a included in the storage device 2000 a
- FIG. 11 shows a logical volume management table T 5000 b included in the storage device 2000 b .
- the logical volume management table T 5000 manages logical volumes provided by the storage device 2000 .
- the storage device 2000 refers to the logical volume management table T 5000 when the storage device 2000 performs logical volume configuration control or data duplication control, and the management computer 3000 connected to the management network 6000 can also refer to the logical volume management table T 5000 using a management API provided by the storage management server 2500 on the storage device 2000.
- the logical volume management table T 5000 includes: the column LDEV ID T 5010 that registers the identifiers of logical volumes that are uniquely identifiable in the storage device; the column ATTRIBUTES T 5020 that registers attribute information regarding the relevant logical volumes; the column CAPACITY T 5030 that registers the capacities of the relevant logical volumes; the column PAIR LDEV ID T 5040 that registers the identifiers of logical volumes that belong to another storage device and that hold duplicates of the data of the relevant logical volumes; and the column PAIR STATUS T 5050 that registers the statuses of the relevant duplicate pairs.
- FIG. 12 is a diagram showing examples of configurations of CTG management tables included in the storage device 2000 .
- a CTG management table T 6000 is referred to when the configuration of a logical volume is controlled, when data duplication processing is performed, or when logical path evaluation processing is performed in the storage device 2000, and the CTG management table T 6000 can be referred to through the management computer 3000 connected to the management network 6000 using a management API provided by the storage management server 2500 on the storage device 2000.
- the CTG management table T 6000 includes: the column CTG ID T 6010 that registers CTG identifiers; the column LDEV ID T 6020 that registers the identifiers of logical volumes that are components of the CTG; the column PAIR CTG T 6030 that registers the identifier of the CTG to which the duplication destinations of the logical volumes in the CTG belong; the column STATUS T 6040 that registers the integrated evaluation values of the data duplication statuses of the logical volumes in the CTG; and the column VM ID T 6050 that registers the identifier of a VM 1900 that uses the logical volumes in the CTG.
- the integrated evaluation value registered in the column T 6040 shows the pair status of the CTG as a whole, which is obtained by integrally evaluating the pair statuses of the logical volumes in the CTG.
- SLUs belonging to the same CTG are bound to one ALU, that is to say, the SLUs in the CTG have one logical path in common. Therefore, the pair statuses of logical volumes in the CTG can be integrally evaluated instead of each SLU being evaluated, with the result that an optimal logical path control can be performed, and the number of I/O transfers among the storage devices 2000 can be reduced.
- FIG. 13 is a diagram showing examples of configurations of bound management tables T 7000 included in the storage device 2000 .
- a bound management table T 7000 is referred to when the binding processing is performed in the storage device 2000 , and the bound management table T 7000 can be referred to through the management computer 3000 connected to the management network 6000 using a management API provided by the storage management server 2500 on the storage device 2000 .
- the bound management table T 7000 includes the column ALU ID T 7010 that registers the identifier of an ALU 2120, included in the storage device 2000, that is identifiable in the storage device, and the column BITMAP T 7020 that registers a bitmap showing a list of bound SLUs.
- Each bit of the bitmap corresponds to the identifier of an SLU 2400 .
- the length of the bitmap is equal to the number of logical volumes included in the storage device 2000.
- for example, when the identifier of a logical volume is represented in hexadecimal form as "0x0000", the first bit of the bitmap represents the relevant volume; if the volume is bound, the relevant bit is "1", and if it is not bound, the relevant bit is "0".
- a bound management table T 7000 a shows that the logical volume 01:01 is bound to the ALU 00:00.
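- The bitmap handling described above can be sketched as follows; the encoding of an LDEV identifier such as "01:01" into a bit index is an assumption based on the hexadecimal form mentioned above, not a definitive implementation.

```python
# Hedged sketch of the bound management table T7000 bitmap described above:
# one bit per logical volume, bit value 1 when the volume is bound to the
# ALU. The mapping of an LDEV ID such as "01:01" to a bit index is an
# assumption based on the "0x0000" hexadecimal form mentioned above.

class BoundBitmap:
    def __init__(self, num_volumes):
        # the bitmap length equals the number of logical volumes in the device
        self.bits = [0] * num_volumes

    @staticmethod
    def index_of(ldev_id):
        # assumed encoding: "01:01" -> 0x0101 -> 257
        return int(ldev_id.replace(":", ""), 16)

    def bind(self, ldev_id):
        self.bits[self.index_of(ldev_id)] = 1

    def unbind(self, ldev_id):
        self.bits[self.index_of(ldev_id)] = 0

    def is_bound(self, ldev_id):
        return self.bits[self.index_of(ldev_id)] == 1

bm = BoundBitmap(65536)
bm.bind("01:01")  # corresponds to the T7000a example above
print(bm.is_bound("01:01"))  # True
```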
- VM creation processing will be explained with reference to FIG. 14 .
- the VM administrator transmits a VM creation instruction to the virtual machine management software 4110 via a GUI (graphical user interface) provided by the input/output unit included in the management computer 4000 (at step S 1000).
- the VM creation instruction is given the characteristic information required of the VM to be created, and it is assumed in this embodiment that "HA", which indicates high availability, is specified.
- the number and capacity of SLUs are specified in this VM creation instruction.
- the virtual machine management software 4110 transmits the VM creation instruction to the virtual machine management program 1210 included in the host computer 1000 via the management network 6000 (at step S 1010 ).
- the virtual machine management program 1210 transmits the creation instruction of logical volumes of which the VM is composed, that is to say, the creation instruction of SLUs 2400 to the storage management program 1220 which is also included in the host computer 1000 (at step S 1020 ).
- the storage management program 1220 transmits the creation instruction of the SLUs 2400 , which indicates that the storage device 2000 under the control of the storage management software 3110 included in the management computer 3000 should create the SLUs 2400 , to the storage management software 3110 (at step S 1030 ).
- the storage management software 3110 instructs the storage device 2000 to create the SLUs 2400 (at step S 1035 ), and after the completion of the creation of the SLUs 2400 , the storage management software 3110 transmits the identifiers of the created SLUs 2400 to the storage management program 1220 .
- the storage management program 1220 transmits the received identifiers of the SLUs 2400 to the virtual machine management program 1210 .
- the virtual machine management program 1210 registers the ID of the new VM in the column T 1010 of the VM management table T 1000 , the received identifiers of the SLUs 2400 and the capacities of the SLUs 2400 in the column T 1020 and the column T 1030 relevant to the new VM respectively, and further registers a profile specified in the VM creation request in the column T 1040 .
- the virtual machine management program 1210 transmits the binding instruction for the SLUs registered in the column T 1020 to the storage management program 1220 (at step S 1040 ), and the storage management program 1220 instructs the storage management software 3110 to execute the specified binding instruction for the SLUs (at step S 1050 ).
- the virtual machine management program 1210 instructs the virtual machine control unit 1231 and the disk control unit 1232 of the hypervisor 1230 to execute VM image write processing on the bound SLUs 2400 (at step S 1060).
- the virtual machine management program 1210 issues the unbinding instruction of the relevant SLUs 2400 (at step S 1070 ).
- the storage management program 1220 transmits the unbinding instruction to the storage management software 3110 (at step S 1080 ).
- the storage management software 3110 transmits a VM creation completion response including the identifier of the created VM to the virtual machine management software 4110, which is the sender of the VM creation instruction, via the storage management program 1220 and the virtual machine management program 1210, and the virtual machine management software 4110 informs the VM administrator of the completion of the virtual machine creation processing.
- FIG. 15 shows the storage resource creation processing (at step S 1035) in the VM creation shown in FIG. 14. When the storage management software 3110 receives an SLU 2400 creation instruction from the storage management program 1220, the following processing starts.
- the storage management software 3110 extracts the identifier (ID) and profile specified for a VM to be created and the number and capacity of logical volumes to be created for the VM to be created from the logical volume creation instruction (at step F 1000 ).
- the storage management software 3110 checks whether “HA” is specified in the profile of the VM to be created or not (at step F 1010 ).
- if "HA" is not specified in the profile (No at step F 1010), the storage management software 3110 creates SLUs 2400, the number and capacity of which are the same as the specified number and capacity, in the primary side storage device 2000 a of the storage cluster (at step F 1060), and finishes this processing.
- if "HA" is specified in the profile (Yes at step F 1010), the storage management software 3110 instructs the storage devices 2000 a and 2000 b, which compose the storage cluster, to create CTGs 2500 corresponding to the VM to be created (at step F 1020), and after the completion of the CTGs 2500, the storage management software 3110 instructs each of the storage devices 2000 a and 2000 b to create SLUs 2400 the number and capacity of which are the same as the specified number and capacity (at step F 1030).
- the storage management software 3110 instructs the storage devices 2000 a and 2000 b to register the created SLUs 2400 in the CTGs created or extracted at step F 1020 (at step F 1040 ), specifies the CTGs 2500 created at step F 1020 , instructs the storage devices 2000 a and 2000 b to form HA pairs (at step F 1050 ), and finishes this processing.
- each of the configuration control units of the storage devices 2000 a and 2000 b refers to the CTG management table T 6000, and scans the column T 6050 to check whether the specified VM ID is registered or not. If the specified VM ID is registered in the CTG management table T 6000, the configuration control unit sends back the CTG ID of the CTG in which the specified VM ID is registered. If the specified VM ID is not registered in the CTG management table T 6000, the configuration control unit registers a new CTG, registers the specified VM ID in the column T 6050 of the new CTG, and sends back the ID of the new CTG.
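- The lookup-or-create behavior described above can be sketched as follows; the table representation and the CTG ID allocation scheme are illustrative assumptions.

```python
# Hedged sketch of the CTG lookup-or-create step above: scan the VM ID
# column (T6050); if the specified VM ID is already registered, return the
# existing CTG ID, otherwise register a new CTG for the VM. The table
# representation and ID allocation are illustrative assumptions.

def get_or_create_ctg(ctg_table, vm_id):
    for ctg_id, row in ctg_table.items():
        if row["VM_ID"] == vm_id:
            return ctg_id          # specified VM ID already registered
    new_id = len(ctg_table)        # assumed CTG ID allocation scheme
    ctg_table[new_id] = {"VM_ID": vm_id, "LDEV_IDS": []}
    return new_id

table = {}
print(get_or_create_ctg(table, "VM01"))  # 0 (new CTG)
print(get_or_create_ctg(table, "VM01"))  # 0 (reused)
```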
- the configuration control unit creates as many LDEVs, each given an SLU attribute and having the specified capacity, as the specified number of SLUs, registers the new LDEVs in the logical volume management table T 5000, and transmits a notification of completion including the LDEV IDs of the new LDEVs.
- the configuration control unit refers to the CTG management table T 6000 in order to register the specified LDEVs in the specified CTG, and registers the IDs of the specified LDEVs in the column T 6020 of the specified CTG.
- on receiving the HA pair formation instruction at step F 1050, the data duplication control unit registers the identifier of a data duplication destination CTG in the column T 6030 of the CTG management table T 6000, and creates pairs of LDEVs in the CTG.
- the statuses of the primary side LDEVs are set to “COPY (LOCAL)” statuses which mean that the primary side LDEVs are being copied and I/O processing can be performed on the primary side LDEVs
- the statuses of the secondary side LDEVs are set to "COPY (BLOCK)" statuses which mean that the secondary side LDEVs are being copied and I/O processing cannot be performed on the secondary side LDEVs.
- when the copying is completed, the LDEV statuses of both primary side and secondary side LDEVs become "PAIR (MIRROR)", which means that I/O processing can be performed on both primary side and secondary side LDEVs.
- in the "PSUS" state, the LDEVs of each pair are out of synchronization with each other; the statuses of the primary side LDEVs become "PSUS (LOCAL)" statuses, which mean that I/O processing can be performed only on the primary side LDEVs, and the statuses of the secondary side LDEVs become "PSUS (BLOCK)" statuses, which mean that I/O processing cannot be performed on the secondary side LDEVs.
- in the "SSWS" state, the LDEVs of each pair are out of synchronization with each other; the statuses of the primary side LDEVs become "SSWS (BLOCK)" statuses, which mean that I/O processing cannot be performed on the primary side LDEVs, and the statuses of the secondary side LDEVs become "SSWS (LOCAL)" statuses, which mean that I/O processing can be performed only on the secondary side LDEVs.
- the statuses of these LDEV duplicate pairs are registered in the column T 5050 of the logical volume management table T 5000 .
- the value obtained by integrally evaluating the pair statuses of all the LDEVs in the relevant CTG is registered in the column T 6040; if there is even one pair whose status is not the "PAIR" status, the CTG status becomes the status of that pair.
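- The integrated evaluation described above can be sketched as follows; how several different non-"PAIR" statuses would be combined is not specified in the text, so this sketch simply returns the first non-"PAIR" status found.

```python
# Hedged sketch of the integrated CTG status evaluation described above:
# if every pair in the CTG is in the "PAIR" status, the CTG status is
# "PAIR"; if any pair is in another status, that status becomes the CTG
# status. Combining several different non-"PAIR" statuses is not
# specified in the text; this sketch returns the first one found.

def evaluate_ctg_status(pair_statuses):
    for status in pair_statuses:
        if status != "PAIR":
            return status
    return "PAIR"

print(evaluate_ctg_status(["PAIR", "PAIR"]))          # PAIR
print(evaluate_ctg_status(["PAIR", "PSUS (BLOCK)"]))  # PSUS (BLOCK)
```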
- FIG. 16 is a diagram showing an example of a sequence of LDEV configuration operation instruction that the storage management software 3110 transmits to the storage devices 2000 a and 2000 b .
- the storage management software 3110 transmits the same LDEV configuration operation instruction to the two storage devices 2000 having HA configuration LDEVs, and if both storage devices 2000 finish this configuration operation normally, the storage management software 3110 responds to the requestor with the normal completion of the configuration operation. If one of the two storage devices finishes this configuration operation abnormally, the storage management software 3110 restores the storage device 2000 that finished the configuration operation normally to the status before the operation, and notifies the requestor of the abnormal completion of the configuration operation. If both storage devices finish this configuration operation abnormally, the storage management software 3110 notifies the requestor of the abnormal completion of the configuration operation.
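- The two-device operation with rollback described above can be sketched as follows; the operation and rollback callables are hypothetical stand-ins for the actual LDEV configuration operations.

```python
# Hedged sketch of the two-device configuration sequence above: the same
# operation is applied to both storage devices of the HA pair; if exactly
# one device succeeds, it is rolled back to its previous status before
# the failure is reported. The operation/rollback callables are
# hypothetical stand-ins for the actual LDEV configuration operations.

def apply_to_ha_pair(devices, operation, rollback):
    results = [operation(d) for d in devices]
    if all(results):
        return "NORMAL"
    for d, ok in zip(devices, results):
        if ok:
            rollback(d)  # undo the device that completed normally
    return "ABNORMAL"
```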
- FIG. 17 is a diagram showing an example of a configuration of a flowchart of binding processing in the configuration control unit included in the storage device 2000 .
- the binding processing is started on receiving a binding instruction from the storage management software 3110 that receives the constituent VOL binding instruction at step S 1050 .
- the configuration control unit extracts a binding target SLU 2400 from the binding instruction (at step F 2000), and checks whether the specified SLU 2400 is an HA configuration volume or not with reference to the logical volume management table T 5000 (at step F 2010). If the specified SLU 2400 is not an HA configuration volume (No at step F 2010), the configuration control unit selects an ALU 2120 that is not given the HA attribute from the logical volume management table T 5000 (at step F 2060), changes the value of the bit relevant to the specified SLU 2400 in the bitmap in the column T 7020 corresponding to the ALU 2120 into "1" with reference to the bound management table T 7000 (at step F 2070), and then finishes this processing.
- if the specified SLU 2400 is an HA configuration volume (Yes at step F 2010), the configuration control unit extracts the CTG 2500 to which the specified SLU 2400 belongs with reference to the CTG management table T 6000 (at step F 2020), extracts the list of LDEVs of the extracted CTG 2500 with reference to the CTG management table T 6000, and selects an ALU to which any of the extracted LDEVs is bound with reference to the column T 7020 of the bound management table T 7000 (at step F 2030). If there is no relevant ALU at step F 2030, the configuration control unit extracts one LDEV that is not given
- the configuration control unit performs binding processing by changing the value of a bit relevant to the specified SLU 2400 corresponding to the ALU 2120 selected at step F 2030 into “1” in the column T 7020 (at step F 2040 ), performs logical path evaluation processing (at step F 2050 ), and then finishes this processing.
- This processing is characterized in that SLUs 2400 belonging to the CTG 2500 are bound to the same ALU 2120 .
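- The binding rule described above (steps F 2000 to F 2070) can be sketched as follows; the data structures and the fallback ALU choice (used when no LDEV of the CTG is bound yet) are illustrative assumptions.

```python
# Hedged sketch of the binding rule above (steps F2000-F2070): a non-HA
# SLU is bound to an ALU without the HA attribute; an HA SLU is bound to
# the ALU already used by another SLU of the same CTG, so that all SLUs
# in a CTG share one ALU (one logical path). The data structures and the
# fallback ALU choice are assumptions, not the patented implementation.

def bind_slu(slu, is_ha, ctg_of, bound_alu_of, fallback_alu):
    if not is_ha[slu]:
        bound_alu_of[slu] = fallback_alu       # F2060/F2070 (non-HA case)
        return fallback_alu
    ctg = ctg_of[slu]                          # F2020: CTG of the SLU
    for other, alu in bound_alu_of.items():    # F2030: find the CTG's ALU
        if other != slu and ctg_of.get(other) == ctg:
            bound_alu_of[slu] = alu            # F2040: bind to the same ALU
            return alu
    bound_alu_of[slu] = fallback_alu           # no LDEV of the CTG bound yet
    return fallback_alu
```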
- FIG. 18 is a diagram showing an example of a configuration of a flowchart of unbinding processing in which the configuration control unit of the storage device 2000 releases the binding of SLUs 2400 .
- the configuration control unit extracts the ALU 2120 to which a specified SLU 2400 is bound with reference to the bound management table T 7000 (at step F 3010), and changes the value of the bit of the bitmap in the column T 7020 relevant to the specified SLU 2400 corresponding to the ALU 2120 in the bound management table T 7000 into "0" (at step F 3020).
- This unbinding processing is characterized in that, if the specified SLU 2400 is an HA configuration volume (Yes at step F 3030), the logical path control unit performs logical path evaluation processing regarding the ALU 2120 extracted at step F 3010.
- FIG. 19 is a diagram showing an example of a configuration of a flowchart of the logical path evaluation processing performed by the logical path control unit of the storage device 2000 .
- the logical path evaluation processing is performed after a binding processing control unit binds an SLU 2400 to an ALU 2120 (at step F 2050 ) or when the status of a CTG 2500 is changed in the data duplication control unit (at step F 3040 ).
- the configuration control unit identifies the SLUs 2400 bound to an ALU 2120 specified by a specification source control unit with reference to the bound management table T 7000, and extracts the CTGs 2500 to which the SLUs 2400 belong from the CTG management table T 6000 (at step F 4000).
- the configuration control unit sets “ACTIVE/OPTIMIZED” as the initial value of a logical path evaluation value X (at step F 4010 ), and performs loop processing at step F 4020 on the CTGs 2500 extracted at step F 4000 .
- the configuration control unit performs the loop processing on every CTG 2500 .
- the configuration control unit refers to the column T 6040 of the CTG management table T 6000 , and if the status of the relevant CTG 2500 is BLOCK status (Yes at step F 4030 ), that is to say, the status in which the relevant CTG 2500 cannot receive an I/O, the configuration control unit checks whether the storage device 2000 includes an inter-storage device I/O transfer function or not, and whether an I/O transfer is available or not (at step F 4040 ).
- if the I/O transfer is not available (No at step F 4040), the configuration control unit sets the value of the logical path evaluation value X to "STANDBY" (at step F 4050), and if the I/O transfer is available (Yes at step F 4040), the configuration control unit sets the value of X to "ACTIVE/NON-OPTIMIZED" (at step F 4060).
- the configuration control unit registers the value of X in the column T 4040 of a row corresponding to the column T 4050 in which the LDEV ID of the specified ALU 2120 is registered with reference to the LUN management table T 4000 (at step F 4080 ), and then finishes this processing.
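- The evaluation flow above (steps F 4000 to F 4080) can be sketched as follows; representing a CTG status as a string containing "BLOCK" when the CTG cannot receive I/O is an assumption made for illustration.

```python
# Hedged sketch of the logical path evaluation above (steps F4000-F4080):
# start from "ACTIVE/OPTIMIZED"; for each CTG bound to the ALU whose
# status is a BLOCK status, downgrade the evaluation to
# "ACTIVE/NON-OPTIMIZED" if an inter-storage device I/O transfer is
# available, otherwise to "STANDBY". Representing CTG statuses as strings
# containing "BLOCK" is an assumption made for illustration.

def evaluate_logical_path(ctg_statuses, io_transfer_available):
    x = "ACTIVE/OPTIMIZED"                  # F4010: initial evaluation value
    for status in ctg_statuses:             # F4020: loop over the CTGs
        if "BLOCK" in status:               # F4030: CTG cannot receive I/O
            if io_transfer_available:       # F4040
                x = "ACTIVE/NON-OPTIMIZED"  # F4060
            else:
                x = "STANDBY"               # F4050
    return x                                # F4080: registered in T4040

print(evaluate_logical_path(["PAIR (MIRROR)"], True))  # ACTIVE/OPTIMIZED
print(evaluate_logical_path(["PSUS (BLOCK)"], False))  # STANDBY
```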
- the inter-storage device I/O transfer function will be explained below. If the pair status of an LDEV is “BLOCK” status, it is impossible to perform I/O processing on the LDEV. However, in the case where the storage device 2000 includes the inter-storage device I/O transfer function, an I/O is transferred between the storage devices in order to perform the relevant I/O processing on a volume that makes a pair in cooperation with the LDEV.
- the storage device 2000 b including a secondary side LDEV transfers an I/O to the storage device 2000 a , so that the I/O is transferred to a primary side LDEV (“COPY (LOCAL)”) that makes a pair in cooperation with the secondary side LDEV, and the I/O can be performed by the primary side LDEV.
- a storage device 2000 including the inter-storage device I/O transfer function transfers an I/O to be performed on a logical volume whose status is “PSUS (BLOCK)” or “SSWS (BLOCK)” to another storage device 2000 , and can perform the I/O processing using another logical volume that makes a pair in cooperation with the former logical volume.
- the evaluation value of the logical path of an ALU 2120 in the case where an I/O transfer between the storage devices 2000 or retry does not occur can be kept “ACTIVE/OPTIMIZED” in accordance with binding processing or data duplication processing.
- logical path control in which one or more logical paths whose statuses are “ACTIVE/OPTIMIZED” are secured per CTG 2500 , that is to say, per VM 1900 or per job, will be explained. Due to this configuration, the status of a logical path can be prevented from being adversely affected by other kinds of jobs.
- FIG. 20 is a diagram showing an example of a configuration of a storage system according to this embodiment.
- Each storage system according to this embodiment is different from the storage system according to the first embodiment in that the host computer 1000 includes a logical partition control unit that classifies VMs into AGs (Availability Groups) 7000 per VM or per job, and other components of this embodiment are the same as those of the first embodiment. It is conceivable that VMs having the same characteristic information are made to belong to the same AG. In other words, the same value is stored in the column PROFILE T 1040 of the VM management table T 1000 regarding each of VMs belonging to the same AG.
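The grouping rule just described — VMs whose column PROFILE T 1040 holds the same value belong to the same AG — might be sketched as below. The table layout, key names, and return shape are assumptions made purely for illustration, not the patented data structure.

```python
from collections import defaultdict

def group_vms_into_ags(vm_table):
    """Sketch: VMs whose column T 1040 (PROFILE) holds the same value
    are made to belong to the same AG; returns {profile: [vm_id, ...]}."""
    ags = defaultdict(list)
    for row in vm_table:
        ags[row["T1040_PROFILE"]].append(row["vm_id"])
    return dict(ags)
```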
- a storage system including: a means for classifying VMs 1900 into AGs 7000 , which are logical partitions; a means for building up CTGs 2500 per AG 7000 ; and a means for assigning ALUs 2120 per CTG 2500 will be explained.
- the VM administrator assigns an identifier for identifying an AG 7000 to the profile given to each VM.
- the identifier can be an arbitrary character string defined by the VM administrator or an identifier defined in advance in the virtual machine management software 4110 by the VM administrator.
- the assigned identifier for the AG 7000 is included in the configuration operation instruction used at steps S 1000 to S 1035 in the VM creation sequence shown in FIG. 14 .
- the assigned identifier for the AG 7000 is registered in the column T 1040 of the VM management table T 1000 in the VM creation processing by the host computer 1000 , and the storage device 2000 registers the assigned identifier for the AG 7000 in the column T 8050 of a CTG management table T 8000 shown in FIG. 21 when a CTG 2500 is created.
- logical volume creation processing for creating logical volumes composing a VM, performed in the storage device 2000 , will be explained with reference to FIG. 22 .
- processing shown in FIG. 22 is performed instead of the storage resource creation processing in the VM creation shown in FIG. 15 .
- FIG. 22 is a diagram showing an example of a configuration of a flowchart of logical volume creation processing performed by the storage management software 3110 in a VM creation processing sequence.
- On receiving a logical volume creation instruction including the identifier for an AG 7000 , the storage management software 3110 starts this processing and checks whether HA is specified in the profile obtained from the creation instruction (at step F 4010 ). If HA is not specified (No at step F 4010 ), the storage management software 3110 creates the specified number of SLUs 2400 having the specified capacities in a primary side storage device 2000 a of a storage cluster (at step F 4060 ), and finishes this processing.
- if HA is specified (Yes at step F 4010 ), the storage management software 3110 instructs the storage devices 2000 a and 2000 b that compose the storage cluster to create CTGs 2500 corresponding to the specified AG 7000 (at step F 4020 ).
- the storage management software 3110 instructs the storage devices 2000 a and 2000 b to create ALUs 2120 belonging to the CTGs 2500 created at step F 4020 (at step F 4030 ).
- the storage management software 3110 instructs the storage devices 2000 a and 2000 b to create the specified number of SLUs 2400 having the specified capacities belonging to the CTGs 2500 created at step F 4020 (at step F 4040 ), instructs the storage devices 2000 a and 2000 b to form HA pairs of the created SLUs 2400 (at step F 4050 ), and finishes this processing.
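The overall flow of FIG. 22 (steps F 4010 to F 4060 ) might be sketched as follows. The dictionary-based device state and the helper names are assumptions for illustration only, and capacity handling is omitted; this is not the patented implementation of the storage management software 3110.

```python
def _create_slus(dev, count, capacity):
    """Create `count` SLUs in a device's state dict (capacity unused here)."""
    start = len(dev["slus"])
    new = [f"slu{start + i}" for i in range(count)]
    dev["slus"].extend(new)
    return new

def create_logical_volumes(profile, ag_id, count, capacity, primary, secondary):
    """Sketch of FIG. 22: create the SLUs composing a VM, with HA
    pairing across the storage cluster when HA is specified."""
    if "HA" not in profile:                        # No at step F 4010
        return _create_slus(primary, count, capacity)   # step F 4060
    for dev in (primary, secondary):               # steps F 4020-F 4030
        ctg = dev["ctgs"].setdefault(ag_id, f"ctg-{ag_id}")
        dev["alus"].append(ctg)                    # ALU belonging to the CTG
    p = _create_slus(primary, count, capacity)     # step F 4040
    s = _create_slus(secondary, count, capacity)
    pairs = list(zip(p, s))                        # step F 4050: form HA pairs
    primary["pairs"].extend(pairs)
    return pairs
```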
- the storage device 2000 that receives the CTG creation instruction at step F 4020 extracts the identifier of the AG 7000 from the creation instruction, refers to the CTG management table T 8000 , and searches for a CTG 2500 corresponding to the specified AG 7000 ; if there is a relevant CTG 2500 , the storage device 2000 sends back the relevant CTG 2500 , and if there is no relevant CTG 2500 , the storage device 2000 creates a new CTG 2500 and registers the identifier of the specified AG 7000 in the column T 8050 of the created CTG 2500 .
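The get-or-create handling at step F 4020 can be illustrated with a short sketch; the table layout and key names are assumptions, not the actual CTG management table T 8000 format.

```python
def get_or_create_ctg(ctg_table, ag_id, next_ctg_id):
    """Sketch of the CTG creation handling at step F 4020: return an
    existing CTG whose column T 8050 holds the specified AG identifier,
    or create a new CTG and register the AG identifier in it."""
    for row in ctg_table:
        if row["T8050_AG_ID"] == ag_id:
            return row["ctg_id"]          # existing CTG is sent back
    ctg_table.append({"ctg_id": next_ctg_id, "T8050_AG_ID": ag_id})
    return next_ctg_id                    # newly created CTG
```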
- a CTG 2500 can be built up per AG 7000 , and further one ALU 2120 can be set for the CTG 2500 , with the result that, by applying the logical path evaluation processing shown in FIG. 19 , it becomes possible to perform logical path evaluation on the ALU 2120 per AG 7000 .
- the depletion of ALUs 2120 occurs.
- one or more Initiator Ports are assigned to one AG 7000 , and a VM 1900 uses any of Initiator Ports assigned to an AG 7000 to which the VM 1900 belongs.
- a storage system in which the storage device 2000 performs the logical path evaluation of an ALU 2120 per Initiator Port will be explained.
- FIG. 23 is an example of a configuration of a storage system according to this embodiment.
- the hypervisor 1230 of the host computer 1000 includes a virtual port control unit, and performs virtual port allocation in which a virtual port 1210 is assigned per AG 7000 .
- a logical port 2110 of the storage device 2000 holds together Initiator Ports that are enabled to log in (a login generally referred to as a fabric login) as an HG (Host Group) 2130 , which is a logical partition per logical port 2110 .
- the storage device 2000 includes an access restriction control unit that receives an I/O from a logical path passing through an HG 2130 only when the I/O comes from the WWN of an Initiator Port registered in the HG 2130 .
- FIG. 24 is a diagram showing examples of configurations of virtual port management tables T 9000 included in the host computer 1000 .
- a virtual port management table T 9000 is referred to when the virtual port control unit (not shown) of the hypervisor 1230 performs virtual port allocation processing and when the disk control unit 1232 and the alternate path control unit 1233 perform I/O processing, and can be accessed from various programs included in the management computers 3000 and 4000 via the management network 6000 using an API provided by the host computer 1000 .
- the virtual port management table T 9000 includes: the column PORT WWN T 9010 that registers the WWN of an HBA 1200 ; the column PORT WWN T 9020 that registers the WWN of a virtual port 1210 created from the HBA 1200 ; the column VM ID T 9030 that registers the identifier of the VM 1900 to which the virtual port 1210 is assigned; and the column AG ID T 9040 that registers the identifier of an AG 7000 to which the VM 1900 belongs.
- the assignment of a virtual port 1210 is executed by a VM administrator using the user interface of the virtual machine management program provided through the input unit 1700 and the output unit 1600 , and the virtual machine management program registers information based on the relevant instruction in the virtual port management table T 9000 .
- FIG. 25 is a diagram showing examples of configurations of host group management tables U 1000 included in a logical port 2110 of the storage device 2000 .
- a host group management table U 1000 is referred to when the configuration control unit included in the storage device 2000 performs host group configuration change processing and logical path control processing and when an I/O control unit included in the storage device performs access control processing, and can be accessed from the management computer 3000 that is connected to the management network 6000 using a management API provided by a storage management server 2500 on the storage device 2000 .
- the host group management table U 1000 includes: the column PORT ID U 1010 that registers the identifier of a logical port 2110 ; the column PORT WWN U 1020 that registers the WWN of the relevant logical port 2110 ; the column HG U 1030 that registers the identifiers of HGs 2130 belonging to the relevant logical port 2110 ; the column INITIATOR WWN U 1040 that registers the WWNs of Initiator Ports registered in the relevant HGs 2130 ; the column AG ID U 1050 that registers the identifiers of AGs 7000 corresponding to the relevant HGs 2130 ; the column LUN U 1060 that registers the identifiers of LUNs belonging to the relevant HGs 2130 ; and the column LDEV ID U 1070 that registers the identifiers of LDEVs that are the substances of the relevant LUNs.
- FIG. 26 is a diagram showing an example of a configuration of a flowchart of Initiator Port WWN registration processing in which the WWN of an Initiator Port is registered in a HG 2130 .
- This processing is started when a storage manager requests the configuration control unit of the storage device 2000 to perform this processing via a management user interface provided by storage management software 3110 or by the storage management server 2500 .
- the configuration control unit extracts the identifier of an AG 7000 from request information (at step F 5000 ), and obtains the WWN of a virtual port 1210 assigned to the AG 7000 at step F 5000 from a virtual machine management software 4110 of the management computer 4000 that is connected to the configuration control unit via the management network 6000 (at step F 5010 ).
- the configuration control unit extracts an HG 2130 in which the identifier of the AG 7000 extracted at step F 5000 is registered, with reference to the host group management table U 1000 (at step F 5020 ), registers the WWN obtained at step F 5010 in the column U 1040 of the extracted HG 2130 in the host group management table U 1000 (at step F 5030 ), and assigns an ALU 2120 to the HG 2130 extracted at step F 5020 .
- if no relevant HG 2130 exists, the configuration control unit newly creates an HG 2130 in one logical port 2110 that has already been physically connected to the SAN 5000 , and registers the identifier of the AG 7000 extracted at step F 5000 in the column U 1050 of the newly created HG 2130 .
- at step F 5020 , because “HA” is specified in the profiles of the VMs 1900 belonging to the AG 7000 extracted at step F 5000 , only ALUs 2120 that are given the HA attribute are assigned to the relevant HG 2130 ; further, in the case where ALUs 2120 are registered in the corresponding HG 2130 in the counterpart storage device 2000 of the storage cluster, ALUs 2120 whose virtual resource identifiers are the same as those of the ALUs assigned at step F 5020 are assigned.
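The registration flow of FIG. 26 (steps F 5000 to F 5030 ) reduces to the sketch below. The table layout, key names, and the `ha_alus` parameter are assumptions for illustration; the actual processing also consults the virtual machine management software 4110 to obtain the WWN.

```python
def register_initiator_wwn(hg_table, ag_id, virtual_port_wwn, ha_alus):
    """Sketch of FIG. 26: register the WWN of the virtual port assigned
    to an AG in the matching HG (columns U 1040 / U 1050), creating the
    HG when none exists, and assign HA-attribute ALUs to it."""
    for hg in hg_table:                               # step F 5020
        if hg["U1050_AG_ID"] == ag_id:
            break
    else:                                             # no matching HG: create one
        hg = {"U1050_AG_ID": ag_id, "U1040_INITIATOR_WWNS": [], "alus": []}
        hg_table.append(hg)
    hg["U1040_INITIATOR_WWNS"].append(virtual_port_wwn)  # step F 5030
    hg["alus"].extend(ha_alus)                        # assign HA-attribute ALUs
    return hg
```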
- FIG. 27 is a diagram showing an example of a configuration of a flowchart of logical path evaluation processing per HG 2130 in the configuration control unit included in the storage device 2000 according to this embodiment.
- this processing is performed when an SLU 2400 is bound to or unbound from an ALU 2120 , or when the duplication status of an SLU 2400 is changed.
- This processing is performed instead of the processing of the first embodiment shown in FIG. 19 .
- the configuration control unit extracts the HG 2130 to which the ALU 2120 involved in the binding or unbinding belongs, and the identifier of the AG 7000 to which the HG 2130 belongs (at step F 6000 ), and sets “ACTIVE/OPTIMIZED” as the initial value of the evaluation value of the relevant logical path (at step F 6010 ).
- the configuration control unit performs the loop processing shown at step F 6020 on all the CTGs 2500 belonging to the AG 7000 extracted at step F 6000 .
- in the loop, the configuration control unit checks the status of the selected CTG 2500 (at step F 6030 ), and if the status is “BLOCK”, checks whether an inter-storage device I/O transfer is available (at step F 6040 ). If the inter-storage device I/O transfer is available, the evaluation value X is set to “ACTIVE/OPTIMIZED” (at step F 6060 ), and if not, the evaluation value X is set to “STANDBY” (at step F 6050 ).
- referring to the LUN management table T 4000 , the configuration control unit sets the evaluation values, in the column T 4040 , of all LUNs belonging to the HG 2130 extracted at step F 6000 to the evaluation value X (at step F 6080 ), and finishes this processing.
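The per-HG evaluation of FIG. 27 (steps F 6010 to F 6080 ) can be sketched as follows, using the evaluation values exactly as stated in the text above; the function and parameter names are assumptions for illustration.

```python
def evaluate_paths_per_hg(ctg_statuses, io_transfer_available):
    """Sketch of FIG. 27: derive one evaluation value X for every
    logical path of an HG from the statuses of all CTGs belonging
    to the HG's AG."""
    x = "ACTIVE/OPTIMIZED"                          # step F 6010
    for status in ctg_statuses:                     # loop at step F 6020
        if status == "BLOCK":                       # step F 6030
            if io_transfer_available:               # step F 6040
                x = "ACTIVE/OPTIMIZED"              # step F 6060
            else:
                x = "STANDBY"                       # step F 6050
    return x   # applied to all LUNs of the HG at step F 6080
```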
- the above-described processing makes it possible to set the evaluation value of the logical path of an ALU 2120 per HG 2130 , that is to say, per VM that uses an Initiator Port whose WWN is registered in the HG 2130 .
- This embodiment relates to blockage processing for the Conglomerate LUN Structure in the host computer 1000 . A storage system including logical path control will be explained in which, even in the case where retries are executed regarding the logical path of an ALU 2120 and the predefined number of retries is reached, the logical path is not immediately blocked; instead, the statuses of the SLUs 2400 bound to the relevant ALU 2120 are checked, and if there is an SLU 2400 that can receive an I/O, the relevant logical path is set to “ACTIVE/NON-OPTIMIZED”.
- the configuration of the host computer 1000 according to this embodiment is the same as that shown in FIG. 3 or that shown in FIG. 4 .
- FIG. 28 is a diagram showing an example of a configuration of a flowchart of logical path evaluation processing performed in I/O processing, especially in I/O retry processing by the hypervisor 1230 . This processing is performed when the number of retries regarding the logical path to an ALU 2120 exceeds a threshold that is predefined by a VM administrator.
- the configuration control unit or the hypervisor 1230 extracts the SLUs 2400 bound to the ALU 2120 assigned to the logical path whose number of retries exceeds the threshold, with reference to the logical path management table T 2000 and the bound management table T 3000 (at step F 7000 ), sets the initial value of the evaluation value of the logical path to “STANDBY” (at step F 7010 ), and performs loop processing on all SLUs 2400 obtained at step F 7000 (at step F 7020 ).
- in the loop, the disk control unit 1232 issues a disk read I/O to the relevant SLU 2400 , or the alternate path control unit 1233 issues an I/O to the ALU 2120 via the relevant logical path (at step F 7030 ), and if the I/O succeeds, the evaluation value X is set to “ACTIVE/NON-OPTIMIZED” (at step F 7050 ).
- after the loop, the evaluation value of the specified logical path is set to X (at step F 7070 ).
- even in the case where an ALU 2120 assigned to a logical path whose number of retries exceeds the threshold includes only one SLU 2400 capable of performing I/O processing, it becomes possible that the relevant logical path is not blocked but is set as a non-recommended logical path in preparation for the issuance of an I/O to the SLU 2400 .
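The retry-driven evaluation of FIG. 28 (steps F 7000 to F 7070 ) can be condensed into the sketch below. Here `probe_io` stands in for the disk read I/O issued at step F 7030 ; it and the other names are assumptions of this illustration, not the hypervisor's actual interface.

```python
def evaluate_path_on_retry(bound_slus, probe_io):
    """Sketch of FIG. 28: when retries on a logical path exceed the
    threshold, probe every SLU bound to the ALU; if at least one can
    still receive an I/O, demote the path instead of blocking it."""
    x = "STANDBY"                          # step F 7010
    for slu in bound_slus:                 # loop at step F 7020
        if probe_io(slu):                  # step F 7030: the I/O succeeds
            x = "ACTIVE/NON-OPTIMIZED"     # step F 7050
    return x                               # step F 7070
```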
- T 9000 . . . Virtual Port Management Table, U 1000 . . . Host Group Management Table, S 1000 . . . Virtual Server Creation Processing, S 2000 . . . Logical Resource Operation Processing, F 1000 . . . Logical Resource Creation Processing, F 2000 . . . Binding Processing, F 3000 . . . Unbinding Processing, F 4000 . . . Bound Group Assignment Processing, F 5000 . . . ALU Assignment Processing, F 6000 . . . Logical Path Evaluation Processing, F 7000 . . . Logical Path Evaluation Processing
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/070366 WO2018011839A1 (en) | 2016-07-11 | 2016-07-11 | Information processing system and control method for information processing system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190108157A1 US20190108157A1 (en) | 2019-04-11 |
US10762021B2 true US10762021B2 (en) | 2020-09-01 |
Family
ID=60952888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/086,297 Active 2036-09-02 US10762021B2 (en) | 2016-07-11 | 2016-07-11 | Information processing system, and control method of information processing system |
Country Status (3)
Country | Link |
---|---|
US (1) | US10762021B2 (en) |
JP (1) | JP7012010B2 (en) |
WO (1) | WO2018011839A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10552225B2 (en) * | 2017-07-04 | 2020-02-04 | Vmware, Inc. | Virtual device migration or cloning based on device profiles |
US10831409B2 (en) * | 2017-11-16 | 2020-11-10 | International Business Machines Corporation | Volume reconfiguration for virtual machines |
US10592155B2 (en) * | 2018-04-10 | 2020-03-17 | International Business Machines Corporation | Live partition migration of virtual machines across storage ports |
JP2021170196A (en) * | 2020-04-15 | 2021-10-28 | 株式会社日立製作所 | Storage system and computer system |
US11262922B1 (en) | 2020-08-07 | 2022-03-01 | EMC IP Holding Company LLC | Dynamic shared journal |
JP7590316B2 (en) * | 2021-12-23 | 2024-11-26 | 日立ヴァンタラ株式会社 | Information processing system and configuration management method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014184606A1 (en) * | 2013-05-13 | 2014-11-20 | Hitachi, Ltd | Identifying workload and sizing of buffers for the purpose of volume replication |
- 2016-07-11 WO PCT/JP2016/070366 patent/WO2018011839A1/en active Application Filing
- 2016-07-11 US US16/086,297 patent/US10762021B2/en active Active
- 2016-07-11 JP JP2018527043A patent/JP7012010B2/en active Active
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050188126A1 (en) * | 2004-02-25 | 2005-08-25 | Hitachi, Ltd. | Information processing system and information processing method |
US20060165002A1 (en) * | 2004-11-01 | 2006-07-27 | Timothy Hicks | Port re-enabling by monitoring link status |
US20070038748A1 (en) * | 2005-08-05 | 2007-02-15 | Yusuke Masuyama | Storage control method and storage control system |
US20070234113A1 (en) * | 2006-03-29 | 2007-10-04 | Yuki Komatsu | Path switching method |
US9088591B2 (en) * | 2008-04-28 | 2015-07-21 | Vmware, Inc. | Computer file system with path lookup tables |
US8407391B2 (en) * | 2009-06-04 | 2013-03-26 | Hitachi, Ltd. | Computer system managing I/O path and port |
US20110179188A1 (en) | 2009-10-09 | 2011-07-21 | Hitachi, Ltd. | Storage system and storage system communication path management method |
JP2012531654A (en) | 2009-10-09 | 2012-12-10 | 株式会社日立製作所 | Storage system and storage system communication path management method |
US8775774B2 (en) | 2011-08-26 | 2014-07-08 | Vmware, Inc. | Management system and methods for object storage system |
US20140101394A1 (en) * | 2012-10-04 | 2014-04-10 | Hitachi, Ltd. | Computer system and volume management method for the computer system |
US20140201438A1 (en) * | 2013-01-11 | 2014-07-17 | Hitachi, Ltd. | Storage system, method of controlling a storage system and management system for storage system |
US9569132B2 (en) * | 2013-12-20 | 2017-02-14 | EMC IP Holding Company LLC | Path selection to read or write data |
US20160054946A1 (en) * | 2014-02-18 | 2016-02-25 | Hitachi, Ltd. | System and method for managing logical volumes |
WO2015162634A1 (en) | 2014-04-21 | 2015-10-29 | Hitachi, Ltd. | Information storage system |
US20160364287A1 (en) * | 2014-04-21 | 2016-12-15 | Hitachi, Ltd. | Information storage system |
US10191685B2 (en) * | 2014-06-11 | 2019-01-29 | Hitachi, Ltd. | Storage system, storage device, and data transfer method |
WO2016009504A1 (en) | 2014-07-16 | 2016-01-21 | 株式会社日立製作所 | Storage system and notification control method |
WO2016103416A1 (en) | 2014-12-25 | 2016-06-30 | 株式会社日立製作所 | Storage system, storage device and access control method |
US9514072B1 (en) * | 2015-06-29 | 2016-12-06 | International Business Machines Corporation | Management of allocation for alias devices |
Also Published As
Publication number | Publication date |
---|---|
JPWO2018011839A1 (en) | 2018-11-22 |
JP7012010B2 (en) | 2022-01-27 |
US20190108157A1 (en) | 2019-04-11 |
WO2018011839A1 (en) | 2018-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10762021B2 (en) | Information processing system, and control method of information processing system | |
US9606745B2 (en) | Storage system and method for allocating resource | |
US11734137B2 (en) | System, and control method and program for input/output requests for storage systems | |
US9223501B2 (en) | Computer system and virtual server migration control method for computer system | |
JP4295184B2 (en) | Virtual computer system | |
US8452923B2 (en) | Storage system and management method thereof | |
US20140351545A1 (en) | Storage management method and storage system in virtual volume having data arranged astride storage device | |
US20080294888A1 (en) | Deploy target computer, deployment system and deploying method | |
US20140143391A1 (en) | Computer system and virtual server migration control method for computer system | |
JP6663478B2 (en) | Data migration method and computer system | |
JP2005216151A (en) | Resource operation management system and resource operation management method | |
US9875059B2 (en) | Storage system | |
US20140195698A1 (en) | Non-disruptive configuration of a virtualization cotroller in a data storage system | |
JP2008033829A (en) | Backup system and backup method | |
JP6055924B2 (en) | Storage system and storage system control method | |
US8949562B2 (en) | Storage system and method of controlling storage system | |
US8838768B2 (en) | Computer system and disk sharing method used thereby | |
CN102959499B (en) | Computer system, storage volume management method | |
US9052839B2 (en) | Virtual storage apparatus providing a plurality of real storage apparatuses | |
US10860235B2 (en) | Storage system having a plurality of storage apparatuses which migrate a first volume group to a second volume group | |
US10514846B2 (en) | Computer system and management method for computer | |
WO2016103416A1 (en) | Storage system, storage device and access control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HITACHI, LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NAKAGAWA, HIROTAKA;DEGUCHI, AKIRA;NASU, HIROSHI;AND OTHERS;SIGNING DATES FROM 20180907 TO 20180911;REEL/FRAME:046904/0837 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: HITACHI VANTARA, LTD., JAPAN Free format text: COMPANY SPLIT;ASSIGNOR:HITACHI, LTD.;REEL/FRAME:069518/0761 Effective date: 20240401 |