
US10466918B1 - Large size fixed block architecture device support over FICON channel connections - Google Patents


Info

Publication number
US10466918B1
US10466918B1
Authority
US
United States
Prior art keywords
storage device
access
terabytes
controller
fixed block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/228,945
Inventor
Douglas E. LeCrone
Martin J. Feeney
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC IP Holding Co LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EMC IP Holding Co LLC filed Critical EMC IP Holding Co LLC
Priority to US14/228,945
Assigned to EMC CORPORATION reassignment EMC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FEENEY, MARTIN J., LECRONE, DOUGLAS E.
Assigned to EMC IP Holding Company LLC reassignment EMC IP Holding Company LLC ASSIGNMENT OF ASSIGNOR'S INTEREST Assignors: EMC CORPORATION
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Application granted
Publication of US10466918B1
Assigned to DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), DELL PRODUCTS L.P., DELL USA L.P., SCALEIO LLC, DELL INTERNATIONAL L.L.C., EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.) reassignment DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.) RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), SCALEIO LLC, DELL INTERNATIONAL L.L.C., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL USA L.P., DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), DELL PRODUCTS L.P. reassignment EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.) RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0661 Format or protocol conversion arrangements
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD

Definitions

  • This application relates to the field of computer systems and, more particularly, to device access and connections among computing systems.
  • Host processor systems may store and retrieve data using a storage device containing a plurality of host interface units (I/O modules), disk drives, and disk interface units (disk adapters).
  • The host systems access the storage device through a plurality of channels provided therewith.
  • Host systems provide data and access control information through the channels to the storage device and the storage device provides data to the host systems also through the channels.
  • The host systems do not address the disk drives of the storage device directly, but rather, access what appears to the host systems as a plurality of logical disk units.
  • The logical disk units may or may not correspond to the actual disk drives. Allowing multiple host systems to access the single storage device unit allows the host systems to share data stored therein.
  • Mainframe computers are large scale computer system architectures that are used by large organizations for bulk data processing, such as financial transaction processing. Mainframe computers offer enhanced availability, scalability, reliability and security along with high volume data throughput, among other features.
  • I/O devices may be coupled to interact with mainframe computers that may include an I/O subsystem that communicates with the I/O devices over communication channels.
  • The I/O subsystem controls data flow between I/O devices and main storage.
  • The I/O subsystem may be coupled to the central processors of the main system and may communicate directly with the I/O devices.
  • The I/O subsystem may communicate with the I/O devices using multiple types of interfaces, including, for example, communication channels such as Fibre channels.
  • IBM Corporation's System z® is a mainframe platform and computing environment that is widely used in the industry and that includes z/Architecture-based systems and zSeries mainframes.
  • System z components may operate with IBM's z/OS® (operating system) and/or other zSeries-compatible operating systems.
  • Fixed block architecture (FBA) is an architecture in which a disk drive stores data (logical volumes) in blocks of fixed size.
  • The FBA architecture has two characteristics: each physical block is the same size and each physical block is individually addressable by a value called the logical block address (LBA).
  • An FBA device may also be characterized by tracks and cylinders, in which the physical disk may contain multiple blocks per track, and a cylinder may be the group of tracks that exists under the disk heads at one point in time without performing a seek operation.
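As an illustration of the track/cylinder view of an FBA device described above, the conventional linearization of a (cylinder, head, block) position into an LBA can be sketched as follows; the geometry numbers are hypothetical examples, not values from any particular device:

```python
def chs_to_lba(cylinder: int, head: int, block: int,
               heads_per_cylinder: int, blocks_per_track: int) -> int:
    # Tracks under all heads of one cylinder are reachable without a seek;
    # linearize cylinders, then tracks (heads), then blocks within a track.
    return (cylinder * heads_per_cylinder + head) * blocks_per_track + block

# Hypothetical geometry: 15 heads per cylinder, 96 blocks per track.
print(chs_to_lba(cylinder=2, head=3, block=10,
                 heads_per_cylinder=15, blocks_per_track=96))  # (2*15 + 3)*96 + 10 = 3178
```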
  • Hosts may address FBA devices over a number of channel connections, including Fibre channel connections.
  • A host may include an operating system and a Fibre Channel connection portion, which includes hardware and/or software for facilitating a FICON (Fibre Connection) data connection between the host and a conventional data storage device.
  • FICON is compatible with z/Architecture computing systems in connection with I/O devices performing I/O processing therewith.
  • An example is IBM Corporation's z/OS Distributed Data Backup (zDDB) system as implemented in IBM's DS8000® product, and reference is made, for example, to B. Dufrasne et al., “IBM System Storage DS8000: z/OS Distributed Data Backup,” IBM Corporation, Redpaper REDP-4701-00, Nov. 16, 2010, 16 pp., which is incorporated herein by reference.
  • According to the system described herein, a method is provided for storage device access.
  • The method includes using a fixed block command set of a fixed block architecture (FBA) to access at least one storage device using a FICON communication channel of a storage system, wherein the at least one storage device has a size larger than 2 terabytes.
  • A controller of the communication channel is configured to access an extended field of information that provides accessibility to the at least one storage device, having the size larger than 2 terabytes, using the fixed block command set, such as the IBM 3880 fixed block command set described, for example, in Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” which is cited elsewhere herein.
  • The storage system may include z/Architecture components.
  • The storage device may have a size equal to or larger than 64 terabytes, and may further be sized between 512 terabytes and 144 petabytes.
  • The fixed block command set may support 3880/3370-type operations.
  • The controller and the storage device may operate in a virtualized environment.
  • A non-transitory computer-readable medium stores software for providing storage device access.
  • The software includes executable code that recognizes a fixed block command set of a fixed block architecture (FBA) to access a storage device using a FICON communication channel, wherein the storage device has a size larger than 2 terabytes.
  • Executable code is provided that enables a controller of the communication channel to access an extended field of information that provides accessibility to the storage device, having the size larger than 2 terabytes, using the fixed block command set.
  • The storage system may include z/Architecture components.
  • The storage device may have a size equal to or larger than 64 terabytes, and may further be sized between 512 terabytes and 144 petabytes.
  • The fixed block command set may support 3880/3370-type operations.
  • The controller and the storage device may operate in a virtualized environment.
  • Further provided is a storage system having a host with an operating system and a FICON controller, at least one storage device, and a FICON connection between the host and the at least one storage device.
  • A non-transitory computer-readable medium stores software for providing storage device access.
  • The software includes executable code that recognizes a fixed block command set of a fixed block architecture (FBA) to access a storage device using a communication channel, wherein the storage device has a size larger than 2 terabytes.
  • Executable code is provided that enables a controller of the communication channel to access an extended field of information that provides accessibility to the storage device, having the size larger than 2 terabytes, using the fixed block command set.
  • The storage system may include z/Architecture components.
  • The storage device may have a size equal to or larger than 64 terabytes, and may further be sized between 512 terabytes and 144 petabytes.
  • The fixed block command set may support 3880/3370-type operations.
  • The controller and the storage device may operate in a virtualized environment.
  • FIG. 1 is a schematic illustration of a storage system showing a relationship between a host and a storage device that may be used in connection with an embodiment of the system described herein.
  • FIG. 2 is a schematic diagram illustrating an embodiment of the storage device where each of a plurality of directors are coupled to the memory.
  • FIG. 3 is a schematic illustration showing a system with a FICON connection between a host and a data storage device that operates to provide large FBA device support according to an embodiment of the system described herein.
  • FIG. 4 is a schematic illustration showing a FICON connection between a host and a data storage device operating in a virtualized environment according to an embodiment of the system described herein.
  • FIG. 5 is a flow diagram showing processing for providing large size FBA device support over a FICON connection according to an embodiment of the system described herein.
  • FIG. 6 is a flow diagram showing data access processing involving transport mode protocol, such as zHPF, according to an embodiment of the system described herein.
  • FIG. 1 is a schematic illustration of a storage system 20 showing a relationship between a host 22 and a storage device 24 that may be used in connection with an embodiment of the system described herein.
  • The storage device 24 may be a Symmetrix storage system produced by EMC Corporation of Hopkinton, Mass.; however, the system described herein may operate with other appropriate types of storage devices.
  • Also shown is another (remote) storage device 26 that may be similar to, or different from, the storage device 24 and may, in various embodiments, be coupled to the storage device 24 , for example, via a network.
  • The host 22 reads and writes data from and to the storage device 24 via an I/O module (IOM) 28 , which facilitates the interface between the host 22 and the storage device 24 .
  • Data from the storage device 24 may be copied to the remote storage device 26 via a link 29 .
  • The transfer of data may be part of a data mirroring or replication process that causes the data on the remote storage device 26 to be identical to the data on the storage device 24 .
  • The storage device 24 may include a first plurality of adapter units (RA's) 30 a , 30 b , 30 c .
  • The RA's 30 a - 30 c may be coupled to the link 29 and may be similar to the I/O Module 28 , but are used to transfer data between the storage devices 24 , 26 .
  • The storage device 24 may include one or more disks, each containing a different portion of the data stored on the storage device 24 .
  • FIG. 1 shows the storage device 24 including a plurality of disks 33 a , 33 b , 33 c .
  • The storage device 24 (and/or remote storage device 26 ) may be provided as a stand-alone device coupled to the host 22 as shown in FIG. 1 or, alternatively, the storage device 24 (and/or remote storage device 26 ) may be part of a storage area network (SAN) that includes a plurality of other storage devices as well as routers, network connections, etc.
  • The storage devices may be coupled to a SAN fabric and/or be part of a SAN fabric.
  • The system described herein may be implemented using software, hardware, and/or a combination of software and hardware, where software may be stored in a computer readable medium and executed by one or more processors.
  • Each of the disks 33 a - 33 c may be coupled to a corresponding disk adapter unit (DA) 35 a , 35 b , 35 c that provides data to a corresponding one of the disks 33 a - 33 c and receives data from a corresponding one of the disks 33 a - 33 c .
  • An internal data path exists between the DA's 35 a - 35 c , the IOM 28 and the RA's 30 a - 30 c of the storage device 24 . Note that, in other embodiments, it is possible for more than one disk to be serviced by a DA and that it is possible for more than one DA to service a disk.
  • The storage device 24 may also include a global memory 37 that may be used to facilitate data transferred between the DA's 35 a - 35 c , the IOM 28 and the RA's 30 a - 30 c .
  • The memory 37 may contain tasks that are to be performed by one or more of the DA's 35 a - 35 c , the IOM 28 and the RA's 30 a - 30 c , and a cache for data fetched from one or more of the disks 33 a - 33 c.
  • The storage space in the storage device 24 that corresponds to the disks 33 a - 33 c may be subdivided into a plurality of volumes or logical devices.
  • The logical devices may or may not correspond to the physical storage space of the disks 33 a - 33 c .
  • The disk 33 a may contain a plurality of logical devices or, alternatively, a single logical device could span both of the disks 33 a , 33 b .
  • Similarly, the storage space for the remote storage device 26 that comprises the disks 34 a - 34 c may be subdivided into a plurality of volumes or logical devices, where each of the logical devices may or may not correspond to one or more of the disks 34 a - 34 c.
  • FIG. 2 is a schematic diagram 40 illustrating an embodiment of the storage device 24 where each of a plurality of directors 42 a - 42 n are coupled to the memory 37 .
  • Each of the directors 42 a - 42 n represents at least one of the IOM 28 , RAs 30 a - 30 c , or DAs 35 a - 35 c .
  • The diagram 40 also shows an optional communication module (CM) 44 that provides an alternative communication path between the directors 42 a - 42 n .
  • Each of the directors 42 a - 42 n may be coupled to the CM 44 so that any one of the directors 42 a - 42 n may send a message and/or data to any other one of the directors 42 a - 42 n without needing to go through the memory 37 .
  • The CM 44 may be implemented using conventional MUX/router technology where a sending one of the directors 42 a - 42 n provides an appropriate address to cause a message and/or data to be received by an intended receiving one of the directors 42 a - 42 n .
  • The CM 44 may be implemented using one or more of the directors 42 a - 42 n so that, for example, the directors 42 a - 42 n may be interconnected directly with the interconnection functionality being provided on each of the directors 42 a - 42 n .
  • A sending one of the directors 42 a - 42 n may be able to broadcast a message to all of the other directors 42 a - 42 n at the same time.
  • One or more of the directors 42 a - 42 n may have multiple processor systems thereon and thus may be able to perform functions for multiple directors.
  • At least one of the directors 42 a - 42 n having multiple processor systems thereon may simultaneously perform the functions of at least two different types of directors (e.g., an IOM and a DA).
  • At least one of the directors 42 a - 42 n having multiple processor systems thereon may simultaneously perform the functions of at least one type of director and perform other processing with the other processing system.
  • All or at least part of the global memory 37 may be provided on one or more of the directors 42 a - 42 n and shared with other ones of the directors 42 a - 42 n .
  • The features discussed in connection with the storage device 24 may be provided as one or more director boards having CPUs, memory (e.g., DRAM, etc.) and interfaces with Input/Output (I/O) modules.
  • In an embodiment, the FBA device size may be 64 TB, and the architecture provides for sizes up to 512 TB.
  • The system described herein is principally discussed in connection with FBA block sizes of 512 bytes, which is a standardized value.
  • A 2 TB device is the maximum that can be supported with a 32-bit LBA and a 512-byte block size.
  • The system described herein may also be appropriately used in connection with block sizes other than 512 bytes, such as 4096-byte block sizes; such modification would mean different corresponding device sizes and/or other corresponding values from those presented herein.
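The capacity limits quoted in this description follow directly from the LBA width and the block size; the arithmetic can be sketched as follows (decimal TB/PB, as used in the text):

```python
def max_device_bytes(lba_bits: int, block_size: int) -> int:
    # Number of addressable blocks (2**lba_bits) times bytes per block.
    return (2 ** lba_bits) * block_size

TB = 10 ** 12  # decimal terabyte
PB = 10 ** 15  # decimal petabyte

print(max_device_bytes(32, 512) / TB)  # ~2.2: the 2 TB ceiling for a 32-bit LBA
print(max_device_bytes(40, 512) / TB)  # ~563: a 40-bit LBA covers a 512 TB device
print(max_device_bytes(48, 512) / PB)  # ~144: a 48-bit LBA covers roughly 144 PB
```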
  • A Read Device Characteristic (RDC) command (command 64) transfers device characteristic information from the storage director to the channel (e.g., up to 32 bytes of information may be transferred).
  • The commands discussed herein include commands from the IBM 3880 fixed block command set described, for example, in Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” which is cited elsewhere herein.
  • The RDC command returns the device block count as a 32-bit value in the max_prime_blocks field located at offset 14 (decimal) in the output.
  • Thirty-two (32) bits are not enough to report the one trillion 512-byte blocks that a 512 TB device supports.
  • The system described herein provides for logically expanding the max_prime_blocks field to an extended field that will hold additional bits of information, such as a 48-bit field that has at least 16 additional bits of information. There are at least 16 unused bits immediately following the max_prime_blocks field at offset 18, which, under current systems, are set to zero (see TABLE 1).
  • The additional (16-bit) field may be used to hold additional bits of device information (see TABLE 2) that will enable direct access to a device larger than 2 TB.
  • The host application can easily test this field to determine whether the device is a large device; a non-zero field value indicates that the device is larger than 2 TB. This approach obviates the need for an explicit “large device” flag and allows existing software to run on devices smaller than 2 TB.
  • Another option is to redefine the max_prime_blocks field as a true 64-bit value, which would require existing software to be modified to support a 64-bit value.
  • Application test suites may provide testing of software and/or hardware systems in connection with mainframe computing platforms and may be used and/or otherwise operated in connection with the system described herein. Such test suites may require modification under this option for the extended device access.
  • One such test suite is the STCMON test suite provided by EMC Corporation of Hopkinton, Mass.
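A host-side sketch of the extended-field test described above. The offsets (max_prime_blocks at 14, the 16 formerly reserved bits at 18) are taken from the text, and big-endian byte order is assumed; this is illustrative, not a verified 3880 layout:

```python
import struct

def rdc_block_count(rdc_data: bytes) -> tuple[int, bool]:
    """Return (device block count, True if the device is larger than 2 TB)."""
    (low32,) = struct.unpack_from(">I", rdc_data, 14)   # 32-bit max_prime_blocks
    (high16,) = struct.unpack_from(">H", rdc_data, 18)  # 16 extension bits
    # On a device of 2 TB or less the extension bits are zero, so legacy
    # software that reads only the 32-bit field still sees a correct count.
    return (high16 << 32) | low32, high16 != 0

# Hypothetical 32-byte RDC output for a 512 TB device (2**40 blocks of 512 bytes).
data = bytearray(32)
struct.pack_into(">I", data, 14, 0)      # low 32 bits of 2**40 are zero
struct.pack_into(">H", data, 18, 0x100)  # high 16 bits: 2**40 >> 32 == 0x100
blocks, large = rdc_block_count(bytes(data))
print(blocks == 2 ** 40, large)  # True True
```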
  • TABLE 1 shows known device characteristics formatting in connection with an RDC command of a fixed block command set for a direct access storage device using 3370 operation (see, e.g., Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” cited elsewhere herein).
  • TABLE 2 shows device characteristics formatting in connection with an RDC command of a fixed block command set for a direct access storage device using 3370 operation.
  • The system provides for supporting large size FBA device access, such as an FBA device that may, for example, have a size larger than 2 TB, such as 64 TB-512 TB, and provides further expansion to even larger size FBA devices, such as FBA devices having a size of 144 petabytes (PB).
  • A Define Extent (DEX) command (command 63) transfers parameters from the channel to the storage director, the parameters defining the size and location of a data extent (e.g., up to 16 bytes of parameters may be transferred).
  • The DEX command has three 32-bit LBA fields that are used to define the boundary of the blocks to be accessed.
  • The extent_ph_bn field defines the start of the extent as a relative block number from the beginning of the device. As discussed herein, a 32-bit LBA value does not adequately address a device that is larger than 2 TB.
  • The DEX command currently has a reserved byte at offset 1 (see TABLE 3; see, e.g., Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” cited elsewhere herein).
  • The command set defines this parameter to be zero.
  • TABLE 3 shows known device characteristics formatting in connection with a DEX command of a fixed block command set for a direct access storage device using 3370 operation (see, e.g., Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” cited elsewhere herein):
  • TABLE 4 shows parameter formatting in connection with a DEX command of a fixed block command set for a direct access storage device using 3370 operation that enables large FBA device support.
  • Byte 1 may be used to contain an additional eight bits of LBA addressing information; this field is logically the upper eight bits of a 40-bit block number. It is noted that byte 1 is only appended to bytes 4-7 (the offset to the first block of the extent); the other two fields (bytes 8-11 and 12-15) are relative to that offset. This scheme provides addressability for a 512 TB device (assuming 512-byte blocks).
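A sketch of composing the 40-bit extent start under this scheme. The layout (byte 1 for the upper eight bits, bytes 4-7 for the lower 32) follows the description above; the helper name and the 16-byte parameter buffer are illustrative assumptions:

```python
def set_dex_extent_start(params: bytearray, block_number: int) -> None:
    # Byte 1 (formerly reserved, zero) holds the upper 8 bits of the
    # 40-bit offset to the first block of the extent; bytes 4-7 hold the
    # lower 32 bits. The other two LBA fields (bytes 8-11, 12-15) remain
    # 32-bit values relative to this offset.
    if not 0 <= block_number < 1 << 40:
        raise ValueError("extent start must fit in 40 bits")
    params[1] = (block_number >> 32) & 0xFF
    params[4:8] = (block_number & 0xFFFFFFFF).to_bytes(4, "big")

dex = bytearray(16)                       # 16 bytes of DEX parameters
set_dex_extent_start(dex, (5 << 32) | 7)  # a block number beyond the 32-bit range
print(dex[1], int.from_bytes(dex[4:8], "big"))  # 5 7
```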
  • FBA-specific code may be included that accepts count key data (CKD) Define Extent information when used to read a Shared-with-VM FBA device.
  • Bytes may have different attributes in such devices (e.g., Extended CKD (ECKD) systems use byte 1 for a global attributes field), so code implementation according to the system described herein may be appropriately adjusted to cause the system to behave properly where necessary.
  • FIG. 3 is a schematic illustration showing a system 100 with a FICON connection controller 130 for providing FICON connection between a host 110 and a data storage device 150 and that may operate to provide large size FBA device support according to an embodiment of the system described herein.
  • the host 110 may be a computer running Linux or some other appropriate operating system 120 .
  • the host 110 may have or be coupled with a Peripheral Component Interconnect (PCI) layer 125 that may provide an interconnection for I/O operations.
  • PCI Peripheral Component Interconnect
  • the I/O processing on the host 110 may operate with the FICON connection controller 130 to enable I/O operations with the data storage device 150 .
  • PCI Peripheral Component Interconnect
  • the FICON connection controller 130 may send and receive data to and from the data storage device 140 using a remote connection mechanism 140 , that may include a network (such as the Internet, and appropriate connection thereof) and that may, in some circumstances, be done in a way that is transparent (not detectable) by the PCI layer 125 .
  • the data storage device 150 may include physical storage volumes and/or logical volumes, such as EMC Corporation's Symmetrix data storage facility.
  • the FICON connection controller 130 may act as an I/O subsystem providing FICON communication capability for the system according to control and formatting features discussed elsewhere herein (see, e.g., TABLES 2 and 4), specifically to enable large size (greater than 2 TB) FBA device access.
  • the data storage device 150 may include features and/or components enabling the Fibre channel communication with the host 110 .
  • various components of the system 100 may be emulated; for example, the FICON connection controller 130 may have a portion that emulates the Fibre Channel FC0 physical layer so that the PCI layer 125 sends and receives data as if the PCI layer 125 were connected to a physical Fibre Channel connector.
  • For further discussion of emulation of I/O computing components, particularly Fibre Channel connection components, reference is made to U.S. patent application Ser. No. 14/133,852 to Jones et al., filed Dec. 19, 2013, entitled “Virtual I/O Hardware” and U.S. patent application Ser. No. 12/215,984 to LeCrone et al., filed Jun.
  • the system described herein provides for use of a channel emulator to emulate data transfer paths in I/O operations, and in which the channel emulator may simulate a host channel to provide I/O connectivity with an I/O device and may provide for the I/O connectivity using different channel protocols.
  • connection mechanism 140 may include an Internet connection and/or possibly some other types of connection(s).
  • the connection mechanism 140 may be directly incompatible with a FICON connection.
  • the incompatibility may be hardware incompatibility, software incompatibility, or both.
  • Such connection mechanism 140 may not support a direct FICON connection but, instead, rely on a FICON emulator (and/or other emulator(s)) for providing data in an appropriate format.
  • the data storage device 150 may include or be coupled to a FICON emulator portion that may send and receive data to and from the connection mechanism 140 and also emulates a Fibre Channel FC0 physical layer for the benefit of the data storage device 150 .
  • both the host 110 and the data storage device 150 may operate as if the devices 110 , 150 were communicating using a FICON hardware connection.
  • FIG. 4 is a schematic illustration showing an alternative embodiment of a system 200 where a host 210 having an OS 220 and a FICON connection controller 230 providing extended FBA device access is provided in a virtual environment 202 .
  • the virtual environment 202 may be provided by using products from VMware or similar products or systems.
  • the host 210 and the FICON connection controller 230 may be virtualized instances of a host and a FICON controller, respectively, running in the virtual environment 202 .
  • the FICON connection controller 230 may thereby include a FICON emulator, as further discussed elsewhere herein.
  • a data storage device 250 may also be provided in a virtual environment 204 that may be implemented using products provided by VMware or similar.
  • the data storage device 250 may have a FICON emulator 255 , as further discussed elsewhere herein.
  • the data storage device 250 and the FICON emulator 255 may be virtualized instances of a data storage device and a FICON emulator, respectively, running in the virtual environment 204 .
  • the host 210 and the FICON connection controller 230 of the virtual environment 202 may be coupled to an actual data storage device like the data storage device 150 described elsewhere herein.
  • the data storage device 250 and the FICON emulator 255 of the virtual environment 204 may be coupled to an actual host, like the host 110 described elsewhere herein.
  • FICON emulators in the virtual environments 202 , 204 may eliminate a need to provide virtual FICON connectivity. Instead, the host 210 and/or the data storage device 250 may use virtual Ethernet connectivity, which is more likely to be generally available in a virtual environment.
  • the virtual environments 202 , 204 may be connected by the connection mechanism 140 discussed elsewhere herein.
  • FIG. 5 is a flow diagram 300 showing processing for providing large size FBA device support over a FICON connection according to an embodiment of the system described herein.
  • a system is configured for providing access to one or more large size direct access FBA devices, such as FBA devices larger than 2 TB, by a host (e.g., like the host 110 and data storage device 150 discussed elsewhere herein).
  • the configuration may be provided to enable use of existing FBA command sets, such as the existing fixed block command sets for 3370 operation of the FBA device(s).
  • the configuration may include modifications of the controller and/or modifications in connection with the storage device to configure one or more directors of the storage device to provide output to the controller, in response to requests, that is formatted to support large size FBA device access according to the features discussed herein.
  • an access command in connection with an FBA device is processed by the system for access over a FICON connection to the FBA device using the controller.
  • information resulting from processing the command is received.
  • it is determined whether the FBA device for which access is being requested is larger than 2 TB. In an embodiment, a device larger than 2 TB may be identified by determining whether an extended field of the output from the FBA device is non-zero.
  • If not, processing proceeds to a step 310 where access processing is performed without using the extended fields discussed elsewhere herein (e.g., I/O read/write requests are performed on the small size FBA device). After the step 310, processing is complete for this iteration of the flow diagram 300. If, at the test step 308, it is determined that the FBA device has a size greater than 2 TB, then processing proceeds to a step 312, where access processing proceeds using the extended fields, as further discussed in detail herein (e.g., I/O read/write requests are performed on the large size FBA device). After the step 312, processing is complete for this iteration of the flow diagram 300.
  • the transport mode protocol for high performance Fibre Channel connections may be used to enhance and extend access to the large FBA devices.
  • the transport mode protocol is an extension to the Fibre-Channel Single-Byte (FC-SB) command set specification supported by the American National Standards Institute (ANSI) (see, e.g., FC-SB-4 and/or FC-SB-5).
  • Transport mode complements and extends the legacy command mode protocol, which is also defined in the FC-SB specification.
  • the system described herein may be used with IBM's z high performance FICON (zHPF) transport mode protocol implementation.
  • zHPF enhances z/Architecture and FICON interface architecture to improve data transfer processing.
  • standard FICON architecture operates with the command mode protocol
  • a zHPF architecture operates with the transport mode protocol.
  • zHPF provides a Transport Control Word (TCW) that facilitates the processing of an I/O request by the channel and the controller.
  • the TCW enables multiple channel commands to be sent to the controller as a single entity (instead of being sent as separate commands as in a FICON channel command word (CCW)).
  • the channel no longer has to process and keep track of each individual CCW.
  • the channel forwards a chain of commands to the controller for execution.
  • zHPF capable channels may support both FICON and zHPF protocols simultaneously.
  • For further discussion of zHPF, reference is made, for example, to C. Cronin, “IBM System z10 I/O and High Performance FICON for System z Channel Performance,” Technical paper, IBM Corporation, Jan. 28, 2009, 33 pp., which is incorporated herein by reference.
  • FIG. 6 is a flow diagram 400 showing data access processing involving the transport mode protocol, such as zHPF, according to an embodiment of the system described herein.
  • an I/O access request is initiated for I/O processing as among a host and data storage device (e.g., like the host 110 and data storage device 150 discussed elsewhere herein).
  • at a test step 404 , it is determined whether the transport mode protocol (e.g., zHPF) is activated and/or otherwise operational for the system.
  • If not, processing proceeds to a step 406 , where the I/O processing may proceed using non-transport mode Fibre Channel connection processing in such manner as that discussed in connection with the flow diagram 300 , particularly noting that the processing may proceed with the large FBA device support access features discussed elsewhere herein.
  • processing is complete for this iteration of the flow diagram 400 .
  • If so, processing proceeds to a step 408 , where the I/O processing may proceed using the transport mode Fibre Channel connection processing (e.g., zHPF) in such manner as that discussed in connection with the flow diagram 300 , particularly noting that the processing may proceed with the large FBA device support access features discussed elsewhere herein.
  • processing is complete for this iteration of the flow diagram 400 .
  • Software implementations of the system described herein may include executable code that is stored in a computer-readable medium and executed by one or more processors.
  • the computer-readable medium may include volatile memory and/or non-volatile memory, and may include, for example, a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, an SD card, a flash drive or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer-readable medium or computer memory on which executable code may be stored and executed by a processor.
  • the system described herein may be used in connection with any appropriate operating system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Systems and procedures are provided to enable large size fixed block architecture (FBA) device support over FICON. The FBA devices may have a size greater than 2 terabytes. For example, in known storage systems, an FBA device size may be 64 terabytes and an architecture is provided for 512-terabyte devices; the described system supports such large, or even larger, FBA devices. The system may be used with existing fixed block command sets.

Description

TECHNICAL FIELD
This application relates to the field of computer systems and, more particularly, to device access and connections among computing systems.
BACKGROUND OF THE INVENTION
Host processor systems may store and retrieve data using a storage device containing a plurality of host interface units (I/O modules), disk drives, and disk interface units (disk adapters). The host systems access the storage device through a plurality of channels provided therewith. Host systems provide data and access control information through the channels to the storage device and the storage device provides data to the host systems also through the channels. The host systems do not address the disk drives of the storage device directly, but rather, access what appears to the host systems as a plurality of logical disk units. The logical disk units may or may not correspond to the actual disk drives. Allowing multiple host systems to access the single storage device unit allows the host systems to share data stored therein.
Mainframe computers are large scale computer system architectures that are used by large organizations for bulk data processing, such as financial transaction processing. Mainframe computers offer enhanced availability, scalability, reliability and security along with high volume data throughput, among other features. Input/Output (I/O) devices may be coupled to interact with mainframe computers that may include an I/O subsystem that communicates with the I/O devices over communication channels. The I/O subsystem controls data flow between I/O devices and main storage. The I/O subsystem may be coupled to the central processors of the main system and may communicate directly with the I/O devices. The I/O subsystem may communicate with the I/O devices using multiple types of interfaces, including, for example, communication channels such as Fibre channels. For example, IBM Corporation's System z® is a mainframe platform and computing environment that is widely used in the industry and that includes z/Architecture-based systems and zSeries mainframes. System z components may operate with IBM's z/OS® (operating system) and/or other zSeries-compatible operating systems.
Fixed block architecture (FBA) is a disk layout in which each addressable block on disk is of the same size. In an FBA system, data (logical volumes) are mapped over the fixed-size blocks and a disk drive stores the data in the blocks of fixed size. Specifically, the FBA architecture has two characteristics: each physical block is the same size and each physical block is individually addressable by a value called the logical block address (LBA). In some cases, an FBA device may be characterized by tracks and cylinders, in which the physical disk may contain multiple blocks per track, and the cylinder may be the group of tracks that exists under the disk heads at one point in time without performing a seek operation. For further discussion of storage control using FBA, and/or other formats, to provide high-speed direct access storage for general purpose data storage and system residence, reference is made, for example, to IBM Corporation, “IBM 3880 Storage Control: Models 1, 2, 3, 4,” Description Manual, Pub. No. GA26-1661-9, Tenth Ed., September 1987, 446 pp., which is incorporated herein by reference.
Hosts may address FBA devices over a number of channel connections, including Fibre channel connections. For example, in a conventional system, a host may include an operating system and a Fibre Channel connection portion, which includes hardware and/or software for facilitating a FICON data connection between the host and a conventional data storage device. FICON (Fibre Connection) operates on a Fibre Channel protocol from IBM Corporation and may be used in connection with implementing Fibre channel connections to provide high-speed connectivity between a channel and a device and allows multiple data exchanges in full duplex mode. FICON is compatible with z/Architecture computing systems in connection with I/O devices performing I/O processing therewith. For a discussion of features and implementations of FICON systems and suitable Fibre channel protocols operating therewith on z/Architecture computing systems, reference is made to J. Entwistle, “IBM System z10 FICON Express8 FCP Channel Performance Report,” Technical paper, Aug. 2009, 27 pp., which is incorporated herein by reference.
One known system that enables FICON device access of FBA storage devices is IBM Corporation's z/OS Distributed Data Backup (zDDB) system as implemented in IBM's DS8000® product, and reference is made, for example, to B. Dufrasne et al., “IBM System Storage DS8000: z/OS Distributed Data Backup,” IBM Corporation, Redpaper REDP-4701-00, Nov. 16, 2010, 16 pp., which is incorporated herein by reference. However, with this system, there is an architectural storage device size limit of two (2) terabytes (TB). Specifically, for a 512-byte block (9 bits) and a 4-byte LBA offset, only 2 TB (32 bits plus 9 bits=41 bits, or 2 TB) of a FICON-connected device may be accessed.
Accordingly, it would be desirable to provide improved software and hardware that eliminates or reduces the issues noted above, such as by enabling large FBA device support over FICON connections.
SUMMARY OF THE INVENTION
According to the system described herein, a method is presented for providing storage device access. The method includes using a fixed block command set of a fixed block architecture (FBA) to access at least one storage device using a FICON communication channel of a storage system, wherein the at least one storage device has a size larger than 2 terabytes. A controller of the communication channel is configured to access an extended field of information that provides accessibility to the at least one storage device, having the size larger than 2 terabytes, using the fixed block command set, such as the IBM 3880 fixed block command set described, for example, in Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” which is cited elsewhere herein. The storage system may include z/Architecture components. The storage device may have a size equal to or larger than 64 terabytes, and may further be sized between 512 terabytes and 144 petabytes. The fixed block command set may support 3880/3370-type operations. The controller and the storage device may operate in a virtualized environment.
According further to the system described herein, a non-transitory computer-readable medium stores software for providing storage device access. The software includes executable code that recognizes a fixed block command set of a fixed block architecture (FBA) to access a storage device using a FICON communication channel, wherein the storage device has a size larger than 2 terabytes. Executable code is provided that enables a controller of the communication channel to access an extended field of information that provides accessibility to the storage device, having the size larger than 2 terabytes, using the fixed block command set. The storage system may include z/Architecture components. The storage device may have a size equal to or larger than 64 terabytes, and may further be sized between 512 terabytes and 144 petabytes. The fixed block command set may support 3880/3370-type operations. The controller and the storage device may operate in a virtualized environment.
According further to the system described herein, a storage system is provided having a host having an operating system and a FICON controller, at least one storage device, and a FICON connection between the host and the at least one storage device. A non-transitory computer-readable medium stores software for providing storage device access. The software includes executable code that recognizes a fixed block command set of a fixed block architecture (FBA) to access a storage device using a communication channel, wherein the storage device has a size larger than 2 terabytes. Executable code is provided that enables a controller of the communication channel to access an extended field of information that provides accessibility to the storage device, having the size larger than 2 terabytes, using the fixed block command set. The storage system may include z/Architecture components. The storage device may have a size equal to or larger than 64 terabytes, and may further be sized between 512 terabytes and 144 petabytes. The fixed block command set may support 3880/3370-type operations. The controller and the storage device may operate in a virtualized environment.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the system are described with reference to the several figures of the drawings, noted as follows.
FIG. 1 is a schematic illustration of a storage system showing a relationship between a host and a storage device that may be used in connection with an embodiment of the system described herein.
FIG. 2 is a schematic diagram illustrating an embodiment of the storage device where each of a plurality of directors are coupled to the memory.
FIG. 3 is a schematic illustration showing a system with a FICON connection between a host and a data storage device that operates to provide large FBA device support according to an embodiment of the system described herein.
FIG. 4 is a schematic illustration showing a FICON connection between a host and a data storage device operating in a virtualized environment according to an embodiment of the system described herein.
FIG. 5 is a flow diagram showing processing for providing large size FBA device support over a FICON connection according to an embodiment of the system described herein.
FIG. 6 is a flow diagram showing data access processing involving transport mode protocol, such as zHPF, according to an embodiment of the system described herein.
DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS
FIG. 1 is a schematic illustration of a storage system 20 showing a relationship between a host 22 and a storage device 24 that may be used in connection with an embodiment of the system described herein. In an embodiment, the storage device 24 may be a Symmetrix storage system produced by EMC Corporation of Hopkinton, Mass.; however, the system described herein may operate with other appropriate types of storage devices. Also illustrated is another (remote) storage device 26 that may be similar to, or different from, the storage device 24 and may, in various embodiments, be coupled to the storage device 24, for example, via a network. The host 22 reads and writes data from and to the storage device 24 via an I/O module (IOM) 28, which facilitates the interface between the host 22 and the storage device 24. Although the diagram 20 only shows one host 22 and one IOM 28, it will be appreciated by one of ordinary skill in the art that multiple IOM's may be used and that one or more IOM's may have one or more hosts coupled thereto.
In an embodiment of the system described herein, in various operations and scenarios, data from the storage device 24 may be copied to the remote storage device 26 via a link 29. For example, the transfer of data may be part of a data mirroring or replication process, that causes the data on the remote storage device 26 to be identical to the data on the storage device 24. Although only the one link 29 is shown, it is possible to have additional links between the storage devices 24, 26 and to have links between one or both of the storage devices 24, 26 and other storage devices (not shown). The storage device 24 may include a first plurality of adapter units (RA's) 30 a, 30 b, 30 c. The RA's 30 a-30 c may be coupled to the link 29 and be similar to the I/O Module 28, but are used to transfer data between the storage devices 24, 26.
The storage device 24 may include one or more disks, each containing a different portion of the data stored on the storage device 24. FIG. 1 shows the storage device 24 including a plurality of disks 33 a, 33 b, 33 c. The storage device 24 (and/or remote storage device 26) may be provided as a stand-alone device coupled to the host 22 as shown in FIG. 1 or, alternatively, the storage device 24 (and/or remote storage device 26) may be part of a storage area network (SAN) that includes a plurality of other storage devices as well as routers, network connections, etc. The storage devices may be coupled to a SAN fabric and/or be part of a SAN fabric. The system described herein may be implemented using software, hardware, and/or a combination of software and hardware where software may be stored in a computer readable medium and executed by one or more processors.
Each of the disks 33 a-33 c may be coupled to a corresponding disk adapter unit (DA) 35 a, 35 b, 35 c that provides data to a corresponding one of the disks 33 a-33 c and receives data from a corresponding one of the disks 33 a-33 c. An internal data path exists between the DA's 35 a-35 c, the IOM 28 and the RA's 30 a-30 c of the storage device 24. Note that, in other embodiments, it is possible for more than one disk to be serviced by a DA and that it is possible for more than one DA to service a disk. The storage device 24 may also include a global memory 37 that may be used to facilitate data transferred between the DA's 35 a-35 c, the IOM 28 and the RA's 30 a-30 c. The memory 37 may contain tasks that are to be performed by one or more of the DA's 35 a-35 c, the IOM 28 and the RA's 30 a-30 c, and a cache for data fetched from one or more of the disks 33 a-33 c.
The storage space in the storage device 24 that corresponds to the disks 33 a-33 c may be subdivided into a plurality of volumes or logical devices. The logical devices may or may not correspond to the physical storage space of the disks 33 a-33 c. Thus, for example, the disk 33 a may contain a plurality of logical devices or, alternatively, a single logical device could span both of the disks 33 a, 33 b. Similarly, the storage space for the remote storage device 26 that comprises the disks 34 a-34 c may be subdivided into a plurality of volumes or logical devices, where each of the logical devices may or may not correspond to one or more of the disks 34 a-34 c.
FIG. 2 is a schematic diagram 40 illustrating an embodiment of the storage device 24 where each of a plurality of directors 42 a-42 n are coupled to the memory 37. Each of the directors 42 a-42 n represents at least one of the IOM 28, RAs 30 a-30 c, or DAs 35 a-35 c. The diagram 40 also shows an optional communication module (CM) 44 that provides an alternative communication path between the directors 42 a-42 n. Each of the directors 42 a-42 n may be coupled to the CM 44 so that any one of the directors 42 a-42 n may send a message and/or data to any other one of the directors 42 a-42 n without needing to go through the memory 37. The CM 44 may be implemented using conventional MUX/router technology where a sending one of the directors 42 a-42 n provides an appropriate address to cause a message and/or data to be received by an intended receiving one of the directors 42 a-42 n. Some or all of the functionality of the CM 44 may be implemented using one or more of the directors 42 a-42 n so that, for example, the directors 42 a-42 n may be interconnected directly with the interconnection functionality being provided on each of the directors 42 a-42 n. In addition, a sending one of the directors 42 a-42 n may be able to broadcast a message to all of the other directors 42 a-42 n at the same time.
In some embodiments, one or more of the directors 42 a-42 n may have multiple processor systems thereon and thus may be able to perform functions for multiple directors. In some embodiments, at least one of the directors 42 a-42 n having multiple processor systems thereon may simultaneously perform the functions of at least two different types of directors (e.g., an IOM and a DA). Furthermore, in some embodiments, at least one of the directors 42 a-42 n having multiple processor systems thereon may simultaneously perform the functions of at least one type of director and perform other processing with the other processing system. In addition, all or at least part of the global memory 37 may be provided on one or more of the directors 42 a-42 n and shared with other ones of the directors 42 a-42 n. In an embodiment, the features discussed in connection with the storage device 24 may be provided as one or more director boards having CPUs, memory (e.g., DRAM, etc.) and interfaces with Input/Output (I/O) modules.
According to the system described herein, systems and procedures are provided to enable large FBA device support over FICON connections, in particular for FBA devices having a size greater than 2 TB. For example, in known volume group (VG) storage systems, the FBA device size may be 64 TB and the architecture provides for 512 TB. The system described herein is principally discussed in connection with FBA block sizes of 512 bytes, which is a standardized value. As discussed herein, under known systems, 2 TB is the maximum device size supported with a 32-bit LBA and a 512-byte block size. In various embodiments, the system described herein may also be appropriately used in connection with block sizes other than 512 bytes, such as 4096-byte block sizes, and such modification would mean different corresponding device sizes and/or other corresponding values from those presented herein.
For a direct access FBA storage device using 3370 operation, a Read Device Characteristic (RDC) command (command 64) transfers device characteristic information from the storage director to the channel (e.g. up to 32 bytes of information may be transferred). The commands discussed herein include commands from the IBM 3880 fixed block command set described, for example, in Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” which is cited elsewhere herein.
The RDC command returns the device block count as a 32-bit value in the max_prime_blocks field located at offset 14 (decimal) in the output. Thirty-two (32) bits are not enough to report the one trillion 512-byte blocks that a 512 TB device supports. In an embodiment, the system described herein provides for logically expanding the max_prime_blocks field to an extended field that will hold additional bits of information, such as a 48-bit field that has at least 16 additional bits of information. There are at least 16 unused bits immediately following the max_prime_blocks field at offset 18, which, under current systems, are set to zero (see TABLE 1). The additional 16-bit field, which may be referred to herein as max_prime_blocks_ext, may be used to hold additional bits of device information (see TABLE 2) that will enable direct access to a device larger than 2 TB. The host application can easily test this field to determine if this is a large device; a non-zero field value indicates that the device is larger than 2 TB. This approach obviates the need for an explicit “large device” flag and allows existing software to run on devices smaller than 2 TB. The number of blocks the device supports (UINT64 device_size, where UINT64 represents a 64-bit unsigned integer value) may be calculated as (EQ. 1):
UINT64 device_size=((UINT64)max_prime_blocks_ext<<32)|max_prime_blocks  (EQ. 1)
In another embodiment, another option is to redefine the max_prime_blocks field as a true 64-bit value, which would require existing software to be modified to support a 64-bit value. For example, application test suites may provide testing of software and/or hardware systems in connection with mainframe computing platforms and may be used and/or otherwise operated in connection with the system described herein. Such test suites may require modification under this option for the extended device access. One such test suite is the STCMON test suite provided by EMC Corporation of Hopkinton, Mass.
By way of example, TABLE 1 shows known device characteristics formatting in connection with an RDC command of a fixed block command set for a direct access storage device using 3370 operation (see, e.g., Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” cited elsewhere herein).
TABLE 1
RDC command - known device characteristics format
Byte Bits Description
0 Operation modes
0 Reserved
1 Overrunable
2 On is burst mode and off is byte mode
3 Data chaining allowed
4-7 Reserved
1 Features
0 Reserved
1 Removable device
2 Shared device
3 Reserved
4 Movable access mechanism
5-7 Reserved
2 Device class
3 Unit type
4, 5 Physical record size
6-9 Number of blocks per cyclical group
10-13 Number of blocks per access position
14-17 Number of blocks under movable access mechanism
18-23 Reserved - all zeros
24, 25 Number of blocks in the CE area
26-31 Reserved - all zeroes
According to an embodiment of the system described herein, TABLE 2 shows device characteristics formatting in connection with an RDC command of a fixed block command set for a direct access storage device using 3370 operation. The system provides for supporting large size FBA device access, such as an FBA device that may, for example, have a size larger than 2 TB, such as 64 TB-512 TB, and provides further expansion to even larger size FBA devices, such as FBA devices having a size of 144 petabytes (PB).
TABLE 2
RDC command - Large size FBA device support characteristics format embodiment
Byte Bits Description
0 Operation modes
0 Reserved
1 Overrunable
2 On is burst mode and off is byte mode
3 Data chaining allowed
4-7 Reserved
1 Features
0 Reserved
1 Removable device
2 Shared device
3 Reserved
4 Movable access mechanism
5-7 Reserved
2 Device class
3 Unit type
4, 5 Physical record size
6-9 Number of blocks per cyclical group
10-13 Number of blocks per access position
14-17 Number of blocks under movable access mechanism
18-19 USED FOR EXTENDED DEVICE ACCESS
20-23 POTENTIALLY USED FOR FURTHER EXPANSION
24, 25 Number of blocks in the CE area
26-31 Reserved - all zeros
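The extended-field check implied by TABLE 2 can be sketched as follows (the function name, the returned field names, and the treatment of bytes 18-19 as a big-endian 16-bit value are illustrative assumptions, not part of the command-set specification):

```python
import struct

def parse_rdc_characteristics(data: bytes) -> dict:
    """Parse a 32-byte RDC device characteristics record per TABLE 2 (sketch).

    Bytes 18-19 carry the extended-device-access field; a non-zero value
    indicates a large size (greater than 2 TB) FBA device.
    """
    if len(data) != 32:
        raise ValueError("RDC characteristics data must be 32 bytes")
    extended = struct.unpack_from(">H", data, 18)[0]  # bytes 18-19
    return {
        "physical_record_size": struct.unpack_from(">H", data, 4)[0],        # bytes 4, 5
        "blocks_per_cyclical_group": struct.unpack_from(">I", data, 6)[0],   # bytes 6-9
        "blocks_per_access_position": struct.unpack_from(">I", data, 10)[0], # bytes 10-13
        "blocks_under_access_mechanism": struct.unpack_from(">I", data, 14)[0],  # bytes 14-17
        "extended_device_access": extended,
        "is_large_device": extended != 0,
    }

# Example: a characteristics record with a non-zero extended field.
sample = bytearray(32)
struct.pack_into(">H", sample, 18, 0x0001)
info = parse_rdc_characteristics(bytes(sample))
```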
Additionally, for a direct access FBA storage device using 3370 operation, a Define Extent (DEX) command (command 63) transfers parameters from the channel to the storage director, the parameters defining the size and location of a data extent (e.g., up to 16 bytes of parameters may be transferred). The DEX command has three 32-bit LBA fields that are used to define the boundary of the blocks to be accessed. The extent_ph_bn field defines the start of the extent as a relative block number from the beginning of the device. As discussed herein, a 32-bit LBA value does not adequately address a device that is larger than 2 TB. The DEX command currently has a reserved byte at offset 1 (see TABLE 3; see, e.g., Chapter 3: Fixed Block Command Set of "IBM 3880 Storage Control: Models 1, 2, 3, 4," cited elsewhere herein). The command set defines this parameter to be zero.
By way of example, TABLE 3 shows known device characteristics formatting in connection with a DEX command of a fixed block command set for a direct access storage device using 3370 operation (see, e.g., Chapter 3: Fixed Block Command Set of “IBM 3880 Storage Control: Models 1, 2, 3, 4,” cited elsewhere herein):
TABLE 3
DEX command - known parameters format
Byte Description
0 Mask byte
1 Must be zero
2, 3 Block size
4-7 Offset to first block of extent
8-11 Relative displacement, in data set, to the first block of the extent
12-15 Relative displacement, in data set, to the last block of the extent
According to an embodiment of the system described herein, TABLE 4 shows parameter formatting in connection with a DEX command of a fixed block command set for a direct access storage device using 3370 operation that enables large size FBA device support. Byte 1 may be used to contain an additional eight bits of LBA addressing information; this field is logically the upper eight bits of a 40-bit block number. It is noted that byte 1 is appended only to bytes 4-7 (offset to first block of extent); the other two fields (bytes 8-11 and 12-15) are relative to that offset. This scheme provides addressability for a 512 TB device (assuming 512-byte blocks).
TABLE 4
DEX command - Large size FBA device support parameters format embodiment
Byte Description
0 Mask byte
1 (8 bits) USED FOR EXTENDED DEVICE ACCESS
2, 3 Block size
4-7 Offset to first block of extent
8-11 Relative displacement, in data set, to the first block
of the extent
12-15 Relative displacement, in data set, to the last block
of the extent
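The 40-bit addressing of TABLE 4 can be sketched as follows (the function name is hypothetical). Byte 1 supplies the upper eight bits of the starting block number and bytes 4-7 supply the lower 32 bits, while the relative displacements in bytes 8-11 and 12-15 remain 32-bit values relative to that offset:

```python
def extent_start_block(dex_params: bytes) -> int:
    """Compute the 40-bit starting block of a data extent per TABLE 4 (sketch).

    Byte 1 (formerly reserved) holds the upper 8 bits of the block number;
    bytes 4-7 hold the lower 32 bits.
    """
    if len(dex_params) != 16:
        raise ValueError("DEX parameter data must be 16 bytes")
    upper8 = dex_params[1]
    lower32 = int.from_bytes(dex_params[4:8], "big")
    return (upper8 << 32) | lower32

# Example: extended byte 0x01 with a zero 32-bit offset addresses block
# 2**32, i.e. just past the 2 TB boundary for 512-byte blocks.
params = bytearray(16)
params[1] = 0x01
start = extent_start_block(bytes(params))
```

At 512-byte blocks, the full 40-bit range (up to block 2^40 − 1) corresponds to the 512 TB addressability noted above.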
It is noted that, according to the system described herein, in various embodiments, FBA-specific code may be included that accepts count key data (CKD) Define Extent information when used to read a Shared-with-VM FBA device. In such a case, where bytes may have different attributes in such devices (e.g., Extended CKD (ECKD) systems use byte 1 for a global attributes field), the code implementation according to the system described herein may be appropriately adjusted to cause the system to behave properly where necessary.
Although the discussion is presented principally in connection with 3370 operation and corresponding command sets, it is noted that the system described herein may be suitably used with other appropriate systems and command sets.
FIG. 3 is a schematic illustration showing a system 100 with a FICON connection controller 130 for providing FICON connection between a host 110 and a data storage device 150 and that may operate to provide large size FBA device support according to an embodiment of the system described herein. In an embodiment, the host 110 may be a computer running Linux or some other appropriate operating system 120. In various embodiments, the host 110 may have or be coupled with a Peripheral Component Interconnect (PCI) layer 125 that may provide an interconnection for I/O operations. The I/O processing on the host 110 may operate with the FICON connection controller 130 to enable I/O operations with the data storage device 150. The FICON connection controller 130 may send and receive data to and from the data storage device 150 using a remote connection mechanism 140 that may include a network (such as the Internet, with appropriate connections thereto) and that may, in some circumstances, operate in a way that is transparent to (not detectable by) the PCI layer 125. The data storage device 150 may include physical storage volumes and/or logical volumes, such as EMC Corporation's Symmetrix data storage facility. The FICON connection controller 130 may act as an I/O subsystem providing FICON communication capability for the system according to control and formatting features discussed elsewhere herein (see, e.g., TABLES 2 and 4), specifically to enable large size (greater than 2 TB) FBA device access. The data storage device 150 may include features and/or components enabling Fibre Channel communication with the host 110.
It is noted that various components of the system 100 may be emulated. For example, at least a portion of the FICON connection controller 130 may emulate the Fibre Channel FC0 physical layer so that the PCI layer 125 sends and receives data as if the PCI layer 125 were connected to a physical Fibre Channel connector. For further discussion of emulation of I/O computing components, particularly Fibre Channel connection components, reference is made to U.S. patent application Ser. No. 14/133,852 to Jones et al., filed Dec. 19, 2013, entitled "Virtual I/O Hardware" and U.S. patent application Ser. No. 12/215,984 to LeCrone et al., filed Jun. 8, 2008, entitled "I/O Fault Injection Using Simulated Computing Environments," which are both incorporated herein by reference. Accordingly, in various embodiments, the system described herein provides for use of a channel emulator to emulate data transfer paths in I/O operations; the channel emulator may simulate a host channel to provide I/O connectivity with an I/O device and may provide the I/O connectivity using different channel protocols.
The connection mechanism 140 may include an Internet connection and/or possibly some other types of connection(s). In an embodiment herein, the connection mechanism 140 may be directly incompatible with a FICON connection. The incompatibility may be hardware incompatibility, software incompatibility, or both. Such a connection mechanism 140 may not support a direct FICON connection but, instead, may rely on a FICON emulator (and/or other emulator(s)) to provide data in an appropriate format. It is further noted that where FICON emulation is being performed, the data storage device 150 may include or be coupled to a FICON emulator portion that may send and receive data to and from the connection mechanism 140 and also emulate a Fibre Channel FC0 physical layer for the benefit of the data storage device 150. Thus, in such a case involving emulation, both the host 110 and the data storage device 150 may operate as if the devices 110, 150 were communicating using a FICON hardware connection.
FIG. 4 is a schematic illustration showing an alternative embodiment of a system 200 in which a host 210, having an OS 220, and a FICON connection controller 230 providing extended FBA device access are provided in a virtual environment 202. The virtual environment 202 may be provided using products from VMware or similar products or systems. The host 210 and the FICON connection controller 230 may be virtualized instances of a host and a FICON controller, respectively, running in the virtual environment 202. The FICON connection controller 230 may thereby include a FICON emulator, as further discussed elsewhere herein.
Similarly, a data storage device 250 may also be provided in a virtual environment 204 that may be implemented using products provided by VMware or similar products or systems. The data storage device 250 may have a FICON emulator 255, as further discussed elsewhere herein. The data storage device 250 and the FICON emulator 255 may be virtualized instances of a data storage device and a FICON emulator, respectively, running in the virtual environment 204. Note that the host 210 and the FICON connection controller 230 of the virtual environment 202 may be coupled to an actual data storage device, like the data storage device 150 described elsewhere herein. Similarly, the data storage device 250 and the FICON emulator 255 of the virtual environment 204 may be coupled to an actual host, like the host 110 described elsewhere herein. It is noted that use of the FICON emulators in the virtual environments 202, 204 may eliminate a need to provide virtual FICON connectivity. Instead, the host 210 and/or the data storage device 250 may use virtual Ethernet connectivity, which is more likely to be generally available in a virtual environment. The virtual environments 202, 204 may be connected by the connection mechanism 140 discussed elsewhere herein.
FIG. 5 is a flow diagram 300 showing processing for providing large size FBA device support over a FICON connection according to an embodiment of the system described herein. At a step 302, a system is configured for providing access to one or more large size direct access FBA devices, such as FBA devices larger than 2 TB, by a host (e.g., the host 110 and data storage device 150 discussed elsewhere herein). The configuration may be provided to enable use of existing FBA command sets, such as the existing fixed block command sets for 3370 operation of the FBA device(s). The configuration may include modifications of the controller and/or modifications in connection with the storage device to configure one or more directors of the storage device to provide output to the controller, in response to requests, that is formatted to support large size FBA device access according to the features discussed herein. After the step 302, at a step 304, an access command in connection with an FBA device is processed by the system for access over a FICON connection to the FBA device using the controller. After the step 304, at a step 306, information resulting from processing the command is received. After the step 306, at a test step 308, it is determined whether the FBA device for which access is being requested is larger than 2 TB. In an embodiment, a device larger than 2 TB may be identified by determining whether an extended field of the output from the FBA device is non-zero.
If, at the test step 308, it is determined that the FBA device has a size equal to or smaller than 2 TB, then processing proceeds to a step 310 where access processing without using the extended fields, discussed elsewhere herein, is performed (e.g., I/O read/write requests are performed on the small size FBA device). After the step 310, processing is complete for this iteration of the flow diagram 300. If, at the test step 308, it is determined that the FBA device has a size greater than 2 TB, then processing proceeds to a step 312, where access processing proceeds using extended fields, as further discussed in detail herein (e.g., I/O read/write requests are performed on the large size FBA device). After the step 312, processing is complete for this iteration of the flow diagram 300.
According further to the system described herein, the transport mode protocol for high performance Fibre Channel connections may be used to enhance and extend access to large FBA devices. The transport mode protocol is an extension to the Fibre Channel Single-Byte (FC-SB) command set specification supported by the American National Standards Institute (ANSI) (see, e.g., FC-SB-4 and/or FC-SB-5). Transport mode complements and extends the legacy command mode protocol, which is also defined in the FC-SB specification.
In an embodiment, the system described herein may be used with IBM's z High Performance FICON (zHPF) transport mode protocol implementation. zHPF enhances the z/Architecture and FICON interface architecture to improve data transfer processing. In z/OS, standard FICON architecture operates with the command mode protocol, and the zHPF architecture operates with the transport mode protocol.
zHPF provides a Transport Control Word (TCW) that facilitates the processing of an I/O request by the channel and the controller. The TCW enables multiple channel commands to be sent to the controller as a single entity (instead of being sent as separate commands, as with FICON channel command words (CCWs)). The channel no longer has to process and keep track of each individual CCW; instead, the channel forwards a chain of commands to the controller for execution. zHPF-capable channels may support both FICON and zHPF protocols simultaneously. For a more detailed discussion of zHPF, reference is made, for example, to C. Cronin, "IBM System z10 I/O and High Performance FICON for System z Channel Performance," Technical paper, IBM Corporation, Jan. 28, 2009, 33 pp., which is incorporated herein by reference.
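The practical effect of the TCW on channel traffic can be modeled with a toy sketch (the function and command names are illustrative and this is not the actual channel-program interface): command mode costs one exchange per CCW, while transport mode forwards the whole chain as a single entity.

```python
from typing import List

def exchanges_required(chain: List[str], transport_mode: bool) -> int:
    """Toy model of channel/controller interactions for a command chain.

    Command mode: the channel processes and tracks each CCW individually,
    so an n-command chain costs n exchanges. Transport mode (zHPF): the
    chain is forwarded to the controller as a single entity under one TCW.
    """
    return 1 if transport_mode else len(chain)

# A hypothetical four-command chain for a read operation.
chain = ["define-extent", "locate", "read", "read"]
```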
FIG. 6 is a flow diagram 400 showing data access processing involving the transport mode protocol, such as zHPF, according to an embodiment of the system described herein. At a step 402, an I/O access request is initiated for I/O processing between a host and a data storage device (e.g., the host 110 and data storage device 150 discussed elsewhere herein). After the step 402, at a test step 404, it is determined whether the transport mode protocol (e.g., zHPF) is activated and/or otherwise operational for the system. If not, then processing proceeds to a step 406, where the I/O processing may proceed using non-transport mode Fibre Channel connection processing, in a manner like that discussed in connection with the flow diagram 300, particularly noting that the processing may proceed with the large FBA device support access features discussed elsewhere herein. After the step 406, processing is complete for this iteration of the flow diagram 400.
If, at the test step 404, it is determined that the transport mode protocol (e.g., zHPF) is activated and/or otherwise operational for the system, then processing proceeds to a step 408, where the I/O processing may proceed using transport mode Fibre Channel connection processing (e.g., zHPF), in a manner like that discussed in connection with the flow diagram 300, particularly noting that the processing may proceed with the large FBA device support access features discussed elsewhere herein. After the step 408, processing is complete for this iteration of the flow diagram 400.
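The decisions of the flow diagrams 300 and 400 compose independently: the protocol choice (command mode vs. transport mode) is orthogonal to the addressing choice (standard vs. extended fields). A sketch with hypothetical names:

```python
def select_access_path(transport_mode_enabled: bool, extended_field: int) -> tuple:
    """Select the I/O path per the flow diagrams 300 and 400 (sketch).

    A non-zero extended field in the device output indicates an FBA device
    larger than 2 TB (test step 308); the transport mode check corresponds
    to test step 404.
    """
    protocol = "transport" if transport_mode_enabled else "command"
    addressing = "extended" if extended_field != 0 else "standard"
    return (protocol, addressing)
```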
Various embodiments discussed herein may be combined with each other in appropriate combinations in connection with the system described herein. Additionally, in some instances, the order of steps in the flow diagrams, flowcharts and/or described flow processing may be modified, where appropriate. Further, various aspects of the system described herein may be implemented using software, hardware, a combination of software and hardware and/or other computer-implemented modules or devices having the described features and performing the described functions. The system may further include a display and/or other computer components for providing a suitable interface with a user and/or with other computers.
Software implementations of the system described herein may include executable code that is stored in a computer-readable medium and executed by one or more processors. The computer-readable medium may include volatile memory and/or non-volatile memory, and may include, for example, a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, an SD card, a flash drive or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer-readable medium or computer memory on which executable code may be stored and executed by a processor. The system described herein may be used in connection with any appropriate operating system.
Other embodiments of the invention will be apparent to those skilled in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.

Claims (18)

What is claimed is:
1. A method for providing storage device access, comprising:
using a fixed block command set of a fixed block architecture (FBA) to access at least one storage device using a FICON communication channel of a storage system, wherein the at least one storage device has a size larger than 2 terabytes;
using a reserved field of a define extent command to enable direct access to more than 2 TB of the storage device, wherein the reserved field is used to increase an offset field of a first block of a data extent of the at least one storage device by an amount corresponding to two raised to a power of a number of bits in the reserved field;
configuring a controller of the communication channel to access the reserved field that provides accessibility to the at least one storage device, having the size larger than 2 terabytes, by sending the define extent command to the controller;
determining that the at least one storage device has a size larger than 2 terabytes by determining whether the reserved field has a non-zero value;
determining whether a transport mode protocol is enabled for the storage system; and
processing access commands for the at least one storage device based on the determination of whether the reserved field has a non-zero value and whether the transport mode protocol is enabled for the storage system,
wherein at least a portion of the controller emulates a physical layer of a Fibre Channel protocol for a PCI layer.
2. The method according to claim 1, wherein the storage system includes z/Architecture components.
3. The method according to claim 1, wherein the at least one storage device has a size equal to or larger than 64 terabytes.
4. The method according to claim 3, wherein the at least one storage device has a size of between 512 terabytes and 144 petabytes.
5. The method according to claim 1, wherein the fixed block command set provides commands for 3880/3370 operations.
6. The method according to claim 1, wherein the controller and the at least one storage device operate in a virtualized environment.
7. A non-transitory computer-readable medium storing software for providing storage device access, the software comprising:
executable code that recognizes a fixed block command set of a fixed block architecture (FBA) to access a storage device using a FICON communication channel of a storage system, wherein the storage device has a size larger than 2 terabytes;
executable code that uses a define extent command to enable direct access to more than 2 TB of the storage device, wherein a reserved field is used to increase an offset field of a first block of a data extent of the at least one storage device by an amount corresponding to two raised to a power of a number of bits in the reserved field;
executable code that enables a controller of the communication channel to access the reserved field that provides accessibility to the storage device, having the size larger than 2 terabytes, by sending the define extent command to the controller;
executable code that determines that the at least one storage device has a size larger than 2 terabytes by determining whether the reserved field has a non-zero value;
executable code that determines whether a transport mode protocol is enabled for the storage system;
executable code that processes access commands for the at least one storage device based on the determination of whether the reserved field has a non-zero value and whether the transport mode protocol is enabled for the storage system; and
executable code that enables at least a portion of the controller to emulate a physical layer of a Fibre Channel protocol for a PCI layer.
8. The non-transitory computer-readable medium according to claim 7, wherein the storage system includes z/Architecture components.
9. The non-transitory computer-readable medium according to claim 7, wherein the storage device has a size equal to or larger than 64 terabytes.
10. The non-transitory computer-readable medium according to claim 9, wherein the storage device has a size of between 512 terabytes and 144 petabytes.
11. The non-transitory computer-readable medium according to claim 7, wherein the fixed block command set provides commands for 3880/3370 operations.
12. The non-transitory computer-readable medium according to claim 7, wherein the controller and the storage device operate in a virtualized environment.
13. A storage system, comprising:
a host having an operating system and a FICON controller;
at least one storage device;
a FICON connection between the host and the at least one storage device; and
a non-transitory computer-readable medium storing software for providing storage device access, the software comprising:
executable code that recognizes a fixed block command set of a fixed block architecture (FBA) to access the at least one storage device using a communication channel, wherein the storage device has a size larger than 2 terabytes;
executable code that uses a define extent command to enable direct access to more than 2 TB of the storage device, wherein a reserved field is used to increase an offset field of a first block of a data extent of the at least one storage device by an amount corresponding to two raised to a power of a number of bits in the reserved field;
executable code that enables the FICON controller to access the reserved field that provides accessibility to the storage device, having the size larger than 2 terabytes, by sending the define extent command to the controller;
executable code that determines that the at least one storage device has a size larger than 2 terabytes by determining whether the reserved field has a non-zero value;
executable code that determines whether a transport mode protocol is enabled for the storage system;
executable code that processes access commands for the at least one storage device based on the determination of whether the reserved field has a non-zero value and whether the transport mode protocol is enabled for the storage system; and
executable code that enables at least a portion of the controller to emulate a physical layer of a Fibre Channel protocol for a PCI layer.
14. The system according to claim 13, wherein the storage system includes z/Architecture components.
15. The system according to claim 13, wherein the at least one storage device has a size equal to or larger than 64 terabytes.
16. The system according to claim 15, wherein the at least one storage device has a size of between 512 terabytes and 144 petabytes.
17. The system according to claim 13, wherein the fixed block command set provides commands for 3880/3370 operations.
18. The system according to claim 13, wherein the controller and the storage device operate in a virtualized environment.
US14/228,945 2014-03-28 2014-03-28 Large size fixed block architecture device support over FICON channel connections Active 2035-05-13 US10466918B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/228,945 US10466918B1 (en) 2014-03-28 2014-03-28 Large size fixed block architecture device support over FICON channel connections

Publications (1)

Publication Number Publication Date
US10466918B1 true US10466918B1 (en) 2019-11-05

Family

ID=68391846


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3976976A (en) * 1975-04-04 1976-08-24 The United States Of America As Represented By The Secretary Of The Navy Method and means to access and extended memory unit
US5579503A (en) * 1993-11-16 1996-11-26 Mitsubishi Electric Information Technology Direct cache coupled network interface for low latency
US20040068635A1 (en) * 2002-10-03 2004-04-08 International Business Machines Corporation Universal disk format volumes with variable size
US20050102682A1 (en) * 2003-11-12 2005-05-12 Intel Corporation Method, system, and program for interfacing with a network adaptor supporting a plurality of devices
US7702762B1 (en) * 2000-10-11 2010-04-20 International Business Machines Corporation System for host-to-host connectivity using ficon protocol over a storage area network
US20110153715A1 (en) * 2009-12-17 2011-06-23 Microsoft Corporation Lightweight service migration
US20120265967A1 (en) * 2009-08-04 2012-10-18 International Business Machines Corporation Implementing instruction set architectures with non-contiguous register file specifiers

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
B. Dufrasne et al., "IBM System Storage DS8000: z/OS Distributed Data Backup," IBM Corporation, Redpaper REDP-4701-00, Nov. 16, 2010, 16 pp.
C. Cronin, "IBM System z10 I/O and High Performance FICON for System z Channel Performance," IBM Corporation, Technical paper, Jan. 28, 2009, 33 pp.
IBM Corporation, "IBM 3880 Storage Control: Models 1, 2, 3, 4," Description Manual, Pub. No. GA26-1661-9, Tenth Ed., Sep. 1987, 446 pp.
J. Entwistle, "IBM System z10 FICON Express8 FCP Channel Performance Report," IBM Corporation, Technical paper, Aug. 2009, 27 pp.
Rogers, Paul. z/OS Version 1, Release 11 Implementation. IBM International Technical Support Organization. "High Performance FICON for z". p. 425-440. 2010. (Year: 2010). *
Schmid, Patrick and Roos, Achim. "Hitachi's 4 TB Hard Drives Take on the 3 TB Competition". Tom's Hardware. Published Apr. 24, 2012. <http://www.tomshardware.com/print/4tb-3tb-hdd,reviews-3183.html>. *
U.S. Appl. No. 12/215,984, filed Jun. 8, 2008, LeCrone et al.
U.S. Appl. No. 14/133,852, filed Dec. 19, 2013, Jones et al.
Waldspurger, Carl and Rosenblum, Mendel. "I/O Virtualization". ACM. Published Jan. 2012. *


Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4