
CN118689819A - A data processing method, system and computing device based on complex programmable logic device CPLD - Google Patents


Info

Publication number
CN118689819A
CN118689819A
Authority
CN
China
Prior art keywords
hard disk
data
ubm
controller
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410751588.8A
Other languages
Chinese (zh)
Inventor
张君望
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
XFusion Digital Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XFusion Digital Technologies Co Ltd filed Critical XFusion Digital Technologies Co Ltd
Priority to CN202410751588.8A priority Critical patent/CN118689819A/en
Publication of CN118689819A publication Critical patent/CN118689819A/en
Priority to PCT/CN2025/082724 priority patent/WO2025256210A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/382 Information transfer, e.g. on bus using universal interface adapter
    • G06F 13/387 Information transfer, e.g. on bus using universal interface adapter for adaptation of different data processing systems to different peripheral devices, e.g. protocol converters for incompatible systems, open system
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F 13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0028 Serial attached SCSI [SAS]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0044 Versatile modular eurobus [VME]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a data processing method, system and computing device based on a complex programmable logic device (CPLD). The CPLD is disposed on a hard disk backplane of a server, and the hard disk backplane supports the universal backplane management (UBM) protocol; the hard disk backplane includes an HFC connector for connecting to the RAID controller and a DFC connector for connecting to the hard disk slot, and the CPLD is connected to the HFC connector and the DFC connector respectively. When a hard disk is inserted into a hard disk slot, the interface type of the hard disk is determined from the signal levels transmitted by the DFC; the interaction data between the RAID controller and the hard disk is processed according to the hard disk interface type, where the interaction data includes data indicating the hard disk interface type, data indicating the hard disk state, and read/write operation command data; and the processed data is transmitted to the RAID controller or the hard disk to manage the hard disk, where the hard disk includes hard disks of the SAS/SATA/NVMe interface types. In this way, a new logic function is added to the existing logic chip of the hard disk backplane, implementing the functionality of the UBM protocol.

Description

Data processing method, system and computing device based on complex programmable logic device CPLD
Technical Field
The invention relates to the technical field of hard disk backplanes, and in particular to a data processing method, system and computing device based on a complex programmable logic device (CPLD).
Background
As market demands on server data read/write rates and data reliability increase, technical solutions have emerged for managing NVMe hard disks through RAID controllers. These solutions rely on a hard disk backplane that conforms to the UBM protocol: the universal backplane management (Universal Backplane Management, abbreviated as UBM) specification provides a framework by which a backplane management host determines the characteristics of a hard disk backplane connected to SAS/SATA/NVMe hard disks and gains access to drive slot information and controls.
In the related art, the UBM protocol is implemented by a dedicated FPGA chip, or by a dedicated FPGA chip plus a general-purpose input/output (GPIO) expansion chip. However, such dedicated FPGA chips offer no advantage in either demand or cost, carry a single-source supply risk, and are expensive. In addition, dedicated FPGA chips are poorly extensible: a server manufacturer cannot implement differentiated functionality by modifying source code. There is therefore a need for a system and method that implements the UBM protocol on a hard disk backplane at low cost and with a controllable supply chain.
Disclosure of Invention
Aiming at the problems existing in the prior art, the embodiment of the application provides a data processing method, a system and a computing device based on a complex programmable logic device CPLD.
In a first aspect, an embodiment of the present application provides a data processing method based on a complex programmable logic device (CPLD), where the CPLD is disposed on a hard disk backplane of a server, and the hard disk backplane supports the universal backplane management (UBM) protocol; the hard disk backplane includes an HFC connector for connecting to the RAID controller and a DFC connector for connecting to the hard disk slot, and the CPLD is connected to the HFC connector and the DFC connector respectively; the method includes: when a hard disk is inserted into a hard disk slot, determining the interface type of the hard disk according to the signal level transmitted by the DFC; processing interaction data between the RAID controller and the hard disk according to the hard disk interface type, where the interaction data includes data indicating the hard disk interface type, data indicating the hard disk state, and read/write operation command data; and transmitting the processed data to the RAID controller or the hard disk to manage the hard disk, where the hard disk includes hard disks of the SAS/SATA/NVMe interface types.
In this embodiment, a protocol processing module is disposed in a CPLD that is typically already present on the hard disk backplane of a server, so that SAS/SATA/NVMe hard disks can be managed by a RAID controller within the backplane management framework provided by the UBM specification. This reduces dependence on chips such as dedicated FPGAs, requires no increase in CPLD size, and preserves the compact design of the hard disk backplane. The backplane can adapt more flexibly to hard disks of different interface types, improving system integration. In addition, emulating the UBM protocol with the CPLD avoids the cost of purchasing and integrating additional chips such as dedicated FPGAs, reducing hardware cost and enhancing the universality and extensibility of the backplane. The programmable nature of the CPLD allows the backplane to support a new hard disk interface protocol without replacing hardware: for example, when a new hard disk interface protocol appears, it can be supported by updating the logic code in the CPLD through a software update.
In some possible examples, the CPLD includes a SAS/SATA protocol processing module and a UBM protocol processing module; processing the interaction data between the RAID controller and the hard disk according to the hard disk interface type includes: when the hard disk interface type is SAS/SATA, processing the interaction data between the RAID controller and a first hard disk using the SAS/SATA protocol processing module, where the first hard disk includes a hard disk of the SAS/SATA interface type; when the hard disk interface type is NVMe, processing the interaction data between the RAID controller and a second hard disk using the UBM protocol processing module, where the second hard disk includes a hard disk of the NVMe interface type; and the UBM protocol processing module processes the interaction data according to the UBM protocol.
In this embodiment, within the protocol processing module of the CPLD, SAS/SATA hard disks and NVMe hard disks are handled by the SAS/SATA protocol processing module and the UBM protocol processing module respectively. The UBM protocol processing module is implemented by programming and processes the interaction data between the RAID controller and the NVMe hard disks according to the UBM protocol, ensuring the accuracy and consistency of data transmission and state management.
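The dispatch between the two protocol processing modules can be sketched as follows; the handler callables and the interface-type strings are hypothetical stand-ins for the CPLD's modules, not the patent's actual implementation:

```python
def process_interaction(if_type, data, sas_sata_handler, ubm_handler):
    """Route interaction data to the matching protocol processing module.

    `sas_sata_handler` and `ubm_handler` are plain callables standing in
    for the CPLD's two protocol processing modules (an assumption of this
    sketch), each taking the raw interaction data and returning a result.
    """
    if if_type in ("sas", "sata", "sas/sata"):
        return sas_sata_handler(data)   # first hard disk: SAS/SATA interface
    if if_type == "nvme":
        return ubm_handler(data)        # second hard disk: NVMe, via UBM
    raise ValueError(f"unsupported hard disk interface type: {if_type!r}")
```

In hardware this branch would be driven by the interface type decoded from the DFC signal levels at insertion time.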
In some possible examples, the UBM protocol processing module includes a plurality of UBM controllers, the RAID controller is connected to at least one UBM controller through the HFC connector, and each UBM controller is connected to at least one DFC connector; processing the interaction data from the second hard disk using the UBM protocol processing module includes: a target UBM controller receiving the interaction data sent by the RAID controller, the target UBM controller being one of the plurality of UBM controllers; the interaction data includes a plurality of fields, the fields including one or more of: a read/write data enable bit, a read/write data address bit, a command bit, and a data-to-be-written bit; parsing the interaction data according to the UBM protocol to obtain a plurality of interface data, the plurality of interface data including one or more of: first interface data indicating read data enable; second interface data indicating a read/write data address; third interface data indicating write data enable; fourth interface data indicating an operation command; and fifth interface data indicating data to be written; and reading and parsing the value of at least one of the first through fifth interface data, and sending the parsed command to the DFC connector connected to the second hard disk.
In this embodiment, a plurality of UBM controllers are provided in the UBM protocol processing module, so that the RAID controller connects upward through the plurality of UBM controllers while various types of hard disks connect downward. Each UBM controller parses the interaction data transmitted by the RAID controller through the HFC connector. With multiple UBM controllers, the system can process interaction data for multiple hard disks simultaneously, improving processing efficiency and system response speed. Because the interaction data comprises multiple fields, command and data transmission is flexible: command fields can be extended or modified as needed, ensuring that each operation is issued and executed accurately. Reading and parsing the interface data to generate specific operation commands raises the degree of automation and reduces manual intervention.
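The split of the interaction data into the five kinds of interface data can be illustrated with a minimal sketch. The 4-byte frame layout, flag bit positions and field names below are invented for illustration only; the UBM specification's actual framing differs:

```python
from dataclasses import dataclass

@dataclass
class InterfaceData:
    read_enable: bool    # first interface data: read data enable
    address: int         # second interface data: read/write data address
    write_enable: bool   # third interface data: write data enable
    command: int         # fourth interface data: operation command
    write_data: int      # fifth interface data: data to be written

def parse_interaction(frame: bytes) -> InterfaceData:
    # Hypothetical layout [flags][addr][cmd][data]; only shows the field
    # split, not the real UBM encoding.
    if len(frame) != 4:
        raise ValueError("expected a 4-byte frame in this sketch")
    flags, addr, cmd, data = frame
    return InterfaceData(
        read_enable=bool(flags & 0x01),
        address=addr,
        write_enable=bool(flags & 0x02),
        command=cmd,
        write_data=data,
    )
```

A frame is parsed once and its interface data then drive the command sent onward to the DFC connector.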
In some possible examples, the plurality of fields in the interaction data further include UBM controller address bits, the UBM controller address bits being used to determine the target UBM controller. The target UBM controller receiving the interaction data sent by the RAID controller includes: the RAID controller sending the interaction data to the UBM controllers via a serial data bus, and determining the target UBM controller based on the UBM controller address bits in the interaction data, which includes: determining, as the target UBM controller, the UBM controller whose address identification matches the address indicated by the UBM controller address bits; each of the at least one UBM controller has a unique address identification.
In this embodiment, by setting address bits of the UBM controller in the interaction data, the RAID controller may accurately determine the target UBM controller; each UBM controller has a unique address identification, which can prevent address collision and data transmission errors in a plurality of UBM controller environments. And the system can extend the new UBM controller without causing address collision.
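The address-matching rule described above can be modeled as a small registry; the 7-bit address width (borrowed from I2C-style addressing) and the `register`/`target` method names are assumptions of this sketch:

```python
class UBMAddressMap:
    """Unique address identification per UBM controller, as in the
    embodiment; collisions are rejected when a controller is added."""

    def __init__(self):
        self._by_addr = {}

    def register(self, addr, controller_id):
        # Rejecting duplicates at setup time prevents address collisions
        # and data-transmission errors in a multi-controller environment.
        if addr in self._by_addr:
            raise ValueError(f"address 0x{addr:02x} already in use")
        self._by_addr[addr] = controller_id

    def target(self, addr_bits):
        # The controller whose address identification matches the address
        # bits carried in the interaction data is the target UBM controller.
        if addr_bits not in self._by_addr:
            raise LookupError(f"no UBM controller at 0x{addr_bits:02x}")
        return self._by_addr[addr_bits]
```

New controllers can be added later without disturbing existing addresses, which mirrors the extensibility claim above.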
In some possible examples, the UBM protocol processing module further includes a field replaceable unit (FRU) that stores initial configuration information of the hard disk backplane, the initial configuration information including the mapping relationships between the plurality of UBM controllers and the HFC and DFC connectors.
In this embodiment, the initial configuration information is stored in the FRU, so that the mapping between each UBM controller and the connectors need not be reconfigured at system startup, improving initialization speed and shortening boot time. The configuration information stored in the FRU ensures that the system can quickly restore its previous configuration state after a reboot or hardware change, and new hard disks and UBM controllers can easily be added.
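Restoring the controller-to-connector mapping from FRU storage might look like the sketch below. The JSON layout and the field names `id`/`hfc`/`dfc` are invented for illustration; real FRU data is a binary record format, not JSON:

```python
import json

def load_backplane_config(fru_blob: bytes) -> dict:
    """Restore the UBM controller <-> connector mapping from FRU storage.

    JSON is used purely for illustration of the mapping's shape; an actual
    FRU holds binary records, so this layout is an assumption.
    """
    cfg = json.loads(fru_blob.decode("utf-8"))
    # mapping: UBM controller id -> (HFC connector, list of DFC connectors)
    return {c["id"]: (c["hfc"], c["dfc"]) for c in cfg["controllers"]}
```

At startup the decoded mapping can be handed straight to the protocol processing module instead of being rebuilt from scratch.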
In some possible examples, the method further comprises: the target UBM controller returning an interaction data processing result to the RAID controller, the processing result indicating that execution of the interaction data succeeded or failed; when the execution result is success, the RAID controller sends the next piece of interaction data; when the execution result is failure, the RAID controller logs the error or performs an error handling procedure.
In this embodiment, through the success and failure indications, the system can detect and handle abnormal situations in time, enhancing the stability and maintainability of the system.
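The status-driven handshake can be sketched as a simple send loop; the `transmit` callable and its boolean success/failure return value are an assumed interface, standing in for the round trip to the target UBM controller:

```python
def send_with_status(frames, transmit):
    """Send interaction data frame by frame, honoring the returned status.

    `transmit(frame)` stands in for sending one piece of interaction data
    to the target UBM controller and returning True on success, False on
    failure (an assumed API, not the patent's actual interface).
    """
    log = []
    for frame in frames:
        if transmit(frame):
            log.append(("ok", frame))     # success: send the next frame
        else:
            log.append(("error", frame))  # failure: record it and stop so
            break                         # the error handler can take over
    return log
```

Stopping at the first failure leaves a log entry per frame, which matches the "log or perform an error handling procedure" behavior above.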
In some possible examples, the method further comprises: and transmitting the processed data to a hard disk indicator lamp corresponding to the hard disk so as to realize management of the hard disk indicator lamp.
In this embodiment, the system may display the status of the hard disk (such as normal running, existence, loading failure, etc.) in real time by transmitting the processed data to the hard disk indicator corresponding to the hard disk.
In some possible examples, the method further comprises: the hard disk indicator lamps of the first hard disk and the second hard disk are controlled through the same group of signal wires.
In some possible examples, the method further comprises: the hard disk indicator lamp is connected to the hard disk backplane through the DFC connector, and the DFC connector is connected to the hard disk indicator lamp through a multiplexed line.
In a second aspect, an embodiment of the present application provides a data processing system based on a complex programmable logic device (CPLD), where the CPLD is disposed on a hard disk backplane of a server, and the hard disk backplane includes an HFC connector for connecting to a RAID controller and a DFC connector for connecting to a hard disk slot; the system includes a protocol processing module for processing the interaction data between the RAID controller and the hard disk according to the hard disk interface type and transmitting the processed data to the RAID controller or the hard disk to manage the hard disk, where the hard disk includes hard disks of the SAS/SATA/NVMe interface types.
In some possible examples, the protocol processing module includes: a SAS/SATA protocol processing module for processing the interaction data between the RAID controller and a first hard disk when the hard disk interface type is SAS/SATA, where the first hard disk includes a hard disk of the SAS/SATA interface type; and a UBM protocol processing module for processing the interaction data between the RAID controller and a second hard disk according to the UBM protocol when the hard disk interface type is NVMe, where the second hard disk includes a hard disk of the NVMe interface type.
In some possible examples, the UBM protocol processing module includes a plurality of UBM controllers, each connected to the RAID controller through the HFC connector and to the hard disk slots through DFC connectors, and each UBM controller includes: a data receiving module for receiving, via a serial data bus, the interaction data sent by the RAID controller and processing it to obtain a plurality of interface data; a command parsing module for parsing the plurality of interface data to obtain a first command and a second command; and a command execution module for sending the first command to the DFC connector connected to the UBM controller and sending the second command to the hard disk indicator lamp of the second hard disk connected to the UBM controller, where the second hard disk includes a hard disk of the NVMe interface type.
In this embodiment, the processing of the interaction data is divided into a plurality of stages, each of which completes a closed loop of data processing; that is, the information flow within each module is parsed and handled in its own closed loop. This structured approach improves data processing efficiency, reduces waiting time, and ensures stability and consistency at each stage. Even if one module fails, the other modules can continue to work normally, improving system stability. Because each module completes its data processing in an independent closed loop, the errors and delays caused by data transfer among multiple modules are reduced.
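The three modules and their staged, closed-loop processing can be modeled as below; the two-byte frame layout and the command tuples are illustrative inventions, not the patent's actual encoding:

```python
class UBMControllerPipeline:
    """Receive -> parse -> execute stages of one UBM controller (sketch)."""

    def __init__(self, dfc_out, led_out):
        self.dfc_out = dfc_out   # stands in for the DFC connector
        self.led_out = led_out   # stands in for the indicator-lamp lines

    def receive(self, frame):
        # Data receiving module: split the serial frame into interface data.
        return {"cmd": frame[0], "led": frame[1]}

    def parse(self, fields):
        # Command parsing module: derive the first (hard disk) command and
        # the second (indicator lamp) command from the interface data.
        return ("disk_cmd", fields["cmd"]), ("led_cmd", fields["led"])

    def execute(self, first_cmd, second_cmd):
        # Command execution module: first command to the DFC connector,
        # second command to the hard disk indicator lamp.
        self.dfc_out.append(first_cmd)
        self.led_out.append(second_cmd)

    def handle(self, frame):
        # Each stage hands a complete result to the next: a closed loop.
        self.execute(*self.parse(self.receive(frame)))
```

Because each stage only consumes the finished output of the previous one, a fault in one stage does not corrupt the state held by the others.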
In a third aspect, embodiments of the present application provide a computing device, including: a hard disk backplane on which a CPLD is disposed, the CPLD being used to emulate a UBM controller, the UBM controller performing the data processing method of any example of the first or second aspect; and a RAID controller connected to the UBM controller, the RAID controller being used to generate control information for a hard disk and control information for a hard disk indicator lamp and send them to a target UBM controller; the target UBM controller is used to parse the control information of the hard disk and of the hard disk indicator lamp to obtain control commands, and to send the control commands to the DFC connector connected to the target hard disk and to the target hard disk indicator lamp.
It will be appreciated that the advantages of the second and third aspects may be found in the relevant description of the first aspect and are not described in detail herein.
Drawings
FIG. 1 is a schematic diagram of a system architecture of a computing device 200 according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a system architecture of yet another computing device 200 provided in an embodiment of the present application;
Fig. 3 is a schematic system architecture diagram of a UBM protocol processing module according to an embodiment of the present application;
fig. 4 is an I2C communication schematic diagram of the UBM controllers and UBM FRUs of a plurality of hard disk backplanes according to an embodiment of the present application;
fig. 5 is a schematic diagram of an internal implementation of a UBM controller;
Fig. 6 is a schematic structural diagram of a sub-module included in each module of the UBM controller in fig. 5;
FIG. 7 is a schematic diagram of a first register in a receiving module;
FIG. 8 is a diagram of a second register in the command parsing module;
fig. 9 is a schematic diagram of UBM structures of a multi-RAID controller and a multi-hard disk backplane according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terms "first", "second" and the like in the description, in the claims and in the above-described figures are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely a manner of distinguishing objects having the same attributes when describing embodiments of the application. Furthermore, the terms "comprises", "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of elements is not necessarily limited to those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus.
In order to facilitate understanding of the technical solution of the present application, the related terms referred to herein are explained below.
Universal backplane management (Universal Backplane Management, UBM for short). The UBM specification is intended to define a generic backplane management scheme to effectively manage and monitor various hardware devices on the backplane, such as hard disks and the like. UBM specifications typically include detailed specifications for hardware connections, communication protocols, management functions, and the like. RAID controllers are typically located on a motherboard through which hard disks are managed.
A complex programmable logic device (CPLD) is a digital integrated circuit whose logic functions are built by the user as needed. Using an integrated development software platform, a design is described through schematics, a hardware description language or similar methods, compiled into a target file, and downloaded to the target chip to realize the designed digital system.
The hard disk backplane is a circuit board used by a computing device to attach additional hard disks. It is commonly used in the server field and can also be used to build a personal storage system. The hard disk backplane connects directly to the motherboard or to various adapter cards through cables. The hard disks that can be connected differ from backplane to backplane, and backplanes are generally classified by data interface type: commonly serial advanced technology attachment (SATA) backplanes, 6G backplanes (e.g., SFF-8087) and 12G backplanes (e.g., SFF-8643).
Motherboard: one of the main components of a server. A motherboard typically carries a power supply, a central processing unit, a baseboard management controller, memory, a memory controller, a RAID controller, PCIE interfaces and other connectors reserved for expansion cards.
RAID controller: a dedicated expansion card for managing multiple hard disk drives and implementing data storage and protection functions, usually installed in a PCIE slot on the motherboard.
The hard disk backplane is an important component of the server, connecting the hard disks to an upstream board such as the server motherboard or the RAID controller. A hard disk backplane may also interface with one or more hard disks. Hard disks can be divided into serial attached small computer system interface/serial advanced technology attachment (SAS/SATA) hard disks and Non-Volatile Memory Express (NVMe) hard disks. Hard disk types include mechanical hard disks and solid state disks; solid state disks are divided into SAS/SATA/NVMe solid state disks, and SAS/SATA interface hard disks include both mechanical hard disks and solid state disks.
To manage SAS/SATA/NVMe hard disks, different links and protocols are required. SAS/SATA hard disks typically communicate with a CPU on a motherboard using SAS or SATA protocols that define specifications for data transmission, command interaction, error handling, etc., enabling the CPU to efficiently manage the hard disk. The NVMe hard disk communicates using NVMe protocol. The NVMe protocol is a high performance, low latency storage protocol specifically designed for solid state disks. Compared with the traditional SATA and SAS interfaces, NVMe communicates through the PCIE bus, and the characteristics of high bandwidth and low delay are utilized to achieve higher data transmission rate and lower access delay.
In servers that do not use RAID controllers, the backplane management schemes for SAS/SATA and NVMe hard disks differ. If one hard disk backplane must support SAS/SATA and NVMe hard disks at the same time, the SAS/SATA and NVMe hard disks need separate upstream connectors on the backplane, different links must be supported, and two management schemes must coexist on the backplane: SAS/SATA hard disks are managed through an SGPIO management scheme, while NVMe hard disks are managed through a VPP management scheme. Backplane management is therefore complex and material costs are high.
As market demands on server data read/write rates and data reliability grew, RAID controllers appeared, and some servers began to support a RAID (Redundant Array of Independent Disks) management mode. In this management mode, the RAID controller interacts with the CPU upstream through the PCIE protocol and is electrically connected to the hard disk backplane downstream, thereby interacting with the hard disk backplane.
The RAID controller can improve the data reliability, performance and expansibility of the system, simplify the management and maintenance work of the storage system, and can continue to work without losing data by distributing data and check information among a plurality of hard disks even if one hard disk fails.
However, when a RAID controller manages different types of hard disks, SAS/SATA hard disks and NVMe hard disks must be managed differently because the hard disks support different protocols. Hard disks of different interface types require different connectors and controllers, adding to the complexity and cost of the system design, and upstream devices (e.g., RAID controllers) must support multiple protocols and command sets, increasing the difficulty of software development and maintenance. To address the complexity of managing and monitoring hard disks of different interface types in the same system, the UBM protocol was introduced: it provides a standardized management framework for managing and monitoring a variety of hardware devices of different interface types on the backplane.
Fig. 1 is a schematic system architecture diagram of a computing device 200 according to an embodiment of the present application. As shown in fig. 1, the computing device 200 includes a motherboard 10, a hard disk backplane 20 and a storage device 30. The motherboard 10 includes a processor 11 and a RAID controller 12. The hard disk backplane 20 includes a host-facing connector (Host Facing Connector, HFC) 21, a drive-facing connector (Drive Facing Connector, DFC) 22, a CPLD 23, and a protocol processing module 27 in the CPLD 23.
The processor 11 and the RAID controller 12 may be connected through a Peripheral Component Interconnect Express (PCIE) interface, or may communicate through other interfaces, which is not limited in the embodiments of the present application. For example, the processor 11 provides high-speed PCIE signals to the RAID controller 12. The processor 11 may be a central processing unit (CPU) or another device with processing capabilities. The processor 11 may send read/write commands and management configuration commands to the RAID controller 12 to read and write data and to manage RAID groups.
The RAID controller 12 manages the storage device 30 via the CPLD 23. The CPLD23 includes a protocol processing module 27, and the protocol processing module 27 parses instructions sent by the RAID controller 12 and then issues the instructions to the storage device 30. The protocol processing module 27 also sends the monitored status information of the storage device 30 to the RAID controller 12, and the RAID controller 12 may implement management of the storage device 30 according to the corresponding instruction given by the latest status of the storage device 30.
The RAID controller 12 may be connected to the HFC connector 21 by a high-speed signal line and a sideband signal line. The high-speed signal line is used for high-speed data transmission between the RAID controller 12 and the HFC connector 21, for example, through a PCIe bus and a SAS/SATA/NVMe interface. The sideband signal lines carry control and management signals between the HFC connector 21 and the protocol processing module 27. The RAID controller 12 transmits data to the HFC connector 21 through the high-speed signal line, the HFC connector 21 transmits the high-speed signal to the DFC connector 22, and the DFC connector 22 finally transmits the high-speed signal to the interface of the corresponding storage device 30, thereby implementing high-speed data read/write operations on the storage device 30. The HFC connector 21 is the connector assembly on the backplane that faces the host; in the present embodiment it is the connector connected to the RAID controller 12.
The HFC connector 21 is connected to the protocol processing module 27 via a sideband signal line. The sideband signal lines may carry, for example, 2-Wire signals, which are low-speed control and management signals, or other types of control and management signals, such as Inter-Integrated Circuit (I2C) signals, between the HFC connector 21 and the protocol processing module 27 on the hard disk backplane 20.
The RAID controller 12 sends read/write instructions to the protocol processing module 27 through the HFC connector 21.
The protocol processing module 27 is connected to the DFC connector 22, and the instruction is transmitted to the DFC connector 22 after being processed by the protocol processing module 27. The protocol processing module 27 implements the UBM protocol function through CPLD emulation; specifically, the protocol processing module 27 is responsible for parsing and processing the received sideband signals and for managing and monitoring the state and operation of the hard disk. The DFC connector 22 is the connector assembly on the hard disk backplane that faces the hard disk drive; in the present embodiment it is referred to as the connector connected to a hard disk.
The DFC connector 22 is connected to the storage device 30, and the DFC connector 22 transmits the instruction parsed by the protocol processing module 27 to the storage device 30, thereby completing state control of the storage device 30.
Similarly, the storage device 30 sends a signal including the type, state, and presence information of the storage device 30 to the protocol processing module 27 through the DFC connector 22, and the RAID controller 12 obtains the latest type, state, and presence information of the storage device 30 through the HFC connector 21 and issues a corresponding operation instruction.
It should be understood that a plurality of HFC connectors 21 may be disposed on the hard disk back plate 20, and each HFC connector 21 may be connected to a plurality of DFC connectors 22.
The DFC connector 22 has a standard interface and can support connection of at least one of a PCIe hard disk, a SATA hard disk, and a SAS hard disk. At least one HFC connector 21 supports serial bus connections. The serial bus may include, but is not limited to: an Inter-Integrated Circuit (IIC or I2C) bus, an Improved Inter-Integrated Circuit (I3C) bus, and a Serial Peripheral Interface (SPI) bus.
The storage device 30 includes, but is not limited to, the following hard disks: an NVMe hard disk, a solid state drive, a mechanical hard disk, a hybrid hard disk, or the like. The hard disk slots in the hard disk backplane 20 may support at least one of the three interface modes SATA/SAS/NVMe.
The hard disk backplane 20 is a Tri-mode backplane that supports the three interface modes SATA/SAS/NVMe.
In one possible implementation, in one computing device 200, the number of motherboards may be one or more, and the corresponding number of RAID controllers 12 is also one or more. For example, in one computing device 200, two motherboards 10 may be included, each motherboard 10 may include two RAID controllers.
The hard disk backplane 20 may further comprise a hard disk indicator light 40, and the protocol processing module 27 is connected to the hard disk indicator light 40.
The RAID controller 12 also performs high-speed reading/writing of data to the storage device 30 via a high-speed signal transmission path with the HFC connector 21 and the DFC connector 22.
According to the embodiment of the application, the CPLD that is usually already present on the hard disk backplane in a server is used, and the protocol processing module is arranged in the CPLD, so that under the backplane management framework provided by the UBM specification, management of SAS/SATA/NVMe hard disks is implemented based on the RAID controller. This reduces dependence on dedicated chips such as FPGAs, requires no increase in CPLD size, and keeps the hard disk backplane design compact. The hard disk backplane can more flexibly adapt to hard disks of different interface types, and the integration level of the system is improved. In addition, because the CPLD is used to emulate the UBM protocol, the cost of additionally purchasing and integrating dedicated chips such as field programmable gate arrays (FPGAs) is avoided, the hardware cost is reduced, and the universality and expandability of the hard disk backplane are enhanced. The programmable nature of the CPLD enables the hard disk backplane to flexibly adapt to different types of hard disks, and support for a new hard disk interface protocol can be added without replacing hardware. For example, when a new hard disk interface protocol appears, it can be supported by updating the logic code in the CPLD through a software update.
In one possible implementation, the protocol processing module 27 is implemented by a controller such as a CPLD, field programmable gate array FPGA, or the like.
It should be understood that the number of RAID controllers 12, storage devices 30, hard disk backplane 20, DFC connectors 22, HFC connectors 21, and hard disk indicator lights 40 in the system architecture diagram shown in FIG. 1 is exemplary only and that greater or lesser numbers are within the scope of the present application.
In one possible implementation, HFC connector 21 is a connector that meets the SFF-8643 specification, which is typically used to connect between a RAID controller and a hard disk backplane to achieve high speed data transfer and management functions. For example, the connector may support PCIE X16 lanes.
In one possible implementation, the DFC connector 22 may be a connector that meets the SFF-8639 specification. SFF-8639 is a connector standard defined by the SFF (Small Form Factor) Committee for connecting solid state drives and other storage devices, and is also known as the U.2 connector. U.2 supports the NVMe protocol, is compatible with the SAS and SATA protocol specifications, and can connect SAS, SATA, and NVMe hard disks.
By way of example, the RAID controller 12 has X16 channels, and two HFC connectors 21 are connected to the RAID controller 12, each HFC connector 21 having an X8 channel. Each HFC connector 21 connects four DFC connectors, and each DFC connector 22 has an X2 channel for connecting to the hard disk backplane, receiving data transmitted from the RAID controller 12 and transmitting the data to the connected hard disk. Each DFC connector 22 may be connected to a hard disk. Each hard disk has an X2 channel for receiving data transmitted from the DFC connector 22 and performing read/write operations on the hard disk data.
The number of channels provided by the HFC connector 21 is affected by the type and number of hard disks connected. For example, with an SFF-8639 connector comprising 6 channels, 4 PCIe channels support connection to NVMe hard disks, and the other two channels can connect SAS/SATA hard disks. A maximum bandwidth of PCIe X4 (Lane0/1/2/3) can be supported: if the inserted hard disk is PCIe X1, Lane0 of the SFF-8639 interface is used; if the inserted hard disk is PCIe X2, Lane0/1 of the SFF-8639 interface are used; if the inserted hard disk is PCIe X4, Lane0/1/2/3 of the SFF-8639 interface are used. The number of channels required for the HFC connector 21 should be determined based on the type and number of hard disks required for the backplane.
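The lane-usage rule above (an X1 drive uses Lane0, an X2 drive uses Lane0/1, an X4 drive uses Lane0/1/2/3) can be sketched as follows; the function name is illustrative and not part of any specification:

```python
def lanes_for_drive(link_width: int) -> list[int]:
    """Return the SFF-8639 PCIe lane indices occupied by a drive of the
    given link width, per the lane-usage rule described above."""
    if link_width not in (1, 2, 4):
        raise ValueError("unsupported PCIe link width")
    # An xN drive occupies the N lowest-numbered lanes (Lane0..Lane(N-1)).
    return list(range(link_width))
```

For example, `lanes_for_drive(2)` returns `[0, 1]`, matching the Lane0/1 assignment for a PCIe X2 hard disk.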
In an embodiment of the present application, the computing device 200 may be, but is not limited to, one of the following devices: a server, a tablet computer, a laptop computer, a desktop computer, and the like. The server may be, but is not limited to: a high-density server, a rack server, a GPU server, a tower server, a blade server, or a whole-cabinet server. The RAID controller 12 may be any controller that can be used to manage a hard disk in the related art.
Fig. 2 is a schematic system architecture diagram of a computing device 200 according to an embodiment of the present application. The difference between fig. 2 and fig. 1 is that the protocol processing module 27 in the CPLD 23 in fig. 2 includes a SAS/SATA protocol processing module 25 and a UBM protocol processing module 26, which process different types of storage devices through different channels according to the type of the storage device 30. According to the embodiment of the application, the CPLD on the hard disk backplane implements the function of the UBM protocol, and the UBM protocol processing module 26 uses the UBM protocol to parse the data transmitted between the CPLD and an NVMe hard disk, so that the RAID controller can manage the NVMe hard disk. Meanwhile, the SAS/SATA protocol processing module 25 uses, for example, the SAS/SATA protocol to parse data transmitted between the CPLD and a SAS/SATA hard disk, so that the RAID controller can manage the SAS/SATA hard disk. Therefore, the links through which the RAID controller 12 processes SAS/SATA hard disks and NVMe hard disks are unified in the CPLD, the processing logic is simplified, and the space occupied on the hard disk backplane is reduced.
As shown in fig. 2, the RAID controller 12 determines the interface type of the hard disk connected to the hard disk back plate 20 by sampling the DFC IFDET signal, and determines the signal type for communication with the hard disk back plate 20 according to the interface type of the hard disk.
The RAID controller 12 transmits the interaction data to the hard disk backplane 20 through the I2C protocol; the hard disk backplane 20 selects the SAS/SATA protocol processing module 25 or the UBM protocol processing module 26 according to the interface type of the inserted hard disk, and then transmits the data to the hard disk. Referring to fig. 2, the RAID controller 12 sends a high-speed signal to the HFC 21 through a high-speed channel (e.g., a PCIe channel), and the high-speed signal is transmitted to the DFC 22 through the HFC 21 and finally reaches the storage device 30. The RAID controller 12 sends the interaction data to the hard disk backplane 20 via, for example, the I2C protocol, and the HFC 21 receives the I2C signal from the RAID controller 12 and passes it to a signal determination module 28 on the hard disk backplane 20. The signal determination module 28 receives the IFDET signal, determines the hard disk type, and selects the SAS/SATA protocol processing module 25 or the UBM protocol processing module 26 for processing.
The DFC IFDET signal is a signal on the hard disk backplane that indicates the type of communication interface used by the connected hard disk, including, for example, SGPIO, 2-Wire (I2C), or other types. I2C is a serial bus standard, also commonly referred to as a 2-Wire bus, that uses two signal lines (serial data line and serial clock line) to transfer data. By detecting the state of the DFC IFDET signal, the system can determine the type of communication interface of the hard disk.
For example, when the DFC IFDET signal is low, it indicates that the hard disk backplane is connected to a Serial General Purpose Input/Output (SGPIO) signal, a general-purpose input/output signal for SAS/SATA hard disk interfaces. When the RAID controller recognizes that the hard disk backplane communicates through the SGPIO signal, the SAS/SATA protocol processing module 25 parses data transmitted between the RAID controller 12 and the SAS/SATA hard disk according to the SAS/SATA protocol. Specifically, the RAID controller 12 sends sideband signals to the SAS/SATA protocol processing module 25 over the HFC 21; the sideband signals are encapsulated into a format recognizable by the SAS/SATA protocol, including commands and the necessary control data. The SAS/SATA protocol processing module 25 parses the interaction data according to the SAS/SATA protocol specification and sends the parsed command to the DFC 22. The DFC 22 sends the command to the connected SAS/SATA hard disk (the first hard disk) and its hard disk indicator light, implementing control of the SAS/SATA hard disk (the first hard disk) and its hard disk indicator light. The SAS/SATA protocol processing module may also send status information of the SAS/SATA hard disk (the first hard disk) to the RAID controller 12 via SGPIO.
For example, when the DFC IFDET signal is high, it indicates that the hard disk backplane communicates using the 2-Wire protocol, and the RAID controller communicates with the hard disk backplane 20 via the UBM protocol processing module 26. The RAID controller 12 controls and manages the connected NVMe hard disk using signals of the 2-Wire protocol specification, for example, monitoring the status of the hard disk drive, controlling the hard disk indicator lights, and detecting hot-plug events. When the RAID controller 12 performs disk array rebuilding, data recovery, reading of status information of the hard disk backplane, acquisition of hard disk status, power up/down control, monitoring of a failed disk, hard disk status lamp indication, RAID information clearing, RAID reconfiguration, or the like, or performs failure diagnosis, it sends a signal conforming to the 2-Wire protocol specification to instruct the corresponding hard disk drive to perform a read/write operation or to display the corresponding status indicator lamp.
In other words, the RAID controller 12 makes a determination according to the received hard disk type information. If the storage device is a SAS/SATA hard disk, the RAID controller and the hard disk backplane communicate in SGPIO mode: the RAID controller sends SGPIO signals to the corresponding SAS/SATA protocol processing module 25 for parsing and processing, which then outputs control information for the SAS/SATA hard disk and a hard disk lighting signal. Using its programmable logic, the CPLD converts the received SGPIO signal into the corresponding control actions.
If the storage device 30 is an NVMe hard disk, the RAID controller 12 and the hard disk backplane 20 communicate in 2-Wire mode, and the 2-Wire signals are parsed by the UBM protocol processing module 26, thereby implementing acquisition of the NVMe hard disk status and issuing of commands.
In one possible implementation, the processing links for the different signals may be implemented by adding a channel selector. The channel selector may be located in the RAID controller 12 or in the protocol processing module 27 on the hard disk backplane 20. When the RAID controller output signal is determined to be SGPIO, the selector switches to the SAS/SATA protocol processing module 25 for data parsing; when the output signal of the RAID controller 12 is determined to be a 2-Wire signal, the selector switches to the UBM protocol processing module 26 provided in the embodiment of the present application for data parsing.
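A minimal sketch of this channel-selector decision, assuming the IFDET pin is sampled as a single logic level (the names and return format are illustrative, not from any specification):

```python
# Mapping from the sampled DFC IFDET level to the sideband signal type
# and the protocol processing module that should parse it:
# low (0) -> SGPIO / SAS-SATA path, high (1) -> 2-Wire / UBM path.
PROTOCOL_BY_IFDET = {
    0: ("SGPIO", "SAS/SATA protocol processing module 25"),
    1: ("2-Wire", "UBM protocol processing module 26"),
}

def route_sideband(ifdet_level: int) -> dict:
    """Select the parsing path for a sideband transfer based on IFDET."""
    signal, module = PROTOCOL_BY_IFDET[ifdet_level]
    return {"signal": signal, "module": module}
```

In hardware this decision is a mux inside the CPLD; the dictionary here only illustrates the mapping.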
In the protocol processing module of the CPLD, the embodiment of the application processes SAS/SATA hard disks and NVMe hard disks through the SAS/SATA protocol processing module and the UBM protocol processing module respectively. The UBM protocol processing module is implemented through programming and processes the interaction data between the RAID controller and the NVMe hard disks according to the UBM protocol, ensuring the accuracy and consistency of data transmission and state management.
Fig. 3 is a schematic system architecture diagram of a UBM protocol processing module according to an embodiment of the present application, and further describes the technical solution of fig. 2. In the hard disk backplane 20, the UBM protocol processing module 26 includes a plurality of UBM controllers and UBM FRUs. A UBM controller may be correspondingly coupled to an HFC and a plurality of DFCs, each DFC of the plurality of DFCs being coupled to at least one hard disk.
By way of example, the RAID controller 12 implements management of the hard disks through four UBM controllers. Each UBM controller provides a 2-Wire slave interface that exposes backplane functionality and DFC status and control information. The UBM protocol processing module 26 includes UBM controller 1 and UBM FRU1, UBM controller 2 and UBM FRU2, UBM controller 3 and UBM FRU3, and UBM controller 4 and UBM FRU4. The UBM controllers and UBM FRUs may, for example, be arranged in one-to-one correspondence. The numbers of UBM controllers and UBM FRUs, of DFCs, of HFCs, and so on are determined according to the specific hardware type and requirements; the embodiment of the present application provides one possible implementation. The UBM controller is coupled to the HFC for receiving sideband signals, such as UBM I2C, 2WIRE_RESET#, and CHANGE_DETECT#, sent by the RAID controller 12 over the HFC. HFC1 is connected to UBM controller 1 and UBM FRU1, and HFC1 is also connected to UBM controller 2 and UBM FRU2. HFC2 is connected to UBM controller 3 and UBM FRU3, and HFC2 is also connected to UBM controller 4 and UBM FRU4. Each set of UBM controller and UBM FRU is connected to the same 2-Wire interface. The UBM controller uses sideband signals in a standard protocol over the 2-Wire interface.
The sideband signals are, for example, the UBM1_I2C_SCL signal, the UBM1_I2C_SDA signal, the 2WIRE_RESET# signal, and the CHANGE_DETECT# signal.
The FRU information in the hard disk backplane is typically stored in memory on the hard disk backplane, which may be an Electrically Erasable Programmable Read-Only Memory (EEPROM) or another type of non-volatile memory. The FRU has a standard format for storing device information and status.
UBM FRUs are addressed using a single-byte 2-Wire address, specifically 0xAE. The UBM FRU records the initial configuration information of the hard disk backplane, including, for example, the mapping relationship between the plurality of UBM controllers and the HFC and DFC connectors, the number of HFCs, ports, channel rates, the number of DFCs, the type and number of connected hard disks, and the UBM port routing information of the UBM protocol processing module 26. At system power-up, the RAID controller 12 reads the information in the UBM FRUs and completes the initialization of the RAID controller 12 and the UBM protocol processing module 26.
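As a sketch of the 0xAE addressing: in the common I2C convention, an 8-bit address byte carries a 7-bit device address plus a read/write bit, so 0xAE would correspond to the 7-bit address 0x57 with the R/W bit cleared (a write). The 7-bit interpretation is an assumption based on standard I2C practice, not stated in the text:

```python
FRU_2WIRE_ADDR = 0xAE  # single-byte 2-Wire address of the UBM FRU

def split_2wire_address(addr8: int) -> tuple[int, int]:
    """Split an 8-bit 2-Wire address byte into its 7-bit device address
    and R/W bit, following the standard I2C convention."""
    return addr8 >> 1, addr8 & 1
```

Here `split_2wire_address(FRU_2WIRE_ADDR)` yields `(0x57, 0)`, i.e., a write transfer to device address 0x57.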
For example, the initial configuration information is written in the FRU, UBM FRU1 stores the initial configuration information of UBM controller 1, UBM FRU2 stores the initial configuration information of UBM controller 2, UBM FRU3 stores the initial configuration information of UBM controller 3, and UBM FRU4 stores the initial configuration information of UBM controller 4.
According to the embodiment of the application, because the FRU stores the initial configuration information, the mapping relationship between each UBM controller and the interfaces does not need to be reconfigured when the system starts, which improves the initialization speed of the system and shortens the startup time. The configuration information stored in the FRU ensures that the system can quickly revert to the previous configuration state after a reboot or hardware change, and new hard disks and UBM controllers can easily be added.
Illustratively, the DFCs include DFC1-DFC8. UBM controller 1 correspondingly connects DFC1 and DFC2, UBM controller 2 correspondingly connects DFC3 and DFC4, UBM controller 3 correspondingly connects DFC5 and DFC6, UBM controller 4 correspondingly connects DFC7 and DFC8, and DFC1-DFC8 are correspondingly connected to NVMe hard disks 40-1 to 40-8. Specifically, DFC1 is connected to UBM controller 1 through DFC1_PRSNT/DFC1_IFDET/DFC1_PERST, and DFC2 is connected to UBM controller 1 through DFC2_PRSNT/DFC2_IFDET/DFC2_PERST. DFC3 through DFC8 are connected in the same manner.
In one possible implementation, HFC1 and UBM controller 1, HFC1 and UBM controller 2, HFC2 and UBM controller 3, and HFC2 and UBM controller 4 communicate using I2C. I2C is a serial communication protocol that uses two lines for communication (SDA for data transmission and SCL for the clock signal), and is therefore also referred to as 2-Wire communication.
The UBM controller 1 receives the ubm_i2c_sda signal and ubm_i2c_scl signal sent from the RAID controller 12 through the HFC1, and processes the signals to obtain control signals of DFC1 and DFC2 and control signals of hard disk indicator lights dfc1_led and dfc2_led connected to DFC1 and DFC 2. The UBM controller 1 outputs control signals of DFC1/DFC2 to be transmitted to DFC1/DFC2, dfc1_led/dfc2_led.
The UBM controller 2 receives the ubm_i2c_sda signal and the ubm_i2c_scl signal sent by the RAID controller 12 through the HFC1, and processes them to obtain control signals of DFC3 and DFC4 and control signals of hard disk indicator lights dfc3_led and dfc4_led connected to DFC3 and DFC 4. The UBM controller 2 outputs control signals of DFC3/DFC4 to be transmitted to DFC3/DFC4, dfc3_led/dfc4_led.
UBM controller 3 with UBM FRU3 and UBM controller 4 with UBM FRU4 operate in the same manner and are not described in detail herein.
In the embodiment of the application, a plurality of UBM controllers are arranged in the UBM protocol processing module so as to be connected with the RAID controller in an uplink mode and connected with various types of hard disks in a downlink mode through the plurality of UBM controllers. The UBM controller analyzes the interactive data transmitted by the RAID controller through the HFC connector. By arranging a plurality of UBM controllers, the system can process interactive data of a plurality of hard disks simultaneously, and the processing efficiency of the interactive data and the response speed of the system are improved. The interactive data comprises a plurality of fields, so that the transmission of commands and data is more flexible, the fields of the commands can be expanded or modified according to specific requirements, and the accurate issuing and execution of each operation are ensured. The interface data are read and analyzed, so that specific operation commands are generated, the automation degree of the system is improved, and manual intervention is reduced.
The RAID controller 12, through the architecture of the hard disk backplane 20 provided by the embodiment of the present application, can determine the backplane function, the status and control information of the DFC connector, and read the routing information from the DFC connector to the HFC connector on the backplane. The hard disk backplane 20 may implement multiple DFCs connected by high-speed signal paths 50 per HFC, e.g., high-speed signal paths 50 of X1, X2, X4 or other path bandwidths.
It should be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the UBM controller and UBM FRU, hard disk, and RAID controller in UBM protocol processing module 26. In other embodiments of the present application, the UBM controller and UBM FRU, hard disk, and RAID controller in UBM protocol processing module 26 may include more or fewer components than shown, or may combine certain components, or may split certain components, or may have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
By way of example, each hard disk has three states: the normal operation (Active) state, the hard disk present (Locate) state, and the hard disk load failure (Fault) state. The lighting color of the hard disk indicator lamp reflects the operating state of the hard disk: in the Active state (the hard disk is normally identified and participates in data read/write operations), the indicator lamp is green; in the Locate state (the hard disk has been detected by the system but does not yet participate in RAID array activity), the indicator lamp is yellow; and in the Fault state (the hard disk cannot operate normally due to a hardware failure, configuration error, data corruption, or the like), the indicator lamp is red.
The DFC1_LED may be controlled by the hard disk indicator signal Fault/Locate/Active sent from UBM controller 1, and the DFC2_LED may be controlled by the hard disk indicator signal Fault/Locate/Active sent from UBM controller 2. The hard disk state information, i.e., the Active, Locate, and Fault states, may be indicated by the UBM protocol processing module 26. According to the embodiment of the application, by transmitting the processed data to the hard disk indicator light corresponding to the hard disk, the system can display the state of the hard disk (such as normal operation, present, or load failure) in real time.
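The state-to-color mapping described above can be captured in a small table; the color assignments follow the description, while the dictionary and helper names are illustrative:

```python
# Hard disk indicator colors per state, as described above.
LED_COLOR = {
    "Active": "green",   # normally identified, participating in read/write
    "Locate": "yellow",  # detected, not yet part of RAID array activity
    "Fault":  "red",     # hardware failure, config error, or data damage
}

def led_color(state: str) -> str:
    """Return the indicator lamp color for a hard disk state."""
    return LED_COLOR[state]
```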
In some possible implementations, the hard disk indicator lights of the first hard disk and the second hard disk are connected and controlled through the same set of signal lines.
The UBM controller needs to be initialized when the system is powered up, and the initialization includes the following steps:
S100, initializing sideband signal of DFC
Setting the input/output (I/O) signals of the hard disk connected to the DFC includes: providing an appropriate power signal to the connected hard disk to facilitate booting it, and resetting the signal lines communicating with the hard disk to a consistent state.
S110, resetting the DFC_PERST signal.
For example, pulling the DFC_PERST signal low ensures that the signal on each DFC interface is in the correct state at device start-up or operation.
S120, disabling the reference clock.
This ensures that interference from the clock signal does not cause problems during initialization.
S130, enabling a power supply (not shown in the figure).
S140, initializing sideband signals output by the HFC.
And S150, setting the running state of the UBM controller to be in initialization.
S160, enabling a 2-Wire slave interface of the UBM controller.
According to the standard I2C protocol, the I2C control module of the master is generally called the I2C master module, and the I2C module of the slave is generally called the I2C slave module. Each I2C device has a unique I2C address used to access it, and the host identifies other I2C devices through this address. The host is the device that initiates communication on the I2C bus, and the other I2C devices addressed by the host are called slaves. In the embodiment of the present application, the host is the RAID controller 12, and the slaves are the UBM controllers in the UBM protocol processing module 26.
The RAID controller 12 is a host, transmits commands and requests, and a plurality of UBM controllers are slaves, responding to instructions of the host. When a host (RAID controller) transmits data, each slave (UBM controller) recognizes whether or not it is data transmitted to itself by matching an address.
S170, setting the running state of the UBM controller to be ready for all 2-Wire slave interfaces.
The UBM controller operation status of all 2-Wire slave interfaces is set to READY. The RAID controller starts to monitor, through the UBM controller, the state of the hard disks and changes of the signals corresponding to each DFC. The initialization of the UBM controller then ends. After initialization is complete, the UBM controller begins to monitor the DFC inputs for changes; illustratively, the UBM controller pulls the CHANGE_DETECT signal low.
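The S100-S170 power-up sequence above can be sketched as a toy state machine; hardware actions are stubbed out as attribute writes, signal polarities follow the description, and the class and attribute names are assumptions:

```python
class UbmControllerSim:
    """Toy model of the UBM controller power-up sequence S100-S170."""

    def __init__(self):
        self.state = "RESET"
        self.change_detect = 1  # assumed idle-high before monitoring starts

    def power_up(self) -> str:
        self.dfc_sideband_init = True   # S100: initialize DFC sideband signals
        self.perst = 0                  # S110: pull DFC_PERST low (reset)
        self.refclk_enabled = False     # S120: disable the reference clock
        self.power_enabled = True       # S130: enable the power supply
        self.hfc_sideband_init = True   # S140: initialize HFC sideband signals
        self.state = "INITIALIZING"     # S150: operating state = initializing
        self.slave_if_enabled = True    # S160: enable 2-Wire slave interface
        self.state = "READY"            # S170: all 2-Wire slave interfaces ready
        self.change_detect = 0          # begin monitoring DFC input changes
        return self.state
```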
The RAID controller 12 attempts to communicate with the UBM controller and retries if the UBM controller does not respond. If the UBM controller responds READY, the UBM controller pulls the CHANGE_DETECT signal high. The RAID controller needs to resolve, through the UBM controller, the routing of at least one DFC to the corresponding HFC.
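The host/slave exchange described here relies on each slave accepting only transfers that bear its own address. A minimal model (all class and function names are illustrative):

```python
class UbmSlave:
    """A 2-Wire slave that accepts only transfers addressed to itself."""

    def __init__(self, addr7: int):
        self.addr7 = addr7
        self.received = []

    def on_transfer(self, addr7: int, payload: bytes) -> bool:
        if addr7 != self.addr7:
            return False  # not addressed to this slave; ignore
        self.received.append(payload)
        return True

def master_send(slaves, addr7: int, payload: bytes) -> None:
    """The master (RAID controller) puts one transfer on the shared bus;
    every slave sees it, but only the addressed one accepts it."""
    for slave in slaves:
        slave.on_transfer(addr7, payload)
```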
Table 1 shows an example of routing multi-channel DFCs to the originating channels of two HFCs. DFCs, HFCs, and hard disks may use their location or number on the backplane as a physical address. For example, each DFC, HFC, and hard disk identification number may be 1, 2, 3, 4, and so on. The numbering may be continuous or discontinuous.
As shown in Table 1, HFC0 has 4 channels with channel identifications 0/1/2/3. Channel 0 and channel 1 connect DFC0 (DFC identification 0), and channel 2 and channel 3 connect DFC1 (DFC identification 1). DFC0 has channel 0 and channel 1, which can be connected to hard disk 0 and hard disk 1 respectively; DFC1 has channel 2 and channel 3, which can be connected to hard disk 2 and hard disk 3 respectively. If the HFC design supports a PCIe interface, its number of lanes may match the number of lanes of the PCIe bus; the PCIe link width is typically X1/X4/X8/X16.
Table 1 Example of routing multi-channel DFCs to two HFC originating channels

Route index | HFC id | HFC channels | DFC id | DFC channels  | Hard disks
0           | 0      | 0, 1         | 0      | Lane0, Lane1  | 0, 1
1           | 0      | 2, 3         | 1      | Lane2, Lane3  | 2, 3
2           | 1      | 0, 1, 2, 3   | 2      | Lane0-Lane3   | 4, 5, 6, 7
3           | 1      | 4, 5         | 3      | Lane4, Lane5  | 8, 9
HFC1, with HFC identification 1, has 6 channels, with channel identifications 0/1/2/3/4/5. Channel 0, channel 1, channel 2, and channel 3 connect DFC2 (DFC identification 2), and channel 4 and channel 5 connect DFC3 (DFC identification 3). DFC2 has channels 0 through 3, which can be connected to hard disk 4 through hard disk 7 respectively. DFC3 has channel 4 and channel 5, to which hard disk 8 and hard disk 9 can be connected respectively.
Further, taking an HFC0 with 16 channels as an example, 4 of the 16 channels may be allocated to DFC0 and the remaining 12 channels to DFC1; the 4 channels of DFC0 may then be connected to 4 hard disks respectively, and the 12 channels of DFC1 to 12 hard disks respectively.
Illustratively, the RAID controller sends a reset command to hard disk 1 and hard disk 3. The reset command needs to carry the UBM port routes of hard disk 1 and hard disk 3. According to the information given in Table 1, the UBM port route of hard disk 1 is: UBM port route index=0, HFC identification=0, HFC channel=1, DFC identification=0, DFC channel=Lane1. The UBM port route of hard disk 3 is: UBM port route index=1, HFC identification=0, HFC channel=3, DFC identification=1, DFC channel=Lane3. The reset of hard disk 1 and hard disk 3 can thereby be completed.
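The two UBM port routes just listed can be held in a lookup table that a reset command carries along; the field names mirror the description, and the command format is illustrative:

```python
# UBM port routes for hard disk 1 and hard disk 3, as given above.
UBM_PORT_ROUTES = {
    1: {"route_index": 0, "hfc_id": 0, "hfc_channel": 1,
        "dfc_id": 0, "dfc_channel": "Lane1"},
    3: {"route_index": 1, "hfc_id": 0, "hfc_channel": 3,
        "dfc_id": 1, "dfc_channel": "Lane3"},
}

def build_reset_command(disk: int) -> dict:
    """Build a reset command carrying the disk's UBM port route."""
    return {"op": "reset", **UBM_PORT_ROUTES[disk]}
```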
Fig. 4 is an I2C communication schematic diagram of the UBM controllers and UBM FRUs of a plurality of hard disk backplanes according to an embodiment of the present application. As shown in fig. 4, the RAID controller is connected to 4 hard disk backplanes, hard disk backplane 1 to hard disk backplane 4. Each hard disk backplane is provided with a UBM protocol processing module, which comprises a plurality of UBM controllers and UBM FRUs. The RAID controller 12 generates address select line signals, which are decoded to gate the UBM protocol processing module on a particular hard disk backplane. An address selector is set in the RAID controller; when the number of address selection lines is n, the n address selection lines can gate at most 2^n UBM protocol processing modules. For example, as shown in fig. 4, when there are two address selection lines, four signals 00/01/10/11 may be output, corresponding to ports 1 to 4, and a UBM protocol processing module on a hard disk backplane is selected according to the address strobe logic. Thus, at most 4 UBM protocol processing modules may be gated by two address lines. The logic that decodes the address select line signals may also be implemented by a hard disk backplane.
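The n-line decode described above can be sketched as follows; the patent leaves the decode logic to the RAID controller or the backplane, so this is only an illustrative model of the 2^n gating relationship:

```python
def gate_module(select_lines):
    """Decode n address select lines into the index of the gated UBM
    protocol processing module; n lines gate at most 2**n modules.
    Illustrative sketch only."""
    index = 0
    for bit in select_lines:           # most significant line first
        index = (index << 1) | (bit & 1)
    return index

# Two address lines: signals 00/01/10/11 select modules 0..3 (ports 1 to 4).
assert [gate_module(s) for s in ([0, 0], [0, 1], [1, 0], [1, 1])] == [0, 1, 2, 3]
```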
Taking hard disk backplane 3 as an example, the RAID controller, as the host, connects to HFC0, HFC1 through HFCn in hard disk backplane 3. Taking the connection between the RAID controller and HFC1 as an illustration, HFC1 connects to UBM controller 1 and UBM FRU 1, UBM controller 2 and UBM FRU 2 … UBM controller N and UBM FRU N.
Similarly, at least one UBM controller and UBM FRU are correspondingly connected to each of HFC0 to HFCn.
The external interface between the HFC and the UBM controller is an I2C interface, so the RAID controller can be connected to the UBM controllers through an I2C bus; correspondingly, the communication mode adopted between the RAID controller and each UBM controller on this external interface can be I2C bus communication. On the I2C bus, each slave device (UBM controller) has a unique address for I2C communication, commonly referred to as an I2C address.
The RAID controller 12 sends the I2C address of the UBM controller to be accessed to each UBM controller in the UBM protocol processing module over the HFC. When a UBM controller receives an address matching its own I2C address, it responds to the communication request of the RAID controller. The RAID controller 12 can thus access any one of the UBM controllers and UBM FRUs, obtain the status signals of at least one hard disk managed by that UBM controller, and issue hard disk control signals.
In one possible implementation, the flow by which the RAID controller 12 sends data and the UBM controller receives the data is as follows:
Transmitting a start condition and a slave address: the RAID controller first sends a start condition (Start) signal and then sends the slave address. Each I2C device has, for example, a unique 7-bit address. The UBM controller acknowledges the address: after receiving the address sent by the RAID controller 12, the target UBM controller confirms whether the address is correct. The UBM controller compares its own address with the value of the address field in the data sent by the RAID controller; if the addresses match, the target UBM controller prepares to send or receive data.
Read/write data bit judgment: after the master transmits the slave's 7-bit address, the lowest bit (bit 0) of the address byte may represent the direction of data transmission. For example, if this least significant bit is 0, the master is about to write data to the slave (write operation); if this least significant bit is 1, the master reads data from the slave (read operation).
The RAID controller receives data: when the lowest bit (direction bit) is 1, the RAID controller 12 reads data from the target UBM controller. After reading the data, the RAID controller 12 may choose to send an acknowledge signal to request more data, or send a non-acknowledge signal to indicate that all data has been read. If the RAID controller sends a non-acknowledge signal, this generally means that the RAID controller does not need more data.
Stop condition: when the master device completes the data read/write operation on the slave device, a stop condition is generated to inform the slave device that the transmission session is finished. The stop condition may be, for example, a low-to-high transition generated by the master device on SDA (the data line) while SCL (the clock line) remains at a high level.
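The addressing step of the flow above can be sketched as follows. This is a minimal illustration of how the first byte of an I2C transfer combines the 7-bit slave address with the direction bit, and how a slave decides whether to respond; addresses and helper names are hypothetical:

```python
I2C_WRITE, I2C_READ = 0, 1  # direction bit following the 7-bit address

def address_byte(addr7, direction):
    """Build the first byte of an I2C transfer: 7-bit slave address
    shifted left by one, with the read/write direction bit in bit 0."""
    if not 0 <= addr7 < 0x80:
        raise ValueError("I2C address must fit in 7 bits")
    return (addr7 << 1) | direction

def slave_responds(own_addr, byte):
    """A UBM controller acknowledges only when the address field matches
    its own I2C address."""
    return (byte >> 1) == own_addr

hdr = address_byte(0x21, I2C_READ)   # master wants to read from slave 0x21
assert hdr == 0x43
assert slave_responds(0x21, hdr) and not slave_responds(0x22, hdr)
```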
According to the embodiment of the application, the RAID controller can accurately determine the target UBM controller by setting the address bits of the UBM controller in the interactive data. Each UBM controller has a unique address identification, which prevents address conflicts and data transmission errors in an environment with a plurality of UBM controllers, and the system can be extended with new UBM controllers without causing address conflicts.
Fig. 5 is a schematic diagram of an internal implementation of the UBM controller. As shown in fig. 5, the UBM controller includes a data receiving module, a command parsing module, and a command execution module.
The data receiving module is configured to receive the interactive data sent by the RAID controller through the serial data bus and process the interactive data to obtain a plurality of interface data. The data receiving module receives serial bus signals from the RAID controller and stores them in a register. The serial bus signal indicates a read/write data command to a second hard disk or a second hard disk indicator light, the second hard disk including a hard disk of the NVME interface type. The serial bus signal includes a plurality of fields, including one or more of: a read/write data enable bit, a read/write data address bit, a command bit, and a data bit to be written. The interactive data is parsed using the UBM protocol to obtain a plurality of interface data, which include one or more of: first interface data indicating read data enable; second interface data indicating the read/write data address; third interface data indicating write data enable; fourth interface data indicating an operation command; and fifth interface data indicating the data to be written.
Command parsing module: configured to acquire the plurality of interface data and parse them to obtain a first command and a second command. The value of at least one of the first to fifth interface data is read and parsed to obtain the first command and the second command.
Command execution module: configured to send the first command to the DFC connector connected to the UBM controller, and to send the second command to the hard disk indicator light of the target hard disk connected to the UBM controller.
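The three-stage receive/parse/execute structure above can be sketched as a small pipeline. The bit layout and command names below are invented for illustration only; the real field positions are defined by the UBM protocol, not by this sketch:

```python
def receive(raw_byte):
    """Data receiving module: split the raw byte into interface data.
    The field layout here is illustrative, not the UBM wire format."""
    return {
        "read_en":  (raw_byte >> 7) & 1,
        "write_en": (raw_byte >> 6) & 1,
        "command":  raw_byte & 0x3F,
    }

def parse(iface):
    """Command parsing module: derive the first command (for the DFC)
    and the second command (for the hard disk indicator light)."""
    first = ("DFC_CMD", iface["command"]) if iface["write_en"] else None
    second = ("LED_CMD", iface["command"]) if iface["read_en"] else None
    return first, second

def execute(first, second, dfc, led):
    """Command execution module: forward each command to its target."""
    if first:
        dfc.append(first)
    if second:
        led.append(second)

dfc_out, led_out = [], []
execute(*parse(receive(0b01000101)), dfc_out, led_out)
assert dfc_out == [("DFC_CMD", 5)] and led_out == []
```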
By dividing the process of processing interactive data into a plurality of processing stages, the embodiment of the application provides closed-loop data processing at each stage, i.e. the parsing and processing of the information flow forms a self-contained loop within each implementation module. This structured processing mode improves data processing efficiency and reduces waiting time, and ensures the stability and consistency of each stage in the data processing process. Even if one module fails, the other modules can continue to work normally, improving the stability of the system. Each module completes its data processing in an independent closed loop, reducing the errors and delays caused by data transmission among a plurality of modules.
The data receiving module receives the interactive data sent by the RAID controller through the HFC. For example, when it is detected that I2C_SCL on the I2C bus remains at a high level while I2C_SDA changes from high level to low level, the data receiving module starts to receive data. The data receiving module processes the data to determine whether the data is sent to the UBM controller at the current address and whether it indicates read data or write data. Specifically, read data or write data is determined according to the value of the read/write bit of the transmitted data, and whether to receive the data is determined according to the address bits of the transmitted data. Each of the at least one UBM controller has a unique address identification; a UBM controller is determined to be the target UBM controller when the comparison result shows that its address identification is consistent with the address identification indicated by the address bits.
According to the I2C protocol, the data receiving module reads the data bits on the I2C data line SDA, for example on the rising edge of the clock signal SCL, and parses them into valid data bytes. On the I2C bus the data of each byte is transmitted serially, one bit at a time, starting with the most significant bit (MSB) and ending with the least significant bit (LSB), so the data receiving module needs to assemble the serially received data bits of each byte in order to obtain the correct data value. The clock signal I2C_SCL may employ the system master clock SYS_CLK.
The minimum unit in the I2C data transmission is one data bit, and the data receiving module reads the data bit on the rising edge or the falling edge of the SCL, and stores the byte after receiving 8 data bits, i.e. one byte.
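The bit-to-byte assembly described in the two paragraphs above can be sketched as follows, assuming MSB-first serial order; the function name and interface are illustrative:

```python
def assemble_bytes(bits):
    """Collect serial data bits (most significant bit first, as on the
    I2C bus) into bytes; a byte is stored once 8 data bits have been
    received. Sketch of the behaviour described above."""
    out, shift, count = [], 0, 0
    for b in bits:
        shift = (shift << 1) | (b & 1)  # shift each sampled bit in
        count += 1
        if count == 8:                  # 8 data bits = one byte
            out.append(shift)
            shift, count = 0, 0
    return out

# 0xA5 = 0b10100101 sent MSB first:
assert assemble_bytes([1, 0, 1, 0, 0, 1, 0, 1]) == [0xA5]
```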
The data receiving module transmits the result of the data processing to the command parsing module as interface data by constructing at least one interface. For example, the data receiving module adds a third data interface write_en, which indicates that the received data is a write-data-enable command, and assigns write_en to "1" or "0" by reading a specific position of the received byte; for example, a "1" at that position indicates a read command and a "0" indicates a write command. Adding the first data interface read_en indicates that the received data is read-data-enabled; read_en is likewise assigned "1" or "0" by reading a specific position of the received byte. The data receiving module also adds a fifth data interface write_data, which indicates that the received data is a status signal of the target hard disk or a lighting signal of a hard disk indicator light.
The embodiment of the application also defines a plurality of other interfaces transmitted to the command parsing module, including the data interfaces start_condition, stop_condition, restart_condition, initial_finish, and busy_en.
In one possible implementation, the command parsing module is configured to obtain the data provided by the interface of the data receiving module at intervals of a preset time, parse the interface data to obtain a first command and a second command, and send the commands to the command execution module, which executes the first command on at least one DFC and sends the second command to the hard disk indicator corresponding to at least one hard disk.
In one possible implementation, the data receiving module receives a complete byte and stores it in the buffer, and sets a data-reception-complete flag bit or triggers an interrupt event to notify the command parsing module to read the data.
For example, the command parsing module invokes the interface function provided by the data receiving module, accesses the buffer area storing the received data, and reads the data bytes to parse the command.
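The buffer-plus-flag handshake between the two modules can be sketched as follows. Class and method names are illustrative assumptions, not names from the patent:

```python
class ReceiveBuffer:
    """Sketch of the data receiving module's buffer with a
    reception-complete flag that the command parsing module checks."""
    def __init__(self):
        self.buf = []
        self.data_ready = False

    def store_byte(self, byte):
        """Called by the data receiving module for each complete byte."""
        self.buf.append(byte)
        self.data_ready = True          # flag bit notifies the parser

    def read_all(self):
        """Interface function the command parsing module invokes to
        read the buffered bytes and clear the flag."""
        data, self.buf = self.buf, []
        self.data_ready = False
        return data

rb = ReceiveBuffer()
rb.store_byte(0x07)
assert rb.data_ready
assert rb.read_all() == [0x07] and not rb.data_ready
```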
In one possible implementation, the command parsing module is further configured to store a status signal of at least one hard disk and a status signal of a hard disk indicator corresponding to the hard disk.
Fig. 6 is a schematic structural diagram of a sub-module included in each module of the UBM controller in fig. 5. As shown in fig. 6:
the data receiving module comprises a data receiving sub-module, a first interface sub-module and a first register.
The command analysis module comprises a command receiving sub-module, a second interface sub-module, a hard disk state acquisition sub-module and a command output sub-module.
The command execution module comprises a first command execution sub-module and a second command execution sub-module.
The data receiving sub-module listens to the data transmission on the I2C bus, receives the data sent by the RAID controller over the HFC, combines the data into bytes and stores them in the first register (the buffer mentioned in fig. 6).
Illustratively, the first interface sub-module provides the byte data in the first register to the command parsing module.
The data receiving sub-module ensures that the received data is stored in the correct order. After the data is stored, for example, a corresponding interrupt signal may be triggered to the first interface sub-module to process the received raw data. The processed data is then transmitted to the command parsing sub-module.
The command parsing sub-module calls the interface data of the first interface sub-module and parses the data.
It should be noted that the parsing of the data by the command parsing sub-module means interpreting and analyzing the interface data to determine the meaning of the command and the operation to be executed.
The command parsing sub-module is further configured to store the execution result of the interface data in the second interface sub-module. The RAID controller may periodically query the second interface sub-module to obtain the execution result of the command, which is returned to the RAID controller through the I2C bus. Illustratively, the second interface sub-module includes a plurality of data interfaces: rdchecksum_valid, readdata_valid, and read_data. The target UBM controller returns an execution result state to the RAID controller, which indicates that the execution of the serial bus signal succeeded or failed. If the execution result is success, the RAID controller sends the next serial bus signal. If the execution result is failure, the RAID controller, for example, logs the event or runs an error handling procedure to handle the error.
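The result-reporting interface above can be sketched as follows. The three interface names come from the text; their semantics here (boolean validity flags plus a data value) are assumptions for illustration:

```python
class SecondInterface:
    """Sketch of the second interface sub-module that the RAID
    controller periodically queries for command results."""
    def __init__(self):
        self.rdchecksum_valid = False   # checksum verified (assumed meaning)
        self.readdata_valid = False     # read data ready (assumed meaning)
        self.read_data = 0

    def post_result(self, data, checksum_ok):
        """Called by the command parsing sub-module after execution."""
        self.read_data = data
        self.readdata_valid = True
        self.rdchecksum_valid = checksum_ok

iface = SecondInterface()
iface.post_result(0x3C, checksum_ok=True)
# The RAID controller's periodic query sees a successful result:
assert iface.readdata_valid and iface.rdchecksum_valid and iface.read_data == 0x3C
```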
According to the embodiment of the application, through the indication of execution success and failure, the system can detect and handle abnormal conditions in a timely manner, enhancing the stability and maintainability of the system.
The hard disk state acquisition sub-module acquires, through the DFC, the state information of the hard disk connected to the DFC. Hard disk state information includes, but is not limited to: hard disk presence information DFC_PRSNT, hard disk type information DFC_IFDET, and hard disk reset information DFC_PERST. The hard disk state information is stored in the second register.
The command output sub-module is configured to send the parsed command to the command execution module.
The first command execution sub-module is configured to receive control commands for the DFC; illustratively, the first command execution sub-module sends a DFC_PERST signal to the DFC. Whether to reset the hard disk connected to the DFC is determined according to the state of DFC_PERST.
The second command execution sub-module is configured to send commands controlling the hard disk indicator light to the hard disk indicator light of the corresponding hard disk. For example, the second command execution sub-module controls the brightness and flicker frequency of the hard disk indicator light through corresponding voltage or current information. Specifically, the second command execution sub-module receives the command output by the command output sub-module and determines the hard disk indicator light to be lit and the state of that indicator light.
The hot plug process of a hard disk is described below. The hard disk state acquisition sub-module detects the state signal of the hard disk; when the state signal of the hard disk changes, it stores the new state signal in the second register as the current state signal of the hard disk and instructs the command parsing sub-module to notify the RAID controller of the change. The RAID controller receives the state change signal of the hard disk through the I2C data channel, and thereby performs hot plug management on the hard disk whose state has changed.
Fig. 7 is a schematic diagram of the structure of the first register in the data receiving module. As shown in fig. 7, the first register stores the I2C data received by the data receiving module; the I2C data includes a plurality of fields, including a read/write data enable bit, a read/write data address bit, a command bit, and a data bit to be written, and further includes a start bit, a UBM controller address bit, a check bit, and a stop bit.
The first register includes a plurality of memory locations that form a data block for storing a complete command. The number of memory cells is the same as the number of commands in the UBM command set defined in the UBM specification.
For example, one memory cell of the first register holds a 1-bit value, i.e. one memory cell stores one bit of data, and a plurality of memory cells together store one command. A command is delimited by the start bit and the end bit of the received data, and at least the read/write data enable bit, the read/write data address bit, the command bit, and the data bit to be written are included between the start bit and the end bit.
It should be noted that the embodiment of the present application is described in terms of a data transmission format; the number and meaning of the field bits included in the data transmission format are only schematic and should not be taken as limiting the protection scope of the embodiment of the present application.
According to the command bit of the received data, the data receiving module takes the storage unit in the first register whose identification matches the command bit as the target storage unit, and stores the received data in that target storage unit.
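The command-indexed storage just described can be sketched as follows: one storage unit per command identification, with newly received data overwriting the unit that matches the command bits. The class layout is an illustrative assumption:

```python
class FirstRegister:
    """Sketch of the first register: one storage unit per command in
    the UBM command set; received data overwrites the unit whose
    identification matches the command bits."""
    def __init__(self, num_commands):
        self.units = {i: None for i in range(num_commands)}

    def store(self, command_id, frame):
        if command_id not in self.units:
            raise KeyError("unknown command identification")
        self.units[command_id] = frame   # newer data covers the old command

reg = FirstRegister(num_commands=4)
reg.store(1, b"\x07\x00\x01")
reg.store(1, b"\x07\x00\x02")            # overwrite with the newer command
assert reg.units[1] == b"\x07\x00\x02"
```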
The definition of the first to fifth interface data in the first interface sub-module is as follows: first interface data: indicating read data enable. Second interface data: indicating the read/write data address. Third interface data: indicating write data enable. Fourth interface data: indicating an operation command. Fifth interface data: indicating the data to be written.
When data transmission starts, a start bit is sent; the first interface sub-module detects and stores the start bit signal. After the data storage is completed, the first interface sub-module sets the start_condition signal to the agreed active level, and the change of the level value instructs the command parsing sub-module to start receiving data.
The first interface sub-module then judges the read/write data enable bit. If a read is indicated, the first interface sub-module sets the first data interface read_en to the agreed active level value and instructs the command parsing sub-module to start the read operation. If a write operation is indicated, the first interface sub-module sets the third data interface write_en to the agreed active level value and instructs the command parsing sub-module to start the write operation. The state machine in the command parsing sub-module automatically jumps to the corresponding read or write operation state according to the read_en or write_en signal. Write operation data is provided from the fifth data interface. The command parsing sub-module parses the data according to the UBM specification and outputs a control signal to the command output sub-module to complete the corresponding operation.
Some commands require a continuous write operation; for example, command i has a nested relationship with command j, and the start bit of command j is indicated by the data interface restart_condition of the first interface sub-module.
After the data is received, the first interface sub-module sets the stop_condition data interface to the agreed active level value, instructing the command parsing sub-module to end the command and prepare to process the next one.
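The state machine behaviour inferred from the three paragraphs above can be sketched as follows; the state names and signal dictionary are illustrative, and the transitions are a simplified reading of the text, not the full UBM state machine:

```python
def step(state, signals):
    """One transition of the command parsing sub-module's state machine,
    driven by the first interface sub-module's signals."""
    if signals.get("start_condition"):
        return "receiving"                  # start bit detected
    if state == "receiving" and signals.get("read_en"):
        return "read_op"                    # jump to the read operation state
    if state == "receiving" and signals.get("write_en"):
        return "write_op"                   # jump to the write operation state
    if signals.get("stop_condition"):
        return "idle"                       # command ended, ready for the next
    return state

s = "idle"
for sig in [{"start_condition": 1}, {"write_en": 1}, {"stop_condition": 1}]:
    s = step(s, sig)
assert s == "idle"                          # full write command round trip
```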
The data receiving sub-module may check whether a command has been received completely. For example, if command i is defined as 12 bits of data and the data receiving sub-module receives only 10 bits, then when the received fields of command i are stored in the first register, the 12-bit register unit allocated to command i cannot be filled; the remaining 2 bits of storage units have no new data and still hold the data of the previous command i. The data receiving sub-module may then consider the received fields of command i invalid and require the RAID controller to retransmit command i.
The correctness of the data reception is checked, on the one hand, by the checksum field of command i; on the other hand, the command parsing sub-module also performs an integrity check on command i. This verification process may be implemented using existing technical solutions, which are not described here. After the command parsing sub-module finishes checking command i, if the check fails, the result is returned to the RAID controller through the data interface rdchecksum_valid of the second interface sub-module, and the RAID controller, after receiving the feedback information, resends the data whose reception failed according to the data sending rules.
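A checksum check of this kind can be sketched as follows. The actual UBM checksum algorithm is not specified in this excerpt, so the sum-over-bytes rule below is purely an illustrative stand-in:

```python
def checksum_ok(frame):
    """Illustrative integrity check: the last byte must equal the low
    8 bits of the sum of the preceding bytes (assumed rule, not the
    UBM specification's algorithm)."""
    *payload, chk = frame
    return sum(payload) & 0xFF == chk

good = [0x07, 0x10, 0x03, (0x07 + 0x10 + 0x03) & 0xFF]
assert checksum_ok(good)
# On failure, rdchecksum_valid is reported and the RAID controller retransmits.
assert not checksum_ok([0x07, 0x10, 0x03, 0x00])
```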
In summary, the data interface in the first interface sub-module, after being parsed by the command parsing module, operates the hard disk or the hard disk indicator light. The interface data in the second interface sub-module is fed back to the RAID controller: either the command was not received successfully, in which case the RAID controller is notified to retransmit, or the command has been processed and the next command may be sent.
For example, the RAID controller sends a command instructing a certain hard disk to power down. After the command parsing sub-module completes the command, it feeds back the execution result to the RAID controller; if the command is executed successfully, the RAID controller sends the next command, instructing the hard disk to power up or reset.
In one possible implementation, if the DFC connected to the UBM controller detects that no hard disk is inserted in a certain hard disk slot, the system will release the memory space occupied in the second register for use by other UBM controllers. In this way, usage of the storage space can be optimized.
It should be noted that the interface structure and data format of the command parsing sub-module are already agreed with the data receiving sub-module, and the data exchange mode between them is specified. The rules include information such as the type, location, and format of the data and the timing of reads and writes. The command parsing sub-module therefore parses the data at the data receiving sub-module's interface according to these rules and performs the corresponding operations.
The command parsing sub-module calls the data of the first interface sub-module and parses the interface data after acquiring it.
For example, the command parsing sub-module may create a plurality of processes to obtain the data provided by the first interface sub-module according to a preset rule; the preset rule may be to query the first interface sub-module once every time period.
In one possible implementation, the hard disk state acquisition sub-module includes an interface circuit in which a plurality of pins receive the DFC_PERST, DFC_IFDET, and DFC_PRSNT signals from the DFC. The pins are programmed as input ports of the hard disk state acquisition sub-module to receive the hard disk state signals acquired from the DFC. When the interface circuit receives signals including DFC_PERST, DFC_IFDET, and DFC_PRSNT, the hard disk state acquisition sub-module stores the current hard disk state in the second register, and the RAID controller may determine the state information of the hard disk according to the level (high or low) of each signal.
In the UBM protocol specification, several bit positions in the DFC state and control information indicate the hard disk type in the DFC.
Whether a hard disk is inserted into the slot can be determined by detecting a signal of a first pin of the slot.
In one possible implementation, the signal of the first pin is the DFC_PRSNT signal, used to determine whether a hard disk is inserted in the slot. When the signal of the first pin is high, it indicates that the hard disk has been pulled out; when the signal of the first pin is low, it indicates that a hard disk is inserted in the slot.
In one possible implementation, the signal of the second pin is the DFC_IFDET signal. The type of hard disk may be determined by detecting the signal of the second pin: if the signal of the second pin is low, the type of the inserted hard disk is determined to be SAS or SATA; if the signal of the second pin is high, the type of the hard disk is determined to be NVME.
The first pin and the second pin signals are received through an interface circuit in the hard disk state acquisition sub-module, and the first pin and the second pin signals are stored in a second register.
In one possible implementation, the signal of the third pin is the DFC_PERST signal, which determines the hard disk reset state; the third pin signal may also be stored in the second register.
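The pin conventions stated in the three paragraphs above can be sketched as a decode function. The PRSNT and IFDET interpretations follow the text directly; treating PERST as asserted-low is an assumption, since this excerpt does not state its polarity:

```python
def decode_dfc_pins(prsnt, ifdet, perst):
    """Interpret the three DFC pin levels (1 = high level, 0 = low
    level) per the conventions described above."""
    if prsnt == 1:
        return {"inserted": False}            # high PRSNT: hard disk pulled out
    return {
        "inserted": True,                     # low PRSNT: hard disk in the slot
        "type": "NVME" if ifdet == 1 else "SAS/SATA",
        "in_reset": perst == 0,               # assumption: PERST asserted low
    }

assert decode_dfc_pins(1, 0, 1) == {"inserted": False}
assert decode_dfc_pins(0, 1, 1)["type"] == "NVME"
assert decode_dfc_pins(0, 0, 1)["type"] == "SAS/SATA"
```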
Specifically, the hard disk state acquisition sub-module stores the level of the signal sent by the interface circuit in the second register, which can be implemented by writing logic code. This will be explained in detail later.
If hard disk information has been placed in the second register and the host needs to read this information, the second register address may be agreed in the I2C communication protocol, e.g. 0x1000.
The command execution module receives the parsed data sent by the command parsing sub-module. The first command execution sub-module uses state signals to control the hard disk; the signals used by the second command execution sub-module to control the hard disk indicator light include: LED_DATA, LED_EN, and LED_STATUS.
For example, the first command execution sub-module selects, through a multiplexer, whether to gate the DFC_DATA signal according to the state of the DFC_VALID signal; the output of the command execution sub-module is connected to the hard disk reset signal DFC_PERST provided to the DFC, outputting a valid or invalid hard disk reset signal DFC_PERST.
The embodiment of the application allows the reset signal of the hard disk to be transmitted through the DFC; the UBM controller can centrally manage the reset operations of the hard disks, and the RAID controller can dynamically control hard disk resets, selecting a hard disk in a certain slot connected under the DFC to reset, thereby achieving finer-grained control.
By way of example, the second command execution sub-module maps the signal LED_STATUS, which reflects the status of the LED, to a specific state of the hard disk indicator light, such as fault, active, or local. The command execution sub-module controls the LED circuit according to the specific state of the indicator light so that the hard disk indicator light displays the state indicated by the control signal.
The LED_EN signal is used to enable or disable the hard disk indicator light. When the LED_EN signal is in the enabled state, the command execution module adjusts whether the indicator light is on or off and its state, such as brightness level, color, and flashing frequency, according to the LED_DATA and LED_STATUS signals.
For example, when LED_STATUS indicates that the hard disk has failed, the second command execution sub-module generates a FAULT control signal to light the fault indicator. When the LED_STATUS signal indicates that the hard disk is in the active state, a control signal ACTIVE is generated to light the active indicator light. When the LED_STATUS signal indicates that the hard disk is in the local state, a control signal LOCAL is generated to light the local indicator light. The LED_DATA signal is used to transmit control information such as the brightness or color of the LED light.
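The LED_EN / LED_STATUS mapping above can be sketched as follows; the string encoding of LED_STATUS values is an assumption made for illustration:

```python
def led_control(led_en, led_status):
    """Map LED_STATUS to the indicator control signal when LED_EN is
    asserted; returns None when the indicator is disabled. The status
    encoding is assumed for this sketch."""
    if not led_en:
        return None
    return {"fault": "FAULT", "active": "ACTIVE", "local": "LOCAL"}.get(led_status)

assert led_control(True, "fault") == "FAULT"
assert led_control(True, "active") == "ACTIVE"
assert led_control(False, "local") is None   # LED_EN deasserted: no output
```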
Fig. 8 is a schematic diagram of the second register in the command parsing module. As shown in fig. 8, the DFC_PERST, DFC_IFDET, and DFC_PRSNT signals of each of hard disk 0 to hard disk N are stored in the second register; that is, the second register stores the first pin signal, the second pin signal, and the third pin signal of at least one hard disk connected to at least one DFC connected to the command parsing sub-module.
The second register includes a plurality of memory locations. For example, the base address of the second register is set to 0x1000, and an offset of each memory location relative to the base address is set for determining the location of each memory location in the register space.
Register offset refers to the offset of each memory location relative to the base address in the register map of the device. The base address is the starting address of the device register space and the offset is an offset value relative to the starting address for determining the location of each memory location in the register space.
The hard disk state signal in the second register corresponding to the hard disk is monitored; when the hard disk state signal changes, the latest state of the hard disk is acquired through the interface circuit and stored in the storage space at the corresponding address offset of the second register.
For example, in UBM controller 1, the hard disk state acquisition sub-module may acquire the DFC1_PERST signal of hard disk 1 connected to DFC1, the DFC2_PERST signal of hard disk 2 connected to DFC2, the DFC1_PRSNT signal of hard disk 1 connected to DFC1, the DFC2_PRSNT signal of hard disk 2 connected to DFC2, the DFC1_IFDET signal of hard disk 1 connected to DFC1, and the DFC2_IFDET signal of hard disk 2 connected to DFC2. The state of each hard disk is stored in a memory location of the second register.
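The base-plus-offset register map described above can be sketched as follows. The base address 0x1000 comes from the text; the one-unit-per-disk layout and the default signal levels are assumptions for illustration:

```python
BASE = 0x1000  # base address agreed in the I2C protocol (per the text)

def unit_address(disk_index):
    """Illustrative layout: one memory unit per hard disk, at an offset
    equal to the disk index relative to BASE."""
    return BASE + disk_index

class SecondRegister:
    """Sketch of the second register holding the three pin signals of
    each connected hard disk (default levels are assumed)."""
    def __init__(self, num_disks):
        self.mem = {unit_address(i): {"PERST": 1, "PRSNT": 1, "IFDET": 0}
                    for i in range(num_disks)}

    def update(self, disk_index, **signals):
        self.mem[unit_address(disk_index)].update(signals)

reg = SecondRegister(num_disks=2)
reg.update(1, PRSNT=0, IFDET=1)        # hard disk 1 inserted, NVME type
assert unit_address(1) == 0x1001
assert reg.mem[0x1001]["PRSNT"] == 0
```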
Continuing with the third pin signal DFC_PERST as an example: after the hard disk is inserted into the slot, the DFC may update the DFC_PERST signal state of the hard disk; the DFC_PERST signal of the hard disk in the second register may also be updated by the RAID controller sending a reset command.
For example, when the RAID controller needs to reset hard disk 1, the RAID controller sends a reset command through the I2C signal channel to the address of the UBM controller connected to hard disk 1. The data receiving module in that UBM controller receives the command; determines the start bit, read/write bit, command bit, data bits, checksum bits, and end bit; stores the command in the corresponding storage unit in the first register; and sends the processed data to the corresponding data interfaces. A process in the command parsing module queries the interfaces provided by the data receiving module, executes on the data provided by the interfaces, and returns the execution result to the RAID controller. By way of example, the command parsing module queries the interface data provided by the data receiving module in a periodic polling manner; the embodiment of the application does not limit the manner in which the command parsing module acquires the interface data.
Taking the case where the RAID controller reads a hard disk state as an example, the process by which the UBM controller receives and parses data is described. The RAID controller sends a read-hard-disk-state command to the data receiving module in the target UBM controller through the I2C bus. It should be noted that, in the embodiment of the present application, the data sent by the RAID controller is a specific byte sequence encoded according to the provisions of the UBM protocol. The data receiving module receives the command from the I2C bus and extracts the value of a first data signal in the received data; if a read command is indicated, it sets the value of the read_en interface to "1". By extracting the value of a second data signal, which indicates the command to be executed, the hexadecimal value of the corresponding command is obtained. According to a preset command mapping relationship, for example, a command field of 07h is mapped to 01. The storage unit identified by 01 is searched among the plurality of storage units of the first register; if the match succeeds, the received data is stored in that storage unit, overwriting the command previously stored there. The command indicates reading the hard disk state, and its data field carries the UBM port routing information, which indicates the specific path from the HFC to the DFC (as shown in Table 1) of the hard disk to be read.
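The receive-and-parse flow above can be sketched as follows. The frame layout (address, read/write bit, command field, data field, checksum) and the additive checksum are assumptions for illustration only; the 07h-to-01 command mapping, the read_en interface, and the overwrite-on-match behavior of the first register follow the text.

```python
# Hypothetical byte-level sketch of the data receiving module's parse step.
# Field widths and the additive checksum are illustrative assumptions.

COMMAND_MAP = {0x07: 0x01}   # preset mapping, e.g. "read hard disk state"

def parse_frame(frame):
    """frame: [addr, rw, cmd, data, checksum] as raw bytes (assumed layout)."""
    addr, rw, cmd, data, checksum = frame
    if (addr + rw + cmd + data) & 0xFF != checksum:
        raise ValueError("checksum mismatch")
    internal_id = COMMAND_MAP[cmd]          # 07h -> 01, per the text
    return {
        "read_en": 1 if rw == 1 else 0,     # rw bit: 1 indicates a read command
        "write_en": 0 if rw == 1 else 1,
        "command_id": internal_id,
        "data": data,                        # e.g. UBM port routing information
    }

first_register = {}

def store(parsed):
    # overwrite the storage unit matching the internal command id
    first_register[parsed["command_id"]] = parsed

frame = [0x50, 0x01, 0x07, 0x12, (0x50 + 0x01 + 0x07 + 0x12) & 0xFF]
store(parse_frame(frame))
```

Storing by command id means a newly received command of the same type replaces the previous one, matching the "overwriting the command previously stored there" behavior.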
After the command parsing module obtains the data of the read_en interface provided by the data receiving module, it reads the command and the command's data field from the first register, and acquires the hard disk state from the second register according to the command indicated by the command field and the port routing information indicated by the data field.
One or more hard disk backplanes may be provided on the computing device and connected to the RAID controller. In order to configure them, after the computing device is powered on, the RAID controller may read the information of each hard disk backplane through its UBM controller. The information of a hard disk backplane may include its identifier, which indicates the hardware resources of the backplane; the hardware resources may include the number of interfaces of the backplane, the maximum number of supported hard disks, the supported hard disk types, and the like, and different types of hard disk backplanes correspond to different hardware resources. The identifier of a hard disk backplane may be its backplane ID, or other information (such as the name of the hard disk backplane) that can identify the type or hardware resources of the backplane.
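The backplane identification lookup might look like the following sketch. The backplane IDs and the resource values in the table are invented for the example; only the idea of resolving an identifier to interface count, maximum disk count, and supported disk types comes from the text.

```python
# Illustrative mapping from backplane identifier to hardware resources.
# IDs and values below are invented, not taken from the patent.

BACKPLANE_RESOURCES = {
    0x0A: {"interfaces": 2, "max_disks": 8,  "disk_types": ("SAS", "SATA")},
    0x0B: {"interfaces": 4, "max_disks": 12, "disk_types": ("SAS", "SATA", "NVMe")},
}

def describe_backplane(backplane_id):
    """RAID-controller side: resolve a backplane ID read via the UBM controller."""
    resources = BACKPLANE_RESOURCES.get(backplane_id)
    if resources is None:
        raise KeyError(f"unknown backplane id {backplane_id:#04x}")
    return resources

# After power-on, the RAID controller reads the ID and resolves the resources.
config = describe_backplane(0x0B)
```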
The embodiment of the application provides a data processing method based on a complex programmable logic device (CPLD). The CPLD is arranged on a hard disk backplane of a server, and the hard disk backplane supports the universal backplane management (UBM) protocol. The hard disk backplane comprises an HFC connector for connecting to the RAID controller and a DFC connector for connecting to a hard disk slot, and the CPLD is connected to the HFC connector and the DFC connector respectively. The method comprises the following steps:
Step S310, in a case where a hard disk is inserted into the hard disk slot, judging the interface type of the hard disk according to the signal level transmitted by the DFC;
Step S320, processing interactive data between the RAID controller and the hard disk according to the type of the hard disk interface, wherein the interactive data comprises data indicating the type of the hard disk interface, data indicating the state of the hard disk and read/write operation command data;
Step S330, the processed data is transmitted to a RAID controller or the hard disk to realize the management of the hard disk, wherein the hard disk comprises a hard disk of an SAS/SATA/NVME interface type.
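Steps S310 to S330 can be summarized in a minimal software sketch. The IFDET level encoding (1 = NVMe, 0 = SAS/SATA), the function names, and the routing strings are assumptions for illustration; only the three-step structure (detect type, process by type, forward) comes from the text.

```python
# Minimal sketch of steps S310-S330; signal encoding is assumed.

def detect_interface_type(ifdet_level):
    """S310: judge the hard disk interface type from the DFC signal level."""
    return "NVMe" if ifdet_level == 1 else "SAS/SATA"

def process(interface_type, payload):
    """S320: route interaction data to the matching protocol handler."""
    if interface_type == "NVMe":
        return ("UBM", payload)         # handled by the UBM protocol path
    return ("SAS/SATA", payload)        # handled by the SAS/SATA path

def forward(processed, to_controller):
    """S330: deliver processed data to the RAID controller or the hard disk."""
    route, payload = processed
    dest = "RAID controller" if to_controller else "hard disk"
    return f"{route} -> {dest}: {payload}"

msg = forward(process(detect_interface_type(1), "read status"), to_controller=True)
```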
The embodiment of the application provides a data processing method based on a complex programmable logic device (CPLD), applied to a hard disk backplane. The hard disk backplane comprises an HFC connector for connecting a RAID controller and a DFC connector for connecting a hard disk slot; the HFC and the DFC are connectors conforming to the universal hard disk backplane management standard (UBM). The HFC is connected to the RAID controller, the HFC is connected to at least one DFC through the CPLD, and each DFC is connected to at least one hard disk. The hard disk backplane comprises a plurality of UBM controllers, which are connected to the RAID controller through HFCs, and each UBM controller is connected to hard disks through DFCs. The method comprises:
Responsive to the RAID controller sending a serial bus signal, the target UBM controller receiving the serial bus signal; the target UBM controller is one of a plurality of UBM controllers; the serial bus signal comprises a plurality of fields, wherein the fields at least comprise read/write data enabling bits, read/write data address bits, command bits and data bits to be written;
Analyzing the serial bus signal to obtain a plurality of interface data; the plurality of interface data at least comprises a first data interface indicating read data enabling, a second data interface indicating read/write data address, a third data interface indicating write data enabling, a fourth data interface indicating an operation command, and a fifth data interface indicating data to be written; the target hard disk is at least one of a plurality of hard disks;
reading the value of at least one of the first interface data to the fifth interface data, analyzing the value, and sending a read/write command to the DFC connected to the target hard disk or to the target hard disk indicator lamp.
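The field split described above might be modeled as follows. The input representation (a dict of already-deserialized frame fields) is an assumption, since the actual serial frame format is defined by the UBM specification; the five interface roles follow the text.

```python
# Sketch of the target UBM controller's split into five interface data.
# The input frame representation is assumed for illustration.

from dataclasses import dataclass

@dataclass
class InterfaceData:
    read_en: int      # first interface:  read data enable
    address: int      # second interface: read/write data address
    write_en: int     # third interface:  write data enable
    command: int      # fourth interface: operation command
    wdata: int        # fifth interface:  data to be written

def split_fields(signal):
    """signal: dict of frame fields already deserialized from the serial bus."""
    rw = signal["rw"]
    return InterfaceData(
        read_en=1 if rw == "read" else 0,
        address=signal["address"],
        write_en=1 if rw == "write" else 0,
        command=signal.get("command", 0),
        wdata=signal.get("data", 0),
    )

parsed = split_fields({"rw": "write", "address": 0x21, "command": 0x02, "data": 0xFF})
```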
Wherein the serial bus signals include at least a first serial bus signal, a second serial bus signal, a third serial bus signal, and a fourth serial bus signal. The first serial bus signal indicates a read data command to the target hard disk, the second serial bus signal indicates a write data command to the target hard disk, the third serial bus signal indicates a write data command to the target hard disk indicator, and the fourth serial bus signal indicates a read data command to the target hard disk indicator.
In response to the RAID controller sending a first serial bus signal, the target UBM controller receiving the first serial bus signal, the first serial bus signal including a plurality of fields, the plurality of fields including at least a read data bit and a target hard disk address identification bit; analyzing the first serial bus signal to obtain a plurality of interface data, wherein at least first interface data indicates read data enable and second interface data indicates the UBM port routing of the target hard disk; reading the values of at least the first interface data and the second interface data, analyzing them, and determining a register unit address of the state data of the target hard disk based on the UBM port routing; and reading the state data of the target hard disk stored in the register unit address, and sending the state data to the RAID controller through the serial bus.
Responsive to the RAID controller sending the second serial bus signal, the target UBM controller receiving the second serial bus signal; the second serial bus signal indicates a reset command for the target hard disk; the second serial bus signal includes a plurality of fields including at least write data enable bits, write data address bits, and command bits; analyzing the second serial bus signal to obtain a plurality of interface data, wherein at least third interface data indicates write data enable, the second interface data indicates that the write data address is the UBM port route of the target hard disk, and the fourth interface data indicates an operation command for the target hard disk; the operation command indicates setting the reset signal of the target hard disk to a valid value; reading the values of at least the second, third, and fourth interfaces, and analyzing them to obtain the operation command for the target hard disk; and sending the operation command to the DFC correspondingly connected to the target hard disk, and resetting the target hard disk through the DFC.
Responsive to the RAID controller sending the third serial bus signal, the target UBM controller receiving the third serial bus signal; the third serial bus signal indicates a lighting command for the target hard disk indicator lamp; the third serial bus signal comprises a plurality of fields including at least write data enable bits, write data address bits, and data bits to be written, and the target hard disk indicator lamps correspond to the target hard disks one to one; analyzing the third serial bus signal to obtain a plurality of interface data, wherein at least third interface data indicates write data enable, the second interface data comprises port routing information from which the control register address of the target hard disk indicator lamp is determined, the fourth interface data indicates that the operation command is a lighting command for the target hard disk indicator lamp, and the fifth interface data indicates that the data to be written is a lighting signal for the target hard disk indicator lamp; reading the values of at least the second, third, fourth, and fifth interfaces, and analyzing them to obtain the lighting command for the target hard disk indicator lamp; and sending the lighting command to the target hard disk indicator lamp.
Responsive to the RAID controller sending the fourth serial bus signal, the target UBM controller receiving the fourth serial bus signal; the fourth serial bus signal indicates a read data command for the target hard disk indicator lamp; the fourth serial bus signal comprises a plurality of fields including at least a read data enable bit and a read data address; analyzing the fourth serial bus signal to obtain a plurality of interface data, wherein at least first interface data indicates read data enable and the second interface data includes the read data address, from which the register unit address of the target hard disk indicator lamp state is determined; reading the values of at least the first interface and the second interface, analyzing them, and obtaining the register unit address of the state of the target hard disk indicator lamp based on the UBM port routing; and reading the state data of the target hard disk indicator lamp stored in the register unit address, and sending the state data to the RAID controller through the serial bus.
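The four serial bus signal kinds can be summarized in a hypothetical dispatcher. The signal encoding, register layout, and state values below are invented for the sketch; only the four command semantics (read disk state, reset disk, light indicator, read indicator state) follow the text.

```python
# Hypothetical dispatcher over the four serial bus signal kinds.
# Register layout and command encoding are illustrative assumptions.

disk_state = {0: "ready"}        # second register: per-port disk state
led_state = {0: "off"}           # control register: per-port indicator state

def handle(signal):
    kind, port = signal["kind"], signal["port"]
    if kind == "read_disk":          # first signal: read target disk state
        return disk_state[port]
    if kind == "reset_disk":         # second signal: assert reset via the DFC
        disk_state[port] = "resetting"
        return "ok"
    if kind == "light_led":          # third signal: write indicator register
        led_state[port] = signal["value"]
        return "ok"
    if kind == "read_led":           # fourth signal: read indicator register
        return led_state[port]
    raise ValueError(f"unknown signal kind {kind!r}")

handle({"kind": "light_led", "port": 0, "value": "locate"})
```

Keeping disk state and indicator state in separate register maps mirrors the description, where the hard disk state and the indicator lamp state are read from different register unit addresses.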
It should be noted that, in the embodiment of the present application, only the first serial bus signal to the fourth serial bus signal are described in an exemplary manner, and the RAID controller may also send serial bus signals representing other commands, which are not described herein.
Fig. 9 is a schematic diagram of a UBM structure with multiple RAID controllers and multiple hard disk backplanes according to an embodiment of the present disclosure. As shown in Fig. 9, more than one hard disk backplane, such as hard disk backplane 0 and hard disk backplane 1, may be provided within the chassis of the computing device. There may also be more than one RAID controller, such as RAID controller 0 and RAID controller 1. Hard disk backplane 0 includes a plurality of UBM controllers and a UBM FRU, and hard disk backplane 1 likewise includes a plurality of UBM controllers and a UBM FRU.
RAID controller 0 is connected to hard disk backplane 0, and RAID controller 1 is connected to hard disk backplane 1. RAID controller 0 is connected to HFC0 and HFC1 in hard disk backplane 0, and RAID controller 1 is connected to HFC0 and HFC1 in hard disk backplane 1. Through HFC0 and HFC1 in its backplane, RAID controller 0/1 can send sideband signals, acquire the state information and presence information of at least one hard disk connected to HFC0 and HFC1, and send control commands to the at least one hard disk.
The embodiment of the present application further provides a computing device 1000. As shown in Fig. 10, the computing device 1000 includes: a hard disk backplane comprising a CPLD for emulating a plurality of UBM controllers, the plurality of UBM controllers performing the data processing method described in any of Fig. 2 to Fig. 9;
a RAID controller coupled to the UBM controllers of the computing device 1000, the RAID controller configured to: generate control information of a hard disk and control information of a hard disk indicator lamp, and send them to a target UBM controller; the target UBM controller is configured to parse the control information of the hard disk and the control information of the hard disk indicator lamp to obtain a control command, and send the control command to the DFC connector connected to the target hard disk and to the target hard disk indicator lamp.
It is to be appreciated that the processor in embodiments of the application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or, alternatively, any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., solid state drive (SSD)), or the like.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application.

Claims (10)

1. A data processing method based on a complex programmable logic device (CPLD), characterized in that the CPLD is arranged on a hard disk backplane of a server, and the hard disk backplane supports a universal backplane management (UBM) protocol; the hard disk backplane comprises an HFC connector for connecting to a RAID controller and a DFC connector for connecting to a hard disk slot, and the CPLD is connected to the HFC connector and the DFC connector respectively; the method comprises the following steps:
Under the condition that a hard disk is inserted into the hard disk slot, judging the interface type of the hard disk according to the signal level transmitted by the DFC connector;
Processing interactive data between the RAID controller and the hard disk according to the type of the hard disk interface, wherein the interactive data comprises data indicating the type of the hard disk interface, data indicating the state of the hard disk and read/write operation command data;
And transmitting the processed data to a RAID controller or the hard disk to realize management of the hard disk, wherein the hard disk comprises a hard disk of an SAS/SATA/NVME interface type.
2. The method of claim 1, wherein the CPLD includes a SAS/SATA protocol processing module and a UBM protocol processing module; the processing the interactive data between the RAID controller and the hard disk according to the hard disk interface type comprises the following steps:
When the hard disk interface type is SAS/SATA, an SAS/SATA protocol processing module is used for processing interactive data from the RAID controller and a first hard disk, wherein the first hard disk comprises a hard disk of the SAS/SATA interface type;
When the hard disk interface type is NVME, processing interactive data from the RAID controller and a second hard disk by using a UBM protocol processing module, wherein the second hard disk comprises a hard disk of the NVME interface type; and the UBM protocol processing module processes the interactive data according to the UBM protocol.
3. The method of claim 2, wherein the UBM protocol processing module comprises a plurality of UBM controllers, the RAID controller being coupled to at least one UBM controller via an HFC connector, the UBM controller being coupled to at least one of the DFC connectors;
processing interactive data from the second hard disk using the UBM protocol processing module, comprising:
The target UBM controller receives the interactive data sent by the RAID controller; the target UBM controller is one of a plurality of UBM controllers; the interaction data includes a plurality of fields including one or more of: a read/write data enable bit, a read/write data address bit, a command bit, a data bit to be written;
Analyzing the interaction data using the UBM protocol to obtain a plurality of interface data; the plurality of interface data comprises one or more of: first interface data indicating read data enable; second interface data indicating a read/write data address; third interface data indicating write data enable; fourth interface data indicating an operation command; and fifth interface data indicating data to be written;
Reading the value of at least one interface data from the first interface data to the fifth interface data, analyzing the value of at least one interface data from the first interface data to the fifth interface data, and sending the analyzed command to the DFC connector connected with the second hard disk.
4. The method of claim 3, wherein the plurality of fields in the interaction data further comprise UBM controller address bits, the UBM controller address bits to determine the target UBM controller; the target UBM controller receiving the interaction data sent by the RAID controller, including:
the RAID controller sending the interaction data to the UBM controller over a serial data bus, and determining the target UBM controller based on the UBM controller address bits in the interaction data, comprising:
determining, as the target UBM controller, the one of the at least one UBM controller whose address identification is consistent with the address identification indicated by the UBM controller address bits; wherein each of the at least one UBM controller has a unique address identification.
5. The method of claim 3, wherein the UBM protocol processing module further comprises an FRU that stores initial configuration information for the hard disk backplane, the initial configuration information comprising a mapping of a plurality of UBM controllers and the HFC and DFC connectors.
6. The method according to claim 3, characterized in that the method further comprises: the target UBM controller returning a processing result of the interaction data to the RAID controller, the processing result indicating whether execution of the interaction data succeeded or failed;
based on the successful execution result, the RAID controller sends the next piece of interaction data;
and based on a failed execution result, executing an error handling flow to notify the RAID controller of the processing error.
7. The method according to claim 1, wherein the method further comprises: and transmitting the processed data to a hard disk indicator lamp corresponding to the hard disk so as to realize management of the hard disk indicator lamp.
8. A data processing system based on a complex programmable logic device (CPLD), characterized in that the CPLD is arranged on a hard disk backplane of a server, and the hard disk backplane comprises an HFC connector for connecting a RAID controller and a DFC connector for connecting a hard disk slot; the system comprises:
a protocol processing module, configured to process the interactive data between the RAID controller and the hard disk according to the hard disk interface type, and to transmit the processed data to the RAID controller or the hard disk so as to manage the hard disk, wherein the hard disk comprises a hard disk of a SAS/SATA/NVME interface type.
9. The system of claim 8, wherein the protocol processing module comprises:
the system comprises an SAS/SATA protocol processing module, a RAID controller and a first hard disk, wherein the SAS/SATA protocol processing module is used for processing interactive data of the RAID controller and the first hard disk when the hard disk interface type is SAS/SATA, and the first hard disk comprises a hard disk of the SAS/SATA interface type;
The UBM protocol processing module is used for processing interactive data from the RAID controller and a second hard disk by utilizing the UBM protocol when the hard disk interface type is NVME, wherein the second hard disk comprises a hard disk of the NVME interface type; and the UBM protocol processing module processes the interactive data according to the UBM protocol.
10. A computing device, comprising:
a hard disk backplane comprising thereon a CPLD for emulating a UBM controller, the UBM controller performing the data processing method of any of claims 1-7;
a RAID controller connected to the UBM controller of the computing device, the RAID controller configured to: generating control information of a hard disk and control information of a hard disk indicator lamp, and sending the control information and the control information of the hard disk indicator lamp to a target UBM controller; and the target UBM controller is used for analyzing the control information of the hard disk and the control information of the hard disk indicator lamp to obtain a control command and sending the control command to the DFC connector connected with the target hard disk and the target hard disk indicator lamp.
CN202410751588.8A 2024-06-11 2024-06-11 A data processing method, system and computing device based on complex programmable logic device CPLD Pending CN118689819A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410751588.8A CN118689819A (en) 2024-06-11 2024-06-11 A data processing method, system and computing device based on complex programmable logic device CPLD
PCT/CN2025/082724 WO2025256210A1 (en) 2024-06-11 2025-03-14 Complex programmable logic device (cpld)-based data processing method and system, and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410751588.8A CN118689819A (en) 2024-06-11 2024-06-11 A data processing method, system and computing device based on complex programmable logic device CPLD

Publications (1)

Publication Number Publication Date
CN118689819A true CN118689819A (en) 2024-09-24

Family

ID=92764043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410751588.8A Pending CN118689819A (en) 2024-06-11 2024-06-11 A data processing method, system and computing device based on complex programmable logic device CPLD

Country Status (2)

Country Link
CN (1) CN118689819A (en)
WO (1) WO2025256210A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118860966A (en) * 2024-09-27 2024-10-29 苏州元脑智能科技有限公司 Computer system, signal processing method, device, medium and product
CN118899007A (en) * 2024-09-29 2024-11-05 苏州元脑智能科技有限公司 A hard disk backplane structure and control method thereof, and server
CN119629047A (en) * 2024-11-29 2025-03-14 新华三信息技术有限公司 Device type identification method, device, device and readable storage medium
CN120353734A (en) * 2025-06-20 2025-07-22 苏州元脑智能科技有限公司 Hard disk control circuit, method, device, medium and program product
WO2025256210A1 (en) * 2024-06-11 2025-12-18 超聚变数字技术有限公司 Complex programmable logic device (cpld)-based data processing method and system, and computing device
CN121349942A (en) * 2025-12-19 2026-01-16 苏州元脑智能科技有限公司 Storage device backboard, device control method, device and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11592991B2 (en) * 2017-09-07 2023-02-28 Pure Storage, Inc. Converting raid data between persistent storage types
CN108563549A (en) * 2018-04-09 2018-09-21 郑州云海信息技术有限公司 A kind of interface hard disk state instruction control system and method based on CPLD-FPGA
US11061837B2 (en) * 2018-08-21 2021-07-13 American Megatrends International, Llc UBM implementation inside BMC
CN117093514A (en) * 2023-08-30 2023-11-21 苏州浪潮智能科技有限公司 Identification system, method, device and equipment for uplink board connected to hard disk backplane
CN117806918A (en) * 2023-11-28 2024-04-02 苏州元脑智能科技有限公司 Hard disk lighting device and method
CN118069052A (en) * 2024-02-02 2024-05-24 超聚变数字技术有限公司 Information processing method and related device
CN118689819A (en) * 2024-06-11 2024-09-24 超聚变数字技术有限公司 A data processing method, system and computing device based on complex programmable logic device CPLD

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025256210A1 (en) * 2024-06-11 2025-12-18 超聚变数字技术有限公司 Complex programmable logic device (cpld)-based data processing method and system, and computing device
CN118860966A (en) * 2024-09-27 2024-10-29 苏州元脑智能科技有限公司 Computer system, signal processing method, device, medium and product
CN118899007A (en) * 2024-09-29 2024-11-05 苏州元脑智能科技有限公司 A hard disk backplane structure and control method thereof, and server
CN118899007B (en) * 2024-09-29 2025-02-07 苏州元脑智能科技有限公司 Hard disk backboard structure, control method thereof and server
CN119629047A (en) * 2024-11-29 2025-03-14 新华三信息技术有限公司 Device type identification method, device, device and readable storage medium
CN120353734A (en) * 2025-06-20 2025-07-22 苏州元脑智能科技有限公司 Hard disk control circuit, method, device, medium and program product
CN121349942A (en) * 2025-12-19 2026-01-16 苏州元脑智能科技有限公司 Storage device backboard, device control method, device and device

Also Published As

Publication number Publication date
WO2025256210A1 (en) 2025-12-18

Similar Documents

Publication Publication Date Title
CN118689819A (en) A data processing method, system and computing device based on complex programmable logic device CPLD
CN1124551C (en) Method and system used for hot insertion of processor into data processing system
US7412544B2 (en) Reconfigurable USB I/O device persona
US11809364B2 (en) Method and system for firmware for adaptable baseboard management controller
US6671748B1 (en) Method and apparatus for passing device configuration information to a shared controller
US9367510B2 (en) Backplane controller for handling two SES sidebands using one SMBUS controller and handler controls blinking of LEDs of drives installed on backplane
CN118708519B (en) Server expansion module, server, configuration method, device and medium
US7162554B1 (en) Method and apparatus for configuring a peripheral bus
US10268483B2 (en) Data protocol for managing peripheral devices
US10324888B2 (en) Verifying a communication bus connection to a peripheral device
CN119807106B (en) Expansion card circuit and communication method capable of being inserted into external device
CN114741350A (en) A method, system, device and medium for cascading multiple NVME hard disk backplanes
CN118132458A (en) MMIO address resource allocation method, device, computing device and storage medium
US20150161069A1 (en) Handling two sgpio channels using single sgpio decoder on a backplane controller
CN116009785A (en) Method and computing device for hard disk management
CN118643000A (en) Generating method, sending method and device of configuration information table of server PCIe port
CN117667818B (en) Signal transmission structure, server and signal transmission method
CN115981971A (en) Lighting method and server for server hard disk
CN116185505B (en) Configuration method of hard disk backboard and computing equipment
CN117194299A (en) Hot-plug methods, PCIE devices and management controllers
CN120523764A (en) Method, system and server for locating PCIe device function failure
US7065661B2 (en) Using request and grant signals to read revision information from an adapter board that interfaces a disk drive
TWI851403B (en) Computing system, multi-nodes servers and computer-implemented method
CN116225560B (en) Mirror image file transmission method and computing device
US12001373B2 (en) Dynamic allocation of peripheral component interconnect express bus numbers

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Country or region after: China

Address after: 450000 Henan Province, Zhengzhou City, Free Trade Zone Zhengzhou Area (Zhengdong), Inner Ring North Road of Longhu, No. 99

Applicant after: Super Fusion Digital Technology Co.,Ltd.

Address before: 450000 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu smart Island, Zhengdong New District, Zhengzhou City, Henan Province

Applicant before: xFusion Digital Technologies Co., Ltd.

Country or region before: China