US20130151747A1 - Co-processing acceleration method, apparatus, and system
- Publication number: US20130151747A1
- Authority: US (United States)
- Prior art keywords: processing, card, processed data, request message, data
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/167—Interprocessor communication using a common memory, e.g. mailbox
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3877—Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor
- G06F9/3879—Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor for non-native instruction execution, e.g. executing a command; for Java instruction set
- G06F9/3881—Arrangements for communication of instructions and data
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
Definitions
- FIG. 1 is an architectural diagram of a co-processing system in the prior art.
- FIG. 2 is a flow chart of a co-processing acceleration method according to Embodiment 1 of the present invention.
- FIG. 3 is a flow chart of a co-processing acceleration method according to Embodiment 2 of the present invention.
- FIG. 4 is a schematic diagram of a co-processing task management apparatus according to Embodiment 3 of the present invention.
- FIG. 5 is a schematic diagram of a second data transfer module according to Embodiment 3 of the present invention.
- FIG. 6 is a structural diagram of a computer system according to Embodiment 4 of the present invention.
- FIG. 7 is a schematic diagram of an acceleration management board according to Embodiment 5 of the present invention.
- a co-processor card is placed in an input/output box through a PCIE interface, to help a compute node complete a co-processing task.
- the input/output box is coupled in data connection to the compute node through a PCIE bus exchanger.
- Step 1: Compute node 1 copies data from a hard disk to a memory of compute node 1.
- Step 2: Compute node 1 uses DMA (Direct Memory Access) technology to copy the data from the memory of compute node 1 to a memory of a co-processor card for processing.
- Step 3: Compute node 1 uses DMA to copy the processed data from the memory of the co-processor card back to the memory of compute node 1.
- Step 4: Compute node 1 performs further processing on the data or re-saves the data in the hard disk.
- Embodiment 1 of the present invention provides a co-processing acceleration method, which is used to increase a speed of co-processing in a computer system.
- the method includes:
- S101: Receive at least one co-processing request message sent by a compute node in a computer system, where the co-processing request message carries address information of to-be-processed data.
- the to-be-processed data is data on which processing is requested by the compute node through the co-processing request message, and explanations about to-be-processed data in all embodiments of the present invention are the same as this.
- the co-processor card may aid the compute node in task processing, that is, co-processing.
- when the compute node needs the aid of a co-processor card in task processing, it sends a co-processing request message.
- the co-processing request message may be a data packet including several fields.
- the co-processing request message specifically includes, but is not limited to, the following information:
- At least one compute node exists, and a request compute node identifier is used to identify and distinguish a compute node which initiates a service request.
- each compute node in the computer system may be allocated a unique ID number, and when a certain compute node sends a co-processing request message, an ID number of the compute node is used as a request compute node identifier.
- a request type is used to indicate a co-processing type requested by a compute node.
- Common co-processing types include: a graphics processing type, a floating-point operation type, a network type, and a Hash operation type.
- a field in a co-processing request message may be used to indicate the request type.
- a request type field being graphic indicates the graphics processing type
- a request type field being float indicates the floating-point operation type
- a request type field being net indicates the network type
- a request type field being Hash indicates the Hash operation type.
- one or more types of co-processor card may be configured, and therefore, an allowable request type needs to be determined according to the type of a co-processor card configured in the current computer system.
- only one type of co-processor card, such as a GPU acceleration card, may be configured in a system, and in this case, the request type includes only the graphics processing type;
- multiple types of co-processor cards such as a floating-point operation co-processor card, a Hash operation co-processor card, a network co-processor card and a GPU acceleration card, may be configured in a system at the same time, and in this case, the request type correspondingly includes the floating-point operation type, the Hash operation type, the network type, the graphics processing type and so on, which is not specifically limited in the embodiments of the present invention.
- address information of to-be-processed data may include a source address and a length of the to-be-processed data.
- the source address indicates a starting address of a storage space where data waiting to be processed by a co-processor card (that is, to-be-processed data) is located.
- the source address may be a certain address in a non-volatile storage device of a computer system.
- the non-volatile storage device may be a hard disk or a flash (a flash memory).
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- the length of the to-be-processed data indicates the size of a storage space required by the to-be-processed data.
- a destination address is a final storage address of data which has been completely processed by a co-processor card.
- the destination address may be a certain address in a hard disk of a computer system, for example, a certain address in a hard disk.
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- Request priority is designated by a compute node according to the nature, degree of urgency, or origin of a co-processing task.
- the request priority may be divided into three levels: high, medium, and low.
- the priority may further be divided into more levels, such as extremely high, high, ordinary, normal, low, and extremely low, and may also be priority levels represented by Arabic numerals 1, 2, 3, and so on, which is not specifically limited in this embodiment.
- information such as the request compute node identifier, the request type, the source address, the length of the to-be-processed data, the destination address and the request priority may be added into a co-processing request message in a form of fields separately, and the fields together form one co-processing request message.
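- To make the message format concrete, the following is a minimal C sketch of how such a co-processing request message might be laid out; the struct and field names are illustrative assumptions for this description, not a wire format defined by the embodiments.

```c
#include <stdint.h>

/* Co-processing types named in the description. */
enum request_type {
    REQ_GRAPHIC,   /* graphics processing type */
    REQ_FLOAT,     /* floating-point operation type */
    REQ_NET,       /* network type */
    REQ_HASH       /* Hash operation type */
};

/* Three-level priority example; a lower value sorts first. */
enum request_priority { PRIO_HIGH, PRIO_MEDIUM, PRIO_LOW };

/* One co-processing request message, one field per item listed above. */
struct coproc_request {
    uint32_t              node_id;   /* request compute node identifier */
    enum request_type     type;      /* request type */
    uint64_t              src_addr;  /* starting address of the to-be-processed data */
    uint64_t              data_len;  /* length of the to-be-processed data */
    uint64_t              dst_addr;  /* final storage address of the processed data */
    enum request_priority prio;      /* request priority */
};
```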
- the public buffer card provides a temporary buffer for data transmission between each compute node and each co-processor card in the computer system.
- the to-be-processed data may be obtained from a hard disk of the computer system.
- the address information in the co-processing request message includes: a source address and a length of to-be-processed data.
- the to-be-processed data is obtained according to information of two fields which are the source address and the length of the to-be-processed data and are in the co-processing request message.
- the to-be-processed data specifically refers to original data which is stored in the hard disk of the computer system and waits to be processed by the co-processor card.
- the source address field in the co-processing request message indicates a starting address of the to-be-processed data in the hard disk of the computer system, and therefore, in the hard disk of the computer system, data in a contiguous address space which starts from the source address and has a size being the length of the to-be-processed data is the to-be-processed data.
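- As an illustration of how these two fields locate the data, the following hedged C sketch reads the contiguous region starting at the source address; pread() on an open descriptor for the disk is an illustrative stand-in for whatever hard-disk access path the system actually uses.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdint.h>
#include <unistd.h>

/* Read the to-be-processed data described by (src_addr, data_len)
 * into buf, which is assumed to be at least data_len bytes; fd is an
 * open descriptor for the disk holding the data. */
ssize_t read_to_be_processed(int fd, uint64_t src_addr,
                             uint64_t data_len, void *buf)
{
    /* The data occupies the contiguous address space that starts at
     * the source address and spans the stated length. */
    return pread(fd, buf, (size_t)data_len, (off_t)src_addr);
}
```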
- the to-be-processed data is stored in the public buffer card.
- a copying or migration manner may be adopted for storing the to-be-processed data in the public buffer card.
- a copying or migration operation may be performed in a DMA manner.
- an I/O interface of a storage device where the to-be-processed data is located first sends a DMA request instruction, to make a bus request to a bus logic controller of the computer system.
- the bus logic controller outputs a bus reply, which indicates that the DMA has already responded, and gives the bus control right to a DMA controller.
- after obtaining the bus control right, the DMA controller notifies the I/O interface of the storage device where the to-be-copied data is located to start DMA transmission, and outputs read/write commands to directly control the data transmission.
- the whole data transmission process does not need involvement of the compute node in the computer system, which effectively saves resources in the system.
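- The handshake just described can be summarized with the following schematic C sketch; every function here is a hypothetical placeholder for a platform-specific bus or DMA controller operation, not an API defined by the embodiments.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool bus_request(void);   /* ask the bus logic controller for the bus */
void dma_program(uint64_t src, uint64_t dst, size_t len);
void dma_start(void);     /* issue the read/write commands */
bool dma_done(void);
void bus_release(void);   /* hand the bus control right back */

/* Copy one region without involving any compute node memory or CPU. */
int dma_copy(uint64_t src, uint64_t dst, size_t len)
{
    if (!bus_request())          /* bus reply: the DMA request is granted */
        return -1;
    dma_program(src, dst, len);  /* notify the I/O interface of the transfer */
    dma_start();                 /* the DMA controller drives the transfer */
    while (!dma_done())
        ;                        /* no data passes through compute node memory */
    bus_release();
    return 0;
}
```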
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- the public buffer card is added in the computer system, and as public temporary storage for each compute node and each co-processor card to perform data transmission, it is different from a buffer of a co-processor card, such as a buffer of a GPU acceleration card.
- the public buffer card is a buffer area shared by all co-processor cards in the computer system, and is used as a buffer channel for the hard disk and all co-processor cards of the computer system to transmit data.
- the public buffer card may be any storage medium having a fast accessing capability.
- the public buffer card may be a PCIE public buffer card, and its storage medium may be a Flash SSD (solid state drive), a PCM SSD, a DRAM (dynamic random access memory), or the like.
- the idle co-processor card may be a co-processor card currently having no co-processing task; and may also be a co-processor card which is selected according to a load balancing policy and has a lighter load or is relatively idle.
- a co-processor card currently having a lowest CPU utilization rate may be used as an idle co-processor card.
- the to-be-processed data in the public buffer card is allocated to the idle co-processor card for processing. For example, in an embodiment, if a certain compute node requests a graphics co-processing service, CPU utilization rates of all GPU acceleration cards in the current computer system are obtained through a system function call.
- if a CPU utilization rate of a certain GPU acceleration card is less than 5%, it may be judged that the GPU acceleration card is in an idle state, and then the to-be-processed data is copied or migrated from the public buffer card to a storage device of the GPU acceleration card for processing.
- if a certain compute node requests another type of co-processing service, such as a floating-point operation type, it should be judged whether any floating-point operation co-processor card is idle, which is not described in detail again herein.
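- A hedged sketch of this idle-card selection follows; the card table and the way utilization is refreshed are assumptions, and the 5% threshold simply follows the example above.

```c
#include <stddef.h>

struct coproc_card {
    int    id;
    int    type;          /* co-processing type the card supports */
    double utilization;   /* percent, refreshed via a system function call */
};

/* Return the first card matching the requested type whose utilization
 * is below the idle threshold, or NULL if none qualifies. */
struct coproc_card *find_idle_card(struct coproc_card *cards, size_t n,
                                   int wanted_type)
{
    for (size_t i = 0; i < n; i++) {
        if (cards[i].type == wanted_type && cards[i].utilization < 5.0)
            return &cards[i];
    }
    return NULL;  /* no idle card; the request waits in its queue */
}
```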
- S103 may specifically include the following steps.
- a method for determining the processing order of the co-processing request messages is as follows: co-processing request messages of different request types are placed in different message queues; within a message queue, messages of the same request type are queued in descending order of request priority; messages of the same request priority and the same request type are queued in order of arrival of the requests; and an idle co-processor card matching a request type processes to-be-processed data in the order of the corresponding task queue.
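- The queueing rule above can be sketched as follows, reusing struct coproc_request and the priority enum from the earlier sketch; the linked-list layout and names are illustrative assumptions.

```c
#include <stddef.h>

struct queued_request {
    struct coproc_request  req;  /* message from the earlier sketch */
    unsigned long          seq;  /* arrival counter; lower = earlier */
    struct queued_request *next;
};

/* One message queue per request type (REQ_GRAPHIC .. REQ_HASH). */
static struct queued_request *queues[4];

void enqueue(struct queued_request *node)
{
    struct queued_request **pp = &queues[node->req.type];

    /* Walk past entries that should be served first: strictly higher
     * priority (lower enum value), or equal priority with an earlier
     * arrival, so equal-priority messages keep request order. */
    while (*pp && ((*pp)->req.prio < node->req.prio ||
                   ((*pp)->req.prio == node->req.prio &&
                    (*pp)->seq < node->seq)))
        pp = &(*pp)->next;

    node->next = *pp;
    *pp = node;
}
```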
- in Embodiment 1 of the present invention, through the foregoing technical solution, to-be-processed data on which processing is requested by each compute node in the computer system is allocated, according to the co-processing request message sent by that compute node, to an idle co-processor card in the system for processing.
- the compute node does not need to consume its own resources to perform allocation of the to-be-processed data, which reduces resource overheads of each compute node itself.
- the public buffer card is used as a public data buffer channel between each compute node and each co-processor card of the computer system, and the to-be-processed data does not need to be transferred by the memory of the compute node, which avoids overheads of the to-be-processed data in transmission through the memory of the compute node, breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed of the to-be-processed data.
- An embodiment of the present invention provides a co-processing acceleration method, which is used to increase a speed of co-processing in a computer system. As shown in FIG. 3 , the method includes:
- S201: Receive at least one co-processing request message sent by a compute node in a computer system.
- each co-processing request message carries address information of the to-be-processed data corresponding to that message (that is, the data on which processing is requested by the compute node through the co-processing request message).
- the co-processor card may aid the compute node in task processing, that is, co-processing.
- when the compute node needs the aid of a co-processor card in task processing, it sends a co-processing request message.
- the co-processing request message may be a data packet including several fields.
- the co-processing request message specifically includes, but is not limited to, the following information:
- At least one compute node exists, and a request compute node identifier is used to identify and distinguish a compute node which initiates a service request.
- each compute node in the computer system may be allocated a unique ID number, and when a certain compute node sends a co-processing request message, an ID number of the compute node is used as a request compute node identifier.
- a request type is used to indicate a co-processing type requested by a compute node.
- Common co-processing types include: a graphics processing type, a floating-point operation type, a network type, and a Hash operation type.
- a field in a co-processing request message may be used to indicate the request type.
- a request type field being graphic indicates the graphics processing type
- a request type field being float indicates the floating-point operation type
- a request type field being net indicates the network type
- a request type field being Hash indicates the Hash operation type.
- one or more types of co-processor card may be configured, and therefore, an allowable request type needs to be determined according to the type of a co-processor card configured in the current computer system.
- only one type of co-processor card, such as a GPU acceleration card, may be configured in a system, and in this case, the request type includes only the graphics processing type;
- multiple types of co-processor card such as a floating-point operation co-processor card, a Hash operation co-processor card, a network co-processor card, and a GPU acceleration card, may be configured in a system at the same time, and in this case, the request type correspondingly includes the floating-point operation type, the Hash operation type, the network type, the graphics processing type and so on, which is not specifically limited in the embodiments of the present invention.
- address information of to-be-processed data may include a source address and a length of the to-be-processed data.
- the source address indicates a starting address of a storage space where data waiting to be processed by a co-processor card (that is, to-be-processed data) is located.
- the source address may be a certain address in a non-volatile storage device of a computer system.
- the non-volatile storage device may be a hard disk or a flash (a flash memory).
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- the length of the to-be-processed data indicates the size of a storage space required by the to-be-processed data.
- a destination address is a final storage address of data which has been completely processed by a co-processor card.
- the destination address may be a certain address in a hard disk of a computer system, for example, a certain address in a hard disk.
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- Request priority is designated by a compute node according to the nature, degree of urgency or origin of a co-processing task.
- the request priority may be divided into three levels: high, medium, and low.
- the priority may further be divided into more levels, such as extremely high, high, ordinary, normal, low, and extremely low, and may also be priority levels represented by Arabic numerals 1, 2, 3, and so on, which is not specifically limited in this embodiment.
- information such as the request compute node identifier, the request type, the source address, the length of the to-be-processed data, the destination address and the request priority may be added into a co-processing request message in a form of fields separately, and the fields together form one co-processing request message.
- S202: Apply for a storage space in a public buffer card, so as to buffer the to-be-processed data, where the public buffer card is disposed in the computer system and provides temporary storage for data transmission between each compute node and each co-processor card in the computer system.
- specifically, a storage space whose size corresponds to the length of the to-be-processed data is applied for in the public buffer card, where the storage space is used to buffer the to-be-processed data.
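- A minimal sketch of this application step, assuming a hypothetical buffer_card_alloc() exposed by the public buffer card's driver:

```c
#include <stddef.h>
#include <stdint.h>

void *buffer_card_alloc(size_t len);   /* assumed driver-side allocator */

/* Reserve a buffer on the public buffer card whose size corresponds to
 * the length of the to-be-processed data carried in the request. */
void *reserve_buffer(uint64_t data_len)
{
    return buffer_card_alloc((size_t)data_len);
}
```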
- S203: According to the address information of the to-be-processed data carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in the storage space applied for in the public buffer card.
- the to-be-processed data may be obtained from a hard disk of the computer system.
- the address information in the co-processing request message includes: a source address and a length of to-be-processed data.
- the to-be-processed data is obtained according to information of two fields which are the source address and the length of the to-be-processed data and are in the co-processing request message.
- the to-be-processed data specifically refers to original data which is stored in the hard disk of the computer system and waits to be processed by the co-processor card.
- the source address field in the co-processing request message indicates a starting address of the to-be-processed data in the hard disk of the computer system, and therefore, in the hard disk of the computer system, data in a contiguous address space which starts from the source address and has a size being the length of the to-be-processed data is the to-be-processed data.
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- a copying or migration manner may be adopted for storing the to-be-processed data in the public buffer card.
- the idle co-processor card may be a co-processor card currently having no co-processing task; and may also be a co-processor card which is selected according to a load balancing policy and has a lighter load and is relatively idle.
- a co-processor card currently having a lowest CPU utilization rate may be used as an idle co-processor card.
- the to-be-processed data in the public buffer card is allocated to the idle co-processor card for processing. For example, in an embodiment, if a certain compute node requests a graphics co-processing service, CPU utilization rates of all GPU acceleration cards in the current computer system are obtained through a system function call.
- if a CPU utilization rate of a certain GPU acceleration card is less than 5%, it may be judged that the GPU acceleration card is in an idle state, and then the to-be-processed data is copied or migrated from the public buffer card to a storage device of the GPU acceleration card for processing.
- if a certain compute node requests another type of co-processing service, such as a floating-point operation type, it should be judged whether any floating-point operation co-processor card is idle, which is not described in detail again herein.
- S204 may specifically include the following steps.
- a method for determining the processing order of the co-processing request messages is as follows: co-processing request messages of different request types are placed in different message queues; within a message queue, messages of the same request type are queued in descending order of request priority; messages of the same request priority and the same request type are queued in order of arrival of the requests; and an idle co-processor card matching a request type processes to-be-processed data in the order of the corresponding task queue.
- the co-processing acceleration method provided by Embodiment 2 of the present invention further includes:
- the destination address is the destination address carried in the co-processing request message, and it indicates a final storage address of the data which has been completely processed by the co-processor card.
- the co-processing acceleration method provided by Embodiment 2 of the present invention further includes:
- the service request complete message may be a data packet which includes a field having a specific meaning.
- the specific field included by the packet may be “finish”, “ok” or “yes”, and is used to indicate that a current co-processing task has already been completed.
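- The following hedged sketch builds and checks such a service request complete message; the struct layout is an assumption, while the "finish", "ok", and "yes" markers follow the example field values above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct complete_msg {
    uint32_t node_id;    /* request compute node identifier to reply to */
    char     status[8];  /* specific field, e.g. "finish", "ok", "yes" */
};

struct complete_msg make_complete_msg(uint32_t node_id)
{
    struct complete_msg m = { .node_id = node_id };  /* status zeroed */
    strncpy(m.status, "finish", sizeof m.status - 1);
    return m;
}

/* True if the packet indicates the co-processing task has completed. */
bool is_complete(const struct complete_msg *m)
{
    return strcmp(m->status, "finish") == 0 ||
           strcmp(m->status, "ok") == 0 ||
           strcmp(m->status, "yes") == 0;
}
```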
- in Embodiment 2 of the present invention, through the foregoing technical solution, to-be-processed data on which processing is requested by each compute node in the computer system is allocated, according to the co-processing request message sent by that compute node, to an idle co-processor card in the system for processing.
- the compute node does not need to consume its own resources to perform allocation of the to-be-processed data, which reduces resource overheads of each compute node itself.
- the public buffer card is used as a public data buffer channel between each compute node and each co-processor card of the computer system, and the to-be-processed data does not need to be transferred by the memory of the compute node, which avoids overheads of the to-be-processed data in transmission through the memory of the compute node, breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed of the to-be-processed data.
- the embodiment of the present invention provides a co-processing task management apparatus, configured to manage co-processing tasks in a computer system in a unified manner.
- the co-processing task management apparatus includes:
- a message receiving module 420 is configured to receive at least one co-processing request message sent by a compute node in the computer system, where the co-processing request message carries address information of to-be-processed data.
- if the compute node needs a co-processor card to process the to-be-processed data, the compute node sends a co-processing request message to the message receiving module 420.
- the message receiving module 420 receives the co-processing request message sent by the compute node.
- Content included in the co-processing request message is exactly the same as that of the co-processing request message described in S101 of Embodiment 1 of the present invention, and is not described in detail again in this embodiment.
- the message receiving module 420 is further configured to, after the co-processor card has completely processed the data, send, according to a request compute node identifier in the co-processing request message, a service request complete message to the compute node which initiates the co-processing request.
- the message receiving module 420 sends, according to the request compute node identifier in the co-processing request message, the service request complete message to the compute node which initiates the co-processing request.
- the service request complete message may be a data packet which includes a field having a specific meaning. The specific field included by the packet may be “finish”, “OK” or “yes”, and is used to indicate that a current co-processing task has already been completed.
- a first data transfer module 430 is configured to, according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in a public buffer card.
- the first data transfer module 430 may, according to the address information carried in the co-processing request message, obtain the to-be-processed data from a hard disk of the computer system.
- the address information in the co-processing request message includes: a source address and a length of to-be-processed data.
- the first data transfer module 430 obtains the to-be-processed data according to information of two fields, the source address and the length of the to-be-processed data, in the co-processing request message.
- the to-be-processed data specifically refers to original data which is stored in the hard disk of the computer system and waits to be processed by the co-processor card.
- the source address field in the co-processing request message indicates a starting address of the to-be-processed data in the hard disk of the computer system, and therefore, in the hard disk of the computer system, data in a contiguous address space which starts from the source address and has a size being the length of the to-be-processed data is the to-be-processed data.
- the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- the public buffer card is added in the computer system, and as temporary storage for each compute node and each co-processor card to perform data transmission, it is different from a buffer of a co-processor card, such as a buffer of a GPU acceleration card.
- the public buffer card is a buffer area shared by all co-processor cards in the computer system, and is used as a buffer channel for the hard disk and all co-processor cards of the computer system to transmit data.
- the public buffer card may be any storage medium having fast accessing capability.
- the public buffer card may be a PCIE public buffer card, and its storage medium may be a Flash SSD, a PCM SSD, a DRAM or the like.
- a second data transfer module 440 is configured to allocate the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing.
- the idle co-processor card may be a co-processor card currently having no co-processing task; and may also be a co-processor card which is selected according to a load balancing policy and has a lighter load and is relatively idle.
- a co-processor card currently having a lowest CPU utilization rate may be used as an idle co-processor card.
- the second data transfer module 440 judges whether there is an idle co-processor card matching the request type in the co-processing request message. If there is a matching idle co-processor card, the second data transfer module 440 allocates the to-be-processed data in the public buffer card to that idle co-processor card for processing.
- the second data transfer module 440 obtains, through a system function call, CPU utilization rates of all GPU acceleration cards in a current computer system; and if a CPU utilization rate of a certain GPU acceleration card is less than 5%, may judge that the GPU acceleration card is in an idle state, and then copy or migrate the to-be-processed data from the public buffer card to a storage device of the GPU acceleration card for processing.
- the second data transfer module 440 may further be configured to store data at a destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card.
- the second data transfer module may specifically include:
- An obtaining unit 4401 is configured to obtain the request priority and the request type of each co-processing request message.
- a request order determining unit 4402 is configured to determine processing order of each co-processing request message according to the request priority and request type of each co-processing request message.
- a method for the request order determining unit 4402 to determine the processing order of the co-processing request messages is as follows: co-processing request messages of different request types are placed in different message queues; within a message queue, messages of the same request type are queued in descending order of request priority; messages of the same request priority and the same request type are queued in order of arrival of the requests; and an idle co-processor card matching a request type processes to-be-processed data in the order of the corresponding task queue.
- a data processing unit 4403 is configured to allocate, in sequence and according to the processing order, to-be-processed data from the public buffer card to an idle co-processor card in the computer system for processing, where the to-be-processed data corresponds to each co-processing request message.
- the first data transfer module 430 may adopt a copying or migration manner to store the to-be-processed data in the public buffer card; the second data transfer module 440 may adopt the copying or migration manner to store data at the destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card. Further, the first data transfer module 430 and the second data transfer module 440 may implement copying or migration of data between a hard disk of the compute node, the public buffer card, and the co-processor card in a DMA manner.
- an I/O interface of a storage device where the to-be-processed data is located first sends a DMA request instruction to the first data transfer module 430 ; the first data transfer module 430 , according to the DMA request instruction, makes a bus request to a bus logic controller of the computer system.
- the bus logic controller outputs a bus reply, which indicates that the DMA has already responded, and gives the bus control right to the first data transfer module 430 .
- after obtaining the bus control right, the first data transfer module 430 notifies the I/O interface of the storage device where the to-be-copied data is located to start DMA transmission, and outputs read/write commands to directly control the data transmission.
- the whole data transmission process does not need involvement of the compute node in the computer system, which effectively saves resources in the system.
- the co-processing task management apparatus provided by Embodiment 3 of the present invention further includes:
- a buffer management module 450 configured to, before the first data transfer module 430 stores the to-be-processed data in the public buffer card, apply for a storage space in the public buffer card, where the storage space is used to buffer the to-be-processed data.
- the co-processing task management apparatus manages the co-processing task of each compute node in the computer system in a unified manner through the co-processing request message.
- the compute node does not need to consume its own resources to perform allocation of the to-be-processed data, which reduces resource overheads of each compute node itself.
- the added public buffer card is used as a public data buffer channel between the hard disk and each co-processor card of the computer system, which implements copying or migration of the data, avoids overheads of the to-be-processed data in transmission through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed of the to-be-processed data.
- the buffer management module is used to apply for a space in the public buffer card, which makes management of the space of the public buffer card more convenient.
- the task priority management module ensures that co-processing requests of high priority are processed first and that the co-processor cards are utilized more reasonably, which improves the efficiency of co-processing.
- Embodiment 4 of the present invention provides a computer system, including:
- a hard disk 101, a bus exchanger 102, a public buffer card 103, a co-processing task management apparatus 104, at least one compute node (for example, a compute node 105 in FIG. 6), and at least one co-processor card (for example, a co-processor card 112 in FIG. 6);
- the co-processor card 112, the hard disk 101, and the public buffer card 103 are coupled in data connection to the bus exchanger 102, and the bus exchanger 102 interconnects the co-processor card 112, the hard disk 101, and the public buffer card 103;
- the at least one compute node 105 is configured to send a co-processing request message, the co-processing request message carries address information of to-be-processed data, and the to-be-processed data is data on which processing is requested by the compute node 105 .
- the co-processing task management apparatus 104 is configured to: receive the co-processing request message; according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in the public buffer card 103 , where the to-be-processed data is data on which processing is requested by the co-processing request message; and allocate the to-be-processed data stored in the public buffer card 103 to an idle co-processor card among the at least one co-processor card in the computer system (it is assumed that the co-processor card 112 in FIG. 6 is idle) for processing.
- the computer system further includes a hard disk 101 , and the co-processing task management apparatus 104 , according to the co-processing request message, obtains the to-be-processed data from the hard disk 101 .
- the hard disk 101 may specifically be a magnetic disk type hard disk or a solid state type hard disk (such as a flash SSD and a PCMSSD).
- the co-processing task management apparatus 104 is further configured to, before the to-be-processed data is stored in the public buffer card 103 , apply for a storage space in the public buffer card 103 , where the storage space is used to store the to-be-processed data.
- the co-processing task management apparatus 104 is further configured to erase the to-be-processed data from the public buffer card 103 after the to-be-processed data in the public buffer card 103 is allocated to the co-processor card 112 for processing.
- the co-processing task management apparatus 104 is further configured to store data at a destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card 112 .
- the at least one compute node 105 is configured to obtain the data from the destination address, where the data has been completely processed by the co-processor card 112 .
- the co-processing task management apparatus 104 may adopt a copying or migration manner to store the to-be-processed data in the public buffer card 103 , and may also adopt a copying or migration manner to store the data at the destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card 112 . Further, a copying or migration operation may be implemented in a DMA manner.
- the public buffer card 103 may be a PCIE buffer card, and its storage medium may be a Flash SSD, a PCM SSD, or a DRAM.
- the co-processor card 112 , the hard disk 101 , and the public buffer card 103 may all be directly connected to the bus exchanger 102 through a PCIE bus.
- the co-processor card 112 and the public buffer card 103 are connected to the bus exchanger 102 through an input/output subrack. Specifically, the co-processor card 112 and the public buffer card 103 are inserted into PCIE slots of an input/output box 107 , and the input/output box 107 is connected to the bus exchanger 102 through the PCIE bus.
- PCIE has a higher data transmission rate, and therefore, use of a PCIE bus for data connection may increase the speed at which data is transmitted between the hard disk, the co-processor card, and the public buffer card, and further increase a co-processing speed of the computer system.
- the co-processor card 112 , the hard disk 101 , and the public buffer card 103 may also be connected to the bus exchanger 102 through an AGP bus, which is not specifically limited in the embodiment of the present invention.
- that the computer system provided by Embodiment 4 of the present invention includes one co-processor card 112 and one compute node 105 is only an example, and shall not be construed as a limit on the quantities of compute nodes and co-processor cards of the computer system provided by Embodiment 4 of the present invention.
- the quantities of compute nodes and co-processor cards may be any integer values greater than 0, but in actual applications, on account of cost saving, the quantity of co-processor cards shall not be greater than the quantity of compute nodes in the computer system.
- for example, if a current computer system includes 20 compute nodes, the quantity of co-processor cards may be 1, 5, 10, 15, 20, or the like.
- co-processor card there may be only one type of co-processor card, for example, a GPU acceleration card; and there may also be multiple types, for example, a floating-point operation co-processor card, a Hash operation co-processor card, a network co-processor card, the GPU acceleration card, and so on.
- the more types of co-processor cards the computer system includes, the more types of co-processing tasks the whole system can support, and the more powerful its co-processing function is.
- the co-processing task management apparatus manages co-processing tasks in the computer system in a unified manner, which reduces resource overheads of each compute node.
- the multiple co-processor cards in the computer system may share the public buffer card, which is used as a data buffer channel between the hard disk and the co-processor cards, and the co-processing task management apparatus is used to implement copying or migration of the data, which avoids overheads of the data in transmission through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases the co-processing speed.
- the PCIE bus is used to connect the co-processor card, the hard disk, the public buffer card, and the bus exchanger in the computer system, so as to effectively increase the transmission rate of the data and further increase the co-processing speed.
- Embodiment 5 of the present invention provides an acceleration management board, which is configured to increase a co-processing speed of a computer system, and includes a controller 710 and a PCIE interface unit 720 .
- the controller 710 and the PCIE interface unit 720 are coupled in data connection.
- the controller 710 receives at least one co-processing request message sent by a CPU of a compute node in the computer system, where the co-processing request message carries address information of to-be-processed data; and according to the address information of the to-be-processed data, obtains the to-be-processed data from a hard disk in the computer system; and stores the to-be-processed data in a public buffer unit, where the to-be-processed data is data on which processing is requested by the CPU.
- the controller 710 is further configured to allocate the to-be-processed data stored in the public buffer unit to an idle GPU acceleration card in the computer system for processing.
- a GPU acceleration card 80 is coupled in data connection, through its own first PCIE interface 810, to the PCIE interface unit 720 of the acceleration management board 70.
- the public buffer unit may also be integrated inside the acceleration management board.
- a public buffer unit 730 is connected to the controller 710 through a bus on the acceleration management board 70 .
- the bus on the acceleration board may be a PCIE bus.
- the public buffer unit may also be disposed outside the acceleration management board, and is used as an independent physical entity.
- the public buffer unit may be a PCIE buffer card.
- a PCIE buffer card 90 includes a second PCIE interface 910 , and the PCIE buffer card 90 is connected, through its own second PCIE interface 910 , to the PCIE interface unit 720 of the acceleration management board 70 .
- PCIE has a higher data transmission rate; therefore, in this embodiment, use of a PCIE interface as the interface for data connection between the GPU acceleration card and the controller and between the controller and the public buffer unit is only an example for achieving an optimal technical effect, and shall not be construed as a limit on the embodiment of the present invention.
- an independent controller manages co-processing tasks in the computer system in a unified manner, which reduces resource overheads of each compute node.
- multiple co-processor cards in the computer system may share the public buffer card which is used as a data buffer channel between the hard disk and the co-processor cards, which avoids overheads of the data in transmission through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed.
Description
- This application is a continuation of International Application No. PCT/CN2011/083770, filed on Dec. 9, 2011, which is hereby incorporated by reference in its entirety.
- The present invention relates to the computer field, and in particular, to a co-processing acceleration method, an apparatus, and a system.
- With the development of computer technologies, computers are applied in increasingly wide fields. In addition to common office applications in everyday life, computers are also applied in some very complex fields, such as large-scale scientific computing and massive data processing, which usually have higher requirements on the processing capability of the computers. However, the processing capability of a single computer is limited, and is likely to become a bottleneck of improving system performance in the foregoing large-scale computing scenarios; this problem is effectively solved by the emergence of the cluster system. The so-called cluster system is a high-performance system formed of multiple autonomous computers and relevant resources connected through a high-speed network, in which each autonomous computer is called a compute node. In a cluster, the CPU (central processing unit) of each compute node is designed as a general-purpose computing device, and therefore, in some specific application fields, such as image processing and audio processing, its processing efficiency is usually not high; as a result, many coprocessors have emerged, such as the network coprocessor, the GPU (graphics processing unit), and the compression coprocessor. These coprocessors may aid the compute node in task processing, that is, co-processing. A task where a coprocessor aids the compute node in processing is called a co-processing task. In a scenario of massive computation in a large-scale computer system, how the coprocessor is used to aid the compute node in co-processing has a direct bearing on the work efficiency of the computer system.
- In the prior art, a coprocessor is mostly added into a computer system in the form of a PCIE (Peripheral Component Interconnect Express) co-processor card; a compute node of the computer system controls the coprocessor to process a co-processing task, and meanwhile a memory of the compute node is used as a data transmission channel between the co-processor card and the compute node, so as to transfer to-be-processed data and data which has been completely processed by the co-processor card.
- By adopting such architecture in the prior art, all to-be-processed data has to be transferred through the memory of the compute node, which increases memory overheads; and due to the limits of factors such as memory bandwidth and delay, the co-processing speed is not high.
- Embodiments of the present invention provide a computer system, a co-processing acceleration method, a co-processing task management apparatus, and an acceleration management board, so as to reduce memory overheads of a computer system and increase a co-processing speed of a coprocessor in the computer system.
- An embodiment of the present invention provides a computer system, including: at least one compute node, a bus exchanger, and at least one co-processor card, where the computer system further includes: a public buffer card and a co-processing task management apparatus; the public buffer card provides temporary storage for data transmission between each compute node and each co-processor card in the computer system; the public buffer card and the at least one co-processor card are interconnected through the bus exchanger;
- the compute node is configured to send a co-processing request message; and
- the co-processing task management apparatus is configured to: receive the co-processing request message, where the co-processing request message carries address information of to-be-processed data, and the to-be-processed data is data on which processing is requested by the compute node; according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in the public buffer card; and allocate the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing.
- An embodiment of the present invention provides a co-processing acceleration method, including:
- receiving at least one co-processing request message sent by a compute node in a computer system, where the co-processing request message carries address information of to-be-processed data, and the to-be-processed data is data on which processing is requested by the compute node;
- according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtaining the to-be-processed data, and storing the to-be-processed data in a public buffer card; where the to-be-processed data is data on which processing is requested by the co-processing request message; and
- allocating the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing.
- An embodiment of the present invention provides a co-processing task management apparatus, including:
- a message receiving module, configured to receive at least one co-processing request message sent by a compute node in a computer system, where the co-processing request message carries address information of to-be-processed data, and the to-be-processed data is data on which processing is requested by the compute node;
- a first data transfer module, configured to, according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in a public buffer card; where the to-be-processed data is data on which processing is requested by the co-processing request message; and
- a second data transfer module, configured to allocate the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing.
- An embodiment of the present invention provides an acceleration management board, including: a controller and a PCIE interface unit; where, the controller is coupled in data connection to a bus exchanger of a computer system through the PCIE interface unit; the controller is configured to receive at least one co-processing request message sent by a central processing unit CPU of the computer system, where the co-processing request message carries address information of to-be-processed data, and the to-be-processed data is data on which processing is requested by the CPU; and according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data from a hard disk in the computer system; and store the to-be-processed data in a public buffer unit; and
- the controller is further configured to allocate the to-be-processed data stored in the public buffer unit to an idle GPU acceleration card in the computer system for processing, and the GPU acceleration card is connected, through its own first PCIE interface, to the bus exchanger of the computer system.
- In the embodiments of the present invention, through the foregoing technical solutions, a public buffer card is used as a public data buffer channel between each compute node and each co-processor card of a computer system, and to-be-processed data does not need to be transferred through the memory of the compute node, which avoids overheads of the to-be-processed data in transmission through the memory of the compute node, breaks through the bottleneck of memory delay and bandwidth, and increases the co-processing speed of the to-be-processed data.
- To illustrate technical solutions in embodiments of the present invention or in the prior art more clearly, accompanying drawings used in the description of the embodiments or the prior art are briefly introduced in the following. Evidently, the accompanying drawings in the following description are only some embodiments of the present invention, and persons of ordinary skill in the art may obtain other drawings according to these accompanying drawings without creative efforts.
- FIG. 1 is an architectural diagram of a co-processing system in the prior art;
- FIG. 2 is a flow chart of a co-processing acceleration method according to Embodiment 1 of the present invention;
- FIG. 3 is a flow chart of a co-processing acceleration method according to Embodiment 2 of the present invention;
- FIG. 4 is a schematic diagram of a co-processing task management apparatus according to Embodiment 3 of the present invention;
- FIG. 5 is a schematic diagram of a second data transfer module according to Embodiment 3 of the present invention;
- FIG. 6 is a structural diagram of a computer system according to Embodiment 4 of the present invention; and
- FIG. 7 is a schematic diagram of an acceleration management board according to Embodiment 5 of the present invention.
- Technical solutions in embodiments of the present invention are hereinafter described clearly and completely with reference to accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative efforts fall within the protection scope of the present invention.
- In order to make persons of ordinary skill in the art better understand the technical solutions provided by the embodiments of the present invention, a co-processing system and a co-processing solution in the prior art are introduced.
- As shown in FIG. 1, according to a solution in the prior art, a co-processor card is placed in an input/output box through a PCIE interface, to help a compute node complete a co-processing task. The input/output box is coupled in data connection to the compute node through a PCIE bus exchanger. Step 1: Compute node 1 copies data from a hard disk to a memory of compute node 1. Step 2: Compute node 1 uses the DMA (Direct Memory Access) technology to copy the data from the memory of compute node 1 to a memory of a co-processor card for processing. Step 3: Compute node 1 uses DMA to copy the processed data from the memory of the co-processor card back to the memory of compute node 1. Step 4: Compute node 1 performs further processing on the data or re-saves the data in the hard disk.
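- For illustration only, the four prior-art steps can be sketched in C to show that every byte crosses the compute node's memory twice; the helper names below are invented, since the prior art defines no programming interface:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stubs standing in for hard-disk I/O and DMA transfers;
 * only the order of the copies matters in this sketch. */
static void disk_read(void *dst, size_t src_addr, size_t len)  { (void)dst; (void)src_addr; (void)len; }
static void disk_write(size_t dst_addr, const void *src, size_t len) { (void)dst_addr; (void)src; (void)len; }
static void dma_copy(void *dst, const void *src, size_t len)   { memcpy(dst, src, len); }

/* Prior-art data path: the payload passes through node_mem on both legs. */
void prior_art_coprocess(size_t src_addr, size_t dst_addr, size_t len,
                         void *node_mem, void *card_mem) {
    disk_read(node_mem, src_addr, len);   /* Step 1: hard disk -> node memory   */
    dma_copy(card_mem, node_mem, len);    /* Step 2: node memory -> card memory */
    /* ... the co-processor card processes the data in card_mem ... */
    dma_copy(node_mem, card_mem, len);    /* Step 3: card memory -> node memory */
    disk_write(dst_addr, node_mem, len);  /* Step 4: node memory -> hard disk   */
}
```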
- The technical solutions provided by the embodiments of the present invention may be applied in various massive computation scenarios such as a large-scale computing device of multi-processor architecture, cloud computing, and a CRAN (Cloud Radio Access Network) service. As shown in FIG. 2, Embodiment 1 of the present invention provides a co-processing acceleration method, which is used to increase the speed of co-processing in a computer system. According to FIG. 2, the method includes:
- S101: Receive at least one co-processing request message sent by a compute node in a computer system, where the co-processing request message carries address information of to-be-processed data.
- It should be noted that the to-be-processed data is the data on which processing is requested by the compute node through the co-processing request message; this explanation of the to-be-processed data applies to all embodiments of the present invention.
- Specifically, in the computer system, at least one compute node and at least one co-processor card exist. The co-processor card may aid the compute node in task processing, that is, co-processing. When the compute node needs aid of the co-processor card in task processing, the compute node sends a co-processing request message. In an embodiment, the co-processing request message may be a data packet including several fields.
- In an embodiment, the co-processing request message specifically includes, but is not limited to, the following information:
- 1. Request compute node identifier;
- In a computer system, at least one compute node exists, and a request compute node identifier is used to identify and distinguish a compute node which initiates a service request. Specifically, each compute node in the computer system may be allocated a unique ID number, and when a certain compute node sends a co-processing request message, an ID number of the compute node is used as a request compute node identifier.
- 2. Request type;
- A request type is used to indicate a co-processing type requested by a compute node. Common co-processing types include: a graphics processing type, a floating-point operation type, a network type, and a Hash operation type. Specifically, a field in a co-processing request message may be used to indicate the request type. For example, a request type field being graphic indicates the graphics processing type, a request type field being float indicates the floating-point operation type, a request type field being net indicates the network type, and a request type field being Hash indicates the Hash operation type. It should be noted that, in a computer system, one or more types of co-processor card may be configured, and therefore, an allowable request type needs to be determined according to the type of a co-processor card configured in the current computer system. For example, in an embodiment, only one type of co-processor card such as a GPU acceleration card may be configured in a system, and in this case, the request type includes only the graphics processing type; in another embodiment, multiple types of co-processor cards, such as a floating-point operation co-processor card, a Hash operation co-processor card, a network co-processor card and a GPU acceleration card, may be configured in a system at the same time, and in this case, the request type correspondingly includes the floating-point operation type, the Hash operation type, the network type, the graphics processing type and so on, which is not specifically limited in the embodiments of the present invention.
- 3. Address information of to-be-processed data
- In an embodiment, address information of to-be-processed data may include a source address and a length of the to-be-processed data.
- The source address indicates a starting address of a storage space where data waiting to be processed by a co-processor card (that is, to-be-processed data) is located. In an embodiment, the source address may be a certain address in a non-volatile storage device of a computer system.
- Further, the non-volatile storage device may be a hard disk or a flash (a flash memory). It should be noted that, the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- The length of the to-be-processed data indicates the size of a storage space required by the to-be-processed data.
- 4. Destination address
- A destination address is a final storage address of data which has been completely processed by a co-processor card. In an embodiment, the destination address may be a certain address in a hard disk of a computer system, for example, a certain address in a hard disk. It should be noted that, the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- 5. Request priority
- Request priority is designated by a compute node according to the nature, degree of urgency, or origin of a co-processing task. In an embodiment, the request priority may be divided into three levels: high, medium, and low. Definitely, it can be understood that in another embodiment, the priority may further be divided into more levels, such as extremely high, high, ordinary, normal, low, and extremely low, and may also be represented by Arabic numerals 1, 2, 3, and so on, which is not specifically limited in this embodiment.
- In an embodiment, information such as the request compute node identifier, the request type, the source address, the length of the to-be-processed data, the destination address, and the request priority may be added into a co-processing request message in the form of separate fields, and the fields together form one co-processing request message.
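- As a concrete illustration only, a minimal sketch of how such a message could be encoded follows; the embodiment fixes no wire format, so all field names and widths below are assumptions:

```c
#include <stdint.h>

/* Hypothetical encoding of the co-processing request message described
 * above; the embodiment leaves the exact field widths open. */
enum request_type {
    REQ_GRAPHIC,   /* graphics processing type      */
    REQ_FLOAT,     /* floating-point operation type */
    REQ_NET,       /* network type                  */
    REQ_HASH       /* Hash operation type           */
};

enum request_priority { PRIO_HIGH, PRIO_MEDIUM, PRIO_LOW };

struct coproc_request {
    uint32_t node_id;    /* request compute node identifier            */
    uint8_t  type;       /* one of enum request_type                   */
    uint8_t  priority;   /* one of enum request_priority               */
    uint64_t src_addr;   /* starting address of the to-be-processed data */
    uint64_t length;     /* size of the to-be-processed data           */
    uint64_t dst_addr;   /* final address for the processed result     */
};
```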
- S102: According to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in a public buffer card.
- It should be noted that, the public buffer card provides a temporary buffer for data transmission between each compute node and each co-processor card in the computer system.
- Specifically, in an embodiment, according to the address information carried in the co-processing request message, the to-be-processed data may be obtained from a hard disk of the computer system.
- In an embodiment, the address information in the co-processing request message includes a source address and a length of the to-be-processed data. Specifically, the to-be-processed data is obtained according to two fields in the co-processing request message: the source address and the length of the to-be-processed data. The to-be-processed data specifically refers to original data which is stored in the hard disk of the computer system and waits to be processed by the co-processor card. The source address field in the co-processing request message indicates a starting address of the to-be-processed data in the hard disk of the computer system; therefore, in the hard disk of the computer system, the data in a contiguous address space which starts from the source address and has a size equal to the length of the to-be-processed data is the to-be-processed data. The to-be-processed data is stored in the public buffer card.
- In an embodiment, a copying or migration manner may be adopted for storing the to-be-processed data in the public buffer card.
- Specifically, a copying or migration operation may be performed in a DMA manner. Before data copying or migration is performed, an I/O interface of the storage device where the to-be-processed data is located first sends a DMA request instruction, to make a bus request to a bus logic controller of the computer system. When the compute node in the computer system completes execution of an instruction in the current bus cycle and releases the bus control right, the bus logic controller outputs a bus reply, which indicates that the DMA request has been responded to, and gives the bus control right to a DMA controller. After obtaining the bus control right, the DMA controller notifies the I/O interface of the storage device where the to-be-copied data is located to start DMA transmission, and outputs a read/write command to directly control the data transmission. The whole data transmission process does not need involvement of the compute node in the computer system, which effectively saves resources in the system.
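- The handshake sequence just described can be summarized in the following sketch; the primitives are hypothetical stand-ins, since real DMA programming is device- and chipset-specific:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical bus/DMA primitives mirroring the sequence described above. */
static void io_send_dma_request(void) { /* I/O interface raises its DMA request */ }
static bool bus_granted(void)         { return true; /* bus logic controller replies */ }
static void dma_start_transfer(size_t src, size_t dst, size_t len) { (void)src; (void)dst; (void)len; }

/* Copy `len` bytes from the source storage device to the public buffer card
 * without routing the payload through compute-node memory. */
void dma_copy_to_buffer_card(size_t src, size_t dst, size_t len) {
    io_send_dma_request();             /* 1. request the bus                       */
    while (!bus_granted())             /* 2. wait for the current bus cycle to end */
        ;                              /*    and the control right to be handed over */
    dma_start_transfer(src, dst, len); /* 3. the DMA controller drives the copy;   */
                                       /*    the compute node is not involved      */
}
```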
- It should be noted that, the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- It should be noted that, the public buffer card is added in the computer system as public temporary storage for data transmission between each compute node and each co-processor card; it is different from the buffer of a co-processor card, such as the buffer of a GPU acceleration card. The public buffer card is a buffer area shared by all co-processor cards in the computer system, and is used as a buffer channel through which the hard disk and all co-processor cards of the computer system transmit data. The public buffer card may be any storage medium having a fast access capability. In an embodiment, the public buffer card may be a PCIE public buffer card, and its storage medium may be a Flash SSD (Solid State Drive), a PCM SSD, a DRAM (Dynamic Random Access Memory), or the like.
- S103: Allocate the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing.
- It should be noted that, the idle co-processor card may be a co-processor card currently having no co-processing task; and may also be a co-processor card which is selected according to a load balancing policy and has a lighter load or is relatively idle. For example, a co-processor card currently having a lowest CPU utilization rate may be used as an idle co-processor card.
- Specifically, in an embodiment, according to the request type in a co-processing request message and the utilization rate of each co-processor card matching the request type, it is judged whether there is an idle co-processor card matching the request type in the co-processing request message. If there is a matching idle co-processor card, the to-be-processed data in the public buffer card is allocated to the idle co-processor card for processing. For example, in an embodiment, if a certain compute node requests a graphics co-processing service, the CPU utilization rates of all GPU acceleration cards in the current computer system are obtained through a system function call. If the CPU utilization rate of a certain GPU acceleration card is less than 5%, it may be judged that the GPU acceleration card is in an idle state, and the to-be-processed data is then copied or migrated from the public buffer card to a storage device of the GPU acceleration card for processing. Definitely, it can be understood that, in another embodiment, if a certain compute node requests another type of co-processing service, such as the floating-point operation type, it should be judged whether there is any idle floating-point operation co-processor card, which is not described in detail again herein.
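- A minimal selection loop, assuming a hypothetical card_utilization() query and the 5% threshold from the example above, might look like this:

```c
#include <stddef.h>

#define IDLE_THRESHOLD 5.0   /* percent, matching the 5% example above */

/* Hypothetical utilization query; a real system would ask the card's driver
 * or use a system function call, as described in the text. */
static double card_utilization(int card_id) { (void)card_id; return 3.0; }

/* Return the first card of the requested type whose utilization is below the
 * idle threshold; failing that, fall back to the least-loaded card, which
 * corresponds to the load-balancing policy mentioned above. */
int pick_idle_card(const int *cards, size_t n) {
    int best = -1;
    double best_util = 101.0;            /* above any possible percentage */
    for (size_t i = 0; i < n; i++) {
        double u = card_utilization(cards[i]);
        if (u < IDLE_THRESHOLD)
            return cards[i];             /* idle card found            */
        if (u < best_util) {             /* remember the lightest load */
            best_util = u;
            best = cards[i];
        }
    }
    return best;                         /* -1 if the card list was empty */
}
```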
- Further, in order to sort multiple co-processing requests according to priority, to make a co-processing request of high priority be processed first, and to make the co-processor card be utilized more reasonably, in another embodiment, S103 may specifically include the following steps.
- (1): Obtain request priority and a request type of each co-processing request message from each co-processing request message.
- (2): According to the request priority and request type of each co-processing request message, determine processing order of each co-processing request message.
- Specifically, a method for determining the processing order of each co-processing request message is as follows (see the sketch after this list): co-processing request messages of different request types are placed in different message queues; within the message queue for a given request type, co-processing request messages are ordered by request priority in descending order; co-processing request messages of the same request priority and the same request type are queued in the order in which the requests arrive. An idle co-processor card matching a request type processes the to-be-processed data in the order of the corresponding task queue.
- (3): Allocate, in sequence and according to the processing order, to-be-processed data from the public buffer card to an idle co-processor card in the computer system for processing, where the to-be-processed data corresponds to each co-processing request message.
- It should be noted that, a specific method for allocating the to-be-processed data from the public buffer card to the idle co-processor card for processing has already been illustrated above in detail, which is not described in detail again herein.
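- The queueing rule in step (2) above can be sketched as one FIFO per (request type, priority) pair; the structure and names below are illustrative assumptions, not part of the embodiment:

```c
#include <stdio.h>

#define NUM_TYPES 4    /* graphic, float, net, hash */
#define NUM_PRIOS 3    /* high, medium, low         */
#define QUEUE_CAP 64

/* One FIFO per (request type, priority) pair: messages of the same type and
 * priority keep their arrival order; higher priority always drains first. */
struct msg_queue {
    int ids[QUEUE_CAP];
    int head, tail;
};

static struct msg_queue queues[NUM_TYPES][NUM_PRIOS];

static void enqueue(int type, int prio, int msg_id) {
    struct msg_queue *q = &queues[type][prio];
    q->ids[q->tail++ % QUEUE_CAP] = msg_id;
}

/* Next message an idle card of the given type should serve, or -1 if none. */
static int next_for_type(int type) {
    for (int p = 0; p < NUM_PRIOS; p++) {             /* high before low    */
        struct msg_queue *q = &queues[type][p];
        if (q->head != q->tail)
            return q->ids[q->head++ % QUEUE_CAP];     /* FIFO within a prio */
    }
    return -1;
}

int main(void) {
    enqueue(0, 2, 7);                  /* graphic request, low priority  */
    enqueue(0, 0, 9);                  /* graphic request, high priority */
    printf("%d\n", next_for_type(0));  /* prints 9: high priority first  */
    return 0;
}
```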
- In Embodiment 1 of the present invention, through the foregoing technical solution, according to the co-processing request message sent by each compute node in the computer system, the to-be-processed data on which processing is requested by each compute node is allocated to an idle co-processor card in the system for processing. The compute node does not need to consume its own resources to perform allocation of the to-be-processed data, which reduces resource overheads of each compute node itself. The public buffer card is used as a public data buffer channel between each compute node and each co-processor card of the computer system, and the to-be-processed data does not need to be transferred through the memory of the compute node, which avoids overheads of the to-be-processed data in transmission through the memory of the compute node, breaks through the bottleneck of memory delay and bandwidth, and increases the co-processing speed of the to-be-processed data.
- An embodiment of the present invention provides a co-processing acceleration method, which is used to increase the speed of co-processing in a computer system. As shown in FIG. 3, the method includes:
- S201: Receive at least one co-processing request message sent by a compute node in a computer system.
- In an embodiment, each co-processing request message carries address information of the to-be-processed data corresponding to that message (that is, the to-be-processed data on which processing is requested by the compute node through the co-processing request message).
- Specifically, in the computer system, at least one compute node and at least one co-processor card exist. The co-processor card may aid the compute node in task processing, that is, co-processing. When the compute node needs aid of the co-processor card in task processing, the compute node sends a co-processing request message. In an embodiment, the co-processing request message may be a data packet including several fields.
- In an embodiment, the co-processing request message specifically includes, but is not limited to, the following information:
- 1. Request compute node identifier;
- In a computer system, at least one compute node exists, and a request compute node identifier is used to identify and distinguish a compute node which initiates a service request. Specifically, each compute node in the computer system may be allocated a unique ID number, and when a certain compute node sends a co-processing request message, an ID number of the compute node is used as a request compute node identifier.
- 2. Request type;
- A request type is used to indicate a co-processing type requested by a compute node. Common co-processing types include: a graphics processing type, a floating-point operation type, a network type, and a Hash operation type. Specifically, a field in a co-processing request message may be used to indicate the request type. For example, a request type field being graphic indicates the graphics processing type, a request type field being float indicates the floating-point operation type, a request type field being net indicates the network type, and a request type field being Hash indicates the Hash operation type. It should be noted that, in the computer system, one or more types of co-processor card may be configured, and therefore, an allowable request type needs to be determined according to the type of a co-processor card configured in the current computer system. For example, in an embodiment, only one type of co-processor card such as a GPU acceleration card may be configured in a system, and in this case, the request type includes only the graphics processing type; in another embodiment, multiple types of co-processor card, such as a floating-point operation co-processor card, a Hash operation co-processor card, a network co-processor card, and a GPU acceleration card, may be configured in a system at the same time, and in this case, the request type correspondingly includes the floating-point operation type, the Hash operation type, the network type, the graphics processing type and so on, which is not specifically limited in the embodiments of the present invention.
- 3. Address information of to-be-processed data
- In an embodiment, address information of to-be-processed data may include a source address and a length of the to-be-processed data.
- The source address indicates a starting address of a storage space where data waiting to be processed by a co-processor card (that is, to-be-processed data) is located. In an embodiment, the source address may be a certain address in a non-volatile storage device of a computer system. Further, the non-volatile storage device may be a hard disk or a flash (a flash memory). It should be noted that, the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- The length of the to-be-processed data indicates the size of a storage space required by the to-be-processed data.
- 4. Destination address
- A destination address is a final storage address of data which has been completely processed by a co-processor card. In an embodiment, the destination address may be a certain address in a hard disk of a computer system, for example, a certain address in a hard disk. It should be noted that, the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- 5. Request priority
- Request priority is designated by a compute node according to the nature, degree of urgency, or origin of a co-processing task. In an embodiment, the request priority may be divided into three levels: high, medium, and low. Definitely, it can be understood that in another embodiment, the priority may further be divided into more levels, such as extremely high, high, ordinary, normal, low, and extremely low, and may also be represented by Arabic numerals 1, 2, 3, and so on, which is not specifically limited in this embodiment.
- In an embodiment, information such as the request compute node identifier, the request type, the source address, the length of the to-be-processed data, the destination address, and the request priority may be added into a co-processing request message in the form of separate fields, and the fields together form one co-processing request message.
- Step S202: Apply for a storage space in a public buffer card, so as to buffer to-be-processed data, where the public buffer card is disposed in the computer system, and provides temporary storage for data transmission between each compute node and each co-processor card in the computer system.
- Specifically, according to the field carrying the length of the to-be-processed data in the address information of the co-processing request message, a storage space of a size corresponding to the length of the to-be-processed data is applied for in the public buffer card, where the storage space is used to buffer the to-be-processed data.
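- A minimal sketch of this allocation step follows, under the assumption that the public buffer card's space is handed out by a simple offset reservation; the embodiment does not prescribe any particular allocator:

```c
#include <stdint.h>

#define BUFFER_CARD_SIZE (4ULL << 30)   /* assume a 4 GiB public buffer card */

static uint64_t next_free;              /* next unreserved offset on the card */

/* Reserve `length` bytes on the public buffer card for one request's
 * to-be-processed data; returns the offset, or -1 if space is short. */
static int64_t buffer_card_alloc(uint64_t length) {
    if (next_free + length > BUFFER_CARD_SIZE)
        return -1;                      /* not enough room right now   */
    uint64_t off = next_free;
    next_free += length;                /* simple bump-style reservation */
    return (int64_t)off;
}
```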
- S203: According to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in the storage space which is applied for in the public buffer card.
- Specifically, in an embodiment, according to the address information carried in the co-processing request message, the to-be-processed data may be obtained from a hard disk of the computer system.
- In an embodiment, the address information in the co-processing request message includes a source address and a length of the to-be-processed data. Specifically, the to-be-processed data is obtained according to two fields in the co-processing request message: the source address and the length of the to-be-processed data. The to-be-processed data specifically refers to original data which is stored in the hard disk of the computer system and waits to be processed by the co-processor card. The source address field in the co-processing request message indicates a starting address of the to-be-processed data in the hard disk of the computer system; therefore, in the hard disk of the computer system, the data in a contiguous address space which starts from the source address and has a size equal to the length of the to-be-processed data is the to-be-processed data. It should be noted that, the hard disk may specifically include a magnetic disk type hard disk and a solid state type hard disk (such as a flash SSD and a PCMSSD).
- In an embodiment, a copying or migration manner may be adopted for storing the to-be-processed data in the public buffer card.
- S204: Allocate the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing.
- It should be noted that, the idle co-processor card may be a co-processor card currently having no co-processing task; and may also be a co-processor card which is selected according to a load balancing policy and has a lighter load and is relatively idle. For example, a co-processor card currently having a lowest CPU utilization rate may be used as an idle co-processor card.
- Specifically, in an embodiment, according to the request type in a co-processing request message and the utilization rate of each co-processor card matching the request type, it is judged whether there is an idle co-processor card matching the request type in the co-processing request message. If there is a matching idle co-processor card, the to-be-processed data in the public buffer card is allocated to the idle co-processor card for processing. For example, in an embodiment, if a certain compute node requests a graphics co-processing service, the CPU utilization rates of all GPU acceleration cards in the current computer system are obtained through a system function call. If the CPU utilization rate of a certain GPU acceleration card is less than 5%, it may be judged that the GPU acceleration card is in an idle state, and the to-be-processed data is then copied or migrated from the public buffer card to a storage device of the GPU acceleration card for processing. Definitely, it can be understood that, in another embodiment, if a certain compute node requests another type of co-processing service, such as the floating-point operation type, it should be judged whether there is any idle floating-point operation co-processor card, which is not described in detail again herein.
- Further, in order to sort multiple co-processing requests according to priority, to make a co-processing request of high priority be processed first, and to make the co-processor card be utilized more reasonably, in another embodiment, S204 may specifically include the following steps.
- (1): Obtain request priority and a request type of each co-processing request message from each co-processing request message.
- (2): According to the request priority and request type of each co-processing request message, determine processing order of each co-processing request message.
- Specifically, a method for determining the processing order of each co-processing request message is as follows: co-processing request messages of different request types are placed in different message queues; within the message queue for a given request type, co-processing request messages are ordered by request priority in descending order; co-processing request messages of the same request priority and the same request type are queued in the order in which the requests arrive. An idle co-processor card matching a request type processes the to-be-processed data in the order of the corresponding task queue.
- (3): Allocate, in sequence and according to the processing order, to-be-processed data from the public buffer card to an idle co-processor card in the computer system for processing, where the to-be-processed data corresponds to each co-processing request message.
- Further, after the to-be-processed data is allocated from the public buffer card to the idle co-processor card in the computer system for processing, the co-processing acceleration method provided by Embodiment 2 of the present invention further includes:
- S205: Erase the to-be-processed data from the public buffer card.
- S206: Store data at a destination address designated by the co-processing request message, where the data has been completely processed by the idle co-processor card.
- It should be noted that, the destination address is the destination address carried in the co-processing request message, and it indicates a final storage address of the data which has been completely processed by the co-processor card.
- Further, after the data which has been completely processed by the idle co-processor card is stored at the destination address designated by the co-processing request message, the co-processing acceleration method provided by Embodiment 2 of the present invention further includes:
- S207: According to the request compute node identifier in the co-processing request message, send a service request complete message to the compute node which initiates the co-processing request.
- In an embodiment, the service request complete message may be a data packet which includes a field having a specific meaning. The specific field included by the packet may be “finish”, “ok” or “yes”, and is used to indicate that a current co-processing task has already been completed.
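- For illustration (the field names below are assumed, not specified by the embodiment), the completion notification could be as small as the following packet:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical completion packet returned to the requesting compute node. */
struct coproc_complete {
    uint32_t node_id;     /* taken from the request's compute node identifier */
    uint64_t dst_addr;    /* where the processed data was stored              */
    char     status[8];   /* e.g. "finish", "ok" or "yes", as described above */
};

/* Fill in a completion message for the node that issued the request. */
static void make_complete_msg(struct coproc_complete *m,
                              uint32_t node_id, uint64_t dst_addr) {
    m->node_id  = node_id;
    m->dst_addr = dst_addr;
    strncpy(m->status, "finish", sizeof m->status);
}
```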
- In Embodiment 2 of the present invention, through the foregoing technical solution, according to the co-processing request message sent by each compute node in the computer system, the to-be-processed data on which processing is requested by each compute node is allocated to an idle co-processor card in the system for processing. The compute node does not need to consume its own resources to perform allocation of the to-be-processed data, which reduces resource overheads of each compute node itself. The public buffer card is used as a public data buffer channel between each compute node and each co-processor card of the computer system, and the to-be-processed data does not need to be transferred through the memory of the compute node, which avoids overheads of the to-be-processed data in transmission through the memory of the compute node, breaks through the bottleneck of memory delay and bandwidth, and increases the co-processing speed of the to-be-processed data.
- The embodiment of the present invention provides a co-processing task management apparatus, configured to manage co-processing tasks in a computer system in a unified manner. As shown in FIG. 4, the co-processing task management apparatus includes:
- A message receiving module 420 is configured to receive at least one co-processing request message sent by a compute node in the computer system, where the co-processing request message carries address information of to-be-processed data.
- Specifically, in the computer system, if the compute node needs a co-processor card to process the to-be-processed data, the compute node sends a co-processing request message to the message receiving module 420. The message receiving module 420 receives the co-processing request message sent by the compute node. The content included in the co-processing request message is exactly the same as the content of the co-processing request message described in S101 of Embodiment 1 of the present invention, and is not described in detail again in this embodiment.
- In another embodiment, the message receiving module 420 is further configured to, after the co-processor card has completely processed the data, send, according to a request compute node identifier in the co-processing request message, a service request complete message to the compute node which initiates the co-processing request.
- Specifically, after the co-processor card has completely processed the data, the message receiving module 420 sends, according to the request compute node identifier in the co-processing request message, the service request complete message to the compute node which initiates the co-processing request. In an embodiment, the service request complete message may be a data packet which includes a field having a specific meaning. The specific field included by the packet may be “finish”, “OK” or “yes”, and is used to indicate that a current co-processing task has already been completed.
- A first data transfer module 430 is configured to, according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in a public buffer card.
- Specifically, in an embodiment, the first data transfer module 430 may, according to the address information carried in the co-processing request message, obtain the to-be-processed data from a hard disk of the computer system. In an embodiment, the address information in the co-processing request message includes a source address and a length of the to-be-processed data. Specifically, the first data transfer module 430 obtains the to-be-processed data according to two fields in the co-processing request message: the source address and the length of the to-be-processed data. The to-be-processed data specifically refers to original data which is stored in the hard disk of the computer system and waits to be processed by the co-processor card. The source address field in the co-processing request message indicates a starting address of the to-be-processed data in the hard disk of the computer system; therefore, in the hard disk of the computer system, the data in a contiguous address space which starts from the source address and has a size equal to the length of the to-be-processed data is the to-be-processed data.
- It should be noted that, the public buffer card is added in the computer system, and as temporary storage for each compute node and each co-processor card to perform data transmission, it is different from a buffer of a co-processor card, such as a buffer of a GPU acceleration card. The public buffer card is a buffer area shared by all co-processor cards in the computer system, and is used as a buffer channel for the hard disk and all co-processor cards of the computer system to transmit data. The public buffer card may be any storage medium having fast accessing capability. In an embodiment, the public buffer card may be a PCIE public buffer card, and its storage medium may be a Flash SSD, a PCM SSD, a DRAM or the like.
- A second
data transfer module 440 is configured to allocate the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing. - It should be noted that, the idle co-processor card may be a co-processor card currently having no co-processing task; and may also be a co-processor card which is selected according to a load balancing policy and has a lighter load and is relatively idle. For example, a co-processor card currently having a lowest CPU utilization rate may be used as an idle co-processor card.
- Specifically, in an embodiment, according to a request type in a co-processing request message and a utilization rate of each co-processor card matching the request type, the second
data transfer module 440 judges whether there is an idle co-processor card matching the request type in the co-processing request message. If there is a matching idle co-processor card, the seconddata transfer module 440 allocates the to-be-processed data in the public buffer card to the idle processor for processing. For example, in an embodiment, if a certain compute node requests for a graphics co-processing service, the seconddata transfer module 440 obtains, through a system function call, CPU utilization rates of all GPU acceleration cards in a current computer system; and if a CPU utilization rate of a certain GPU acceleration card is less than 5%, may judge that the GPU acceleration card is in an idle state, and then copy or migrate the to-be-processed data from the public buffer card to a storage device of the GPU acceleration card for processing. Definitely, it can be understood that in another embodiment, if a certain compute node requests for another type of co-processing service, such as a floating-point operation type, it should be judged whether there is any floating-point operation co-processor card being idle, which is not described in detail again herein. - Further, in another embodiment, the second
data transfer module 440 may further be configured to store data at a destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card. - In an embodiment, as shown in
FIG. 5, when there are multiple co-processing request messages, in order to sort the multiple co-processing requests according to priority, to make a co-processing request of high priority be processed first, and to make the co-processor card be utilized more reasonably, the second data transfer module may specifically include:
- An obtaining unit 4401 is configured to obtain the request priority and the request type of each co-processing request message from each co-processing request message.
- A request order determining unit 4402 is configured to determine the processing order of each co-processing request message according to the request priority and request type of each co-processing request message.
- In an embodiment, a method for the request order determining unit 4402 to determine the processing order of each co-processing request message is as follows: co-processing request messages of different request types are placed in different message queues; within the message queue for a given request type, co-processing request messages are ordered by request priority in descending order; co-processing request messages of the same request priority and the same request type are queued in the order in which the requests arrive. An idle co-processor card matching a request type processes the to-be-processed data in the order of the corresponding task queue.
- A data processing unit 4403 is configured to allocate, in sequence and according to the processing order, the to-be-processed data from the public buffer card to an idle co-processor card in the computer system for processing, where the to-be-processed data corresponds to each co-processing request message.
- In an embodiment, the first
data transfer module 430 may adopt a copying or migration manner to store the to-be-processed data in the public buffer card; the second data transfer module 440 may adopt a copying or migration manner to store, at the destination address designated by the co-processing request message, the data which has been completely processed by the co-processor card. Further, the first data transfer module 430 and the second data transfer module 440 may implement copying or migration of data between the hard disk of the compute node, the public buffer card, and the co-processor card in a DMA manner. Specifically, taking the first data transfer module 430 as an example, before data copying or migration is performed, an I/O interface of the storage device where the to-be-processed data is located first sends a DMA request instruction to the first data transfer module 430; the first data transfer module 430, according to the DMA request instruction, makes a bus request to a bus logic controller of the computer system. When the compute node in the computer system completes execution of an instruction in the current bus cycle and releases the bus control right, the bus logic controller outputs a bus reply, which indicates that the DMA request has been responded to, and gives the bus control right to the first data transfer module 430. After obtaining the bus control right, the first data transfer module 430 notifies the I/O interface of the storage device where the to-be-copied data is located to start DMA transmission, and outputs a read/write command to directly control the data transmission. The whole data transmission process does not need involvement of the compute node in the computer system, which effectively saves resources in the system. - For specific work of the second data transfer module 440, reference may be made to S103 in Embodiment 1 of the present invention. - Further, in order to facilitate management of a storage space of the public buffer card, the co-processing task management apparatus provided by
Embodiment 3 of the present invention further includes: - a
buffer management module 450, configured to, before the first data transfer module 430 stores the to-be-processed data in the public buffer card, apply for a storage space in the public buffer card, where the storage space is used to buffer the to-be-processed data. - In
Embodiment 3 of the present invention, through the foregoing technical solution, the co-processing task management apparatus manages the co-processing task of each compute node in the computer system in a unified manner through the co-processing request message. The compute node does not need to consume its own resources to perform allocation of the to-be-processed data, which reduces resource overheads of each compute node itself. Meanwhile, the added public buffer card is used as a public data buffer channel between the hard disk and each co-processor card of the computer system, which implements copying or migration of the data, avoids overheads of the to-be-processed data in transmission through the memory of the compute node, and thereby breaks through the bottleneck of memory delay and bandwidth and increases the co-processing speed of the to-be-processed data. Further, before the data is copied to the public buffer card, the buffer management module is used to apply for a space in the public buffer card, which makes management of the space of the public buffer card more convenient. Further, the request order determining unit makes a co-processing request of high priority be processed first and makes the co-processor card be utilized more reasonably, which improves the efficiency of co-processing. - As shown in
FIG. 6, Embodiment 4 of the present invention provides a computer system, including:
- a hard disk 101, a bus exchanger 102, a public buffer card 103, a co-processing task management apparatus 104, at least one compute node (for example, a compute node 105 in FIG. 6), and at least one co-processor card (for example, a co-processor card 112 in FIG. 6); where the co-processor card 112, the hard disk 101, and the public buffer card 103 are coupled in data connection to the bus exchanger 102, and the bus exchanger 102 makes the co-processor card 112, the hard disk 101, and the public buffer card 103 be interconnected; the at least one compute node 105 is configured to send a co-processing request message, where the co-processing request message carries address information of to-be-processed data, and the to-be-processed data is data on which processing is requested by the compute node 105. - The co-processing
task management apparatus 104 is configured to: receive the co-processing request message; according to the address information which is of the to-be-processed data and carried in the co-processing request message, obtain the to-be-processed data, and store the to-be-processed data in the public buffer card 103, where the to-be-processed data is data on which processing is requested by the co-processing request message; and allocate the to-be-processed data stored in the public buffer card 103 to an idle co-processor card among the at least one co-processor card in the computer system (it is assumed that the co-processor card 112 in FIG. 6 is idle) for processing. - In an embodiment, the computer system further includes a
hard disk 101, and the co-processing task management apparatus 104, according to the co-processing request message, obtains the to-be-processed data from the hard disk 101. It should be noted that, the hard disk 101 may specifically be a magnetic disk type hard disk or a solid state type hard disk (such as a flash SSD and a PCMSSD). - Further, in order to facilitate management of a storage space of a buffer card, in an embodiment, the co-processing
task management apparatus 104 is further configured to, before the to-be-processed data is stored in the public buffer card 103, apply for a storage space in the public buffer card 103, where the storage space is used to store the to-be-processed data. In another embodiment, the co-processing task management apparatus 104 is further configured to erase the to-be-processed data from the public buffer card 103 after the to-be-processed data in the public buffer card 103 is allocated to the co-processor card 112 for processing. - In another embodiment, the co-processing
task management apparatus 104 is further configured to store data at a destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card 112. Correspondingly, the at least one compute node 105 is configured to obtain the data from the destination address, where the data has been completely processed by the co-processor card 112. - In an embodiment, the co-processing
task management apparatus 104 may adopt a copying or migration manner to store the to-be-processed data in the public buffer card 103, and may also adopt a copying or migration manner to store the data at the destination address designated by the co-processing request message, where the data has been completely processed by the co-processor card 112. Further, a copying or migration operation may be implemented in a DMA manner. - In an embodiment, the
public buffer card 103 may be a PCIE buffer card, and its storage medium may be a Flash SSD, a PCM SSD, or a DRAM. - In an embodiment, the
co-processor card 112, the hard disk 101, and the public buffer card 103 may all be directly connected to the bus exchanger 102 through a PCIE bus. - In another embodiment, as shown in
FIG. 6, the co-processor card 112 and the public buffer card 103 are connected to the bus exchanger 102 through an input/output subrack. Specifically, the co-processor card 112 and the public buffer card 103 are inserted into PCIE slots of an input/output box 107, and the input/output box 107 is connected to the bus exchanger 102 through the PCIE bus.
- Definitely, it can be understood that in another embodiment in actual applications, the
co-processor card 112, the hard disk 101, and the public buffer card 103 may also be connected to the bus exchanger 102 through an AGP bus, which is not specifically limited in the embodiment of the present invention. - It should be noted that the fact that the computer system provided by Embodiment 4 of the present invention includes one co-processor card 112 and one compute node 105 is only an example, and therefore shall not be construed as a limit to the quantities of compute nodes and co-processor cards of the computer system provided by Embodiment 4 of the present invention. It can be understood that, in an embodiment, the quantities of compute nodes and co-processor cards may be any integer values greater than 0; in actual applications, however, on account of cost saving, the quantity of co-processor cards shall not be greater than the quantity of compute nodes in the computer system. For example, if a current co-processing apparatus includes 20 compute nodes, the quantity of co-processor cards may be 1, 5, 10, 15, 20, or the like.
- In
Embodiment 4 of the present invention, through the foregoing technical solution, the co-processing task management apparatus manages co-processing tasks in the computer system in a unified manner, which reduces resource overheads of each compute node. Meanwhile, the multiple co-processor cards in the computer system may share the public buffer card, which is used as a data buffer channel between the hard disk and the co-processor cards, and the co-processing task management apparatus is used to implement copying or migration of the data, which avoids overheads of the data in transmission through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases the co-processing speed. Further, the PCIE bus is used to connect the co-processor card, the hard disk, the public buffer card, and the bus exchanger in the computer system, so as to effectively increase the transmission rate of the data, and further increases the co-processing speed. - As shown in
FIG. 7, Embodiment 5 of the present invention provides an acceleration management board, which is configured to increase a co-processing speed of a computer system, and includes a controller 710 and a PCIE interface unit 720. The controller 710 and the PCIE interface unit 720 are coupled in data connection. The controller 710 receives at least one co-processing request message sent by a CPU of a compute node in the computer system, where the co-processing request message carries address information of to-be-processed data; and according to the address information of the to-be-processed data, obtains the to-be-processed data from a hard disk in the computer system; and stores the to-be-processed data in a public buffer unit, where the to-be-processed data is data on which processing is requested by the CPU. - The controller 710 is further configured to allocate the to-be-processed data stored in the public buffer unit to an idle GPU acceleration card in the computer system for processing. Specifically, as shown in FIG. 7, a GPU acceleration card 80 is coupled in data connection, through its own first PCIE interface 810, to the PCIE interface unit 720 of the acceleration management board 70. - In an embodiment, the public buffer unit may also be integrated inside the acceleration management board. As shown in
FIG. 7, a public buffer unit 730 is connected to the controller 710 through a bus on the acceleration management board 70. Specifically, the bus on the acceleration management board may be a PCIE bus. - In another embodiment, the public buffer unit may also be disposed outside the acceleration management board, and is used as an independent physical entity. Further, the public buffer unit may be a PCIE buffer card. Specifically, as shown in
FIG. 7, a PCIE buffer card 90 includes a second PCIE interface 910, and the PCIE buffer card 90 is connected, through its own second PCIE interface 910, to the PCIE interface unit 720 of the acceleration management board 70.
- In the embodiment of the present invention, through the foregoing technical solution, an independent controller manages co-processing tasks in the computer system in a unified manner, which reduces resource overheads of each compute node. Meanwhile, multiple co-processor cards in the computer system may share the public buffer card which is used as a data buffer channel between the hard disk and the co-processor cards, which avoids overheads of the data in transmission through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed.
- What are described above are merely several embodiments of the present invention. Persons skilled in the prior art can make various modifications or variations according to the disclosure of the application document, without departing from the spirit and principle of the present invention.
Claims (29)
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2011/083770 WO2013082809A1 (en) | 2011-12-09 | 2011-12-09 | Acceleration method, device and system for co-processing |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2011/083770 Continuation WO2013082809A1 (en) | 2011-12-09 | 2011-12-09 | Acceleration method, device and system for co-processing |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20130151747A1 true US20130151747A1 (en) | 2013-06-13 |
| US8478926B1 US8478926B1 (en) | 2013-07-02 |
Family
ID=47577491
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/622,422 Active US8478926B1 (en) | 2011-12-09 | 2012-09-19 | Co-processing acceleration method, apparatus, and system |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US8478926B1 (en) |
| EP (1) | EP2657836A4 (en) |
| CN (1) | CN102906726B (en) |
| WO (1) | WO2013082809A1 (en) |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104410586B (en) * | 2014-12-11 | 2018-08-07 | 福建星网锐捷网络有限公司 | Message processing method and device under a kind of VSU environment |
| CN105353978B (en) * | 2015-10-22 | 2017-07-14 | 湖南国科微电子股份有限公司 | A kind of data of PCIE SSD arrays read method, system and reading/writing method |
| CN105426122B (en) * | 2015-11-02 | 2019-09-03 | 深圳忆联信息系统有限公司 | A kind of method of data processing, electronic equipment and storage device |
| CN107544845B (en) * | 2017-06-26 | 2020-08-11 | 新华三大数据技术有限公司 | GPU resource scheduling method and device |
| CN111833232B (en) * | 2019-04-18 | 2024-07-02 | 杭州海康威视数字技术股份有限公司 | Image processing apparatus |
| CN110399215A (en) * | 2019-06-25 | 2019-11-01 | 苏州浪潮智能科技有限公司 | A coprocessor, an electronic device and a data processing method |
| CN113032298B (en) * | 2019-12-24 | 2023-09-29 | 中科寒武纪科技股份有限公司 | Computing device, integrated circuit device, board card and order preserving method for order preserving |
| CN113032299B (en) * | 2019-12-24 | 2023-09-26 | 中科寒武纪科技股份有限公司 | Bus system, integrated circuit device, board card and order preserving method for processing request |
| CN113051957B (en) * | 2019-12-26 | 2024-10-01 | 浙江宇视科技有限公司 | Data analysis method and system |
| CN114691218B (en) * | 2020-12-31 | 2025-01-03 | Oppo广东移动通信有限公司 | Coprocessor chip, electronic device and startup method |
| CN114237898B (en) * | 2021-12-20 | 2025-09-16 | 平安证券股份有限公司 | Data processing method, system, terminal equipment and storage medium |
| CN119065599B (en) * | 2022-06-25 | 2025-04-04 | 华为技术有限公司 | Data processing method, device and system |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6484224B1 (en) * | 1999-11-29 | 2002-11-19 | Cisco Technology Inc. | Multi-interface symmetric multiprocessor |
| JP2002207708A (en) * | 2001-01-12 | 2002-07-26 | Mitsubishi Electric Corp | Arithmetic unit |
| US7082502B2 (en) * | 2001-05-15 | 2006-07-25 | Cloudshield Technologies, Inc. | Apparatus and method for interfacing with a high speed bi-directional network using a shared memory to store packet data |
| US8296764B2 (en) * | 2003-08-14 | 2012-10-23 | Nvidia Corporation | Internal synchronization control for adaptive integrated circuitry |
| CN100407690C (en) * | 2004-01-09 | 2008-07-30 | 华为技术有限公司 | Method and system for communication between CPU and co-processing unit |
| US7395410B2 (en) * | 2004-07-06 | 2008-07-01 | Matsushita Electric Industrial Co., Ltd. | Processor system with an improved instruction decode control unit that controls data transfer between processor and coprocessor |
| US7240182B2 (en) * | 2004-09-16 | 2007-07-03 | International Business Machines Corporation | System and method for providing a persistent function server |
| US8370448B2 (en) * | 2004-12-28 | 2013-02-05 | Sap Ag | API for worker node retrieval of session request |
| US7797670B2 (en) * | 2006-04-14 | 2010-09-14 | Apple Inc. | Mirrored file system |
| US9170864B2 (en) * | 2009-01-29 | 2015-10-27 | International Business Machines Corporation | Data processing in a hybrid computing environment |
2011
- 2011-12-09 EP EP11860708.4A patent/EP2657836A4/en not_active Ceased
- 2011-12-09 CN CN201180005166.7A patent/CN102906726B/en active Active
- 2011-12-09 WO PCT/CN2011/083770 patent/WO2013082809A1/en not_active Ceased

2012
- 2012-09-19 US US13/622,422 patent/US8478926B1/en active Active
Cited By (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3054387A4 (en) * | 2013-11-07 | 2016-08-10 | Huawei Tech Co Ltd | METHOD FOR COMPRESSING DATA, AND STORAGE SYSTEM |
| EP3349130A1 (en) * | 2013-11-07 | 2018-07-18 | Huawei Technologies Co., Ltd. | Data compression method and storage system |
| US10055134B2 (en) | 2013-11-07 | 2018-08-21 | Huawei Technologies Co., Ltd. | Data compression method and storage system |
| US20150199214A1 (en) * | 2014-01-13 | 2015-07-16 | Electronics And Telecommunications Research Institute | System for distributed processing of stream data and method thereof |
| WO2015138312A1 (en) * | 2014-03-11 | 2015-09-17 | Cavium, Inc. | Method and apparatus for transfer of wide command and data between a processor and coprocessor |
| US10467176B2 (en) | 2015-02-25 | 2019-11-05 | Hitachi, Ltd. | Information processing apparatus |
| US10261847B2 (en) * | 2016-04-08 | 2019-04-16 | Bitfusion.io, Inc. | System and method for coordinating use of multiple coprocessors |
| US20190213062A1 (en) * | 2016-04-08 | 2019-07-11 | Bitfusion.io, Inc. | System and Method for Coordinating Use of Multiple Coprocessors |
| US11860737B2 (en) | 2016-04-08 | 2024-01-02 | Vmware, Inc. | System and method for coordinating use of multiple coprocessors |
| US10970133B2 (en) * | 2016-04-20 | 2021-04-06 | International Business Machines Corporation | System and method for hardware acceleration for operator parallelization with streams |
| JP2018113075A (en) * | 2018-04-19 | 2018-07-19 | 株式会社日立製作所 | Information processing device |
| US11281609B2 (en) * | 2019-03-12 | 2022-03-22 | Preferred Networks, Inc. | Arithmetic processor and control method for arithmetic processor |
| CN112272122A (en) * | 2020-10-14 | 2021-01-26 | 北京中科网威信息技术有限公司 | FPGA accelerator card detection method and device and readable storage medium |
| CN114020439A (en) * | 2021-11-17 | 2022-02-08 | 山东乾云启创信息科技股份有限公司 | Interrupt processing method and device and computer equipment |
| EP4428700A4 (en) * | 2021-12-10 | 2025-01-08 | Huawei Technologies Co., Ltd. | Service processing method and apparatus |
| CN115801582A (en) * | 2022-09-27 | 2023-03-14 | 海光信息技术股份有限公司 | System link bandwidth improving method, related device and computer equipment |
| US20250086124A1 (en) * | 2023-09-07 | 2025-03-13 | Samsung Electronics Co., Ltd. | Systems and methods for moving data between a storage device and a processing element |
| CN119990337A (en) * | 2025-04-15 | 2025-05-13 | 之江实验室 | Model reasoning acceleration method, system, computer device and readable storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2657836A1 (en) | 2013-10-30 |
| US8478926B1 (en) | 2013-07-02 |
| CN102906726B (en) | 2015-11-25 |
| EP2657836A4 (en) | 2014-02-19 |
| CN102906726A (en) | 2013-01-30 |
| WO2013082809A1 (en) | 2013-06-13 |
Similar Documents
| Publication | Title |
|---|---|
| US8478926B1 (en) | Co-processing acceleration method, apparatus, and system |
| EP3754498B1 (en) | Architecture for offload of linked work assignments |
| CN107690622B (en) | Method, device and system for implementing hardware accelerated processing |
| EP3748510A1 (en) | Network interface for data transport in heterogeneous computing environments |
| CN101784989B (en) | Method and system for allocating network adapter resources among logical partitions |
| CN111309649B (en) | A data transmission and task processing method, device and device |
| US9176795B2 (en) | Graphics processing dispatch from user mode |
| CN109074281B (en) | Method and device for distributing graphics processor tasks |
| US9417924B2 (en) | Scheduling in job execution |
| KR20170030578A (en) | Technologies for proxy-based multi-threaded message passing communication |
| CN106325996A (en) | GPU resource distribution method and system |
| US9697047B2 (en) | Cooperation of hoarding memory allocators in a multi-process system |
| WO2024037239A1 (en) | Accelerator scheduling method and related device |
| US20130247065A1 (en) | Apparatus and method for executing multi-operating systems |
| US20250165286A1 (en) | Data processing method and apparatus |
| CN114662162A (en) | Multi-algorithm core high-performance SR-IOV encryption and decryption system and method for realizing dynamic allocation of VF |
| US10284501B2 (en) | Technologies for multi-core wireless network data transmission |
| CN117806802A (en) | Task scheduling method based on containerized distributed system |
| US10423424B2 (en) | Replicated stateless copy engine |
| CN118642857A (en) | Task distribution method, device, system, equipment, medium and product |
| CN118233530A (en) | Data transmission method, device, electronic device, storage medium and program product |
| CN121411958A (en) | A method for CXL memory allocation and scheduling in a Linux environment |
| CN120821639A (en) | Method, device and equipment for indicating virtual machine migration status |
| CN115344192A (en) | Data processing method and device and electronic equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: HUAWEI TECHNOLOGIES CO., LTD., CHINA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, XIAOFENG;FANG, FAN;QIN, LING;REEL/FRAME:029074/0304; Effective date: 20120910 |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | CC | Certificate of correction | |
| | FPAY | Fee payment | Year of fee payment: 4 |
| | CC | Certificate of correction | |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 12 |