Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Further, the drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The following describes exemplary embodiments of the present disclosure in detail with reference to Figs. 1 to 9.
Fig. 1 is a flowchart of a method for allocating network computing resources in an exemplary embodiment of the present disclosure.
Referring to fig. 1, the method for allocating network computing resources may include:
Step S102: receiving a service requirement sent by a terminal.
Step S104: parsing sub-requirements from the service requirement.
Step S106: dividing network computing power resources into a plurality of computing power resource pools according to the sub-requirements.
Step S108: allocating the sub-requirements to the corresponding computing power resource pools for computation.
According to this embodiment of the disclosure, sub-requirements are parsed from the service requirement, the network computing power resources are divided into computing power resource pools according to the sub-requirements, and the sub-requirements are then allocated to the corresponding pools for computation. This optimizes the manner in which network computing power resources are allocated and improves their utilization rate and reliability.
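As a rough illustration only (not part of the claimed method), the four steps above can be sketched in Python. The function names, pool labels, and data shapes here are assumptions made for the sketch; the disclosure does not specify them:

```python
def allocate_network_computing_resources(service_requirement, classify, pools):
    """Steps S102-S108: parse sub-requirements, map each to a pool, compute."""
    results = {}
    for sub in service_requirement["sub_requirements"]:  # S104: parse
        pool_name = classify(sub)                        # S106: divide into pools
        results[sub] = pools[pool_name](sub)             # S108: allocate and compute
    return results

# Toy usage with hypothetical pools modeled as callables:
pools = {"logic": lambda s: f"logic:{s}", "parallel": lambda s: f"parallel:{s}"}
classify = lambda s: "logic" if s in ("storage", "control") else "parallel"
out = allocate_network_computing_resources(
    {"sub_requirements": ["storage", "image_processing"]}, classify, pools)
```

The `classify` and `pools` arguments stand in for the pool-division rules and resource pools described in the following embodiments.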
Next, each step of the network computing power resource allocation method will be described in detail.
In an exemplary embodiment of the present disclosure, as shown in fig. 2, dividing the network computing power resource into a plurality of computing power resource pools according to the sub-requirements includes:
Step S202: if the sub-requirement is determined to be a storage requirement and/or a control requirement, dividing the network computing power resources to obtain a logic computing resource pool.
In an exemplary embodiment of the present disclosure, as shown in fig. 3, dividing the network computing power resource into a plurality of computing power resource pools according to the sub-requirements further includes:
Step S302: if the sub-requirement is determined to be at least one of an image processing requirement, a computing requirement, a password cracking requirement, a numerical analysis requirement, a data processing requirement, and a financial analysis requirement, dividing the network computing power resources to obtain a parallel computing resource pool.
In an exemplary embodiment of the present disclosure, as shown in fig. 4, dividing the network computing power resource into a plurality of computing power resource pools according to the sub-requirements further includes:
Step S402: if the sub-requirement is determined to be a neural network computing requirement and/or a machine learning computing requirement, dividing the network computing power resources to obtain a neural network computing resource pool.
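Combining steps S202, S302, and S402, the pool-selection rule can be sketched as a simple classifier. The string labels below are illustrative assumptions, not terms defined by the disclosure:

```python
# Pool classification rules from steps S202, S302, and S402.
LOGIC = {"storage", "control"}
PARALLEL = {"image_processing", "computing", "password_cracking",
            "numerical_analysis", "data_processing", "financial_analysis"}
NEURAL = {"neural_network_computing", "machine_learning_computing"}

def pool_for(sub_requirement):
    """Map a sub-requirement label to the computing power resource pool that serves it."""
    if sub_requirement in LOGIC:
        return "logic_computing_pool"
    if sub_requirement in PARALLEL:
        return "parallel_computing_pool"
    if sub_requirement in NEURAL:
        return "neural_network_pool"
    raise ValueError(f"unclassified sub-requirement: {sub_requirement}")
```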
In an exemplary embodiment of the present disclosure, as shown in fig. 5, resolving sub-requirements in the service requirement includes:
Step S502: performing computing power analysis on the service requirement according to the computing power requirement, where an expression of the computing power analysis comprises:
Cbr = α×∑Ai + β×∑Bj + γ×∑Ck + q,
where Cbr represents the computing power requirement; α, β, and γ are preset weights; Ai represents a logic computing requirement; Bj represents a parallel computing requirement; Ck represents a neural network computing requirement; and q represents redundant computing power.
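As a minimal numerical sketch of this expression (the weights and per-requirement values below are arbitrary illustrative numbers, not values from the disclosure):

```python
def computing_power_requirement(A, B, C, alpha, beta, gamma, q):
    """Cbr = alpha * sum(Ai) + beta * sum(Bj) + gamma * sum(Ck) + q."""
    return alpha * sum(A) + beta * sum(B) + gamma * sum(C) + q

# Hypothetical logic (A), parallel (B), and neural network (C) demands:
cbr = computing_power_requirement([1.0, 2.0], [3.0], [4.0, 1.0],
                                  alpha=0.5, beta=0.3, gamma=0.2, q=0.1)
```

Here Cbr = 0.5×3.0 + 0.3×3.0 + 0.2×5.0 + 0.1 = 3.5, showing how the weights trade the three demand classes off against one another, with q reserving redundant computing power.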
In an exemplary embodiment of the present disclosure, the service requirements include at least one of supercomputing service requirements, AI inference training service requirements, target detection service requirements, and voice service requirements.
In an exemplary embodiment of the present disclosure, the computing resources in the computing resource pool include at least one of a CPU, a GPU, and an AI chip.
In an exemplary embodiment of the present disclosure, as shown in fig. 6, the step of allocating the network computing power resource includes:
Step S602: a service is input. Currently, various services place different demands on computing power, for example, a supercomputing service, an AI inference training service, an inference-type service for target detection, and a speech semantic translation service.
Step S604: the computing power network global orchestrator receives the service request.
Step S606: analyzing the computing power requirement of the service requirement through a computing power requirement analysis module:
computing power requirement Cbr = α×∑Ai + β×∑Bj + γ×∑Ck + q,
where Ai represents a logic computing requirement, Bj represents a parallel computing capability requirement, Ck represents a neural network computing requirement, α, β, and γ are preset proportionality coefficients, and q is redundant computing power.
Step S608: sub-modules of the orchestrator allocate computing power resources according to the output of the computing power requirement analysis module; that is, the service's demands on computing power capability are distributed to the respective sub-modules for orchestration.
Step S610: the computing power nodes complete the computation.
In an exemplary embodiment of the present disclosure, as shown in fig. 7, an architecture for allocating network computing power resources includes: a plurality of edge node devices, e.g., a first edge node device 702, a second edge node device 704, a third edge node device 706, and so on. In addition, the network computing power resource allocation architecture further comprises: a plurality of resource pools, such as a first resource pool 708, a second resource pool 710, and a third resource pool 712, and the like.
First, a user service requirement is received.
Second, a computing power requirement analysis module in the computing power network global service orchestrator analyzes the user service requirement.
Third, resources are allocated for the analysis result through the parallel computing orchestration submodule, the logic operation orchestration submodule, and the neural network computation orchestration submodule.
In an exemplary embodiment of the present disclosure, the computation power requirements corresponding to the parallel computation orchestration submodule are allocated to the first edge node device 702 and the second edge node device 704 through the routing nodes, the computation power requirements corresponding to the logic computation orchestration submodule are allocated to the third edge node device 706 through the routing nodes, and the computation power requirements corresponding to the neural network computation orchestration submodule are allocated to the first resource pool 708, the second resource pool 710, the third resource pool 712, and the like through the routing nodes.
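As an illustrative sketch of this routing, assuming each submodule's demand is simply split evenly across its targets (an assumption for the sketch; the disclosure does not specify how demand is divided among targets):

```python
# Hypothetical routing table mirroring Fig. 7: each orchestration submodule's
# computing power requirement is routed to edge node devices or resource
# pools through routing nodes.
ROUTING_TABLE = {
    "parallel_computing": ["edge_node_702", "edge_node_704"],
    "logic_operation": ["edge_node_706"],
    "neural_network": ["resource_pool_708", "resource_pool_710", "resource_pool_712"],
}

def route(submodule, demand):
    """Split a submodule's computing power demand evenly across its routed targets."""
    targets = ROUTING_TABLE[submodule]
    share = demand / len(targets)
    return {target: share for target in targets}
```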
Corresponding to the method embodiment, the disclosure further provides a device for allocating network computing power resources, which can be used to execute the method embodiment.
Fig. 8 is a block diagram of an apparatus for allocating network computing resources in an exemplary embodiment of the present disclosure.
Referring to fig. 8, the apparatus 800 for allocating network computing resources may include:
the receiving module 802 is configured to receive a service requirement sent by a terminal.
The parsing module 804 is configured to parse the sub-requirements in the service requirements.
A partitioning module 806 configured to partition the network computing power resource into a plurality of computing power resource pools according to the sub-requirements.
A calculation module 808 configured to allocate the sub-requirements to corresponding computational resource pools for calculation.
In an exemplary embodiment of the disclosure, the dividing module 806 is further configured to: if the sub-requirement is determined to be a storage requirement and/or a control requirement, divide the network computing power resources to obtain a logic computing resource pool.
In an exemplary embodiment of the disclosure, the dividing module 806 is further configured to: if the sub-requirement is determined to be at least one of an image processing requirement, a computing requirement, a password cracking requirement, a numerical analysis requirement, a data processing requirement, and a financial analysis requirement, divide the network computing power resources to obtain a parallel computing resource pool.
In an exemplary embodiment of the disclosure, the dividing module 806 is further configured to: if the sub-requirement is determined to be a neural network computing requirement and/or a machine learning computing requirement, divide the network computing power resources to obtain a neural network computing resource pool.
In an exemplary embodiment of the disclosure, the parsing module 804 is further configured to: perform computing power analysis on the service requirement according to the computing power requirement, where an expression of the computing power analysis comprises: Cbr = α×∑Ai + β×∑Bj + γ×∑Ck + q, where Cbr represents the computing power requirement; α, β, and γ are preset weights; Ai represents a logic computing requirement; Bj represents a parallel computing requirement; Ck represents a neural network computing requirement; and q represents redundant computing power.
In an exemplary embodiment of the present disclosure, the service requirements include at least one of supercomputing service requirements, AI inference training service requirements, target detection service requirements, and voice service requirements.
In an exemplary embodiment of the present disclosure, the computing resources in the computing resource pool include at least one of a CPU, a GPU, and an AI chip.
Since the functions of the network computing power resource allocation apparatus 800 have been described in detail in the corresponding method embodiments, details are not repeated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 900 according to this embodiment of the invention is described below with reference to fig. 9. The electronic device 900 shown in fig. 9 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general-purpose computing device. Components of the electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one storage unit 920, and a bus 930 that couples various system components including the storage unit 920 and the processing unit 910.
Wherein the storage unit stores program code that is executable by the processing unit 910 to cause the processing unit 910 to perform steps according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of the present specification. For example, the processing unit 910 may perform a method as shown in the embodiments of the present disclosure.
The storage unit 920 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM) 9201 and/or a cache memory unit 9202, and may further include a read-only memory unit (ROM) 9203.
Storage unit 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 930 can be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 900 may also communicate with one or more external devices 940 (e.g., keyboard, pointing device, Bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 900, and/or any device (e.g., router, modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
The program product for implementing the above method according to an embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.