Disclosure of Invention
The embodiment of the invention aims to provide a computing power processing network system and a computing power processing method, so as to solve the problem that a network in the prior art cannot realize computing power resource allocation.
In order to solve the above problem, an embodiment of the present invention provides a computing power processing network system, including:
a first processing layer, a second processing layer and a third processing layer;
the first processing layer is used for acquiring a service request of a service and sending the service request to the second processing layer;
the second processing layer is used for obtaining a computing power configuration strategy at least according to the service request and sending the computing power configuration strategy to the third processing layer;
the third processing layer is used for selecting a corresponding network path at least according to the computing power configuration strategy and dispatching the service to a corresponding computing power network element node for processing;
wherein the third processing layer is further configured to:
acquiring network resource information;
and selecting a corresponding network path at least according to the computing power configuration strategy and the network resource information, and scheduling the service to a corresponding computing power network element node for processing.
Wherein the computing power processing network system further comprises: a fourth processing layer;
the fourth processing layer is used for acquiring computing power resource state information of the computing power network element node and sending the computing power resource state information to the second processing layer;
and the second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource state information.
Wherein the second processing layer is further configured to:
carrying out abstract description and representation on the computing power resource state information to generate a computing power capability template;
and generating a computing power service contract at least according to the computing power capability template and the service request.
Wherein the second processing layer is further configured to:
and sending the computing power capability template and/or the computing power service contract to a corresponding computing power network element node.
Wherein the second processing layer is further configured to:
and performing performance monitoring on the computing power resource state information of the computing power network element nodes, and sending at least one of computing power resource performance information, computing power charge management information and computing power resource fault information to the corresponding computing power network element nodes.
Wherein the computing power resource state information comprises at least one of:
a service ID;
information of the central processing unit CPU;
the number of service links;
information of the memory;
information of the graphics processing unit GPU;
information of the hard disk.
Wherein the fourth processing layer sends the computing power resource state information to the second processing layer, including:
and the second processing layer sends a computing power measurement request message to the fourth processing layer and receives a computing power state information response message fed back by the fourth processing layer, wherein the computing power state information response message carries the computing power resource state information of the computing power network element nodes.
Wherein the fourth processing layer sends the computing power resource state information to the second processing layer, including:
and the fourth processing layer reports the computing power resource state information of the computing power network element node to the second processing layer periodically or aperiodically.
Wherein the computing power measurement request message includes at least one of:
a service ID;
information of the central processing unit CPU;
the number of service links;
information of the memory;
information of the graphics processing unit GPU;
information of the hard disk.
And the second processing layer carries the computing power measurement request message in telemetry information of operation, administration and maintenance (OAM).
The embodiment of the invention also provides a computing power processing method, which is applied to a network system for computing power processing and comprises the following steps:
acquiring a service request of a service;
obtaining a computing power configuration strategy at least by mapping the service request;
and selecting a corresponding network path at least according to the computing power configuration strategy, and scheduling the service to a corresponding computing power network element node for processing.
Wherein the method further comprises:
acquiring network resource information;
and selecting a corresponding network path at least according to the computing power configuration strategy and the network resource information, and scheduling the service to a corresponding computing power network element node for processing.
Wherein the method further comprises:
acquiring computing power resource state information of a computing power network element node;
and obtaining the computing power configuration strategy at least according to the service request and the computing power resource state information.
Wherein the method further comprises:
carrying out abstract description and representation on the computing power resource state information to generate a computing power capability template;
and generating a computing power service contract at least according to the computing power capability template and the service request.
Wherein the method further comprises:
and sending the computing power capability template and/or the computing power service contract to a corresponding computing power network element node.
Wherein the method further comprises:
and performing performance monitoring on the computing power resource state information of the computing power network element nodes, and sending at least one of computing power resource performance information, computing power charge management information and computing power resource fault information to the corresponding computing power network element nodes.
Wherein the computing power resource state information comprises at least one of:
a service ID;
information of the central processing unit CPU;
the number of service links;
information of the memory;
information of the graphics processing unit GPU;
information of the hard disk.
The embodiment of the invention also provides a network system for computing power processing, which comprises a memory, a processor and a program stored on the memory and capable of running on the processor, wherein the processor realizes the computing power processing method when executing the program.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the computational power processing method described above.
The technical scheme of the invention at least has the following beneficial effects:
the computing power processing network system and the computing power processing method provided by the embodiments of the invention interconnect dynamically distributed computing resources on the basis of ubiquitous network connection; through unified and collaborative scheduling of multidimensional resources such as network, storage and computing power, massive services can call computing resources in different places in real time as required, so that global optimization of connection and computing power in the network is realized and a consistent user experience is provided.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides a computing power processing network system, including:
a first processing layer (also referred to as a computing power service layer), a second processing layer (also referred to as a computing power platform layer) and a third processing layer (also referred to as a computing power routing layer);
the first processing layer is used for acquiring a service request of a service and sending the service request to the second processing layer;
the second processing layer is used for obtaining a computing power configuration strategy at least according to the service request and sending the computing power configuration strategy to the third processing layer; for example, the service request includes parameters such as a service ID, a service type, a service level and a latency, which are mapped to a corresponding computing power configuration strategy: CPU/GPU resource configuration requirements, storage configuration requirements, and the like.
And the third processing layer is used for selecting a corresponding network path at least according to the computing power configuration strategy and dispatching the service to a corresponding computing power network element node for processing.
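The request-to-policy mapping performed by the second processing layer can be sketched as follows. This is a minimal illustration only: the field names (service_id, latency_ms, and so on) and the threshold rules are hypothetical assumptions, since the embodiment does not prescribe a concrete mapping.

```python
# Hypothetical sketch of mapping a service request to a computing power
# configuration strategy. Field names and thresholds are illustrative.

def map_request_to_policy(request: dict) -> dict:
    """Derive CPU/GPU and storage configuration requirements from the
    service-level parameters carried in the service request."""
    latency_ms = request.get("latency_ms", 100)
    service_type = request.get("service_type", "generic")

    # Tighter latency bounds map to more CPU cores and faster storage.
    cpu_cores = 8 if latency_ms <= 10 else 4 if latency_ms <= 50 else 2
    # GPU resources are requested only for compute-heavy service types.
    gpu_count = 1 if service_type in ("video_render", "ai_inference") else 0
    storage = "ssd" if latency_ms <= 50 else "hdd"

    return {
        "service_id": request.get("service_id"),
        "cpu_cores": cpu_cores,
        "gpu_count": gpu_count,
        "storage": storage,
    }

policy = map_request_to_policy(
    {"service_id": "svc-1", "service_type": "ai_inference", "latency_ms": 8}
)
print(policy)
```

In this sketch the resulting policy dictionary is what the second processing layer would pass down to the third processing layer for path selection.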
Wherein the third processing layer is further configured to:
acquiring network resource information;
and selecting a corresponding network path at least according to the computing power configuration strategy and the network resource information, and scheduling the service to a corresponding computing power network element node for processing.
It should be noted that the third processing layer may periodically or dynamically acquire the network resource information from the network resource layer in fig. 1, or the network resource layer actively reports the network resource information, so that the third processing layer can select a corresponding network path according to the computational power configuration policy and the current network resource information, and schedule the service to a corresponding computational power network element node for processing. The current network resource information may be understood as network resource information newly acquired by the third processing layer or network resource information newly reported by the network resource layer.
Optionally, the network path selected by the third processing layer is a current optimal network path, and the computational power network element node scheduled by the third processing layer is the current optimal computational power network element node, which is not specifically limited herein.
Here, a computing power network element node refers to a network device with computing power; further, computing power network element nodes may include computing power routing nodes (network devices located in the computing power routing layer and responsible for the notification and transmission of computing power resource information in the network, which may be router devices with computing power awareness capability) and computing power nodes (devices that only provide computing power and process computing tasks in the network, such as server devices of a data center).
In the embodiment of the present invention, the computing power network element node is a network node disposed in the network resource layer as shown in fig. 1. The network resource layer is used for providing network infrastructure for information transmission, and comprises an access network, a metropolitan area network and a backbone network.
The network system for computing power processing provided in the above embodiments of the present invention may also be referred to as a system oriented to computing network convergence, a computing power sensing network system, a computing power network system, or the like, and is not limited specifically herein.
In order to realize the perception, interconnection and cooperative scheduling of ubiquitous computing and services, a network oriented to computing network convergence is logically and functionally divided into a first processing layer, a second processing layer and a third processing layer. It should be noted that, in the embodiment of the present invention, the first processing layer, the second processing layer and the third processing layer are divided according to logical functions; in actual deployment, the processing layers may all be deployed on one device, or may be deployed on multiple devices. If deployed on one device, the processing layers can transmit information through internal interfaces; if deployed on multiple devices, the processing layers can transmit information through signaling interaction. Optionally, the specific names of the first processing layer, the second processing layer and the third processing layer are not limited in the embodiment of the present invention, and any layer names capable of implementing the corresponding functions are applicable. For example, the second processing layer may also be referred to as a computing power platform layer, a computing power management device, a computing power management node, a computing power management layer, etc., which are not enumerated one by one herein.
The system for computing network convergence provided by the embodiment of the invention is based on ubiquitous computing resources in the network: the computing power platform layer completes the abstraction, modeling, control and management of the computing power resources and notifies the computing power routing layer, and the computing power routing layer comprehensively considers user requirements, network resource conditions and computing resource conditions and schedules services to appropriate computing power network element nodes, so as to realize optimal resource utilization and ensure an optimal user experience.
In the embodiment of the present invention, the scheduling of the service to the corresponding computing power network element node includes at least two ways:
The first method: computing power resource scheduling (namely, selecting a corresponding network path according to the computing power configuration strategy and scheduling the service to a corresponding computing power network element node for processing), which matches the service with the available computing power and schedules the service to an appropriate computing node for service processing. That is, computing-power-based scheduling finds the optimal target service computing node.
The second method: computing power resource scheduling plus network resource scheduling (namely, selecting a corresponding network path according to the computing power configuration strategy and the network resource information, and scheduling the service to a corresponding computing power network element node for processing). Here, computing power resource scheduling is combined with scheduling based on the existing network resource information in the network (for example, the network resource information includes bandwidth, latency, jitter and the like); computing power resource scheduling schedules the service to an appropriate computing node, while network resource scheduling finds the optimal network path to the target service computing node. That is, joint scheduling based on the network and the computing power (combining computing power resource scheduling with network resources) processes the service on the optimal computing power node over the optimal network path, thereby providing the optimal user experience.
As shown in fig. 1, the computing power service layer supports the decomposition of applications into atomic functional components and an algorithm library, which are uniformly scheduled by the API gateway to realize on-demand instantiation of the atomic algorithms on ubiquitous computing power resources. Through the I1 interface, the computing power service layer passes service requests of the business or application to the computing power platform layer.
The computing power platform layer needs to complete the perception, measurement and OAM management of computing power resources, so as to support the network in perceiving, measuring, managing and controlling the computing power resources, which is beneficial to realizing joint scheduling of computing and network resources and improving the resource utilization rate of the operator network.
The computing power routing layer performs discovery based on the abstracted computing power resources, comprehensively considers the network condition and the computing resource condition, and flexibly schedules the service to different computing power network element nodes as required. Its specific functions mainly include computing power route identification, computing power route control, computing power state network notification, computing power route addressing, computing power route forwarding and the like.
As an alternative embodiment, the network system for computing power processing further includes: a fourth processing layer (also referred to as a computational resource layer);
the fourth processing layer is used for acquiring computing power resource state information of computing power network element nodes and sending the computing power resource state information to the second processing layer;
and the second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource state information.
The computing power resource state information is used to reflect information such as the state and deployment location of ubiquitous computing capabilities in the network. It may refer to basic computing resources such as the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (corresponding IP address), storage capacity and storage form, or may refer to computing capabilities abstracted from these basic computing resources, and reflects the currently available computing capability of each node of the network as well as its distribution location and form.
Similarly, the fourth processing layer is also divided according to logical functions; in actual deployment, each of the first processing layer, the second processing layer, the third processing layer and the fourth processing layer may be deployed on one device or on multiple devices. If deployed on one device, the processing layers can transmit information through internal interfaces; if deployed on multiple devices, the processing layers can transmit information through signaling interaction.
In order to meet the diverse computing requirements in the field of edge computing, various kinds of computing power, from single-core CPUs to multi-core CPUs, and on to CPU + GPU (graphics processing unit) + FPGA (field programmable gate array) combinations, are combined for different applications, restoring Moore's law at the system level and promoting computing innovation. In the face of the various heterogeneous computing resources distributed in the network, the fourth processing layer needs to collect and report the computing power resource state information.
As another alternative, the second processing layer (i.e., the computing platform layer) is further configured to:
carrying out abstract description and representation on the computing power resource state information to generate a computing power capability template;
and generating a computing power service contract at least according to the computing power capability template and the service request.
In the face of heterogeneous computing resources, as shown in fig. 1, the computing power platform layer includes a "computing power modeling" submodule, which first needs to establish the measurement dimensions and metric system for computing power resources, and forms corresponding computing power capability templates from information such as the requirements of general algorithms or custom usages. A plurality of computing power capability templates and the service request of the service are then combined into a computing power service contract for meeting the computing power requirement of the service.
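A minimal sketch of this template and contract generation follows. All field names (cpu_total, memory_gb, sla, and so on) and the matching rule are assumed for illustration, since the embodiment does not define a concrete data model.

```python
# Hypothetical sketch: abstracting raw computing power resource state into
# capability templates, then combining templates with a service request
# into a computing power service contract.

def build_capability_template(resource_state: dict) -> dict:
    """Abstractly describe raw node state along fixed measurement dimensions."""
    return {
        "node_id": resource_state["node_id"],
        "cpu_capacity": resource_state["cpu_total"] - resource_state["cpu_used"],
        "gpu_capacity": resource_state.get("gpu_total", 0),
        "memory_gb": resource_state["memory_gb"],
    }

def build_service_contract(templates: list, service_request: dict) -> dict:
    """Select templates that satisfy the request and bind them into a contract."""
    matched = [t for t in templates
               if t["cpu_capacity"] >= service_request["cpu_cores"]]
    return {
        "service_id": service_request["service_id"],
        "nodes": [t["node_id"] for t in matched],
        "sla": service_request.get("sla", "best-effort"),
    }

templates = [
    build_capability_template(
        {"node_id": "n1", "cpu_total": 64, "cpu_used": 60, "memory_gb": 128}),
    build_capability_template(
        {"node_id": "n2", "cpu_total": 64, "cpu_used": 8, "memory_gb": 256}),
]
contract = build_service_contract(templates, {"service_id": "svc-7", "cpu_cores": 16})
print(contract["nodes"])
```

Here the contract binds the service only to nodes whose abstracted capability meets the request, which is the role the text assigns to the computing power service contract.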
It should be noted that, in the above embodiments of the present invention, "at least according to" is to be understood as follows: in order to make the obtained result accurate, those skilled in the art may additionally refer to other parameters related to computing power, based on common means in the art, which are not enumerated herein.
Further, in the foregoing embodiment of the present invention, the second processing layer (i.e., the computing power platform layer) is further configured to:
and sending the computing power capability template and/or the computing power service contract to a corresponding computing power network element node. For example, the second processing layer first sends the computational capability template and/or the computational service contract to the third processing layer, and the computational capability template and/or the computational service contract is forwarded to the corresponding computational network element node by the third processing layer, or the computational capability template and/or the computational service contract is directly sent to the computational network element node by the second processing layer, where a specific sending path is not limited herein.
Optionally, the computing power capability template is mainly used for unifying the format of certain request information between the computing power processing network system and the user equipment. For example, after the computing power processing network system receives a service request of a service, the service request is converted, according to the computing power capability template, into information conforming to the system processing format, so as to facilitate subsequent processing. Alternatively, after the user equipment receives the computing power capability template (which may be acquired from the computing power network element node), and before sending the service request, the user equipment converts the relevant information of the service request into information conforming to the system processing format through the computing power capability template, which facilitates subsequent processing by the system and reduces the processing load of the system.
Optionally, the computing power service contract is mainly generated according to the user subscription information; once the user's service arrives, the network needs to provide the corresponding computing power service according to the computing power service contract. In addition, after obtaining the computing power service contract, the user equipment may learn which computing power network element nodes it can communicate with, obtain the charging rules, and the like, which are not specifically limited herein.
As shown in fig. 1, the computing power platform layer includes a "computing power notification" submodule, which is responsible, after the actually deployed computing power resources have been abstractly represented by the computing power capability template, for notifying them, together with information such as the computing power service contract, to the corresponding computing power network element nodes. The submodule comprises sub-functions such as computing power service contract notification, computing power capability notification and computing power state notification. Computing power service contract notification means that computing power service requirements are generated according to service requests from the computing power service layer and notified to the corresponding computing power network element nodes. Computing power capability notification means that, after the computing power resources are actually deployed and abstractly represented by the computing power capability template, they are notified to the corresponding computing power network element nodes. Computing power state notification notifies the real-time state of the computing power resources to the corresponding network nodes through the I4 interface.
Further, the second processing layer is further configured to:
and performing performance monitoring on the state information of the computing resources of the computing network element nodes, and sending at least one of the performance of the computing resources, the computing cost management information and the computing resource fault information to the corresponding computing network element nodes.
As shown in fig. 1, the second processing layer (i.e., the computing power platform layer) includes a "computing power OAM" submodule, which covers computing power performance monitoring, computing power charge management and computing power resource fault management for the computing power resource layer. In other words, the computing power OAM submodule mainly tracks the real-time states of the computing power network element nodes, including capacity expansion, capacity reduction and fault states; according to this information, the computing power platform layer can, on the one hand, update the currently available computing power state in time, and on the other hand, perform fault recovery operations: for example, the computing power platform layer sends operation instructions such as restart and configuration to recover from and handle the fault.
As an optional embodiment, the computing power resource status information includes at least one of:
a service ID;
information of the central processing unit CPU;
the number of service links;
information of the memory;
information of the graphics processing unit GPU;
information of the hard disk.
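The listed status items can be gathered into one illustrative structure. The field types and names are assumptions for clarity; the embodiment names only the categories of information, not their encoding.

```python
# Hypothetical container for the computing power resource state information
# enumerated above. Field shapes are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputingPowerResourceStatus:
    service_id: str
    cpu_info: dict            # e.g. core count, utilization
    service_link_count: int   # the number of service links
    memory_info: dict
    gpu_info: Optional[dict] = None   # optional: not every node has a GPU
    disk_info: Optional[dict] = None

status = ComputingPowerResourceStatus(
    service_id="svc-1",
    cpu_info={"cores": 32, "utilization": 0.4},
    service_link_count=12,
    memory_info={"total_gb": 128},
)
print(status.service_id)
```

Since every item is defined as "at least one of", the optional fields default to None when a node does not report them.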
As another optional embodiment, the sending, by the fourth processing layer, of the computing power resource state information to the second processing layer includes:
and the second processing layer sends a computing power measurement request message to the fourth processing layer and receives a computing power state information response message fed back by the fourth processing layer, wherein the computing power state information response message carries the computing power resource state information of the computing power network element nodes.
For example, as shown in fig. 2, if in actual deployment the second processing layer is deployed on a computing power management device and the fourth processing layer is deployed on a computing power node device, then the computing power management device actively sends a computing power measurement request message to the computing power node device, and the computing power node device returns a computing power state information response message to the computing power management device according to the computing power measurement request message.
Or, the sending, by the fourth processing layer, of the computing power resource state information to the second processing layer includes:
and the fourth processing layer reports the computing power resource state information of the computing power network element node to the second processing layer periodically or aperiodically.
Optionally, the computing power measurement request message includes at least one of:
a service ID;
information of the central processing unit CPU;
the number of service links;
information of the memory;
information of the graphics processing unit GPU;
information of the hard disk.
And the second processing layer carries the computing power measurement request message in the telemetry information of operation, administration and maintenance (OAM), thereby realizing the computing power sensing workflow.
For example, as shown in fig. 3, unused bits in the telemetry information of the OAM (e.g., OAM-trace-type), namely bits 4-7, are utilized:
Bit 7: defined as the bit indicating whether the computing power sensing function is on.
Bit 4: defined as a bit of the computing power measurement request/computing power state information response.
Bit 5: defined as a bit of the computing power measurement request/computing power state information response.
For another example, a Node data list (a variable-length list) is used to carry the state information of the computing power resources, the network resources and the like of the computing power network element node. Here, computing power resources refer to servers, processing, memory, storage, virtual machines and the like, and network resources include requirements on network bandwidth, latency, jitter and the like.
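The bit usage can be sketched as follows, assuming (as one possible reading of the definitions above) that bit 4 carries the measurement request and bit 5 the state-information response; the concrete encoding is an assumption, not fixed by the text.

```python
# Hypothetical sketch of the OAM-trace-type bit usage described above.
# Bit 7: computing power sensing on/off; bits 4 and 5 are assumed here to
# carry the measurement request and the state-information response.

BIT_SENSING_ON = 1 << 7
BIT_MEASUREMENT_REQUEST = 1 << 4
BIT_STATE_RESPONSE = 1 << 5

def encode_trace_type(sensing_on: bool, request: bool, response: bool) -> int:
    """Pack the three flags into the unused bit positions of OAM-trace-type."""
    flags = 0
    if sensing_on:
        flags |= BIT_SENSING_ON
    if request:
        flags |= BIT_MEASUREMENT_REQUEST
    if response:
        flags |= BIT_STATE_RESPONSE
    return flags

def is_measurement_request(flags: int) -> bool:
    """A measurement request is only valid when the sensing function is on."""
    return bool(flags & BIT_SENSING_ON) and bool(flags & BIT_MEASUREMENT_REQUEST)

flags = encode_trace_type(sensing_on=True, request=True, response=False)
print(hex(flags))  # 0x90: sensing bit (0x80) plus request bit (0x10)
```

A receiving node would mask the same bit positions to decide whether the telemetry packet carries a computing power measurement request or a state response.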
The system for computing network convergence provided by the embodiment of the invention not only defines functional modules such as the computing power resource layer, the computing power platform layer and the computing power routing layer, but also defines the interfaces between some of the functional modules.
As shown in fig. 1:
i1 interface: an interface between the computing power service layer and the computing power platform layer is defined for communicating SLA (service level agreement) requirements, computing power service deployment configuration information, and the like.
I2 interface: an interface between the computing power modeling submodule and the computing power notification submodule, used for transmitting information such as the computing power service contract and the computing power capability template.
I3 interface: an interface through which the computing power OAM submodule transmits information such as computing power resource performance monitoring, computing power charge management and computing power resource fault information to the computing power notification submodule.
I4 interface: used by the computing power platform layer to transmit the computing power service contract information and the state announcement of the computing power resources to the computing power routing layer.
I5 interface: an interface between the computing power resource layer and the computing power platform layer, mainly used for computing power resource registration management, transmission of the performance state and fault information of the computing power resources, and the like.
To sum up, the system for computing network convergence provided by the embodiment of the present invention is a novel network architecture: based on ubiquitous network connection and highly distributed computing nodes, through automatic deployment, optimal routing and load balancing of services, a completely new computing-power-aware network infrastructure is constructed, so that the network reaches everywhere, computing power is available everywhere, and intelligence extends everywhere. Massive applications, massive functional components and massive computing resources form an open ecology, in which massive applications can call computing resources in different places in real time as required, improving the utilization efficiency of computing resources and finally realizing optimal user experience, optimal computing resource utilization and optimal network efficiency.
As shown in fig. 4, an embodiment of the present invention further provides a computing power processing method applied to a network system for computing power processing, including:
step 41, acquiring a service request of a service;
step 42, obtaining a computing power configuration strategy at least by mapping from the service request; for example, the service request includes parameters such as service ID, service type, service level, and latency, which are mapped to a corresponding computing power configuration strategy: CPU/GPU resource configuration requirements, storage configuration requirements, and the like.
step 43, selecting a corresponding network path at least according to the computing power configuration strategy, and scheduling the service to a corresponding computing power network element node for processing.
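The mapping in step 42 can be sketched as a lookup from request parameters to resource requirements. This is a minimal illustrative sketch: the profile table, field names, and thresholds are assumptions, not a normative encoding of the strategy.

```python
# Hypothetical sketch of step 42: mapping a service request onto a
# computing power configuration strategy. All field names are illustrative.

def map_request_to_policy(request: dict) -> dict:
    """Derive CPU/GPU and storage requirements from a service request."""
    # Example profile table keyed on service type; a real system would
    # also consult subscription data and operator policy.
    profiles = {
        "video_render": {"cpu_cores": 8, "gpu_units": 2, "storage_gb": 100},
        "iot_telemetry": {"cpu_cores": 1, "gpu_units": 0, "storage_gb": 5},
    }
    policy = dict(profiles.get(request["service_type"],
                               {"cpu_cores": 2, "gpu_units": 0, "storage_gb": 10}))
    # Tighten placement for latency-sensitive or premium services.
    if request.get("latency_ms", 1000) < 20 or request.get("service_level") == "premium":
        policy["preferred_placement"] = "edge"
    policy["service_id"] = request["service_id"]
    return policy

policy = map_request_to_policy(
    {"service_id": "svc-01", "service_type": "video_render",
     "service_level": "premium", "latency_ms": 10})
```

A request for GPU-heavy, low-latency work thus maps to a strategy demanding GPU units and edge placement.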
Optionally, the method further comprises:
acquiring network resource information;
and selecting a corresponding network path at least according to the computing power configuration strategy and the network resource information, and scheduling the service to a corresponding computing power network element node for processing.
It should be noted that the third processing layer may acquire the network resource information from the network resource layer in fig. 1 periodically or on demand, or the network resource layer may actively report it, so that the third processing layer can select a corresponding network path according to the computing power configuration strategy and the current network resource information and schedule the service to a corresponding computing power network element node for processing. Here, the current network resource information is the network resource information most recently acquired by the third processing layer or most recently reported by the network resource layer.
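The two acquisition modes just described (polling versus active reporting) can be sketched as a small cache that accepts pushed updates but falls back to polling when its contents go stale. The class and field names are assumptions for illustration only.

```python
# A minimal sketch of how the third processing layer might keep current
# network resource information: a cache refreshed either by pushed reports
# from the network resource layer or by periodic polling when stale.
import time

class NetworkResourceCache:
    def __init__(self, poll_fn, max_age_s: float = 5.0):
        self._poll_fn = poll_fn      # callable returning fresh link metrics
        self._max_age_s = max_age_s
        self._info, self._stamp = None, 0.0

    def report(self, info: dict) -> None:
        """Called when the network resource layer actively pushes an update."""
        self._info, self._stamp = info, time.monotonic()

    def current(self) -> dict:
        """Return the newest information, polling only if the cache is stale."""
        if self._info is None or time.monotonic() - self._stamp > self._max_age_s:
            self.report(self._poll_fn())
        return self._info

cache = NetworkResourceCache(lambda: {"link-a": {"bandwidth_mbps": 900, "delay_ms": 3}})
info = cache.current()  # first call triggers a poll
```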
Optionally, the network path selected by the third processing layer is the current optimal network path, and the computing power network element node scheduled by the third processing layer is the current optimal computing power network element node, which is not specifically limited herein.
Optionally, the computing power processing method is applied to the computing power processing network system described in figs. 1 to 3, and the executing entities of the above steps 41, 42, and 43 may be the corresponding processing layers of the computing power processing network system; for example, step 41 is executed by the first processing layer, step 42 by the second processing layer, and step 43 by the third processing layer.
It should be noted that the first processing layer, the second processing layer, and the third processing layer are divided by logical function; in actual deployment, each processing layer may be deployed on one device or on multiple devices. If deployed on one device, the processing layers can exchange information through internal interfaces; if deployed on multiple devices, the processing layers can exchange information through signaling interaction. Optionally, the specific names of the first, second, and third processing layers are not limited in the embodiment of the present invention, and any layer names capable of implementing the corresponding functions are applicable to the embodiment of the present invention.
In the embodiment of the present invention, scheduling the service to the corresponding computing power network element node includes at least two modes:
Mode 1: computing power resource scheduling (that is, selecting a corresponding network path according to the computing power configuration strategy and scheduling the service to a corresponding computing power network element node for processing): the service is matched with available computing power and scheduled to a suitable computing node for processing. That is, computing-power-based scheduling finds the best computing node for serving the target service.
Mode 2: computing power resource scheduling plus network resource scheduling (that is, selecting a corresponding network path according to the computing power configuration strategy and the network resource information, and scheduling the service to a corresponding computing power network element node for processing): computing power resource scheduling is combined with scheduling based on the existing network resource information in the network (for example, bandwidth, delay, and delay jitter). Computing power resource scheduling places the service on a suitable computing node, while network resource scheduling finds the optimal network path to that target computing node. That is, joint scheduling of the network and computing power (combining computing power resource scheduling with network resources) processes the service on the optimal computing node over the optimal network path, thereby providing the optimal user experience.
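Mode 2 can be sketched as a two-stage selection: first filter nodes by the computing power configuration strategy, then pick the best path from the network resource information. The node/path records and the "most headroom, then lowest delay" heuristics are illustrative assumptions, not the claimed algorithm.

```python
# Hypothetical sketch of mode 2: pick a computing node that satisfies the
# configuration strategy, then the lowest-delay path toward it that still
# meets the bandwidth requirement. All data shapes are illustrative.

def schedule(policy: dict, nodes: list, paths: list):
    # 1. Computing power resource scheduling: keep nodes with enough capacity.
    fit = [n for n in nodes
           if n["free_cpu"] >= policy["cpu_cores"] and n["free_gpu"] >= policy["gpu_units"]]
    if not fit:
        return None
    node = max(fit, key=lambda n: n["free_cpu"])      # most CPU headroom
    # 2. Network resource scheduling: best usable path toward the chosen node.
    usable = [p for p in paths
              if p["dst"] == node["id"]
              and p["bandwidth_mbps"] >= policy.get("min_bw_mbps", 0)]
    if not usable:
        return None
    path = min(usable, key=lambda p: p["delay_ms"])   # minimise latency
    return node["id"], path["id"]

result = schedule(
    {"cpu_cores": 4, "gpu_units": 1, "min_bw_mbps": 100},
    [{"id": "edge-1", "free_cpu": 8, "free_gpu": 2},
     {"id": "edge-2", "free_cpu": 2, "free_gpu": 0}],
    [{"id": "p1", "dst": "edge-1", "bandwidth_mbps": 500, "delay_ms": 4},
     {"id": "p2", "dst": "edge-1", "bandwidth_mbps": 200, "delay_ms": 9}])
```

Dropping the path stage reduces this to mode 1 (computing power resource scheduling alone).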
As an alternative embodiment, the method further comprises:
acquiring computing resource state information of a computing network element node;
and obtaining the computing power configuration strategy at least according to the service request and the computing power resource state information.
The computing power resource state information reflects the state and deployment location of ubiquitous computing capabilities in the network. It may refer to basic capabilities such as the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (corresponding IP address), storage capacity, and storage form, or it may refer to computing capabilities abstracted from these basic computing resources; it is used to reflect the computing capability currently available at each node of the network, as well as its distribution location and form.
As an alternative embodiment, the method further comprises:
carrying out abstract description and representation on the computing power resource state information to generate a computing power capability template;
and generating a computing power service contract at least according to the computing power capability template and the service request.
The computing power processing network system in the embodiment of the present invention further includes a fourth processing layer (also called a computing power resource layer), which collects the computing power resource state information and reports it to the second processing layer. The second processing layer abstractly describes and represents the computing power resource state information to generate a computing power capability template, and generates a computing power service contract according to the computing power capability template and the service request of the service.
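The two generation steps above (state information → capability template, template + request → service contract) can be sketched as follows. The record shapes and the "eligible nodes" notion of a contract are assumptions made for illustration; the embodiment does not prescribe a concrete encoding.

```python
# A sketch, under assumed data shapes, of how the second processing layer
# might abstract node state into a capability template and combine it with
# a service request into a computing power service contract.

def build_capability_template(node_states: list) -> dict:
    """Abstract per-node state into the capability each node can still offer."""
    return {s["node_id"]: {"cpu_cores": s["cpu_total"] - s["cpu_used"],
                           "gpu_units": s["gpu_total"] - s["gpu_used"],
                           "location": s["location"]}
            for s in node_states}

def build_service_contract(template: dict, request: dict) -> dict:
    """Bind a request to every node whose advertised capability satisfies it."""
    eligible = [nid for nid, cap in template.items()
                if cap["cpu_cores"] >= request["cpu_cores"]
                and cap["gpu_units"] >= request["gpu_units"]]
    return {"service_id": request["service_id"], "eligible_nodes": eligible}

template = build_capability_template(
    [{"node_id": "n1", "cpu_total": 16, "cpu_used": 4,
      "gpu_total": 4, "gpu_used": 4, "location": "10.0.0.1"},
     {"node_id": "n2", "cpu_total": 8, "cpu_used": 0,
      "gpu_total": 2, "gpu_used": 0, "location": "10.0.0.2"}])
contract = build_service_contract(
    template, {"service_id": "svc-7", "cpu_cores": 4, "gpu_units": 1})
```

Here node n1 has spare CPU but no free GPU, so only n2 enters the contract.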
The method may further include sending the computing power capability template and/or the computing power service contract to the corresponding computing power network element node. For example, the second processing layer first sends the computing power capability template and/or the computing power service contract to the third processing layer, which forwards them to the corresponding computing power network element node; or the second processing layer sends them directly to the computing power network element node. The specific sending path is not limited herein.
Optionally, the computing power capability template is mainly used to unify the format of certain request information between the computing power processing network system and the user equipment. For example, after the computing power processing network system receives a service request of a service, it converts the service request into information conforming to the system processing format according to the computing power capability template, to facilitate subsequent processing. Alternatively, after the user equipment obtains the computing power capability template (for example, from a computing power network element node), it converts the relevant information of the service request into information conforming to the system processing format before sending the request, which facilitates subsequent processing by the system and reduces its processing load.
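One possible reading of this format-unifying role is that the template fixes the field names and types the system expects, and raw requests are normalised against it before further processing. The field set below is purely an assumption for illustration.

```python
# Illustrative sketch: using a template to coerce a raw request into the
# assumed system processing format, dropping unrecognised fields.

TEMPLATE_FIELDS = {          # assumed system processing format
    "service_id": str,
    "cpu_cores": int,
    "latency_ms": float,
}

def normalise_request(raw: dict) -> dict:
    """Keep only the template's fields, cast to the template's types."""
    return {field: cast(raw[field])
            for field, cast in TEMPLATE_FIELDS.items() if field in raw}

req = normalise_request({"service_id": "svc-9", "cpu_cores": "4",
                         "latency_ms": "12.5", "vendor_extra": "x"})
```

The same normalisation can run on either side of the interface: in the network system after receiving a request, or in the user equipment before sending one.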
Optionally, the computing power service contract is mainly generated according to the user's subscription information; once the user's service arrives, the network provides the corresponding computing power service according to the contract. In addition, after obtaining the computing power service contract, the user equipment can learn which computing power network element nodes it can communicate with, the applicable charging rules, and the like, which are not specifically limited herein.
It should be noted that, in the above embodiments of the present invention, "at least according to" is to be understood as follows: to make the obtained result more accurate, those skilled in the art may additionally refer to other parameters related to computing power, based on common means in the art, which are not enumerated herein.
As yet another alternative embodiment, the method further comprises:
sending the computing power capability template and/or the computing power service contract to a corresponding computing power network element node; for example, sending the computing power capability template and/or the computing power service contract to the third processing layer, which then sends them to the corresponding computing power network element node.
Further, the method further comprises:
performing performance monitoring on the computing power resources of the computing power network element nodes, and sending at least one of computing power resource performance information, computing power charging management information, and computing power resource fault information to the corresponding computing power network element nodes.
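This monitoring step can be sketched as checking each node's reported state against thresholds and emitting performance or fault notices addressed back to the node. The threshold, field names, and notice format are assumptions for illustration.

```python
# Illustrative sketch of the performance-monitoring step: compare reported
# node state against an alarm threshold and collect notices to send back.

def monitor(node_states: list, cpu_alarm: float = 0.9) -> list:
    notices = []
    for s in node_states:
        load = s["cpu_used"] / s["cpu_total"]
        if s.get("fault"):
            # Fault information takes precedence over performance alarms.
            notices.append({"node": s["node_id"], "type": "fault",
                            "detail": s["fault"]})
        elif load >= cpu_alarm:
            notices.append({"node": s["node_id"], "type": "performance",
                            "detail": f"cpu load {load:.0%}"})
    return notices

alerts = monitor([
    {"node_id": "n1", "cpu_used": 19, "cpu_total": 20},
    {"node_id": "n2", "cpu_used": 2, "cpu_total": 20, "fault": "disk failure"},
])
```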
As an optional embodiment, the computing power resource status information includes at least one of:
a service ID;
information of the central processing unit (CPU);
the number of service links;
information of the memory;
information of the graphics processing unit (GPU);
information of the hard disk.
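The fields listed above can be grouped into a simple record; the shape below is illustrative, not a normative encoding of the computing power resource state information.

```python
# Hypothetical grouping of the listed status fields into one record.
from dataclasses import dataclass, field

@dataclass
class ComputeResourceStatus:
    service_id: str
    service_links: int                           # number of service links
    cpu: dict = field(default_factory=dict)      # e.g. {"cores": 16, "used": 4}
    gpu: dict = field(default_factory=dict)
    memory: dict = field(default_factory=dict)
    disk: dict = field(default_factory=dict)

status = ComputeResourceStatus(service_id="svc-3", service_links=12,
                               cpu={"cores": 16, "used": 4})
```

Any subset of the fields may be reported; unreported categories simply stay empty.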
In summary, the computing power processing method provided by the embodiment of the present invention, based on ubiquitous network connections and highly distributed computing nodes, constructs a brand-new computing-power-aware network infrastructure through automatic service deployment, optimal routing, and load balancing, so that the network reaches everywhere, computing power is available everywhere, and intelligence extends everywhere. Massive applications, massive functions, and massive computing resources form an open ecology in which applications can call computing resources in different places in real time as needed, improving the utilization efficiency of computing resources and ultimately achieving optimal user experience, optimal computing resource utilization, and optimal network efficiency.
The embodiment of the present invention further provides a computing power processing network system, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, each process of the above computing power processing method embodiment is implemented with the same technical effects, which are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements each process in the above-described computing power processing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block or blocks.
These computer program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.