
WO2018006676A1 - Acceleration resource processing method, apparatus, and network function virtualization system - Google Patents

Acceleration resource processing method, apparatus, and network function virtualization system (Download PDF)

Info

Publication number
WO2018006676A1
WO2018006676A1 (PCT/CN2017/087236)
Authority
WO
WIPO (PCT)
Prior art keywords
resource
acceleration
acceleration resource
service
computing node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/087236
Other languages
English (en)
French (fr)
Inventor
黄宝君
康明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to EP17823488.6A priority Critical patent/EP3468151B1/en
Priority to JP2018568900A priority patent/JP6751780B2/ja
Priority to KR1020197001653A priority patent/KR102199278B1/ko
Publication of WO2018006676A1 publication Critical patent/WO2018006676A1/zh
Priority to US16/234,607 priority patent/US10838890B2/en

Classifications

    • H04L 67/60: Network services; scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G06F 13/20: Interconnection of, or transfer of information between, memories, input/output devices or central processing units; handling requests for interconnection or transfer for access to input/output bus
    • G06F 9/45533: Emulation; interpretation; software simulation; hypervisors; virtual machine monitors
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 9/50: Multiprogramming arrangements; allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5044: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering hardware capabilities
    • G06F 9/5077: Partitioning or combining of resources; logical partitioning of resources; management or configuration of virtualized resources
    • H04L 47/82: Traffic control in data switching networks; admission control; resource allocation; miscellaneous aspects
    • H04L 67/51: Network services; discovery or management thereof, e.g. service location protocol [SLP] or web services
    • G06F 2009/45579: I/O management, e.g. providing access to device drivers or storage

Definitions

  • the present invention relates to communications technologies, and in particular, to an accelerated resource processing method, apparatus, and network function virtualization system.
  • NFV: Network Function Virtualization
  • COTS: Commercial-Off-The-Shelf
  • the virtual network function can provide the functions of the different network elements of the original telecommunication network, and can use the hardware resources of the infrastructure layer, including computing hardware, storage hardware, network hardware, and acceleration hardware.
  • acceleration hardware is hardware dedicated to accelerating certain complex functions, such as the hardware corresponding to encryption/decryption or media audio and video transcoding.
  • in the prior art, when a service corresponding to a virtual network function needs to apply for an acceleration resource, the application carries requirements on the acceleration resource such as the acceleration type and the algorithm type, and the NFV system selects acceleration hardware that meets these requirements.
  • the embodiments of the present invention provide an acceleration resource processing method, an apparatus, and a network function virtualization system, which are used to solve the problem in the prior art that the delay and performance of a service fail to meet requirements.
  • a first aspect of the embodiments of the present invention provides an acceleration resource processing method, where the method includes:
  • receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of the acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy included in the acceleration resource request is determined according to the service requirement of the service.
  • after the acceleration resource request of the service is received, the acceleration resource of the service is determined according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  • because the method determines the acceleration resource of the service not only according to the attribute parameter of the acceleration resource but also in combination with the service acceleration resource scheduling policy, the determined acceleration resource can meet the actual demand of the service, ensuring that the delay and performance requirements of the service are satisfied.
  • the acceleration resources of the business can be determined by the following methods:
  • the acceleration resource calculation node is determined according to the attribute parameter of the acceleration resource.
  • the computing node of the acceleration resource of the service is determined from the acceleration resource computing node.
  • the method before determining the acceleration resource calculation node according to the attribute parameter of the acceleration resource, the method further includes:
  • the computing resource computing node is obtained according to the acceleration resource request.
  • the foregoing method for determining a computing node of an acceleration resource of a service from an acceleration resource computing node according to a service acceleration resource scheduling policy is:
  • first, the current acceleration resource type is determined according to the priority order of the acceleration resources in the acceleration resource scheduling policy; then, the determined current acceleration resource type is judged: if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the intersection of the acceleration resource computing nodes and the computing resource computing nodes; if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the difference set of the acceleration resource computing nodes and the computing resource computing nodes.
  • the foregoing method for determining the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing nodes and the computing resource computing nodes is: judging whether the morphological attribute of the current computing node in the intersection is consistent with the current acceleration resource type, and if so, using the current computing node as the computing node of the acceleration resource of the service.
  • the foregoing method for determining the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing nodes and the computing resource computing nodes is: judging whether the morphological attribute of the current computing node in the difference set is consistent with the current acceleration resource type, and if so, using the current computing node as the computing node of the acceleration resource of the service.
  • the morphological attribute is used to identify the deployment form of a computing node, where the deployment form includes virtualization and hard pass-through.
  • in a possible design, the method further includes:
  • receiving acceleration resource attribute information, where the acceleration resource attribute information includes at least the morphological attribute, and the acceleration resource attribute information is obtained by querying the acceleration resource attributes periodically or when a computing node is initialized.
  • a new acceleration resource scheduling policy indication may also be received, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; an acceleration resource scheduling policy is then generated according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
  • if the resource scheduling request does not include an acceleration resource scheduling policy, the default acceleration resource scheduling policy is determined as the acceleration resource scheduling policy in the resource scheduling request.
  • the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, from high to low: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
  • the attribute parameters include: the acceleration type, the algorithm type, and the acceleration traffic.
  • a second aspect of the present invention provides an acceleration resource processing apparatus.
  • the apparatus has functions for implementing the foregoing method. These functions may be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the apparatus may include a first receiving module and a processing module, where the first receiving module is configured to receive an acceleration resource request of a service, the acceleration resource request including an attribute parameter of the acceleration resource and a service acceleration resource scheduling policy, the service acceleration resource scheduling policy being determined according to the service requirement of the service; and the processing module is configured to determine the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  • the processing module can include:
  • the first determining unit is configured to determine an acceleration resource calculation node according to the attribute parameter of the acceleration resource.
  • a second determining unit configured to determine, from the acceleration resource computing node, a computing node of the acceleration resource of the service according to the service acceleration resource scheduling policy.
  • the processing module may further include:
  • An obtaining unit configured to acquire a computing resource computing node according to the acceleration resource request.
  • the second determining unit is specifically configured to:
  • determine the current acceleration resource type according to the priority order of the acceleration resources in the acceleration resource scheduling policy; if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing nodes and the computing resource computing nodes; if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing nodes and the computing resource computing nodes.
  • the second determining unit is further configured to: judge whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the second determining unit is further configured to: judge whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the morphological attribute is used to identify the deployment form of a computing node, where the deployment form includes virtualization and hard pass-through.
  • the above device may further include:
  • a second receiving module, configured to receive acceleration resource attribute information, where the acceleration resource attribute information includes at least a morphological attribute, and the acceleration resource attribute information is obtained by querying the acceleration resource attributes periodically or when a computing node is initialized.
  • the above device may further include:
  • a third receiving module, configured to receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource;
  • and a generating module, configured to generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
  • the above device may further include:
  • a determining module, configured to determine, when the resource scheduling request does not include an acceleration resource scheduling policy, the default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
  • the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, from high to low: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
  • the attribute parameters include: the acceleration type, the algorithm type, and the acceleration traffic.
  • a third aspect of the embodiments of the present invention provides an acceleration resource processing apparatus.
  • the apparatus includes a memory and a processor, wherein the memory is for storing program instructions, and the processor is for calling program instructions in the memory to perform the aforementioned method.
  • the processor may be configured to perform the following operations:
  • receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of the acceleration resource and a service acceleration resource scheduling policy, the service acceleration resource scheduling policy being determined according to the service requirement of the service; and determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  • the processor is also used to:
  • the acceleration resource calculation node is determined according to the attribute parameter of the acceleration resource.
  • the computing node of the acceleration resource of the service is determined from the acceleration resource computing node.
  • the processor is also used to:
  • the computing resource computing node is obtained according to the acceleration resource request.
  • the processor is further configured to: determine a current accelerated resource type according to an order of priority of the accelerated resources in the accelerated resource scheduling policy.
  • if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the intersection of the acceleration resource computing nodes and the computing resource computing nodes.
  • if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the difference set of the acceleration resource computing nodes and the computing resource computing nodes.
  • the processor is further configured to: judge whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the processor is further configured to: judge whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the morphological attribute is used to identify the deployment form of the computing node, and the deployment form includes virtualization and hard pass-through.
  • the processor is also used to:
  • acceleration resource attribute information is received, where the acceleration resource attribute information includes at least the foregoing morphological attribute.
  • the acceleration resource attribute information is obtained by querying the acceleration resource attributes periodically or when a computing node is initialized.
  • the processor is also used to:
  • the new acceleration resource scheduling policy indication includes a policy name, an acceleration resource type, and a scheduling priority of each type of acceleration resource.
  • An accelerated resource scheduling policy is generated according to the policy name, the accelerated resource type, and the scheduling priority of each type of accelerated resource.
  • the processor is also used to:
  • if the resource scheduling request does not include an acceleration resource scheduling policy, the default acceleration resource scheduling policy is determined as the acceleration resource scheduling policy in the resource scheduling request.
  • the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, from high to low: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
  • the foregoing attribute parameters include: the acceleration type, the algorithm type, and the acceleration traffic.
  • a fourth aspect of the embodiments of the present invention provides a network function virtualization NFV system, where the NFV system includes the foregoing acceleration resource processing apparatus.
  • the solution provided by the embodiment of the present invention selects an acceleration resource according to a service acceleration resource scheduling policy, and can meet specific requirements such as delay sensitivity of the service, thereby improving service delay and performance.
  • Figure 1 is a system architecture diagram of NFV
  • FIG. 2 is a schematic flowchart of Embodiment 1 of an acceleration resource processing method according to an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of various types of acceleration resources
  • FIG. 4 is a schematic flowchart diagram of Embodiment 2 of an acceleration resource processing method according to an embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of Embodiment 3 of an acceleration resource processing method according to an embodiment of the present disclosure
  • FIG. 6 is a schematic flowchart of Embodiment 4 of an acceleration resource processing method according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of inter-module interaction for defining an accelerated resource scheduling policy
  • Figure 8 shows the complete process of accelerating resource processing
  • FIG. 9 is a block diagram of a first embodiment of an acceleration resource processing apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a block diagram of a second embodiment of an acceleration resource processing apparatus according to an embodiment of the present disclosure.
  • FIG. 11 is a block diagram of a third embodiment of an acceleration resource processing apparatus according to an embodiment of the present disclosure.
  • FIG. 12 is a block diagram of a fourth embodiment of an acceleration resource processing apparatus according to an embodiment of the present disclosure.
  • FIG. 13 is a block diagram of a fifth embodiment of an acceleration resource processing apparatus according to an embodiment of the present disclosure.
  • FIG. 14 is a block diagram of a sixth embodiment of an acceleration resource processing apparatus according to an embodiment of the present disclosure.
  • FIG. 15 is a block diagram of a seventh embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention.
  • Figure 1 shows the system architecture of the NFV.
  • the NFV system is used in various networks, such as data center networks, carrier networks, or local area networks.
  • the NFV system includes NFV Management and Orchestration (NFV-MANO) 101, an NFV Infrastructure (NFVI) 130, multiple Virtual Network Functions (VNF) 108, multiple Element Management (EM) 122, a Network Service, VNF and Infrastructure Description 126, and an Operation Support System/Business Support System (OSS/BSS) 124.
  • NFV-MANO: NFV Management and Orchestration
  • NFVI: NFV Infrastructure
  • VNF: Virtual Network Function
  • the NFVI 130 includes computing hardware 112, storage hardware 114, network hardware 116, acceleration hardware 115, virtualization layer, virtual computing 110, virtual storage 118, virtual network 120, and virtual acceleration 123.
  • the NFV management and orchestration system 101 is used to perform monitoring and management of the virtual network function 108 and the NFV infrastructure layer 130.
  • the NFV management and orchestration system 101 includes:
  • the NFV orchestrator 102 may implement network services on the NFV infrastructure layer 130, may execute resource-related requests from one or more VNF managers 104, send configuration information to the VNF manager 104, and collect status information of the virtual network functions 108.
  • VNFM: VNF Manager
  • the Virtualized Infrastructure Manager (VIM) 106 may perform resource management functions, such as managing the allocation and operation of infrastructure resources.
  • the virtual infrastructure manager 106 and the VNF manager 104 can communicate with each other for resource allocation and exchange of configuration and status information for virtualized hardware resources.
  • the VIM includes an acceleration resource management module 121 for performing acceleration resource allocation management and the like.
  • the NFV infrastructure layer 130 includes: computing hardware 112, storage hardware 114, acceleration hardware 115, network hardware 116, virtualization layer, virtual computing 110, virtual storage 118, virtual acceleration 123 And a virtual network 120.
  • the acceleration hardware 115, the acceleration resource management agent 125, and the virtual acceleration 123 are related to the scheduling of the acceleration resources.
  • Compute node refers to the physical host that provides computing hardware, network hardware, acceleration hardware, etc. in the NFV system architecture. Different computing nodes are different physical hosts.
  • Acceleration resource refers to a resource that can provide an acceleration function, and may be an acceleration hardware in the NFV in the embodiment of the present invention.
  • computing nodes can be used to provide acceleration resources (acceleration hardware), that is, different computing nodes provide different acceleration resources; therefore, when an acceleration resource needs to be determined, this can be achieved by determining the computing node that provides the acceleration resource (acceleration hardware).
  • FIG. 2 is a schematic flowchart of Embodiment 1 of an acceleration resource processing method according to an embodiment of the present invention.
  • the execution body of the method is a VIM in the NFV system. As shown in FIG. 2, the method includes:
  • the acceleration resource request of the service is received, where the acceleration resource request includes an attribute parameter of the acceleration resource and a service acceleration resource scheduling policy, where the service acceleration resource scheduling policy is determined according to the service requirement of the service.
  • in a specific scenario, the acceleration resource request of the service is delivered when the service applies for acceleration resources according to its actual needs, for example, when the service involves encryption and decryption processing, media audio and video transcoding, or the like.
  • the service acceleration resource scheduling policy corresponding to the service reflects requirements such as whether the acceleration resource and the computing resource need to be on the same computing node; that is, the acceleration resource scheduling policy is determined according to the actual needs of the service.
  • for example, when applying for a virtual machine for a service, the VNFM sends a virtual machine request to the VIM, and the virtual machine request includes an acceleration resource request, which in turn includes the attribute parameters of the acceleration resource and the service acceleration resource scheduling policy.
  • the VIM includes an acceleration resource management module for performing acceleration resource allocation management.
  • the acceleration resource management module receives and processes the attribute parameters of the acceleration resource and the service acceleration resource scheduling policy carried in the acceleration resource request.
  • the attribute parameters of the acceleration resource in the acceleration resource request include parameters such as the acceleration type and the algorithm type, and the acceleration resource scheduling policy is determined according to the service requirement of the foregoing service.
  • the acceleration resource scheduling policy is mainly used to set a preferred order for acceleration resource scheduling, where the acceleration resource types include: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
  • the local virtualization acceleration resource indicates that the acceleration resource and the virtual machine are on the same computing node, and the acceleration resource is connected to the virtual machine through the virtualization layer;
  • the remote virtualization acceleration resource indicates that the acceleration resource and the virtual machine are not on the same computing node, and the acceleration resource is connected to the virtual machine through the virtualization layer;
  • the local hard pass-through acceleration resource indicates that the acceleration resource and the virtual machine are on the same computing node, and the acceleration resource is directly connected to the virtual machine without going through the virtualization layer;
  • the remote hard pass-through acceleration resource indicates that the acceleration resource and the virtual machine are not on the same computing node, and the acceleration resource is directly connected to the virtual machine without going through the virtualization layer.
  • Figure 3 is a schematic diagram of various types of acceleration resources.
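  • For illustration only (this code is not part of the patent; the enum and constant names are invented), the four acceleration resource types described above could be modeled in Python as follows:

```python
from enum import Enum

class AccelResourceType(Enum):
    """Hypothetical labels for the four acceleration resource types described above."""
    LOCAL_VIRTUALIZED = "local_virtualized"    # same computing node as the VM, attached via the virtualization layer
    REMOTE_VIRTUALIZED = "remote_virtualized"  # different computing node, attached via the virtualization layer
    LOCAL_PASSTHROUGH = "local_passthrough"    # same computing node as the VM, directly attached (hard pass-through)
    REMOTE_PASSTHROUGH = "remote_passthrough"  # different computing node, directly attached (hard pass-through)

# Local types require the acceleration resource and the virtual machine to share a computing node.
LOCAL_TYPES = {AccelResourceType.LOCAL_VIRTUALIZED, AccelResourceType.LOCAL_PASSTHROUGH}
```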
  • the acceleration resource scheduling policy is used to set a preferred sequence for accelerating resource scheduling.
  • when the acceleration resource management module receives the acceleration resource scheduling policy, the acceleration resource is selected according to the order of acceleration resource scheduling set in the policy. The following is explained by an example.
  • the preferred order of the accelerated resource scheduling in the accelerated resource scheduling policy is: a local hard through acceleration resource, a local virtualization acceleration resource, a remote hard through acceleration resource, and a remote virtualization acceleration resource.
  • after the acceleration resource management module receives this acceleration resource scheduling policy, it first selects an acceleration resource that is on the same computing node as the virtual machine and directly connected to it, while further ensuring that the acceleration resource satisfies the foregoing attribute parameters of the acceleration resource.
  • in the prior art, the acceleration resource can only be determined according to parameters such as the acceleration type and the algorithm type; in this way, only the basic requirements of service acceleration can be ensured, and the acceleration effect obtained by the service cannot be guaranteed.
  • for example, suppose the service acceleration is sensitive to delay.
  • among the acceleration resources determined by the prior art, there may be a scenario in which the acceleration resource and the virtual machine are not on the same computing node; when the service uses such an acceleration resource for acceleration, network switching between computing nodes introduces a network delay.
  • the network delay is generally higher than the computation processing delay, which may cause the service delay to fail to meet requirements.
  • the service acceleration resource scheduling policy is added to the service acceleration resource request, and the preferred sequence of the acceleration resources is set in the policy according to the actual needs of the service. For example, if the service is sensitive to delay, the local hard pass-through acceleration resource can be set as the preferred one in the policy.
  • in this way, when the acceleration resource management module selects the acceleration resource, it first selects an acceleration resource that is on the same computing node as the virtual machine and is in the hard pass-through form; an acceleration resource meeting this requirement ensures that no network switching is needed when the service is accelerated, thereby avoiding network delay and meeting the delay requirement of the service.
  • the acceleration resource is thus selected according to the service acceleration resource scheduling policy, so specific requirements such as the delay sensitivity of the service can be met, thereby improving the delay and performance of the service. If no acceleration resource satisfies the above conditions, the next acceleration resource type is selected according to the preferred order set in the acceleration resource scheduling policy, and so on.
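  • As a minimal sketch only (the patent text contains no code; select_acceleration_node and candidates_for are hypothetical names, and candidates_for stands in for the intersection/difference logic detailed later), the priority-ordered selection with fallback described above might look like this in Python:

```python
from typing import Callable, Iterable, List, Optional

def select_acceleration_node(policy: Iterable[str],
                             candidates_for: Callable[[str], List[str]]) -> Optional[str]:
    """Try each acceleration resource type in the preferred order set by the
    service acceleration resource scheduling policy; candidates_for(t) returns
    the computing nodes that can provide type t and also satisfy the attribute
    parameters of the acceleration resource."""
    for accel_type in policy:           # preferred order from the scheduling policy
        nodes = candidates_for(accel_type)
        if nodes:                       # first type with a usable computing node wins
            return nodes[0]
    return None                         # no type in the policy could be satisfied

# Example: a delay-sensitive service puts local hard pass-through first.
delay_sensitive_policy = ["local_passthrough", "local_virtualized",
                          "remote_passthrough", "remote_virtualized"]
```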
  • FIG. 4 is a schematic flowchart of Embodiment 2 of an acceleration resource processing method according to an embodiment of the present invention. As shown in FIG. 4, the foregoing Step S102 specifically includes:
  • the attribute parameters of the acceleration resource include the acceleration type and the algorithm type; in addition, the attribute parameters may also include the acceleration traffic and the like.
  • the acceleration type is used to indicate which type of acceleration is performed in this acceleration, such as encryption and decryption, codec or image processing.
  • the algorithm type is used to identify an algorithm under a specific acceleration type, such as a specific encryption and decryption algorithm during encryption and decryption.
  • the acceleration traffic indicates the requirement for the processing capability of the acceleration resource, for example, the encryption and decryption throughput at the time of encryption and decryption is 5 Gbps.
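  • Purely as an illustration of the attribute parameters just described (the field names and values below are hypothetical and not defined by the patent), an acceleration resource request could carry something like:

```python
# Hypothetical request payload; only the meanings of the fields come from the text above.
acceleration_resource_request = {
    "attribute_parameters": {
        "acceleration_type": "crypto",       # which kind of acceleration, e.g. encryption/decryption
        "algorithm_type": "aes-256",         # a specific algorithm under that acceleration type (illustrative)
        "acceleration_traffic_gbps": 5,      # required processing capability, e.g. 5 Gbps encryption/decryption throughput
    },
    "scheduling_policy": "delay_sensitive",  # name of a service acceleration resource scheduling policy
}
```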
  • the set obtained by this step may be a set of multiple compute nodes.
  • the computing nodes that satisfy the service acceleration resource scheduling policy are selected from the computing nodes.
  • before the foregoing step S201, the method further includes:
  • the computing resource computing node is obtained according to the foregoing acceleration resource request.
  • the resources requested by the business may include storage resources and network resources.
  • computing resources need to be applied for first, that is, the computing node where the computing resources are located is determined. For example, when a virtual machine is applied for a service, the VNFM sends a virtual machine request to the VIM, and the VIM uses the computing resource processing module to determine the computing resource computing nodes that satisfy the service requirement, that is, the computing nodes whose computing resources meet the requirements of the service.
  • there may be multiple determined computing resource computing nodes, arranged in order of priority; that is, the preferred computing node is the one that best satisfies the computing requirements of the service.
  • FIG. 5 is a flowchart of Embodiment 3 of an acceleration resource processing method according to an embodiment of the present invention.
  • the foregoing step S202 specifically includes:
  • the preferred order of the accelerated resource scheduling in the accelerated resource scheduling policy is: a local hard through acceleration resource, a local virtualization acceleration resource, a remote hard through acceleration resource, and a remote virtualization acceleration resource.
  • the current acceleration resource type is determined to be a local hard-through acceleration resource, that is, the acceleration resource that the service wants to use should be on the same computing node as the virtual machine, and the acceleration resource is hard through.
  • if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the intersection of the acceleration resource computing nodes and the computing resource computing nodes.
  • for example, if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the acceleration resource computing nodes are {node 1, node 2, node 3}, and the foregoing computing resource computing nodes are {node 2, node 3, node 4}, then the intersection is {node 2, node 3}; that is, node 2 and node 3 can satisfy both the acceleration resource requirement and the computing resource requirement of the service.
  • if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the difference set of the acceleration resource computing nodes and the computing resource computing nodes.
  • that is, when the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the difference set of the two kinds of computing nodes is determined, namely the computing nodes that belong to the acceleration resource computing nodes but do not belong to the computing resource computing nodes.
  • continuing the example above, the difference set is {node 1}, that is, node 1 belongs only to the acceleration resource computing nodes and not to the computing resource computing nodes.
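  • The intersection and difference-set computations in this example reduce to ordinary set operations; a tiny Python sketch using the node names from the text:

```python
accel_nodes = {"node1", "node2", "node3"}    # nodes that can provide the requested acceleration resource
compute_nodes = {"node2", "node3", "node4"}  # nodes whose computing resources satisfy the service

# Local acceleration resource types: the accelerator and the VM must share a computing node.
local_candidates = accel_nodes & compute_nodes   # {"node2", "node3"}

# Remote acceleration resource types: accelerator nodes other than the VM's computing node.
remote_candidates = accel_nodes - compute_nodes  # {"node1"}
```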
  • if no computing node meeting the acceleration resource requirement is determined through the foregoing S301-S303, S301-S303 continue to be executed; that is, according to the preferred order of the acceleration resources in the acceleration resource scheduling policy, the next acceleration resource type is taken as the current acceleration resource type, and the computing node of the acceleration resource of the service is determined based on the new current acceleration resource type.
  • the computing nodes all have a morphological attribute, which is used to identify the deployment form of the computing node; the deployment form of a computing node includes virtualization and hard pass-through.
  • the deployment form may be virtualization, that is, the physical hardware is connected to the virtual resource layer through the virtualization layer, or may be hard pass-through, that is, the physical hardware is directly connected to the virtual resource layer without going through the virtualization layer.
  • the morphological attributes of the compute nodes are used to describe the two deployment modalities of the compute nodes.
  • the embodiment relates to a specific method for determining a computing node of the acceleration resource of the service from the intersection of the acceleration resource computing node and the computing resource computing node, that is, the foregoing step S302 is specifically:
  • the computing nodes in the intersection are judged in order; once the morphological attribute of a computing node is consistent with the current acceleration resource type, the judgment does not continue, and that computing node is directly used as the computing node of the acceleration resource of the service.
  • for example, suppose the intersection of the acceleration resource computing nodes and the computing resource computing nodes is {node 2, node 3, node 4}, where the morphological attribute of node 2 is hard pass-through, the morphological attribute of node 3 is virtualization, and the morphological attribute of node 4 is hard pass-through.
  • the current acceleration resource type is local virtualization.
  • node 2 is judged first; since its morphological attribute is hard pass-through, it is not consistent with the current acceleration resource type, so node 3 is judged next. Since the morphological attribute of node 3 is virtualization, which is consistent with the current acceleration resource type, node 3 can be determined as the computing node of the acceleration resource of the service.
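  • A minimal sketch of this in-order morphology check (the function name pick_by_morphology and the dictionary layout are assumptions for illustration):

```python
from typing import Iterable, Optional

def pick_by_morphology(candidates: Iterable[dict], current_type: str) -> Optional[dict]:
    """Walk the candidate computing nodes in order and stop at the first one whose
    morphological attribute is consistent with the current acceleration resource type."""
    wanted = "virtualized" if "virtualized" in current_type else "passthrough"
    for node in candidates:
        if node["morphology"] == wanted:   # consistent with the current acceleration resource type
            return node                    # judged in order; stop at the first match
    return None

# Mirrors the example: node 2 is hard pass-through, node 3 is virtualized, node 4 is hard
# pass-through, and the current type is local virtualization, so node 3 is selected.
intersection = [{"name": "node2", "morphology": "passthrough"},
                {"name": "node3", "morphology": "virtualized"},
                {"name": "node4", "morphology": "passthrough"}]
assert pick_by_morphology(intersection, "local_virtualized")["name"] == "node3"
```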
  • this embodiment also relates to a specific method for determining the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing nodes and the computing resource computing nodes, that is, the foregoing step S303 is specifically: the computing nodes in the difference set are judged in order; once the morphological attribute of a computing node is consistent with the current acceleration resource type, that computing node is directly used as the computing node of the acceleration resource of the service.
  • the method for obtaining the acceleration resource attribute is as follows:
  • the acceleration resource attribute information is received, where the acceleration resource attribute information includes at least the foregoing morphological attribute, and the acceleration resource attribute information is obtained by querying the acceleration resource attributes periodically or when a computing node is initialized.
  • for example, the NFVI may detect the form of the acceleration resource periodically or when the computing node is initialized, thereby determining the morphological attribute, and send the morphological attribute to the acceleration resource management module; the acceleration resource management module saves the morphological attribute, and when an acceleration resource needs to be selected, the acceleration resource is determined according to the saved morphological attributes and the received acceleration resource scheduling policy.
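  • A hedged sketch of this reporting flow (AccelResourceManager, save_attribute, and report_morphology are invented names; the real NFVI and VIM interfaces are not defined here):

```python
class AccelResourceManager:
    """Stands in for the acceleration resource management module in the VIM."""
    def __init__(self):
        self.morphology = {}              # computing node name -> "virtualized" or "passthrough"

    def save_attribute(self, node: str, form: str) -> None:
        self.morphology[node] = form      # saved for later use by the scheduling logic

def report_morphology(manager: AccelResourceManager, node: str, detect) -> None:
    """Called by the NFVI periodically or when a computing node is initialized;
    detect() queries the form of the acceleration resource on that node."""
    manager.save_attribute(node, detect())

manager = AccelResourceManager()
report_morphology(manager, "node3", detect=lambda: "virtualized")   # e.g. at node initialization
```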
  • FIG. 6 is a schematic flowchart of Embodiment 4 of an acceleration resource processing method according to an embodiment of the present invention, as shown in FIG. 6. Before the foregoing step S101, the method further includes:
  • S401 Receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, an acceleration resource type, and a scheduling priority of each type of acceleration resource.
  • the parameters of the acceleration resource scheduling policy are input by the user in the client of the VNFM or the VIM. If the parameters are entered in the VNFM client, the VNFM client sends the input parameters to the VNFM, the VNFM sends them to the VIM, and they are finally saved by the acceleration resource management module of the VIM. If the parameters are entered in the VIM client, the VIM client sends the input parameters to the VIM, and they are finally saved by the acceleration resource management module of the VIM.
  • S402. Generate the accelerated resource scheduling policy according to a policy name, an acceleration resource type, and a scheduling priority of each type of acceleration resource.
  • before saving, the acceleration resource management module first generates a new acceleration resource scheduling policy according to the parameter information input by the user.
  • FIG. 7 is a schematic diagram of inter-module interaction for defining an accelerated resource scheduling policy. As shown in FIG. 7, the parameters of the accelerated resource scheduling policy may be input through the clients of the VNFM and the VIM, and finally saved by the accelerated resource management module.
  • when the acceleration resource management module receives the parameters of the acceleration resource scheduling policy, it determines whether the corresponding policy already exists; if the policy does not exist, it is generated and saved, otherwise the operation fails.
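  • For illustration, the existence check described above could be sketched as follows (the dictionary-backed store and the function name create_policy are assumptions, not part of the patent):

```python
class PolicyExistsError(Exception):
    """Raised when a policy with the same name has already been saved."""

saved_policies = {}   # policy name -> ordered list of acceleration resource types

def create_policy(name: str, ordered_types: list) -> None:
    """Generate and save a new acceleration resource scheduling policy,
    failing if a policy with that name already exists."""
    if name in saved_policies:
        raise PolicyExistsError(name)            # "otherwise the operation fails"
    saved_policies[name] = list(ordered_types)   # generate the policy and save it

create_policy("delay_sensitive",
              ["local_passthrough", "local_virtualized",
               "remote_passthrough", "remote_virtualized"])
```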
  • the acceleration resource scheduling policy can be flexibly defined according to the needs of the service, so as to meet the requirements of various services.
  • if the resource scheduling request does not include an acceleration resource scheduling policy, the default acceleration resource scheduling policy is determined as the acceleration resource scheduling policy in the resource scheduling request.
  • the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, from high to low: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
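  • A one-line sketch of the default policy just listed (the constant name is hypothetical):

```python
# Scheduling priority from high to low, as stated for the default acceleration resource scheduling policy.
DEFAULT_ACCEL_POLICY = ["local_virtualized", "remote_virtualized",
                        "local_passthrough", "remote_passthrough"]
```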
  • FIG. 8 shows the complete process of accelerating resource processing.
  • the modules in FIG. 8 and the interaction between them are only an optional manner of the embodiments of the present invention and do not limit the embodiments of the present invention; in other embodiments, other modules or fewer modules may be used to achieve the same functionality. As shown in Figure 8, the process includes:
  • S501: The VNFM generates a request for applying for a virtual machine, to which an acceleration resource scheduling policy is added.
  • S502: The VNFM sends the request for applying for a virtual machine to the VIM, where the request includes the acceleration resource scheduling policy and the acceleration resource attribute parameters.
  • S503: The processing module of the VIM determines the computing resource computing nodes according to the request, and assembles the computing resource computing nodes, the acceleration resource scheduling policy, and the acceleration resource attribute parameters into an acceleration resource application request.
  • S504: The processing module of the VIM sends the acceleration resource application request to the acceleration resource management module of the VIM.
  • S505: The acceleration resource management module determines whether the acceleration resource scheduling policy in the acceleration resource application request belongs to the acceleration resource scheduling policies saved in the acceleration resource management module; if yes, the next step is performed, otherwise a failure is returned.
  • S506: The acceleration resource management module determines the acceleration resource computing nodes according to the acceleration resource attribute parameters, and determines the computing node of the acceleration resource of the service according to the acceleration resource computing nodes and the computing resource computing nodes.
  • S507: The NFVI detects the form of the acceleration resource periodically or when the computing node is initialized, and acquires the morphological attribute of the computing node.
  • S508: The NFVI sends the morphological attribute of the computing node to the acceleration resource management module.
  • S509: The acceleration resource management module saves the received morphological attribute.
  • S507-S509 and S501-S506 have no fixed execution order relative to each other and can be executed independently.
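  • To tie the S501-S509 flow together, here is a deliberately condensed, self-contained Python sketch under the same naming assumptions as the earlier snippets (all identifiers are invented; real VNFM/VIM interfaces are not shown):

```python
def assemble_and_schedule(vm_request: dict, policies: dict, default_policy: list,
                          find_compute_nodes, find_accel_nodes, morphology: dict):
    """Condensed S503-S506: determine computing resource nodes, check that the requested
    scheduling policy is known, then pick a computing node for the acceleration resource."""
    name = vm_request.get("scheduling_policy")
    if name is not None and name not in policies:
        return {"status": "failed", "reason": "unknown scheduling policy"}     # S505 failure branch
    policy = policies[name] if name is not None else default_policy           # fall back to the default

    compute_nodes = set(find_compute_nodes(vm_request))                        # S503: computing resource nodes
    accel_nodes = set(find_accel_nodes(vm_request["attribute_parameters"]))    # S506: by attribute parameters

    for accel_type in policy:                                                  # preferred order from the policy
        pool = accel_nodes & compute_nodes if accel_type.startswith("local") else accel_nodes - compute_nodes
        wanted = "virtualized" if "virtualized" in accel_type else "passthrough"
        for node in sorted(pool):
            if morphology.get(node) == wanted:                                 # morphological attribute check
                return {"status": "ok", "node": node, "type": accel_type}
    return {"status": "failed", "reason": "no acceleration resource satisfies the policy"}
```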
  • the aforementioned program can be stored in a computer readable storage medium.
  • the program when executed, performs the steps including the foregoing method embodiments; and the foregoing storage medium includes various media that can store program codes, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • FIG. 9 is a block diagram of a first embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 9, the apparatus includes:
  • the first receiving module 501 is configured to receive an acceleration resource request of the service, where the acceleration resource request includes an attribute parameter of the acceleration resource and a service acceleration resource scheduling policy, where the service acceleration resource scheduling policy is determined according to the service requirement of the service.
  • the processing module 502 is configured to determine an acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  • the device is used to implement the foregoing method embodiments, and the implementation principle and technical effects are similar, and details are not described herein again.
  • FIG. 10 is a block diagram of a second embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 10, the processing module 502 includes:
  • the first determining unit 5021 is configured to determine an acceleration resource calculation node according to an attribute parameter of the acceleration resource.
  • the second determining unit 5022 is configured to determine, from the acceleration resource computing node, a computing node of the acceleration resource of the service according to the service acceleration resource scheduling policy.
  • FIG. 11 is a block diagram of a third embodiment of an apparatus for processing an accelerated resource according to an embodiment of the present invention. As shown in FIG. 11, the processing module 502 further includes:
  • the obtaining unit 5023 is configured to acquire a computing resource computing node according to the acceleration resource request.
  • the second determining unit 5022 is specifically configured to:
  • determine the current acceleration resource type according to the priority order of the acceleration resources in the acceleration resource scheduling policy; if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing nodes and the computing resource computing nodes; if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing nodes and the computing resource computing nodes.
  • the second determining unit 5022 is further specifically configured to: judge whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the second determining unit 5022 is further specifically configured to: judge whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the morphological attribute is used to identify the deployment form of the computing node, where the deployment form includes virtualization and hard pass-through.
  • FIG. 12 is a block diagram of a fourth embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 12, the apparatus further includes:
  • the second receiving module 503 is configured to receive acceleration resource attribute information, where the acceleration resource attribute information includes at least a morphological attribute, and the acceleration resource attribute information is obtained by querying an acceleration resource attribute when periodically or calculating a node initialization.
  • FIG. 13 is a block diagram of a fifth embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 13, the device further includes:
  • the third receiving module 504 is configured to receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, an acceleration resource type, and a scheduling priority of each type of acceleration resource.
  • the generating module 505 is configured to generate an acceleration resource scheduling policy according to the policy name, the acceleration resource type, and the scheduling priority of each type of acceleration resource.
  • FIG. 14 is a block diagram of a sixth embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 14, the apparatus further includes:
  • the determining module 506 is configured to determine the default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request when the resource scheduling request does not include the acceleration resource scheduling policy.
  • the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, from high to low: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
  • the foregoing attribute parameters include: the acceleration type, the algorithm type, and the acceleration traffic.
  • FIG. 15 is a block diagram of a seventh embodiment of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in Figure 15, the apparatus includes:
  • the memory 601 is used to store program instructions, and the processor 602 is configured to call program instructions in the memory 601 to perform the following methods:
  • receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of the acceleration resource and a service acceleration resource scheduling policy, the service acceleration resource scheduling policy being determined according to the service requirement of the service; and determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  • processor 602 is further configured to:
  • the acceleration resource calculation node is determined according to the attribute parameter of the acceleration resource.
  • the computing node of the acceleration resource of the service is determined from the acceleration resource computing node.
  • processor 602 is further configured to:
  • the computing resource computing node is obtained according to the acceleration resource request.
  • processor 602 is further configured to:
  • the current accelerated resource type is determined according to the priority order of the accelerated resources in the accelerated resource scheduling policy.
  • if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the intersection of the acceleration resource computing nodes and the computing resource computing nodes.
  • if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the difference set of the acceleration resource computing nodes and the computing resource computing nodes.
  • processor 602 is further configured to: judge whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • processor 602 is further configured to: judge whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing nodes and the computing resource computing nodes is consistent with the current acceleration resource type, and if so, use the current computing node as the computing node of the acceleration resource of the service.
  • the morphological attribute is used to identify a deployment mode of the computing node, and the deployment mode includes virtualization and hard pass-through.
  • processor 602 is further configured to:
  • acceleration resource attribute information is received, where the acceleration resource attribute information includes at least the foregoing morphological attribute.
  • the acceleration resource attribute information is obtained by querying the acceleration resource attributes periodically or when a computing node is initialized.
  • processor 602 is further configured to:
  • the new acceleration resource scheduling policy indication includes a policy name, an acceleration resource type, and a scheduling priority of each type of acceleration resource.
  • An accelerated resource scheduling policy is generated according to the policy name, the accelerated resource type, and the scheduling priority of each type of accelerated resource.
  • processor 602 is further configured to:
  • when the resource scheduling request does not include an acceleration resource scheduling policy, the default acceleration resource scheduling policy is determined as the acceleration resource scheduling policy in the resource scheduling request.
  • the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, from high to low: local virtualization acceleration resources, remote virtualization acceleration resources, local hard pass-through acceleration resources, and remote hard pass-through acceleration resources.
  • the foregoing attribute parameters include: the acceleration type, the algorithm type, and the acceleration traffic.
  • an embodiment of the present invention further provides an NFV system, where the NFV system includes the foregoing acceleration resource processing apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides an acceleration resource processing method and apparatus, and a network functions virtualization system. The method includes: receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy. Because the method selects the acceleration resource according to the service acceleration resource scheduling policy, specific requirements of the service such as latency sensitivity can be met, thereby improving the latency and performance of the service.

Description

Acceleration resource processing method and apparatus, and network functions virtualization system
This application claims priority to Chinese Patent Application No. 201610522240.7, filed with the Chinese Patent Office on July 4, 2016 and entitled "Acceleration Resource Processing Method and Apparatus, and Network Functions Virtualization System", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to communications technologies, and in particular, to an acceleration resource processing method and apparatus, and a network functions virtualization system.
Background
A conventional telecommunications system consists of various dedicated hardware devices, with different applications running on different hardware devices. As networks grow in scale, the system becomes increasingly complex, which brings many challenges, including development and launch of new services, system operation and maintenance, and resource utilization. To address these challenges, the network functions virtualization (Network Function Virtualization, NFV for short) technology has been proposed. NFV migrates the functions of the network elements of a telecommunications network from the original dedicated hardware platforms to general-purpose commercial-off-the-shelf (Commercial-Off-The-Shelf, COTS for short) servers, and turns the network elements used in the telecommunications network into independent applications that can be flexibly deployed on a unified infrastructure platform built on standards-based servers, storage devices, switches, and other devices. Through virtualization, infrastructure hardware resources are pooled and virtualized, and virtual resources are provided to upper-layer applications, so that applications are decoupled from hardware. In this way, each application can rapidly add virtual resources to expand system capacity, or rapidly reduce virtual resources to shrink system capacity, which greatly improves network elasticity.
The NFV architecture includes virtual network functions and an infrastructure layer. A virtual network function can provide the function of a network element of the original telecommunications network, and can use hardware resources of the infrastructure layer, including computing hardware, storage hardware, network hardware, and acceleration hardware. The acceleration hardware is hardware dedicated to accelerating complex functions, for example, hardware for encryption/decryption or media audio/video transcoding.
In the prior art, when a service corresponding to a virtual network function needs to apply for an acceleration resource, the application carries requirements on the acceleration resource, such as an acceleration type and an algorithm type, and the NFV system selects acceleration hardware that can satisfy these requirements.
However, acceleration hardware selected in this way only satisfies basic acceleration requirements and does not guarantee that the service obtains an optimal acceleration effect; as a result, the latency, performance, and other indicators of the service may fail to meet their targets.
Summary
Embodiments of the present invention provide an acceleration resource processing method and apparatus, and a network functions virtualization system, to solve the prior-art problem that the latency, performance, and other indicators of a service fail to meet their targets.
A first aspect of the embodiments of the present invention provides an acceleration resource processing method, including:
receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy included in the acceleration resource request is determined according to a service requirement of the service; and after the acceleration resource request of the service is received, determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy. In this method, the acceleration resource of the service is determined not only according to the attribute parameter of the acceleration resource but also with reference to the service acceleration resource scheduling policy, so that the determined acceleration resource can meet the actual requirement of the service, and requirements of the service such as latency and performance are satisfied.
In a possible design, the acceleration resource of the service may be determined by using the following method:
determining an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
determining, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
In a possible design, before the determining an acceleration resource computing node according to the attribute parameter of the acceleration resource, the method further includes:
obtaining a computing-resource computing node according to the acceleration resource request.
In a possible design, the method for determining, from the acceleration resource computing node according to the service acceleration resource scheduling policy, the computing node of the acceleration resource of the service is as follows:
first, determining a current acceleration resource type according to a priority order of acceleration resources in the acceleration resource scheduling policy, and then evaluating the determined current acceleration resource type: if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determining the computing node of the acceleration resource of the service from an intersection of the acceleration resource computing node and the computing-resource computing node; or if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determining the computing node of the acceleration resource of the service from a difference set of the acceleration resource computing node and the computing-resource computing node.
In a possible design, the determining the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing node and the computing-resource computing node is as follows:
determining whether a morphological attribute of a current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, using the current computing node as the computing node of the acceleration resource of the service.
In a possible design, the determining the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing node and the computing-resource computing node is as follows:
determining whether a morphological attribute of a current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, using the current computing node as the computing node of the acceleration resource of the service.
In a possible design, the morphological attribute is used to identify a deployment mode of the computing node, and the deployment mode includes virtualization and hard pass-through.
In a possible design, the method further includes:
receiving acceleration resource attribute information, where the acceleration resource attribute information includes at least the morphological attribute, and the acceleration resource attribute information is obtained by querying an acceleration resource attribute periodically or during computing node initialization.
In a possible design, a new acceleration resource scheduling policy indication is further received, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; and further, the acceleration resource scheduling policy is generated according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
In a possible design, if the resource scheduling request does not include an acceleration resource scheduling policy, a default acceleration resource scheduling policy is determined as the acceleration resource scheduling policy in the resource scheduling request.
In a possible design, the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: a local virtualization acceleration resource, a remote virtualization acceleration resource, a local hard pass-through acceleration resource, and a remote hard pass-through acceleration resource.
In a possible design, the attribute parameter includes an acceleration type, an algorithm type, and an acceleration traffic.
A second aspect of the present invention provides an acceleration resource processing apparatus. The apparatus has a function of implementing the foregoing method. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function.
In a possible design, the apparatus may include a first receiving module and a processing module. The first receiving module is configured to receive an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service. The processing module is configured to determine the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
In a possible design, the processing module may include:
a first determining unit, configured to determine an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
a second determining unit, configured to determine, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
In a possible design, the processing module may further include:
an obtaining unit, configured to obtain a computing-resource computing node according to the acceleration resource request.
In a possible design, the second determining unit is specifically configured to:
determine a current acceleration resource type according to a priority order of acceleration resources in the acceleration resource scheduling policy; if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from an intersection of the acceleration resource computing node and the computing-resource computing node; or if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from a difference set of the acceleration resource computing node and the computing-resource computing node.
In a possible design, the second determining unit is further specifically configured to:
determine whether a morphological attribute of a current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
In a possible design, the second determining unit is further specifically configured to:
determine whether a morphological attribute of a current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
In a possible design, the morphological attribute is used to identify a deployment mode of the computing node, and the deployment mode includes virtualization and hard pass-through.
In a possible design, the apparatus may further include:
a second receiving module, configured to receive acceleration resource attribute information, where the acceleration resource attribute information includes at least the morphological attribute, and the acceleration resource attribute information is obtained by querying an acceleration resource attribute periodically or during computing node initialization.
In a possible design, the apparatus may further include:
a third receiving module and a generation module. The third receiving module may be configured to receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource. The generation module may be configured to generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
In a possible design, the apparatus may further include:
a determining module, configured to: when the resource scheduling request does not include an acceleration resource scheduling policy, determine a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
In a possible design, the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: a local virtualization acceleration resource, a remote virtualization acceleration resource, a local hard pass-through acceleration resource, and a remote hard pass-through acceleration resource.
In a possible design, the attribute parameter includes an acceleration type, an algorithm type, and an acceleration traffic.
A third aspect of the embodiments of the present invention provides an acceleration resource processing apparatus. The apparatus includes a memory and a processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory to perform the foregoing method.
In a possible design, the processor may be configured to perform:
receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
In a possible design, the processor is further configured to:
determine an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
determine, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
In a possible design, the processor is further configured to:
obtain a computing-resource computing node according to the acceleration resource request.
In a possible design, the processor is further configured to: determine a current acceleration resource type according to a priority order of acceleration resources in the acceleration resource scheduling policy.
If the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from an intersection of the acceleration resource computing node and the computing-resource computing node.
If the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from a difference set of the acceleration resource computing node and the computing-resource computing node.
In a possible design, the processor is further configured to:
determine whether a morphological attribute of a current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
In a possible design, the processor is further configured to:
determine whether a morphological attribute of a current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
In a possible design, the morphological attribute is used to identify a deployment mode of the computing node, and the deployment mode includes virtualization and hard pass-through.
In a possible design, the processor is further configured to:
receive acceleration resource attribute information, where the acceleration resource attribute information includes at least the morphological attribute, and the acceleration resource attribute information is obtained by querying an acceleration resource attribute periodically or during computing node initialization.
In a possible design, the processor is further configured to:
receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; and
generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
In a possible design, the processor is further configured to:
when the resource scheduling request does not include an acceleration resource scheduling policy, determine a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
In a possible design, the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: a local virtualization acceleration resource, a remote virtualization acceleration resource, a local hard pass-through acceleration resource, and a remote hard pass-through acceleration resource.
In a possible design, the attribute parameter includes an acceleration type, an algorithm type, and an acceleration traffic.
A fourth aspect of the embodiments of the present invention provides a network functions virtualization (NFV) system, where the NFV system includes the foregoing acceleration resource processing apparatus.
Compared with the prior art, the solutions provided in the embodiments of the present invention select the acceleration resource according to the service acceleration resource scheduling policy, and can therefore meet specific requirements of the service such as latency sensitivity, thereby improving the latency and performance of the service.
Brief Description of Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a system architecture diagram of NFV;
FIG. 2 is a schematic flowchart of Embodiment 1 of an acceleration resource processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of various types of acceleration resources;
FIG. 4 is a schematic flowchart of Embodiment 2 of the acceleration resource processing method according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of Embodiment 3 of the acceleration resource processing method according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of Embodiment 4 of the acceleration resource processing method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of interaction between modules for defining an acceleration resource scheduling policy;
FIG. 8 shows a complete procedure of acceleration resource processing;
FIG. 9 is a module structure diagram of Embodiment 1 of an acceleration resource processing apparatus according to an embodiment of the present invention;
FIG. 10 is a module structure diagram of Embodiment 2 of the acceleration resource processing apparatus according to an embodiment of the present invention;
FIG. 11 is a module structure diagram of Embodiment 3 of the acceleration resource processing apparatus according to an embodiment of the present invention;
FIG. 12 is a module structure diagram of Embodiment 4 of the acceleration resource processing apparatus according to an embodiment of the present invention;
FIG. 13 is a module structure diagram of Embodiment 5 of the acceleration resource processing apparatus according to an embodiment of the present invention;
FIG. 14 is a module structure diagram of Embodiment 6 of the acceleration resource processing apparatus according to an embodiment of the present invention;
FIG. 15 is a module structure diagram of Embodiment 7 of the acceleration resource processing apparatus according to an embodiment of the present invention.
Description of Embodiments
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
FIG. 1 is a system architecture diagram of NFV. An NFV system is used in various networks, for example, a data center network, a carrier network, or a local area network. As shown in FIG. 1, the NFV system includes an NFV management and orchestration system (NFV Management and Orchestration, NFV-MANO for short) 101, an NFV infrastructure layer (NFV Infrastructure, NFVI) 130, multiple virtual network functions (Virtual Network Function, VNF) 108, multiple element managements (Element Management, EM) 122, a network service, VNF and infrastructure description (Network Service, VNF and Infrastructure Description) 126, and an operation-support system/business support system (Operation-Support System/Business Support System, OSS/BSS) 124. The NFVI 130 includes computing hardware 112, storage hardware 114, network hardware 116, acceleration hardware 115, a virtualization layer (Virtualization Layer), virtual computing 110, virtual storage 118, a virtual network 120, and virtual acceleration 123. The NFV management and orchestration system 101 is configured to monitor and manage the virtual network functions 108 and the NFV infrastructure layer 130.
In the foregoing NFV system architecture, the NFV management and orchestration system 101 includes:
an NFV orchestrator 102, which can implement network services on the NFV infrastructure layer 130, execute resource-related requests from one or more VNF managers 104, send configuration information to the VNF managers 104, and collect status information of the virtual network functions 108;
a VNF manager (VNF Manager, VNFM for short) 104, which can manage one or more virtual network functions 108; and
a virtualized infrastructure manager (Virtualized Infrastructure Manager, VIM for short) 106, which can perform resource management functions, for example, managing the allocation and operation of infrastructure resources. The virtualized infrastructure manager 106 and the VNF manager 104 can communicate with each other to perform resource allocation and to exchange configuration and status information of virtualized hardware resources. The VIM includes an acceleration resource management module 121, configured to perform allocation and management of acceleration resources.
In the foregoing NFV system architecture, the NFV infrastructure layer 130 includes the computing hardware 112, the storage hardware 114, the acceleration hardware 115, the network hardware 116, the virtualization layer (Virtualization Layer), the virtual computing 110, the virtual storage 118, the virtual acceleration 123, and the virtual network 120. The acceleration hardware 115, an acceleration resource management agent 125, and the virtual acceleration 123 are related to acceleration resource scheduling.
The following explains concepts used in the embodiments of the present invention.
Computing node: a physical host that provides computing hardware, network hardware, acceleration hardware, and the like in the NFV system architecture; different computing nodes are different physical hosts.
Acceleration resource: a resource that can provide an acceleration function, which in the embodiments of the present invention may be the acceleration hardware in NFV.
Because computing nodes can be used to provide acceleration resources (acceleration hardware), that is, different computing nodes respectively provide their own different acceleration resources (acceleration hardware), an acceleration resource can be determined by determining the computing node that provides the acceleration resource (acceleration hardware).
FIG. 2 is a schematic flowchart of Embodiment 1 of an acceleration resource processing method according to an embodiment of the present invention. The method is performed by the VIM in the foregoing NFV system. As shown in FIG. 2, the method includes the following steps.
S101. Receive an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service.
The acceleration resource request of the service is delivered in a specific scenario. For example, when a virtual machine is applied for for the service, an acceleration resource is applied for according to the actual requirement of the service. For example, if the service involves operations such as encryption/decryption or media audio/video transcoding, an acceleration resource needs to be applied for. If the service has a strict latency requirement, the service acceleration resource scheduling policy corresponding to the service reflects the requirement that the acceleration resource and the computing resource be on the same computing node; that is, the acceleration resource scheduling policy is determined according to the actual requirement of the service.
Using the foregoing example in which an acceleration resource is applied for the service when a virtual machine is applied for: when a virtual machine is applied for for the service, the VNFM sends a virtual machine application request to the VIM, and the virtual machine application request includes an acceleration resource request that carries the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
As described above, the VIM includes an acceleration resource management module, configured to perform allocation and management of acceleration resources. In this step, when the VIM receives the acceleration resource request, the acceleration resource management module receives and processes the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy in the acceleration resource request.
S102. Determine the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
The attribute parameter of the acceleration resource in the acceleration resource request includes parameters such as an acceleration type and an algorithm type. The acceleration resource scheduling policy is determined according to the service requirement of the service and is mainly used to set a preference order for acceleration resource scheduling. The acceleration resources include a local virtualization acceleration resource, a remote virtualization acceleration resource, a local hard pass-through acceleration resource, and a remote hard pass-through acceleration resource. A local virtualization acceleration resource means that the acceleration resource and the virtual machine are on the same computing node and the acceleration resource is connected to the virtual machine through the virtualization layer; a remote virtualization acceleration resource means that the acceleration resource and the virtual machine are not on the same computing node and the acceleration resource is connected to the virtual machine through the virtualization layer; a local hard pass-through acceleration resource means that the acceleration resource and the virtual machine are on the same computing node and the acceleration resource is directly connected to the virtual machine without passing through the virtualization layer; and a remote hard pass-through acceleration resource means that the acceleration resource and the virtual machine are not on the same computing node and the acceleration resource is directly connected to the virtual machine without passing through the virtualization layer. FIG. 3 is a schematic diagram of these types of acceleration resources.
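For illustration only, the four acceleration resource types can be modeled as a small enumeration. The sketch below is not part of the described embodiments; the member names are hypothetical, and the string values simply mirror the exemplary policy representation ("LocalSriovAcc" and so on) used later in this description.

```python
from enum import Enum

class AccResourceType(Enum):
    LOCAL_VIRTUALIZED = "LocalVirtioAcc"        # same node as the VM, attached through the virtualization layer
    REMOTE_VIRTUALIZED = "RemoteVirtioAcc"      # different node, attached through the virtualization layer
    LOCAL_HARD_PASSTHROUGH = "LocalSriovAcc"    # same node as the VM, attached directly (no virtualization layer)
    REMOTE_HARD_PASSTHROUGH = "RemoteSriovAcc"  # different node, attached directly (no virtualization layer)

# "Local" types are those for which the acceleration resource and the VM share a computing node.
LOCAL_TYPES = {AccResourceType.LOCAL_VIRTUALIZED, AccResourceType.LOCAL_HARD_PASSTHROUGH}
```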
The acceleration resource scheduling policy is used to set a preference order for acceleration resource scheduling. When the acceleration resource management module receives the acceleration resource scheduling policy, it selects acceleration resources in the order set in the policy. The following uses an example for description.
Assume that the preference order of acceleration resource scheduling in the acceleration resource scheduling policy is: local hard pass-through acceleration resource, local virtualization acceleration resource, remote hard pass-through acceleration resource, remote virtualization acceleration resource. When the acceleration resource management module receives this acceleration resource scheduling policy, it first selects an acceleration resource that is on the same computing node as the virtual machine and directly connected to the virtual machine, and further ensures that the acceleration resource satisfies the foregoing attribute parameters of the acceleration resource.
In the prior art, an acceleration resource can only be determined according to parameters such as a computing type and an algorithm type. This only guarantees the basic acceleration requirement of the service and cannot provide the service with a better acceleration effect. For example, if the acceleration of the service is latency-sensitive, an acceleration resource determined in the prior-art manner may end up on a computing node different from that of the virtual machine. When the service uses such an acceleration resource, network switching between computing nodes, that is, network latency, is involved; because network latency is generally much higher than computing latency, the latency of the service may fail to meet its target.
In this embodiment, by contrast, the service acceleration resource scheduling policy is added to the acceleration resource request of the service, and the preference order of acceleration resources is set in the policy according to the actual requirement of the service. For example, if the service is latency-sensitive, the local hard pass-through acceleration resource can be set as the first choice in the policy. When selecting an acceleration resource, the acceleration resource management module first selects an acceleration resource that is on the same computing node as the virtual machine and is connected in hard pass-through mode. An acceleration resource meeting this requirement ensures that no network switching is needed when the service performs acceleration, thereby avoiding latency and satisfying the latency requirement of the service. In other words, in this embodiment, on the basis of the prior art, selecting the acceleration resource according to the service acceleration resource scheduling policy can meet specific requirements of the service such as latency sensitivity, thereby improving the latency and performance of the service. If no acceleration resource satisfies the foregoing condition, the second acceleration resource type in the preference order set in the acceleration resource scheduling policy is selected, and so on.
Certainly, optionally, the service acceleration resource scheduling policy may not set a preference order of acceleration resources, but may simply provide, according to the actual requirement of the service, an optimal one of the foregoing acceleration resource types. On the basis of the foregoing embodiment, this embodiment relates to a specific method for determining the acceleration resource of the service. FIG. 4 is a schematic flowchart of Embodiment 2 of the acceleration resource processing method according to an embodiment of the present invention. As shown in FIG. 4, step S102 specifically includes the following steps.
S201. Determine an acceleration resource computing node according to the attribute parameter of the acceleration resource.
As described above, the attribute parameter of the acceleration resource includes an acceleration type and an algorithm type, and may further include an acceleration traffic and the like. The acceleration type indicates which kind of acceleration is to be performed, for example, encryption/decryption, encoding/decoding, or image processing. The algorithm type identifies the algorithm under the specific acceleration type, for example, the specific encryption/decryption algorithm used for encryption/decryption. The acceleration traffic indicates the required processing capability of the acceleration resource, for example, an encryption/decryption throughput of 5 Gbps.
According to these attribute parameters, all computing nodes satisfying these attribute parameters are determined. It should be noted that there may be more than one computing node satisfying these attribute parameters; therefore, what is obtained in this step may be a set consisting of multiple computing nodes.
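As a minimal, non-authoritative sketch of how step S201 could filter computing nodes against the attribute parameters (acceleration type, algorithm type, acceleration traffic), assuming a hypothetical in-memory node inventory; the field names are illustrative and not defined by this application:

```python
def filter_acceleration_nodes(inventory, acc_type, algorithm, required_throughput_gbps):
    """Return the names of the computing nodes whose acceleration hardware satisfies
    the requested attribute parameters (acceleration type, algorithm type, traffic)."""
    matched = []
    for node in inventory:
        if node["acc_type"] != acc_type:
            continue
        if algorithm not in node["algorithms"]:
            continue
        if node["throughput_gbps"] < required_throughput_gbps:
            continue
        matched.append(node["name"])
    return matched

inventory = [
    {"name": "node1", "acc_type": "crypto", "algorithms": {"aes", "rsa"}, "throughput_gbps": 10},
    {"name": "node2", "acc_type": "transcode", "algorithms": {"h264"}, "throughput_gbps": 8},
    {"name": "node3", "acc_type": "crypto", "algorithms": {"aes"}, "throughput_gbps": 4},
]
print(filter_acceleration_nodes(inventory, "crypto", "aes", 5))   # -> ['node1']
```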
S202. Determine, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
After the computing nodes satisfying the foregoing attribute parameters are determined, the computing nodes satisfying the service acceleration resource scheduling policy are selected from them.
In another embodiment, before step S201, the method further includes:
obtaining a computing-resource computing node according to the acceleration resource request.
Specifically, for a specific service, a computing resource may need to be applied for when resources are requested. The computing resource provides processing and computation, for example, a central processing unit. In addition, the resources applied for by the service may further include a storage resource and a network resource. Before the acceleration resource is applied for, the computing resource needs to be applied for first, that is, the computing node where the computing resource is located is determined. Using the application of a virtual machine for the service as an example: when a virtual machine is applied for for the service, the VNFM sends a virtual machine application request to the VIM, and the VIM determines, by using its computing resource processing module, the computing-resource computing nodes that satisfy the requirement of the service, that is, the computing resources on these computing nodes all satisfy the requirement of the service. There may be more than one determined computing-resource computing node, and they are arranged in order of preference, that is, the preferred computing node is the one that best satisfies the computing requirement of the service.
On the basis of the foregoing embodiment, this embodiment relates to a specific method for determining, from the acceleration resource computing node, the computing node of the acceleration resource of the service. FIG. 5 is a schematic flowchart of Embodiment 3 of the acceleration resource processing method according to an embodiment of the present invention. As shown in FIG. 5, step S202 specifically includes the following steps.
S301. Determine a current acceleration resource type according to the preference order of acceleration resources in the acceleration resource scheduling policy.
For example, assume that the preference order of acceleration resource scheduling in the acceleration resource scheduling policy is: local hard pass-through acceleration resource, local virtualization acceleration resource, remote hard pass-through acceleration resource, remote virtualization acceleration resource. Then the current acceleration resource type is first determined to be the local hard pass-through acceleration resource, that is, the acceleration resource that the service expects to use should be on the same computing node as the virtual machine, and the acceleration resource should be hard pass-through.
S302. If the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing node and the computing-resource computing node.
Specifically, if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the acceleration resource that the service expects to use should be on the same computing node as the virtual machine. In this case, the intersection of the previously determined acceleration resource computing nodes and computing-resource computing nodes should be determined. Because the acceleration resource computing nodes all satisfy the acceleration resource requirement of the service and the computing-resource computing nodes all satisfy the computing resource requirement of the service, each computing node in their intersection satisfies both the acceleration resource requirement and the computing resource requirement of the service.
For example, assume that the determined acceleration resource computing nodes are {node 1, node 2, node 3} and the determined computing-resource computing nodes are {node 2, node 3, node 4}; then their intersection is {node 2, node 3}, that is, node 2 and node 3 satisfy both the acceleration resource requirement and the computing resource requirement of the service.
S303. If the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing node and the computing-resource computing node.
Specifically, if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the acceleration resource that the service expects to use should not be on the same computing node as the virtual machine. In this case, the difference set of the previously determined acceleration resource computing nodes and computing-resource computing nodes should be determined, that is, the computing nodes that belong to the acceleration resource computing nodes but do not belong to the computing-resource computing nodes.
For example, assume that the determined acceleration resource computing nodes are {node 1, node 2, node 3} and the determined computing-resource computing nodes are {node 2, node 3, node 4}; then their difference set is {node 1}, that is, node 1 belongs only to the acceleration resource computing nodes and does not belong to the computing-resource computing nodes. By determining the difference set of the acceleration resource computing nodes and the computing-resource computing nodes, it can be ensured that the obtained acceleration resource and the computing resource (that is, the virtual machine) are not on the same computing node, thereby satisfying the requirement in the acceleration resource scheduling policy.
If no qualifying computing node of the acceleration resource is determined through the foregoing S301-S303, S301-S303 continue to be performed, that is, the next acceleration resource type in the preference order in the acceleration resource scheduling policy is taken as the current acceleration resource type, and the computing node of the acceleration resource of the service is determined based on the new current acceleration resource type.
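The following is a minimal, non-authoritative sketch of the S301-S303 selection loop, assuming the policy is an ordered list of the type identifiers from the exemplary policy representation and that the morphological attribute is reported as "virtio" or "sriov"; all names are hypothetical:

```python
def select_acceleration_node(policy_sequence, acc_nodes, compute_nodes, node_morphology):
    """Walk the preference order of the scheduling policy (S301).  Local types are
    searched in the intersection of the two candidate sets (S302) and remote types in
    their difference set (S303); a candidate is accepted only if its morphological
    attribute matches the current type.  Returns (node, type), or None if no node
    satisfies the policy."""
    acc_nodes, compute_nodes = set(acc_nodes), set(compute_nodes)
    for current_type in policy_sequence:              # e.g. "LocalSriovAcc"
        local = current_type.startswith("Local")
        wanted_morphology = "sriov" if "Sriov" in current_type else "virtio"
        candidates = (acc_nodes & compute_nodes) if local else (acc_nodes - compute_nodes)
        for node in sorted(candidates):
            if node_morphology.get(node) == wanted_morphology:
                return node, current_type
    return None
```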
In another embodiment, each computing node has a morphological attribute, which can be used to identify the deployment mode of the computing node. The deployment mode of a computing node includes virtualization and hard pass-through.
Specifically, when a computing node is deployed, its deployment mode may be virtualization, that is, the physical hardware is connected to the virtual resource layer through the virtualization layer, or may be hard pass-through, that is, the physical hardware is directly connected to the virtual resource layer without passing through the virtualization layer. The morphological attribute of a computing node is used to describe these two deployment modes of the computing node.
On the basis of the foregoing embodiment, this embodiment relates to a specific method for determining the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing node and the computing-resource computing node. That is, step S302 is specifically:
determining whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, using the current computing node as the computing node of the acceleration resource of the service.
Specifically, there may be multiple computing nodes in the intersection of the acceleration resource computing nodes and the computing-resource computing nodes. The computing nodes in the intersection are checked in order; once the morphological attribute of a computing node is consistent with the current acceleration resource type, the check stops and that computing node is directly used as the computing node of the acceleration resource of the service.
For example, assume that the intersection of the acceleration resource computing nodes and the computing-resource computing nodes is {node 2, node 3, node 4}, where the morphological attribute of node 2 is hard pass-through, the morphological attribute of node 3 is virtualization, the morphological attribute of node 4 is hard pass-through, and the current acceleration resource type is local virtualization. The check starts from the first computing node in the intersection, that is, node 2. Because the morphological attribute of node 2 is hard pass-through and the current acceleration resource type is local virtualization, that is, the morphological attribute of node 2 is not consistent with the current acceleration resource type, node 3 is checked next. Because the morphological attribute of node 3 is virtualization, it is consistent with the current acceleration resource type, and node 3 can therefore be determined as the computing node of the acceleration resource of the service.
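Continuing the select_acceleration_node sketch above with the numbers from this example (an intersection of {node 2, node 3, node 4}, where node 2 and node 4 are hard pass-through and node 3 is virtualized); the node names and inputs are illustrative only:

```python
acc_nodes = ["node2", "node3", "node4", "node5"]   # nodes satisfying the attribute parameters
compute_nodes = ["node2", "node3", "node4"]        # nodes satisfying the computing resource request
morphology = {"node2": "sriov", "node3": "virtio", "node4": "sriov"}

# With a policy that prefers local virtualization acceleration resources:
print(select_acceleration_node(["LocalVirtioAcc", "LocalSriovAcc"],
                               acc_nodes, compute_nodes, morphology))
# -> ('node3', 'LocalVirtioAcc'); node 2 is skipped because its morphology is hard pass-through.
```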
On the basis of the foregoing embodiment, this embodiment relates to a specific method for determining the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing node and the computing-resource computing node. That is, step S303 is specifically:
determining whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, using the current computing node as the computing node of the acceleration resource of the service.
For the specific method, refer to the previous embodiment; details are not described herein again.
On the basis of the foregoing embodiment, this embodiment relates to a specific method for obtaining the acceleration resource attribute. That is, the foregoing acceleration resource processing method further includes:
receiving acceleration resource attribute information, where the acceleration resource attribute information includes at least the foregoing morphological attribute, and the acceleration resource attribute information is obtained by querying the acceleration resource attribute periodically or during computing node initialization.
Optionally, the NFVI detects the form of the acceleration resource during computing node initialization or periodically, thereby determining the morphological attribute, and sends the morphological attribute to the acceleration resource management module. The acceleration resource management module stores the morphological attribute; when an acceleration resource needs to be selected, the acceleration resource is determined according to the morphological attribute and the received acceleration resource scheduling policy.
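A minimal sketch, under the same naming assumptions as above, of how the acceleration resource management module could cache the morphological attributes reported by the NFVI; the class and method names are hypothetical:

```python
class AccResourceAttributeStore:
    """Cache for the attribute reports (including the morphological attribute)
    that the NFVI pushes at computing-node initialization and then periodically."""

    def __init__(self):
        self._morphology_by_node = {}

    def on_attribute_report(self, node_name, morphology):
        # morphology is "virtio" (virtualization) or "sriov" (hard pass-through) in this sketch
        self._morphology_by_node[node_name] = morphology

    def morphology_of(self, node_name):
        return self._morphology_by_node.get(node_name)

store = AccResourceAttributeStore()
store.on_attribute_report("node3", "virtio")     # report received from the NFVI
```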
On the basis of the foregoing embodiment, this embodiment relates to a specific method for defining an acceleration resource scheduling policy. FIG. 6 is a schematic flowchart of Embodiment 4 of the acceleration resource processing method according to an embodiment of the present invention. As shown in FIG. 6, before step S101, the method further includes the following steps.
S401. Receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource.
Specifically, in this embodiment of the present invention, the parameters of the acceleration resource scheduling policy are entered by a user on a client of the VNFM or of the VIM. If the parameters are entered on the client of the VNFM, the VNFM client sends the entered parameters to the VNFM, the VNFM sends them to the VIM, and finally the acceleration resource management module of the VIM stores them. If the parameters are entered on the client of the VIM, the VIM client sends the entered parameters to the VIM, and finally the acceleration resource management module of the VIM stores them.
S402. Generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
Before storing, the acceleration resource management module first generates a new acceleration resource scheduling policy according to the parameter information entered by the user.
The following is an exemplary representation of an acceleration resource scheduling policy, but this embodiment of the present invention is not limited to this representation.
"AccResourceSchedulingPolicyType": {
    "Name": "LatencyPriority",    // name of the acceleration resource scheduling policy
    "Sequence": {                 // preference order in the policy; "1" indicates the highest priority,
        "1": "LocalSriovAcc",     // and so on; there are four types of acceleration resources
        "2": "LocalVirtioAcc",
        "3": "RemoteSriovAcc",
        "4": "RemoteVirtioAcc"
    }
}
FIG. 7 is a schematic diagram of interaction between modules for defining an acceleration resource scheduling policy. As shown in FIG. 7, the parameters of the acceleration resource scheduling policy can be entered through the clients of the VNFM and the VIM and are finally stored by the acceleration resource management module.
It should be noted that, when receiving the parameters of an acceleration resource scheduling policy, the acceleration resource management module determines whether the corresponding policy already exists; if it does not exist, the policy is generated and stored; otherwise, a failure is returned.
In this embodiment, acceleration resource scheduling policies can be flexibly defined according to the requirements of services, thereby satisfying the requirements of various kinds of services.
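A minimal sketch of the create-if-absent behaviour described above, assuming a policy is stored as an ordered list keyed by its name; all names are hypothetical:

```python
class PolicyRegistry:
    """Create-if-absent storage of acceleration resource scheduling policies."""

    def __init__(self):
        self._policies = {}

    def add_policy(self, name, priority_to_type):
        # priority_to_type maps a scheduling priority (1 = highest) to a resource type
        if name in self._policies:
            return False                      # policy already exists -> return failure
        self._policies[name] = [priority_to_type[p] for p in sorted(priority_to_type)]
        return True

registry = PolicyRegistry()
ok = registry.add_policy("LatencyPriority", {1: "LocalSriovAcc", 2: "LocalVirtioAcc",
                                             3: "RemoteSriovAcc", 4: "RemoteVirtioAcc"})
```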
In another embodiment, after step S101, that is, after the virtualized infrastructure manager 106 receives the acceleration resource request of the service, if it determines that the resource scheduling request does not include an acceleration resource scheduling policy, it determines a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
As an optional implementation, the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: local virtualization acceleration resource, remote virtualization acceleration resource, local hard pass-through acceleration resource, remote hard pass-through acceleration resource.
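Expressed with the type identifiers of the exemplary policy representation, the default preference order could look as follows; this is an illustrative constant and fallback helper, not a definition from the embodiments:

```python
DEFAULT_ACC_SCHEDULING_POLICY = [
    "LocalVirtioAcc",     # local virtualization acceleration resource (highest priority)
    "RemoteVirtioAcc",    # remote virtualization acceleration resource
    "LocalSriovAcc",      # local hard pass-through acceleration resource
    "RemoteSriovAcc",     # remote hard pass-through acceleration resource (lowest priority)
]

def effective_policy(requested_sequence):
    # Fall back to the default order when the request carries no scheduling policy.
    return requested_sequence if requested_sequence else DEFAULT_ACC_SCHEDULING_POLICY
```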
The following uses the application of a virtual machine for a service as an example to describe the complete acceleration resource processing procedure. FIG. 8 shows the complete acceleration resource processing procedure. It should be noted that the modules in FIG. 8 and the interactions between them are merely an optional manner of this embodiment of the present invention and do not constitute a limitation on this embodiment; in other embodiments, the same functions may also be implemented by other modules or by fewer modules. As shown in FIG. 8, the procedure includes the following steps.
S501. The VNFM generates a virtual machine application request, to which an acceleration resource scheduling policy is newly added.
S502. The VNFM sends the virtual machine application request to the VIM, where the request includes the acceleration resource scheduling policy and the acceleration resource attribute parameters.
S503. The processing module of the VIM determines the computing-resource computing nodes according to the foregoing request, and assembles the computing-resource computing nodes, the acceleration resource scheduling policy, and the acceleration resource attribute parameters into an acceleration resource application request.
S504. The processing module of the VIM sends the acceleration resource application request to the acceleration resource management module of the VIM.
S505. The acceleration resource management module determines whether the acceleration resource scheduling policy in the acceleration resource application request is one of the acceleration resource scheduling policies already stored in the acceleration resource management module; if it is, the next step is performed; otherwise, a failure is returned.
S506. The acceleration resource management module determines the acceleration resource computing nodes according to the acceleration resource attribute parameters, and determines the computing node of the acceleration resource of the service according to the acceleration resource computing nodes and the computing-resource computing nodes.
S507. The NFVI detects the form of the acceleration resource during computing node initialization or periodically, to obtain the morphological attribute of the computing node.
S508. The NFVI sends the morphological attribute of the computing node to the acceleration resource management module.
S509. The acceleration resource management module stores the received morphological attribute.
S507-S509 are not subject to any execution order relative to S501-S506 and may be performed independently.
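A minimal sketch of the request assembled in step S503, assuming a simple dictionary payload; the field names are illustrative only and are not prescribed by the embodiments:

```python
def build_acceleration_resource_request(compute_nodes, scheduling_policy, attribute_params):
    """Pack the computing-resource computing nodes, the scheduling policy and the
    acceleration attribute parameters into one request for the acceleration
    resource management module (step S503)."""
    return {
        "compute_nodes": list(compute_nodes),
        "scheduling_policy": scheduling_policy,        # e.g. "LatencyPriority"
        "attribute_params": dict(attribute_params),    # acceleration type, algorithm, traffic
    }

request = build_acceleration_resource_request(
    ["node2", "node3", "node4"],
    "LatencyPriority",
    {"acc_type": "crypto", "algorithm": "aes", "throughput_gbps": 5},
)
```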
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing method embodiments may be implemented by a program instructing relevant hardware. The foregoing program may be stored in a computer-readable storage medium. When the program is run, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
FIG. 9 is a module structure diagram of Embodiment 1 of an acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 9, the apparatus includes:
a first receiving module 501, configured to receive an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and
a processing module 502, configured to determine the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
The apparatus is configured to implement the foregoing method embodiments. Their implementation principles and technical effects are similar and are not described herein again.
FIG. 10 is a module structure diagram of Embodiment 2 of the acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 10, the processing module 502 includes:
a first determining unit 5021, configured to determine an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
a second determining unit 5022, configured to determine, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
FIG. 11 is a module structure diagram of Embodiment 3 of the acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 11, the processing module 502 further includes:
an obtaining unit 5023, configured to obtain a computing-resource computing node according to the acceleration resource request.
In another embodiment, the second determining unit 5022 is specifically configured to:
determine a current acceleration resource type according to the priority order of acceleration resources in the acceleration resource scheduling policy; if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the intersection of the acceleration resource computing node and the computing-resource computing node; or if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from the difference set of the acceleration resource computing node and the computing-resource computing node.
Further, the second determining unit 5022 is specifically further configured to:
determine whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
Further, the second determining unit 5022 is specifically further configured to:
determine whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
In another embodiment, the morphological attribute is used to identify the deployment mode of the computing node, and the deployment mode includes virtualization and hard pass-through.
FIG. 12 is a module structure diagram of Embodiment 4 of the acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 12, on the basis of FIG. 9, the apparatus further includes:
a second receiving module 503, configured to receive acceleration resource attribute information, where the acceleration resource attribute information includes at least the morphological attribute, and the acceleration resource attribute information is obtained by querying the acceleration resource attribute periodically or during computing node initialization.
FIG. 13 is a module structure diagram of Embodiment 5 of the acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 13, on the basis of FIG. 12, the apparatus further includes:
a third receiving module 504, configured to receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; and
a generation module 505, configured to generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
FIG. 14 is a module structure diagram of Embodiment 6 of the acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 14, on the basis of FIG. 13, the apparatus further includes:
a determining module 506, configured to: when the resource scheduling request does not include an acceleration resource scheduling policy, determine a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
In another embodiment, the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: local virtualization acceleration resource, remote virtualization acceleration resource, local hard pass-through acceleration resource, remote hard pass-through acceleration resource.
In another embodiment, the attribute parameter includes an acceleration type, an algorithm type, and an acceleration traffic.
FIG. 15 is a module structure diagram of Embodiment 7 of the acceleration resource processing apparatus according to an embodiment of the present invention. As shown in FIG. 15, the apparatus includes:
a memory 601 and a processor 602.
The memory 601 is configured to store program instructions, and the processor 602 is configured to invoke the program instructions in the memory 601 to perform the following method:
receiving an acceleration resource request of a service, where the acceleration resource request includes an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and
determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
Further, the processor 602 is further configured to:
determine an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
determine, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
Further, the processor 602 is further configured to:
obtain a computing-resource computing node according to the acceleration resource request.
Further, the processor 602 is further configured to:
determine a current acceleration resource type according to the priority order of acceleration resources in the acceleration resource scheduling policy.
If the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the intersection of the acceleration resource computing node and the computing-resource computing node.
If the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, the computing node of the acceleration resource of the service is determined from the difference set of the acceleration resource computing node and the computing-resource computing node.
Further, the processor 602 is further configured to:
determine whether the morphological attribute of the current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
Further, the processor 602 is further configured to:
determine whether the morphological attribute of the current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
In another embodiment, the morphological attribute is used to identify the deployment mode of the computing node, and the deployment mode includes virtualization and hard pass-through.
Further, the processor 602 is further configured to:
receive acceleration resource attribute information, where the acceleration resource attribute information includes at least the morphological attribute, and the acceleration resource attribute information is obtained by querying the acceleration resource attribute periodically or during computing node initialization.
Further, the processor 602 is further configured to:
receive a new acceleration resource scheduling policy indication, where the new acceleration resource scheduling policy indication includes a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; and
generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
Further, the processor 602 is further configured to:
when the resource scheduling request does not include an acceleration resource scheduling policy, determine a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
In another embodiment, the scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: local virtualization acceleration resource, remote virtualization acceleration resource, local hard pass-through acceleration resource, remote hard pass-through acceleration resource.
In another embodiment, the attribute parameter includes an acceleration type, an algorithm type, and an acceleration traffic.
In another embodiment, an embodiment of the present invention further provides an NFV system, where the NFV system includes the foregoing acceleration resource processing apparatus.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit the present invention. Although the present invention is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (26)

  1. An acceleration resource processing method, comprising:
    receiving an acceleration resource request of a service, wherein the acceleration resource request comprises an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and
    determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  2. The method according to claim 1, wherein the determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy comprises:
    determining an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
    determining, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
  3. The method according to claim 2, wherein before the determining an acceleration resource computing node according to the attribute parameter of the acceleration resource, the method comprises:
    obtaining a computing-resource computing node according to the acceleration resource request.
  4. The method according to claim 3, wherein the determining, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service comprises:
    determining a current acceleration resource type according to a priority order of acceleration resources in the acceleration resource scheduling policy;
    if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determining the computing node of the acceleration resource of the service from an intersection of the acceleration resource computing node and the computing-resource computing node; and
    if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determining the computing node of the acceleration resource of the service from a difference set of the acceleration resource computing node and the computing-resource computing node.
  5. The method according to claim 4, wherein the determining the computing node of the acceleration resource of the service from an intersection of the acceleration resource computing node and the computing-resource computing node comprises:
    determining whether a morphological attribute of a current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, using the current computing node as the computing node of the acceleration resource of the service.
  6. The method according to claim 4, wherein the determining the computing node of the acceleration resource of the service from a difference set of the acceleration resource computing node and the computing-resource computing node comprises:
    determining whether a morphological attribute of a current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, using the current computing node as the computing node of the acceleration resource of the service.
  7. The method according to claim 5 or 6, wherein the morphological attribute is used to identify a deployment mode of the computing node, and the deployment mode comprises virtualization and hard pass-through.
  8. The method according to claim 5 or 6, further comprising:
    receiving acceleration resource attribute information, wherein the acceleration resource attribute information comprises at least the morphological attribute, and the acceleration resource attribute information is obtained by querying an acceleration resource attribute periodically or during computing node initialization.
  9. The method according to claim 1, wherein before the receiving an acceleration resource request of a service, the method further comprises:
    receiving a new acceleration resource scheduling policy indication, wherein the new acceleration resource scheduling policy indication comprises a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; and
    generating the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
  10. The method according to claim 1, wherein after the receiving an acceleration resource request of a service, the method further comprises:
    if the resource scheduling request does not comprise the acceleration resource scheduling policy, determining a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
  11. The method according to claim 10, wherein scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: a local virtualization acceleration resource, a remote virtualization acceleration resource, a local hard pass-through acceleration resource, and a remote hard pass-through acceleration resource.
  12. The method according to claim 1, wherein the attribute parameter comprises an acceleration type, an algorithm type, and an acceleration traffic.
  13. An acceleration resource processing apparatus, comprising:
    a first receiving module, configured to receive an acceleration resource request of a service, wherein the acceleration resource request comprises an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and
    a processing module, configured to determine the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  14. The apparatus according to claim 13, wherein the processing module comprises:
    a first determining unit, configured to determine an acceleration resource computing node according to the attribute parameter of the acceleration resource; and
    a second determining unit, configured to determine, from the acceleration resource computing node according to the service acceleration resource scheduling policy, a computing node of the acceleration resource of the service.
  15. The apparatus according to claim 14, wherein the processing module further comprises:
    an obtaining unit, configured to obtain a computing-resource computing node according to the acceleration resource request.
  16. The apparatus according to claim 15, wherein the second determining unit is specifically configured to:
    determine a current acceleration resource type according to a priority order of acceleration resources in the acceleration resource scheduling policy; if the current acceleration resource type is a local virtualization acceleration resource or a local hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from an intersection of the acceleration resource computing node and the computing-resource computing node; or if the current acceleration resource type is a remote virtualization acceleration resource or a remote hard pass-through acceleration resource, determine the computing node of the acceleration resource of the service from a difference set of the acceleration resource computing node and the computing-resource computing node.
  17. The apparatus according to claim 16, wherein the second determining unit is further specifically configured to:
    determine whether a morphological attribute of a current computing node in the intersection of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
  18. The apparatus according to claim 16, wherein the second determining unit is further specifically configured to:
    determine whether a morphological attribute of a current computing node in the difference set of the acceleration resource computing node and the computing-resource computing node is consistent with the current acceleration resource type, and if they are consistent, use the current computing node as the computing node of the acceleration resource of the service.
  19. The apparatus according to claim 17 or 18, wherein the morphological attribute is used to identify a deployment mode of the computing node, and the deployment mode comprises virtualization and hard pass-through.
  20. The apparatus according to claim 17 or 18, wherein the apparatus further comprises:
    a second receiving module, configured to receive acceleration resource attribute information, wherein the acceleration resource attribute information comprises at least the morphological attribute, and the acceleration resource attribute information is obtained by querying an acceleration resource attribute periodically or during computing node initialization.
  21. The apparatus according to claim 13, further comprising:
    a third receiving module, configured to receive a new acceleration resource scheduling policy indication, wherein the new acceleration resource scheduling policy indication comprises a policy name, acceleration resource types, and a scheduling priority of each type of acceleration resource; and
    a generation module, configured to generate the acceleration resource scheduling policy according to the policy name, the acceleration resource types, and the scheduling priority of each type of acceleration resource.
  22. The apparatus according to claim 13, further comprising:
    a determining module, configured to: when the resource scheduling request does not comprise the acceleration resource scheduling policy, determine a default acceleration resource scheduling policy as the acceleration resource scheduling policy in the resource scheduling request.
  23. The apparatus according to claim 22, wherein scheduling priorities of the types of acceleration resources in the default acceleration resource scheduling policy are, in descending order: a local virtualization acceleration resource, a remote virtualization acceleration resource, a local hard pass-through acceleration resource, and a remote hard pass-through acceleration resource.
  24. The apparatus according to claim 13, wherein the attribute parameter comprises an acceleration type, an algorithm type, and an acceleration traffic.
  25. An acceleration resource processing apparatus, comprising:
    a memory and a processor, wherein
    the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory to perform the following method:
    receiving an acceleration resource request of a service, wherein the acceleration resource request comprises an attribute parameter of an acceleration resource and a service acceleration resource scheduling policy, and the service acceleration resource scheduling policy is determined according to a service requirement of the service; and
    determining the acceleration resource of the service according to the attribute parameter of the acceleration resource and the service acceleration resource scheduling policy.
  26. A network functions virtualization NFV system, wherein the NFV system comprises the acceleration resource processing apparatus according to any one of claims 13 to 24.
PCT/CN2017/087236 2016-07-04 2017-06-06 加速资源处理方法、装置及网络功能虚拟化系统 Ceased WO2018006676A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17823488.6A EP3468151B1 (en) 2016-07-04 2017-06-06 Acceleration resource processing method and apparatus
JP2018568900A JP6751780B2 (ja) 2016-07-04 2017-06-06 アクセラレーション・リソース処理方法及び装置
KR1020197001653A KR102199278B1 (ko) 2016-07-04 2017-06-06 가속 자원 처리 방법 및 장치, 및 네트워크 기능 가상화 시스템
US16/234,607 US10838890B2 (en) 2016-07-04 2018-12-28 Acceleration resource processing method and apparatus, and network functions virtualization system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610522240.7 2016-07-04
CN201610522240.7A CN105979007B (zh) 2016-07-04 2016-07-04 加速资源处理方法、装置及网络功能虚拟化系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/234,607 Continuation US10838890B2 (en) 2016-07-04 2018-12-28 Acceleration resource processing method and apparatus, and network functions virtualization system

Publications (1)

Publication Number Publication Date
WO2018006676A1 true WO2018006676A1 (zh) 2018-01-11

Family

ID=56954982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087236 Ceased WO2018006676A1 (zh) 2016-07-04 2017-06-06 加速资源处理方法、装置及网络功能虚拟化系统

Country Status (6)

Country Link
US (1) US10838890B2 (zh)
EP (1) EP3468151B1 (zh)
JP (1) JP6751780B2 (zh)
KR (1) KR102199278B1 (zh)
CN (1) CN105979007B (zh)
WO (1) WO2018006676A1 (zh)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979007B (zh) * 2016-07-04 2020-06-02 华为技术有限公司 加速资源处理方法、装置及网络功能虚拟化系统
US10034407B2 (en) 2016-07-22 2018-07-24 Intel Corporation Storage sled for a data center
CN106412063B (zh) * 2016-09-29 2019-08-13 赛尔网络有限公司 教育网内cdn节点检测与资源调度系统及方法
CN108073423B (zh) 2016-11-09 2020-01-17 华为技术有限公司 一种加速器加载方法、系统和加速器加载装置
CN111813459B (zh) * 2016-11-09 2024-12-27 华为技术有限公司 一种加速器加载方法、系统和加速器加载装置
CN108076095B (zh) 2016-11-15 2019-07-19 华为技术有限公司 一种nfv系统业务加速方法、系统、装置及服务器
CN106533987B (zh) * 2016-11-15 2020-02-21 郑州云海信息技术有限公司 一种nfv加速资源与通用计算资源智能切换方法及系统
CN106657279B (zh) * 2016-11-24 2019-11-01 北京华为数字技术有限公司 一种网络业务加速方法和设备
US20180150256A1 (en) 2016-11-29 2018-05-31 Intel Corporation Technologies for data deduplication in disaggregated architectures
CN108121587B (zh) * 2016-11-30 2021-05-04 华为技术有限公司 一种数据加速方法及虚拟加速器
US20190303344A1 (en) * 2016-12-23 2019-10-03 Intel Corporation Virtual channels for hardware acceleration
US20190044809A1 (en) 2017-08-30 2019-02-07 Intel Corporation Technologies for managing a flexible host interface of a network interface controller
WO2019095154A1 (zh) 2017-11-15 2019-05-23 华为技术有限公司 一种调度加速资源的方法、装置及加速系统
EP3738033A1 (en) * 2018-01-08 2020-11-18 Telefonaktiebolaget Lm Ericsson (Publ) Process placement in a cloud environment based on automatically optimized placement policies and process execution profiles
CN107948006B (zh) * 2018-01-09 2021-04-16 重庆邮电大学 一种虚拟化网络功能的编排方法及装置
CN110912722B (zh) * 2018-09-17 2022-08-09 中兴通讯股份有限公司 业务资源管理方法、装置、网络设备和可读存储介质
CN109976876B (zh) * 2019-03-20 2021-11-16 联想(北京)有限公司 加速器管理方法和装置
CN112395071B (zh) * 2019-08-12 2026-01-02 昆仑芯(北京)科技有限公司 用于资源管理的方法、装置、电子设备和存储介质
CN113407330A (zh) * 2020-03-16 2021-09-17 中国移动通信有限公司研究院 一种加速能力的匹配方法及装置、设备、存储介质
US12039357B2 (en) 2021-04-23 2024-07-16 Samsung Electronics Co., Ltd. Mechanism for distributed resource-based I/O scheduling over storage device
CN115599811A (zh) * 2021-07-09 2023-01-13 华为技术有限公司(Cn) 数据处理的方法、装置和计算系统
CN114064125B (zh) * 2022-01-18 2022-06-24 北京大学 指令解析方法、装置及电子设备
US20250085998A1 (en) * 2022-01-21 2025-03-13 NEC Laboratories Europe GmbH Centralized acceleration abstraction layer for ran virtualization
US20220334983A1 (en) * 2022-06-28 2022-10-20 Intel Corporation Techniques For Sharing Memory Interface Circuits Between Integrated Circuit Dies
CN116647520A (zh) * 2023-05-30 2023-08-25 南京航空航天大学 一种网络功能虚拟化场景下面向时延敏感性业务的网络转发系统和方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951353A (zh) * 2014-03-28 2015-09-30 华为技术有限公司 一种对vnf实现加速处理的方法及装置
CN105357258A (zh) * 2015-09-28 2016-02-24 华为技术有限公司 一种加速管理节点、加速节点、客户端及方法
CN105577801A (zh) * 2014-12-31 2016-05-11 华为技术有限公司 一种业务加速方法及装置
CN105656994A (zh) * 2014-12-31 2016-06-08 华为技术有限公司 一种业务加速方法和装置
CN105979007A (zh) * 2016-07-04 2016-09-28 华为技术有限公司 加速资源处理方法、装置及网络功能虚拟化系统

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1297894C (zh) 2003-09-30 2007-01-31 国际商业机器公司 用于调度作业的方法、调度器以及网络计算机系统
US8072883B2 (en) * 2005-09-29 2011-12-06 Emc Corporation Internet small computer systems interface (iSCSI) distance acceleration device
US8219988B2 (en) * 2007-08-02 2012-07-10 International Business Machines Corporation Partition adjunct for data processing system
US8645974B2 (en) * 2007-08-02 2014-02-04 International Business Machines Corporation Multiple partition adjunct instances interfacing multiple logical partitions to a self-virtualizing input/output device
CN103238305A (zh) * 2010-05-28 2013-08-07 安全第一公司 用于安全数据储存的加速器系统
WO2012151392A1 (en) * 2011-05-04 2012-11-08 Citrix Systems, Inc. Systems and methods for sr-iov pass-thru via an intermediary device
CN103577266B (zh) * 2012-07-31 2017-06-23 国际商业机器公司 用于对现场可编程门阵列资源进行分配的方法及系统
WO2014117376A1 (zh) * 2013-01-31 2014-08-07 华为技术有限公司 可定制的移动宽带网络系统和定制移动宽带网络的方法
WO2015081308A2 (en) * 2013-11-26 2015-06-04 Dynavisor, Inc. Dynamic i/o virtualization
US9760428B1 (en) * 2013-12-19 2017-09-12 Amdocs Software Systems Limited System, method, and computer program for performing preventative maintenance in a network function virtualization (NFV) based communication network
US10031767B2 (en) * 2014-02-25 2018-07-24 Dynavisor, Inc. Dynamic information virtualization
US20160050112A1 (en) * 2014-08-13 2016-02-18 PernixData, Inc. Distributed caching systems and methods
US10452570B1 (en) * 2014-08-27 2019-10-22 Amazon Technologies, Inc. Presenting physical devices to virtual computers through bus controllers emulated on PCI express endpoints
US20160179218A1 (en) * 2014-12-23 2016-06-23 Intel Corporation Systems and methods for improving the quality of motion sensor generated user input to mobile devices
CN105159753B (zh) * 2015-09-25 2018-09-28 华为技术有限公司 加速器虚拟化的方法、装置及集中资源管理器
US10929189B2 (en) * 2015-10-21 2021-02-23 Intel Corporation Mobile edge compute dynamic acceleration assignment
US9904975B2 (en) * 2015-11-11 2018-02-27 Amazon Technologies, Inc. Scaling for virtualized graphics processing
US10048977B2 (en) * 2015-12-22 2018-08-14 Intel Corporation Methods and apparatus for multi-stage VM virtual network function and virtual service function chain acceleration for NFV and needs-based hardware acceleration
US10191865B1 (en) * 2016-04-14 2019-01-29 Amazon Technologies, Inc. Consolidating write transactions for a network device
US10169065B1 (en) * 2016-06-29 2019-01-01 Altera Corporation Live migration of hardware accelerated applications
US10425472B2 (en) * 2017-01-17 2019-09-24 Microsoft Technology Licensing, Llc Hardware implemented load balancing
US10783100B2 (en) * 2019-03-27 2020-09-22 Intel Corporation Technologies for flexible I/O endpoint acceleration
US11334382B2 (en) * 2019-04-30 2022-05-17 Intel Corporation Technologies for batching requests in an edge infrastructure

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951353A (zh) * 2014-03-28 2015-09-30 华为技术有限公司 一种对vnf实现加速处理的方法及装置
CN105577801A (zh) * 2014-12-31 2016-05-11 华为技术有限公司 一种业务加速方法及装置
CN105656994A (zh) * 2014-12-31 2016-06-08 华为技术有限公司 一种业务加速方法和装置
CN105357258A (zh) * 2015-09-28 2016-02-24 华为技术有限公司 一种加速管理节点、加速节点、客户端及方法
CN105979007A (zh) * 2016-07-04 2016-09-28 华为技术有限公司 加速资源处理方法、装置及网络功能虚拟化系统

Also Published As

Publication number Publication date
KR102199278B1 (ko) 2021-01-06
CN105979007A (zh) 2016-09-28
JP6751780B2 (ja) 2020-09-09
US10838890B2 (en) 2020-11-17
EP3468151A1 (en) 2019-04-10
CN105979007B (zh) 2020-06-02
KR20190020073A (ko) 2019-02-27
EP3468151A4 (en) 2019-05-29
US20190129874A1 (en) 2019-05-02
JP2019522293A (ja) 2019-08-08
EP3468151B1 (en) 2021-03-31

Similar Documents

Publication Publication Date Title
WO2018006676A1 (zh) 加速资源处理方法、装置及网络功能虚拟化系统
US7792944B2 (en) Executing programs based on user-specified constraints
US9307017B2 (en) Member-oriented hybrid cloud operating system architecture and communication method thereof
US10223140B2 (en) System and method for network function virtualization resource management
CN102713849B (zh) 用于抽象对虚拟机的基于非功能需求的部署的方法和系统
US10394477B2 (en) Method and system for memory allocation in a disaggregated memory architecture
US9350682B1 (en) Compute instance migrations across availability zones of a provider network
EP3313023A1 (en) Life cycle management method and apparatus
WO2018024059A1 (zh) 一种虚拟化网络中业务部署的方法和装置
US20170171245A1 (en) Dynamic detection and reconfiguration of a multi-tenant service
US20120239810A1 (en) System, method and computer program product for clustered computer environment partition resolution
WO2016183799A1 (zh) 一种硬件加速方法以及相关设备
US12463903B2 (en) Cloud-native workload optimization
WO2016183832A1 (zh) 一种网络业务实例化的方法及设备
CN106161603B (zh) 一种组网的方法、设备及架构
US11693703B2 (en) Monitoring resource utilization via intercepting bare metal communications between resources
CN112039985B (zh) 一种异构云管理方法及系统
WO2018014351A1 (zh) 一种资源配置方法及装置
CN118426947A (zh) 一种集群资源的处理方法和装置
Muhammad et al. Service orchestration over clouds and networks
CN113098705B (zh) 网络业务的生命周期管理的授权方法及装置
CN120295789A (zh) Dpu集中式服务网格的cpu资源分配方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17823488

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018568900

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20197001653

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017823488

Country of ref document: EP

Effective date: 20190103