WO2018009103A1 - Power manager and method performed thereby for managing power of a datacentre - Google Patents
Power manager and method performed thereby for managing power of a datacentre
- Publication number
- WO2018009103A1 (PCT/SE2016/050686)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- compute
- memory resources
- power manager
- resources needed
- datacentre
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5094—Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to power management and in particular to power management of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- SDN Software-Defined Networks
- NFV Network Functions Virtualisation
- SDN is fundamentally based on decoupling software from hardware, consolidating the control plane so that a single software entity controls multiple data plane elements.
- NFV regards the virtualisation of network functions and their dynamic allocation and execution on generic servers.
- Combined, SDN and NFV offer agile and programmable network infrastructures built on generic network hardware running open software, in which functions of the centralised control can be performed through virtualised functions and capabilities from NFV.
- VPS Virtualised Power Shifting
- the object is to obviate at least some of the problems outlined above.
- it is an object to provide a power manager and a method performed thereby for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- a method performed by a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- the method comprises determining characteristics of incoming traffic to the datacentre; and predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the method further comprises determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- the power manager is configured for determining characteristics of incoming traffic to the datacentre; and for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the power manager is further configured for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the power manager and the method performed by the power manager have several advantages.
- One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre.
- Another possible advantage is that it allows for the operator to reduce the carbon emissions associated with electricity consumption.
- Figure 1a is a flowchart of a method performed by a power manager according to an exemplifying embodiment.
- Figure 1b is a flowchart of a method performed by a power manager according to another exemplifying embodiment.
- Figure 1c is a flowchart of a method performed by a power manager according to yet another exemplifying embodiment.
- Figure 1d is a flowchart of a method performed by a power manager according to still another exemplifying embodiment.
- Figure 2a is an example of a system architecture of a datacentre connected to the Internet.
- Figure 2b is a block diagram of an exemplifying implementation of a prediction engine.
- Figure 2c is a sequence diagram of steps for configuring a server's green capabilities.
- Figure 2d is a sequence diagram of steps towards service provisioning.
- Figure 2e is a sequence diagram of steps to coordinate datacentre infrastructure capabilities.
- Figure 2f is an illustration of dependencies between CPU consumption and network traffic.
- Figure 2g is an illustration of dependency between CPU power consumption and CPU frequency (load).
- Figure 3 is a block diagram of a power manager according to an exemplifying embodiment.
- Figure 4 is a block diagram of a power manager according to another exemplifying embodiment.
- Figure 5 is a block diagram of an arrangement in a power manager according to an exemplifying embodiment.
- a power manager and a method performed thereby for managing power of a datacentre comprising compute, network and infrastructure resources and implementing virtualisation and executing application instances are provided.
- the power manager may determine characteristics of the incoming traffic to the datacentre. Depending on the incoming traffic, a certain amount of processing is required and thus a certain amount of power is consumed by the datacentre in order to perform the required processing.
- the power manager thus uses the characteristics of the incoming traffic in order to predict how many resources of the datacentre are required for performing the required processing. Based on that, the power manager may determine a power consumption for individual machines and may further, based on that, determine to e.g. migrate one or more application instances between virtual machines and/or servers in order to optimise (usually minimise) the power consumption of the datacentre.
- an application instance may be e.g. a Virtual Network Function, VNF. It is observed that there are direct dependencies between network traffic and the CPU and memory resources used by VNFs.
- VNF Virtual Network Function
- Figure 1a illustrates the method 100 comprising determining 110 characteristics of incoming traffic to the datacentre; and predicting 120 compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the method further comprises determining 130 a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the traffic coming into the datacentre may vary over time, and hence the characteristics of the incoming traffic to the datacentre vary over time.
- the characteristics may be for example number of packets per time unit, type of packets, packet size, target address of the packet(s) etc.
- the target address of the packet(s) may identify one or more applications, and may be e.g. an Internet Protocol, IP, address.
- the power manager may determine the characteristics of the incoming traffic to the datacentre itself, e.g. by sampling the incoming traffic.
- the power manager may also receive the characteristics from a prediction engine, which may determine the characteristics e.g. based on samples of the incoming traffic.
- the power manager predicts compute and/or memory resources needed. Different applications may require different amounts of processing, i.e. compute resources, e.g. different numbers of compute cycles. Also, different types of packets, which may relate to different types of services associated with the packets, may require different amounts of processing resources, e.g. with regard to number of compute cycles and/or level of memory/buffer usage.
- the power manager may thus make a good prediction of the total amount of required compute and/or memory resources needed to meet the demands of the incoming traffic.
- the power manager may make use of one or more Application Resource Prediction models, which may be pre-configured in the power manager or in the prediction engine for each type of application that needs to be supported. Such models could be determined for example by machine learning techniques offline (an example of such models is shown in figure 2f).
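As an illustration of such an offline-learned Application Resource Prediction model, the sketch below fits a simple linear model mapping packet rate to CPU load by ordinary least squares. The sample data and the linear form are assumptions for illustration only, not values from the disclosure.

```python
# Hypothetical offline training data for one application type:
# observed packets per second vs. measured CPU load (percent).
pps_samples = [1000.0, 5000.0, 10000.0, 20000.0]
cpu_samples = [5.0, 21.0, 40.0, 79.0]

def fit_linear(xs, ys):
    """Ordinary least squares fit for cpu ~= a * pps + b (offline 'training')."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

a, b = fit_linear(pps_samples, cpu_samples)

def predict_cpu_load(pps):
    """Predict CPU load (percent) needed for a given incoming packet rate."""
    return a * pps + b
```

In practice the model would be trained per application type and could be non-linear; a linear fit is only the simplest case.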
- the power manager may then, based on the predicted compute and/or memory resources needed, determine the power consumption for individual one or more virtual machines.
- the datacentre may comprise one or more physical servers, each may execute one or more virtual machines.
- a virtual machine may execute one or more application instances, e.g. in the form of Virtual Network Function(s), VNF(s).
- the power manager may make use of one or more Resource Power Consumption models, which may be pre-configured in the power manager or the prediction engine for each type of resource (for example, CPU) that may be managed in terms of power consumption in the datacentre.
- Parametric models are available, either based on direct measurements as illustrated in figure 2g or through references from academic literature or manufacturer datasheets. In its simplest form, this is a lookup table with two columns (one for load, another one for energy consumption). More elaborate forms are also possible.
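The lookup-table form of a Resource Power Consumption model can be sketched as a small interpolation table; the load/watt figures below are hypothetical placeholders, not datasheet values.

```python
import bisect

# Hypothetical Resource Power Consumption model for one CPU:
# parallel columns of (load percent, power in watts).
LOAD = [0, 25, 50, 75, 100]
WATTS = [45.0, 70.0, 95.0, 120.0, 150.0]

def cpu_power(load):
    """Linearly interpolate power draw (W) for a given CPU load (%)."""
    if load <= LOAD[0]:
        return WATTS[0]
    if load >= LOAD[-1]:
        return WATTS[-1]
    i = bisect.bisect_right(LOAD, load)
    x0, x1 = LOAD[i - 1], LOAD[i]
    y0, y1 = WATTS[i - 1], WATTS[i]
    return y0 + (y1 - y0) * (load - x0) / (x1 - x0)
```

Interpolation between table rows is one of the "more elaborate forms"; the simplest form would just round the load to the nearest table row.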
- the method performed by the power manager has several possible advantages.
- One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre.
- Another possible advantage is that it allows for the operator to reduce the carbon emissions associated with electricity consumption.
- the method may further comprise, as illustrated in figure 1b, informing 140 a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
- the Cloud Resource Orchestrator is enabled to take appropriate actions, as will be described in more detail below.
- the Cloud Resource Orchestrator is responsible for e.g. which virtual machines are to be executed and on which physical server they should be executed.
- the Cloud Resource Orchestrator is also responsible for which virtual machine shall execute which application(s), or VNF(s).
- Assume that the incoming traffic only requires one physical server to be powered in order to execute the necessary virtual machine(s) and/or application(s)/VNF(s), and that at least two physical servers are currently up and running. Based on the determined power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed for the incoming traffic to the datacentre, the Cloud Resource Orchestrator may determine to execute all the one or more virtual machines on one physical server only, wherein the other physical server(s) that are currently up and running could be put in a power saving mode.
- the informing 140 of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed may comprise requesting migration of an application instance from a first server or virtual machine to a second server or virtual machine.
- the power manager determines that at least the first virtual machine needs not be running, but instead the second virtual machine has capacity to execute the application instance (e.g. a VNF) in question. The power manager may then request the Cloud Resource Orchestrator to migrate the application instance from the first to the second virtual machine.
- the application instance e.g. a VNF
- a physical server executing both the first and the second virtual machine may consume more energy than when executing just one virtual machine, as compute and memory resources need to be assigned to each of the first and the second virtual machine. Compute and memory resources may also be saved by running just one server instead of two.
- the informing 140 of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed comprises requesting provisioning of a new application instance when the power consumption budget on a first server is predicted not to meet a threshold.
- each server may be associated with an individual power consumption budget, or the power consumption budget is the same for all servers of the datacentre.
- the power consumption budget of a server may be associated with a respective usage level of compute and memory resources of that server. The more heavily loaded a Central Processing Unit, CPU, is, or the fuller the memories and/or buffers are, the higher the power consumption.
- the CPU may be operating close to its maximum capacity and/or the memories and buffers may be almost full, wherein it is necessary to start a second server and/or to migrate application instances to a virtual machine on the second server.
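A minimal sketch of the budget check that could trigger starting a second server or migrating instances follows; the budget value, headroom factor and action names are all assumptions made for illustration.

```python
# Illustrative thresholds; names and values are assumptions, not from the patent.
POWER_BUDGET_W = 300.0
BUDGET_HEADROOM = 0.9  # act when 90% of the budget is predicted to be used

def plan_action(predicted_power_w):
    """Decide what the power manager could ask the Cloud Resource
    Orchestrator for, given the predicted power draw of a first server."""
    if predicted_power_w >= POWER_BUDGET_W * BUDGET_HEADROOM:
        # Budget about to be exhausted: provision on a second server.
        return "provision_new_instance_on_second_server"
    return "no_action"
```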
- the method further comprises informing 150 an Infrastructure Manager about the predicted compute and/or memory resources needed.
- the Infrastructure Manager of the datacentre controls e.g. cooling of the datacentre, which may be done by air conditioning or water cooling. Depending on how many servers are powered and operating, and/or the resource usage level of individual servers, more or less energy is consumed by the datacentre.
- the higher the resource usage level of individual servers, the more heat is generated thereby, wherein more cooling of the datacentre may be required.
- the Infrastructure Manager may determine how much cooling the datacentre needs, which servers require cooling etc. The Infrastructure Manager may then take appropriate actions in order to keep the datacentre properly cooled, wherein unnecessary cooling may be avoided thereby saving energy and possible overheating may be avoided.
- the informing 150 of the Infrastructure Manager about the predicted compute and/or memory resources needed may comprise requesting increase or decrease of manageable site infrastructure resources including but not limited to power distribution units, air conditioning and/or water cooling flow with regard to one or more servers on which the one or more virtual machines is/are running.
- the power manager may be aware of the present resource usage level (level of compute and/or memory resources) of the datacentre as a whole and of individual servers. Comparing that information with the predicted compute and/or memory resources needed, the power manager may determine increase or decrease of manageable site infrastructure resources.
- the informing 150 of the Infrastructure Manager about the predicted compute and/or memory resources needed comprises requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more servers on which the one or more virtual machines is/are running.
- DVFS Dynamic Voltage and Frequency Scaling
- DVFS enables the power manager to e.g. lower the voltage and/or the frequency of a CPU of a server in case that CPU could still fulfil the requirements of the predicted compute and/or memory resources needed even with a reduced voltage and/or frequency.
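A sketch of how such a DVFS decision might pick the lowest operating point that still meets the predicted compute demand: the (frequency, voltage) pairs are hypothetical, and the dynamic-power expression is the standard approximation P ≈ C·V²·f rather than anything specified in the disclosure.

```python
# Hypothetical per-core operating points (frequency in GHz, voltage in V),
# ordered lowest to highest; real values come from the manufacturer.
OPERATING_POINTS = [(1.2, 0.8), (1.8, 0.9), (2.4, 1.0), (3.0, 1.1)]

def pick_operating_point(required_ghz):
    """Return the lowest (frequency, voltage) pair still meeting demand."""
    for f, v in OPERATING_POINTS:
        if f >= required_ghz:
            return (f, v)
    return OPERATING_POINTS[-1]  # demand exceeds capacity: run at maximum

def dynamic_power(f_ghz, v, c=1.0):
    """Dynamic CPU power scales roughly as C * V^2 * f."""
    return c * v * v * f_ghz
```

Lowering both frequency and voltage compounds the saving, which is why a reduced operating point that still fulfils the predicted demand is attractive.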
- the determining 110 of characteristics of incoming traffic to the datacentre may comprise determining which application instance is the destination address of the incoming traffic.
- different application instances may require different amounts of processing resources, e.g. with regard to number of compute cycles and/or level of memory/buffer usage.
- the different application instances may be identified by their address. Consequently, the power manager may determine the characteristics of incoming traffic to the datacentre by determining which application instance is the destination address of the incoming traffic.
- the predicting 120 of compute and/or memory resources needed comprises determining a type of application instance to which the incoming traffic is addressed.
- a first type may require two compute cycles
- a second type may require five compute cycles
- a third type may require eight compute cycles
- a fourth type may require one compute cycle.
- the power manager may predict compute and/or memory resources needed by determining the type of application instance to which the incoming traffic is addressed.
- the predicting 120 of compute and/or memory resources needed further comprises mapping 125 the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
- the power manager may relatively accurately predict compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- a first type of application instances requires a first power consumption per incoming packet
- a second type of application instances requires a second power consumption per incoming packet
- a third type of application instances requires a third power consumption per incoming packet
- a fourth type of application instances requires a fourth power consumption per incoming packet. Then five incoming packets, of which e.g. two are addressed to the second type, two to the third type and one to the fourth type, tell the power manager that 2 * the second power consumption + 2 * the third power consumption + 1 * the fourth power consumption is needed.
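The per-packet, per-type accounting described above can be sketched as a simple sum; the type names and per-packet costs below are illustrative placeholders, not figures from the disclosure.

```python
# Hypothetical per-packet power cost (joules) per application-instance type.
COST_PER_PACKET = {"first": 0.2, "second": 0.5, "third": 0.8, "fourth": 0.1}

def predict_energy(packet_types):
    """Sum the predicted energy for a batch of classified incoming packets."""
    return sum(COST_PER_PACKET[t] for t in packet_types)

# Five incoming packets: two of the second type, two of the third, one fourth.
batch = ["second", "second", "third", "third", "fourth"]
```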
- the predicting 120 of compute and/or memory resources needed may further comprise also using statistical data together with the determined characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
- Statistical data may provide useful information to the power manager about how the traffic characteristics statistically changes over time. Of course there may be deviations from the statistics but still valuable information may be obtained from statistical data.
- the traffic may vary according to the same pattern over most Mondays, most Tuesdays etc., wherein the pattern comprises peaks which usually occur at the same point in time on most Mondays, Tuesdays, Wednesdays and Sundays. Similar models may be obtained over time for the datacentre.
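One way to exploit such weekday/hour regularities is to keep a running average per (weekday, hour) slot, as in this sketch; the class and slot granularity are assumptions made for illustration.

```python
from collections import defaultdict

class TrafficPattern:
    """Predict traffic from historical averages per (weekday, hour) slot."""

    def __init__(self):
        self._totals = defaultdict(float)
        self._counts = defaultdict(int)

    def record(self, weekday, hour, packets_per_s):
        """Accumulate one observation for the given slot."""
        key = (weekday, hour)
        self._totals[key] += packets_per_s
        self._counts[key] += 1

    def predict(self, weekday, hour):
        """Return the historical average for the slot, or None if unseen."""
        key = (weekday, hour)
        if self._counts[key] == 0:
            return None  # no statistics for this slot yet
        return self._totals[key] / self._counts[key]
```

Actual traffic may of course deviate from the statistics, so such a model would complement, not replace, the live characteristics of the incoming traffic.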
- Figure 2a is an illustration of an example of a system architecture of the datacentre and how it is connected to the Internet. External traffic, e.g. from the Internet is coming to the datacentre via a datacentre gateway 270, which is part of a layer 3 (L3) network.
- the datacentre comprises in this illustrative example three servers 240a, 240b and 240c and optionally also the gateway 270.
- the datacentre also comprises a layer 2 (L2) network 250 for communication within the datacentre and datacentre infrastructure resources 260.
- Figure 2a also illustrates a prediction engine 210, the power manager 200, Cloud Resource Orchestration (CRO) 220 and Infrastructure Management System (IMS) 230.
- CRO Cloud Resource Orchestration
- IMS Infrastructure Management System
- the datacentre also comprises in this illustrative example a cloud monitoring bus, which connects respective Governors of the gateway 270, the servers 240a, 240b and 240c, the L2 network 250, the datacentre infrastructure resources 260, the prediction engine 210 and the power manager 200.
- the datacentre also comprises in this illustrative example a cloud management bus, which connects the gateway 270, the servers 240a, 240b and 240c, the L2 network 250, the datacentre infrastructure resources 260, and the CRO 220.
- the datacentre also comprises in this illustrative example an infrastructure management bus which connects the IMS 230 and the datacentre infrastructure resources 260.
- the prediction engine 210 may comprise various functions and/or units as illustrated in figure 2b, e.g. traffic feature extraction, VNF/application resource prediction model, prediction model and resource power consumption model.
- the traffic feature extraction function determines which VNF or application instance is the destination of the traffic (for example, by examining the IP address). It might also aggregate information from several traffic samples received (such as to provide a statistic of the packet sizes received during a particular time interval).
- the VNF/Application resource prediction model may be pre-configured in the prediction engine 210 for each type of VNF or application instance that needs to be supported. Such models may be determined for example by offline machine learning techniques.
- the Resource Power Consumption model may be pre-configured in the prediction engine 210 for each type of resource (for example, CPU) that may be managed in terms of power consumption in the datacentre.
- resource for example, CPU
- Parametric models are available, either based on direct measurements or through references from academic literature or manufacturer datasheets. In its simplest form, this is a lookup table with two columns (one for load, another one for energy consumption). More elaborate forms are also possible.
- the prediction model function is optional; it takes the output of the resource prediction model and tries to determine how many resources may be used in the near future (equivalent to predicting network traffic based on its history). Being optional, it may help by enabling pro-active rather than reactive management.
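A pro-active prediction model in its simplest form could be an exponentially weighted moving average over the recent resource history, sketched below; the smoothing factor is an assumption, and the disclosure does not mandate any particular forecasting method.

```python
def ewma_forecast(history, alpha=0.5):
    """One-step-ahead resource forecast by exponentially weighted
    moving average: recent observations weigh more than old ones."""
    if not history:
        raise ValueError("need at least one observation")
    estimate = history[0]
    for value in history[1:]:
        estimate = alpha * value + (1 - alpha) * estimate
    return estimate
```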
- the prediction engine 210 may proactively send power consumption estimates to a Power Management System, PoMS, which defines a particular power budget allocation, redistribution or capping for VNF/applications executed in the datacentre.
- PoMS Power Management System
- the PoMS may interact with both the Cloud resource Orchestration and the datacentre Infrastructure Management system.
- the interaction with the Cloud Resource Orchestration may be made in terms of: (1) requesting migration of a VNF or application instance such that overall power consumption is optimised either on the source or destination servers, and (2) requesting provisioning a new instance of a VNF or application instance in case the power consumption budget on a given server is about to be fully utilised, but overall the VNF is within the allowed power budget.
- the PoMS may also interact with the datacentre Infrastructure Management system (that controls the air conditioning or water cooling, for example). It may request the DC Infrastructure Management to increase the air conditioning or water cooling flow in certain locations, in case a significant number of VNFs or application instances are using a lot of power (thus creating significant amounts of heat from the CPU). Or it may request the datacentre Infrastructure Management to decrease the air conditioning or water cooling in certain locations where VNFs or application instances are using significantly less power than before, thus reducing the amount of heat from the CPU.
- the PoMS may interact with the Governors of these capabilities and configure policies that determine how the power will be consumed. For example, for DVFS on a server, the PoMS may configure the DVFS Governor with a certain value for the maximum and minimum frequency of the processor, as well as maximum and minimum voltage. These values need to be in the range offered by the manufacturer, but the entire operating range may be restricted through the values configured by the PoMS. For the Energy Efficient Ethernet, the PoMS may configure the associated Governor with permitted data rates (out of the possible 10, 100, 1000 or 10000 in case the NICs and cabling allow this) as well as values for transition timeouts.
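The governor policies described above might be represented as simple configuration records, as in this sketch; all field names, values and the validation range are assumptions for illustration, not an actual Governor API.

```python
# Illustrative policy objects the PoMS might push to per-server Governors.
DVFS_POLICY = {
    "min_freq_mhz": 1200,
    "max_freq_mhz": 2400,   # must stay within the manufacturer's range
    "min_voltage_mv": 800,
    "max_voltage_mv": 1000,
}

EEE_POLICY = {
    "permitted_rates_mbps": [100, 1000],  # subset of 10/100/1000/10000
    "transition_timeout_ms": 50,
}

def validate_dvfs(policy, hw_min_mhz=800, hw_max_mhz=3600):
    """Reject DVFS policies whose frequency range falls outside the
    operating range offered by the manufacturer."""
    return (hw_min_mhz <= policy["min_freq_mhz"]
            <= policy["max_freq_mhz"] <= hw_max_mhz)
```

The key point is that the PoMS restricts, rather than replaces, the hardware's own operating range.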
- servers may offer Green Capabilities such as DVFS (allowing changing the frequency and/or CPU voltage depending on load and other policies) or Energy Efficient Ethernet, and some of these capabilities are exposed to the Virtualisation Management platform (be it Kernel-based Virtual Machine for virtual machines or Docker for containers).
- Embodiments herein also relate to a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- the power manager has the same technical features, objects and advantages as the method performed by the power manager described above. The power manager will therefore be described only in brief in order to avoid unnecessary repetition. The power manager will be described with reference to figures 3 and 4.
- Figure 3 illustrates the power manager 300, 400 being configured for determining characteristics of incoming traffic to the datacentre; and predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the power manager 300, 400 is further configured for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- FIG. 3 illustrates the power manager 300 comprising a processor 321 and memory 322, the memory comprising instructions, e.g. by means of a computer program 323, which when executed by the processor 321 causes the power manager 300 to determine characteristics of incoming traffic to the datacentre; and to predict compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the memory further comprises instructions which, when executed by the processor 321, cause the power manager 300 to determine a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- Figure 3 also illustrates the power manager 300 comprising a memory 310. It shall be pointed out that figure 3 is merely an exemplifying illustration and memory 310 may be optional, be a part of the memory 322 or be a further memory of the power manager 300.
- the memory may for example comprise information relating to the power manager 300, to statistics of operation of the power manager 300, just to give a couple of illustrating examples.
- Figure 3 further illustrates the power manager 300 comprising processing means 320, which comprises the memory 322 and the processor 321.
- figure 3 illustrates the power manager 300 comprising a communication unit 330.
- the communication unit 330 may comprise an interface through which the power manager 300 communicates with other nodes or entities of the datacentre as well as other communication units.
- Figure 3 also illustrates the power manager 300 comprising further functionality 340.
- the further functionality 340 may comprise hardware or software necessary for the power manager 300 to perform different tasks that are not disclosed herein.
- FIG 4 illustrates the power manager 400 comprising a determining unit 403 for determining characteristics of incoming traffic to the datacentre; and a predicting unit 404 for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the determining unit 403 of the power manager 400 is also used for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the power manager 400 is also illustrated comprising a communication unit 401. Through this unit, the power manager 400 is adapted to communicate with other nodes and/or entities in the datacentre or associated therewith.
- the power manager 400 is further illustrated comprising a memory 402 for storing data.
- the power manager 400 may comprise a control or processing unit (not shown) which in turn is connected to the different units 403- 404. It shall be pointed out that this is merely an illustrative example and the power manager 400 may comprise more, less or other units or modules which execute the functions of the power manager 400 in the same manner as the units illustrated in figure 4.
- figure 4 merely illustrates various functional units in the power manager 400 in a logical sense.
- the functions in practice may be implemented using any suitable software and hardware means/circuits etc.
- the embodiments are generally not limited to the shown structures of the power manager 400 and the functional units.
- the previously described exemplary embodiments may be realised in many ways.
- one embodiment includes a computer-readable medium having instructions stored thereon that are executable by the control or processing unit for executing the method actions or steps in the power manager 400.
- the instructions executable by the computing system and stored on the computer-readable medium perform the method actions or steps of the power manager 400 as set forth in the claims.
- the power manager has the same possible advantages as the method performed by the power manager.
- One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre.
- Another possible advantage is that it allows for the operator to reduce the carbon emissions associated with electricity consumption.
- the power manager 300, 400 is further configured for informing a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
- the power manager 300, 400 is further configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting migration of an application instance from a first server to a second server.
- the power manager 300, 400 is further configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting provisioning of a new application instance when the power consumption budget on a first server is predicted not to meet a threshold.
- the power manager 300, 400 is further configured for informing an Infrastructure Manager about the predicted compute and/or memory resources needed.
- the power manager 300, 400 is further configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting increase or decrease of manageable site infrastructure resources including but not limited to power distribution units, air conditioning and/or water cooling flow with regard to one or more servers on which the one or more virtual machines is/are running.
- the power manager 300, 400 is further configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more servers on which the one or more virtual machines is/are running.
- DVFS Dynamic Voltage and Frequency Scaling
- the power manager 300, 400 is further configured for determining characteristics of incoming traffic to the datacentre by determining which application instance is the destination address of the incoming traffic.
- the power manager 300, 400 is further configured for predicting compute and/or memory resources needed by determining a type of application instance to which the incoming traffic is addressed.
- the power manager 300, 400 is further configured for predicting of compute and/or memory resources needed by mapping the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
- the power manager 300, 400 is further configured for predicting compute and/or memory resources needed further by also using statistical data together with the determined characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
- FIG. 5 schematically shows an embodiment of an arrangement 500 in a power manager 400.
- a processing unit 506 e.g. with a Digital Signal Processor, DSP.
- the processing unit 506 may be a single unit or a plurality of units to perform different actions of procedures described herein.
- the arrangement 500 of the power manager 400 may also comprise an input unit 502 for receiving signals from other entities, and an output unit 504 for providing signal(s) to other entities.
- the input unit and the output unit may be arranged as an integrated entity or, as illustrated in the example of figure 4, as one or more interfaces 401.
- the arrangement 500 in the power manager 400 comprises at least one computer program product 508 in the form of a non-volatile memory, e.g. an Electrically Erasable Programmable Read-Only Memory, EEPROM.
- the computer program product 508 comprises a computer program 510, which comprises code means, which when executed in the processing unit 506 in the arrangement 500 in the power manager 400 causes the power manager to perform the actions e.g. of the procedure described earlier in conjunction with figures 1a-1d.
- the computer program 510 may be configured as a computer program code structured in computer program modules 510a-510e.
- the code means in the computer program of the arrangement 500 in the power manager 400 comprises a determining unit, or module, for determining characteristics of incoming traffic to the datacentre; and a predicting unit, or module, for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the determining unit, or module, of the power manager 400 is also used for determining a power consumption for one or more individual virtual machines based on the predicted compute and/or memory resources needed.
- the computer program modules could essentially perform the actions of the flow illustrated in figures 1a-1d, to emulate the power manager 400.
- the different computer program modules, when executed in the processing unit 506, may correspond to the units 403-406 of figure 4.
- although the code means in the embodiments disclosed above in conjunction with figure 4 are implemented as computer program modules which, when executed in the respective processing unit, cause the power manager to perform the actions described above in conjunction with the figures mentioned above, at least one of the code means may in alternative embodiments be implemented at least partly as hardware circuits.
- the processor may be a single Central Processing Unit, CPU, but could also comprise two or more processing units.
- the processor may include general purpose microprocessors, instruction set processors and/or related chip sets and/or special purpose microprocessors such as Application Specific Integrated Circuits, ASICs.
- the processor may also comprise board memory for caching purposes.
- the computer program may be carried by a computer program product connected to the processor.
- the computer program product may comprise a computer readable medium on which the computer program is stored.
- the computer program product may be a flash memory, a Random-Access Memory RAM, Read-Only Memory, ROM, or an EEPROM, and the computer program modules described above could in alternative embodiments be distributed on different computer program products in the form of memories within the power manager.
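The module structure described above (a determining unit, a predicting unit, and a unit determining per-VM power) can be sketched as plain Python classes. This is an illustrative sketch only: the class names, the linear resource model and the even per-VM split of server power are assumptions for the example, not the actual computer program modules 510a-510e.

```python
# Hypothetical sketch of the determining / predicting / power-determining
# modules as plain callables. Names and the simple linear power model
# (idle power plus a dynamic term, split evenly across VMs) are assumed.

class DeterminingModule:
    """Determines characteristics of incoming traffic (here: total volume)."""
    def run(self, traffic):
        return sum(traffic)

class PredictingModule:
    """Predicts compute resources needed from the traffic characteristics."""
    def __init__(self, cpu_per_unit=0.1):
        self.cpu_per_unit = cpu_per_unit
    def run(self, volume):
        return volume * self.cpu_per_unit

class PowerModule:
    """Determines per-VM power from predicted resources: each VM gets an
    equal share of the idle power plus an equal share of the dynamic power."""
    def __init__(self, idle_w=50.0, w_per_cpu=2.0):
        self.idle_w, self.w_per_cpu = idle_w, w_per_cpu
    def run(self, cpu_load, n_vms):
        return [self.idle_w / n_vms + cpu_load * self.w_per_cpu / n_vms
                for _ in range(n_vms)]

det, pred, pw = DeterminingModule(), PredictingModule(), PowerModule()
volume = det.run([100, 250, 150])   # total traffic units
cpu = pred.run(volume)              # predicted CPU load
per_vm = pw.run(cpu, n_vms=2)       # watts attributed to each of 2 VMs
```

Chaining the three modules mirrors the flow of figures 1a-1d as summarised above: traffic characteristics feed the prediction, and the prediction feeds the per-VM power determination.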
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Power Sources (AREA)
Abstract
A power manager and a method performed therein for managing power of a datacentre comprising network and compute resources and running virtualization and application instances. The method (100) comprises: determining (110) characteristics of incoming traffic to the datacentre; and predicting (120) compute and/or memory resources needed based on the determined characteristics of the incoming traffic to the datacentre and the applications that will process the traffic. The method further comprises determining (130) a power consumption for one or more virtual machines based on the predicted compute and/or memory resources needed.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/SE2016/050686 WO2018009103A1 (fr) | 2016-07-05 | 2016-07-05 | Gestionnaire de puissance et procédé mis en œuvre permettant de gérer la puissance d'un centre de données |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/SE2016/050686 WO2018009103A1 (fr) | 2016-07-05 | 2016-07-05 | Gestionnaire de puissance et procédé mis en œuvre permettant de gérer la puissance d'un centre de données |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018009103A1 true WO2018009103A1 (fr) | 2018-01-11 |
Family
ID=56418582
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SE2016/050686 Ceased WO2018009103A1 (fr) | 2016-07-05 | 2016-07-05 | Gestionnaire de puissance et procédé mis en œuvre permettant de gérer la puissance d'un centre de données |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018009103A1 (fr) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111722907A (zh) * | 2020-05-20 | 2020-09-29 | 中天通信技术有限公司 | 基于dvfs的数据中心映射方法、装置及存储介质 |
| CN114064282A (zh) * | 2021-11-23 | 2022-02-18 | 北京百度网讯科技有限公司 | 资源挖掘方法、装置及电子设备 |
| WO2022048674A1 (fr) * | 2020-09-07 | 2022-03-10 | 华为云计算技术有限公司 | Procédé et appareil de gestion d'une machine virtuelle sur la base d'une armoire de serveur |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090293022A1 (en) * | 2008-05-22 | 2009-11-26 | Microsoft Corporation | Virtual Machine Placement Based on Power Calculations |
| WO2010057775A2 (fr) * | 2008-11-20 | 2010-05-27 | International Business Machines Corporation | Procédé et appareil de gestion de l’efficacité de puissance dans un système d’amas virtuel |
| US20100235840A1 (en) * | 2009-03-10 | 2010-09-16 | International Business Machines Corporation | Power management using dynamic application scheduling |
| US20130190899A1 (en) * | 2008-12-04 | 2013-07-25 | Io Data Centers, Llc | Data center intelligent control and optimization |
| US20130339759A1 (en) * | 2012-06-15 | 2013-12-19 | Infosys Limted | Method and system for automated application layer power management solution for serverside applications |
- 2016-07-05: WO application PCT/SE2016/050686 filed, published as WO2018009103A1 (fr); legal status: not active (Ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090293022A1 (en) * | 2008-05-22 | 2009-11-26 | Microsoft Corporation | Virtual Machine Placement Based on Power Calculations |
| WO2010057775A2 (fr) * | 2008-11-20 | 2010-05-27 | International Business Machines Corporation | Procédé et appareil de gestion de l’efficacité de puissance dans un système d’amas virtuel |
| US20130190899A1 (en) * | 2008-12-04 | 2013-07-25 | Io Data Centers, Llc | Data center intelligent control and optimization |
| US20100235840A1 (en) * | 2009-03-10 | 2010-09-16 | International Business Machines Corporation | Power management using dynamic application scheduling |
| US20130339759A1 (en) * | 2012-06-15 | 2013-12-19 | Infosys Limted | Method and system for automated application layer power management solution for serverside applications |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111722907A (zh) * | 2020-05-20 | 2020-09-29 | 中天通信技术有限公司 | 基于dvfs的数据中心映射方法、装置及存储介质 |
| CN111722907B (zh) * | 2020-05-20 | 2024-01-19 | 中天通信技术有限公司 | 基于dvfs的数据中心映射方法、装置及存储介质 |
| WO2022048674A1 (fr) * | 2020-09-07 | 2022-03-10 | 华为云计算技术有限公司 | Procédé et appareil de gestion d'une machine virtuelle sur la base d'une armoire de serveur |
| CN114064282A (zh) * | 2021-11-23 | 2022-02-18 | 北京百度网讯科技有限公司 | 资源挖掘方法、装置及电子设备 |
| CN114064282B (zh) * | 2021-11-23 | 2023-07-25 | 北京百度网讯科技有限公司 | 资源挖掘方法、装置及电子设备 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20190230004A1 (en) | Network slice management method and management unit | |
| CN112583861B (zh) | 服务部署方法、资源配置方法、系统、装置及服务器 | |
| EP3606008B1 (fr) | Procédé et dispositif permettant la réalisation d'une planification de ressource | |
| CN109684074B (zh) | 物理机资源分配方法及终端设备 | |
| CN108667777B (zh) | 一种服务链生成方法及网络功能编排器nfvo | |
| US9529619B2 (en) | Method of distributing network policies of virtual machines in a datacenter | |
| US9575794B2 (en) | Methods and systems for controller-based datacenter network sharing | |
| US20240354149A1 (en) | Rightsizing virtual machine deployments in a cloud computing environment | |
| US9722930B2 (en) | Exploiting probabilistic latency expressions for placing cloud applications | |
| CN104468688A (zh) | 用于网络虚拟化的方法和设备 | |
| EP3103217B1 (fr) | Système et procédé de surveillance pour des réseaux définis par logiciel | |
| US20170063645A1 (en) | Method, Computer Program and Node for Management of Resources | |
| CN107562512A (zh) | 一种迁移虚拟机的方法、装置及系统 | |
| WO2014140790A1 (fr) | Appareil et procédé pour maintenir des états opérationnels cohérents dans des infrastructures en nuage | |
| CN104601664A (zh) | 一种云计算平台资源管理与虚拟机调度的控制系统 | |
| JP6490806B2 (ja) | 計算リソースの新たな構成を決定するための構成方法、機器、システム及びコンピュータ可読媒体 | |
| Zhou et al. | Goldilocks: Adaptive resource provisioning in containerized data centers | |
| Velasco et al. | Elastic operations in federated datacenters for performance and cost optimization | |
| CN103744735A (zh) | 一种多核资源的调度方法及装置 | |
| WO2018009103A1 (fr) | Gestionnaire de puissance et procédé mis en œuvre permettant de gérer la puissance d'un centre de données | |
| Telenyk et al. | Architecture and conceptual bases of cloud IT infrastructure management | |
| WO2018013023A1 (fr) | Serveur et procédé alors mis en œuvre servant à déterminer une fréquence et une tension d'un ou de plusieurs processeurs du serveur | |
| Sharma et al. | A Machine learning-based framework for energy-efficient load balancing in sustainable urban infrastructure and smart buildings | |
| WO2017133020A1 (fr) | Procédé et dispositif de transmission de principes dans un système nfv | |
| Carrega et al. | Energy-aware consolidation scheme for data center cloud applications |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16739598; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16739598; Country of ref document: EP; Kind code of ref document: A1 |