WO2018009103A1 - Power manager and method performed thereby for managing power of a datacentre - Google Patents
- Publication number: WO2018009103A1 (application PCT/SE2016/050686)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- compute
- memory resources
- power manager
- resources needed
- datacentre
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5094—Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to power management and in particular to power management of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- SDN Software-Defined Networks
- NFV Network Functions Virtualisation
- SDN is fundamentally based on decoupling software from hardware, consolidating the control plane so that a single software entity controls multiple data plane elements.
- NFV regards the virtualisation of network functions and their dynamic allocation and execution on generic servers.
- Together, SDN and NFV offer agile and programmable network infrastructures toward generic network hardware deployed with open software, in which functions of the centralised control can be performed through virtualised functions and capabilities from NFV.
- VPS Virtualised Power Shifting
- the object is to obviate at least some of the problems outlined above.
- it is an object to provide a power manager and a method performed thereby for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- a method performed by a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- the method comprises determining characteristics of incoming traffic to the datacentre; and predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the method further comprises determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- the power manager is configured for determining characteristics of incoming traffic to the datacentre; and for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the power manager is further configured for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the power manager and the method performed by the power manager have several advantages.
- One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre.
- Another possible advantage is that it allows for the operator to reduce the carbon emissions associated with electricity consumption.
- Figure 1a is a flowchart of a method performed by a power manager according to an exemplifying embodiment.
- Figure 1b is a flowchart of a method performed by a power manager according to another exemplifying embodiment.
- Figure 1c is a flowchart of a method performed by a power manager according to yet another exemplifying embodiment.
- Figure 1d is a flowchart of a method performed by a power manager according to still another exemplifying embodiment.
- Figure 2a is an example of a system architecture of a datacentre connected to the Internet.
- Figure 2b is a block diagram of an exemplifying implementation of a prediction engine.
- Figure 2c is a sequence diagram of steps for configuring a server's green capabilities.
- Figure 2d is a sequence diagram of steps towards service provisioning.
- Figure 2e is a sequence diagram of steps to coordinate datacentre infrastructure capabilities.
- Figure 2f is an illustration of dependencies between CPU consumption and network traffic.
- Figure 2g is an illustration of dependency between CPU power consumption and CPU frequency (load).
- Figure 3 is a block diagram of a power manager according to an exemplifying embodiment.
- Figure 4 is a block diagram of a power manager according to another exemplifying embodiment.
- Figure 5 is a block diagram of an arrangement in a power manager according to an exemplifying embodiment.
- a power manager and a method performed thereby for managing power of a datacentre comprising compute, network and infrastructure resources and implementing virtualisation and executing application instances are provided.
- the power manager may determine characteristics of the incoming traffic to the datacentre. Depending on the incoming traffic, a certain amount of processing is required and thus a certain amount of power is consumed by the datacentre in order to perform the required processing.
- the power manager thus uses the characteristics of the incoming traffic in order to predict how much resources of the datacentre are required for performing the required processing. Based on that, the power manager may determine a power consumption for individual machines and may further based on that determine to e.g. migrate one or more application instances between virtual machines and/or servers in order to optimise (which usually is minimise) power consumption of the datacentre.
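The chain of steps described above (determine traffic characteristics, predict required resources, derive per-virtual-machine power) can be sketched as follows. All function and field names here are illustrative assumptions, not interfaces defined by the disclosure:

```python
def determine_characteristics(samples, window_s):
    """Step 110 (assumed shape): summarise sampled packets over a time window."""
    return {
        "packets_per_s": len(samples) / window_s,
        "mean_size": sum(p["size"] for p in samples) / max(len(samples), 1),
    }

def predict_vm_power(samples, window_s, resource_model, power_model):
    """Steps 120-130 (assumed shape): predict per-VM load, then map load to power.

    resource_model: characteristics -> {vm_id: predicted CPU load in 0..1}
    power_model:    CPU load -> power in watts
    """
    characteristics = determine_characteristics(samples, window_s)
    loads = resource_model(characteristics)
    return {vm: power_model(load) for vm, load in loads.items()}
```

With, say, an affine power model (`idle + slope * load`) plugged in as `power_model`, the result is a per-VM power estimate that a migration decision could be based on.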
- an application instance may be e.g. a Virtual Network Function, VNF. It is observed that there are direct dependencies between network traffic and the CPU and memory resources used by VNFs.
- VNF Virtual Network Function
- Figure 1a illustrates the method 100 comprising determining 110 characteristics of incoming traffic to the datacentre; and predicting 120 compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the method further comprises determining 130 a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the traffic coming into the datacentre may vary over time, and consequently the characteristics of the incoming traffic to the datacentre vary over time.
- the characteristics may be for example number of packets per time unit, type of packets, packet size, target address of the packet(s) etc.
- the target address of the packet(s) may be one or more applications, the target address of the packet(s) may be e.g. an Internet Protocol, IP, address.
- the power manager may determine the characteristics of the incoming traffic to the datacentre itself, e.g. by sampling the incoming traffic.
- the power manager may also receive the characteristics from a prediction engine, which may determine the characteristics e.g. based on samples of the incoming traffic.
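The characteristics listed above (packets per time unit, packet size, target address) could, under simple assumptions, be extracted from traffic samples by grouping sampled packets per destination address. The field names (`dst_ip`, `size`) are assumptions for illustration:

```python
from collections import defaultdict

def characteristics_per_destination(samples, window_s):
    """Summarise sampled packets per target address over a sampling window."""
    sizes = defaultdict(list)
    for pkt in samples:
        sizes[pkt["dst_ip"]].append(pkt["size"])
    return {
        dst: {
            "packets_per_s": len(s) / window_s,   # number of packets per time unit
            "mean_size": sum(s) / len(s),         # average packet size
        }
        for dst, s in sizes.items()
    }
```

Because each destination address maps to an application instance, the per-destination summary feeds directly into per-application resource prediction.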
- the power manager predicts compute and/or memory resources needed. Different applications may require different amounts of processing, i.e. compute resources, e.g. different numbers of compute cycles. Also, different types of packets, which may relate to different types of services associated with the packets, may require different amounts of processing resources.
- the power manager may thus make a good prediction of the total amount of required compute and/or memory resources needed to meet the demands of the incoming traffic.
- the power manager may make use of one or more Application Resource Prediction models, which may be pre-configured in the power manager or in the prediction engine for each type of application that needs to be supported. Such models could be determined for example by machine learning techniques offline (an example of such a model is shown in figure 2f).
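One minimal stand-in for such an offline-learned Application Resource Prediction model is a least-squares line fitted to observed (packet rate, CPU load) pairs, matching the kind of dependency figure 2f illustrates. This is a sketch under simple assumptions, not the disclosure's actual learning technique:

```python
def fit_resource_model(rates, cpu_loads):
    """Fit cpu_load = a * rate + b offline from measured samples (least squares)."""
    n = len(rates)
    mean_x = sum(rates) / n
    mean_y = sum(cpu_loads) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(rates, cpu_loads)) \
        / sum((x - mean_x) ** 2 for x in rates)
    b = mean_y - a * mean_x
    # Returned model: predicted CPU load for a given packet rate.
    return lambda rate: a * rate + b
```

The fitted callable can then be pre-configured per application type and queried online with the measured traffic characteristics.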
- the power manager may then, based on the predicted compute and/or memory resources needed, determine the power consumption for individual one or more virtual machines.
- the datacentre may comprise one or more physical servers, each may execute one or more virtual machines.
- a virtual machine may execute one or more application instances, e.g. in the form of Virtual Network Function(s), VNF(s).
- the power manager may make use of one or more Resource Power Consumption models, which may be pre-configured in the power manager or the prediction engine for each type of resource (for example, CPU) that may be managed in terms of power consumption in the datacentre.
- Parametric models are available, either based on direct measurements as illustrated in figure 2g or through references from academic literature or manufacturer datasheets. In its simplest form, such a model is a lookup table with two columns (one for load, another one for energy consumption); more elaborate forms are also possible.
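The two-column lookup table described above can be queried with linear interpolation between rows (and clamping outside the measured range); the table values below are made-up illustrations:

```python
def power_from_load(table, load):
    """Look up power for a CPU load in a (load, power) table, interpolating
    linearly between rows and clamping outside the table's range."""
    table = sorted(table)
    if load <= table[0][0]:
        return table[0][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if load <= x1:
            return y0 + (y1 - y0) * (load - x0) / (x1 - x0)
    return table[-1][1]
```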
- the method performed by the power manager has several possible advantages.
- One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre.
- Another possible advantage is that it allows for the operator to reduce the carbon emissions associated with electricity consumption.
- the method may further comprise, as illustrated in figure 1b, informing 140 a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
- the Cloud Resource Orchestrator is enabled to take appropriate actions, as will be described in more detail below.
- the Cloud Resource Orchestrator is responsible for e.g. which virtual machines are to be executed and on which physical server they should be executed.
- the Cloud Resource Orchestrator is also responsible for which virtual machine shall execute which application(s), or VNF(s).
- Assume that the incoming traffic only requires one physical server to be powered in order to execute the necessary virtual machine(s) and/or application(s)/VNF(s), and that at least two physical servers are currently up and running. Based on the determined power consumption for individual one or more virtual machines, based in turn on the predicted compute and/or memory resources needed for the incoming traffic to the datacentre, the Cloud Resource Orchestrator may determine to execute all the one or more virtual machines on one physical server only, wherein the remaining physical servers that are currently up and running could be put in a power saving mode.
- the informing 140 of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed may comprise requesting migration of an application instance from a first server or virtual machine to a second server or virtual machine.
- the power manager determines that at least the first virtual machine need not be running, but instead the second virtual machine has capacity to execute the application instance (e.g. a VNF) in question. The power manager may then request the Cloud Resource Orchestrator to migrate the application instance from the first to the second virtual machine.
- a physical server executing both the first and the second virtual machine may consume more energy than when executing just one virtual machine, since compute and memory resources need to be assigned to both the first and the second virtual machine. Compute and memory resources may also be saved by executing just one server instead of two.
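The energy argument for consolidation can be made concrete with an assumed affine server power model (a fixed idle cost plus a load-proportional cost; the constants are illustrative): hosting both virtual machines on one server saves the freed server's idle power.

```python
IDLE_W = 100.0   # assumed idle power of a powered server, watts
SLOPE_W = 100.0  # assumed additional watts at full CPU load

def server_power(total_load):
    """Power of one server; zero if it is powered down (no load placed on it)."""
    return 0.0 if total_load == 0 else IDLE_W + SLOPE_W * total_load

def consolidation_saving(load_vm1, load_vm2):
    """Watts saved by hosting both VMs on one server instead of one VM each."""
    split = server_power(load_vm1) + server_power(load_vm2)
    together = server_power(load_vm1 + load_vm2)
    return split - together
```

Under this model the saving equals the idle power of the server that can be put in a power saving mode, which is why consolidation pays off whenever the combined load still fits on one server.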
- the informing 140 of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed comprises requesting provisioning of a new instance of an application instance when the power consumption budget on a first server is not predicted to meet a threshold.
- each server may be associated with an individual power consumption budget, or the power consumption budget may be the same for all servers of the datacentre.
- the power consumption budget of a server may be associated with a respective usage level of compute and memory resources of that server. The more heavily loaded a Central Processing Unit, CPU, is, or the fuller memories and/or buffers are, the higher the power consumption.
- the CPU may be operating close to its maximum capacity and/or memories and buffers may be almost full, in which case it is necessary to start a second server and/or to migrate application instances to a virtual machine on the second server.
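A budget check of this kind could be sketched as follows; the headroom fraction and the returned action names are assumptions for illustration:

```python
def check_budget(predicted_power_w, budget_w, headroom=0.9):
    """Flag a server whose predicted power nears its power consumption budget.

    Returns an action string: request provisioning of a new application
    instance elsewhere once the prediction exceeds `headroom` of the budget.
    """
    if predicted_power_w > headroom * budget_w:
        return "request_new_instance_on_other_server"
    return "ok"
```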
- the method further comprises informing 150 an Infrastructure Manager about the predicted compute and/or memory resources needed.
- the Infrastructure Manager of the datacentre controls e.g. cooling of the datacentre, which may be done by air conditioning or water cooling. Depending on how many servers are powered and operating, and/or the resource usage level of individual servers, more or less energy is consumed by the datacentre.
- the higher the resource usage level of individual servers, the more heat is generated, wherein more cooling of the datacentre may be required.
- the Infrastructure Manager may determine how much cooling the datacentre needs, which servers require cooling etc. The Infrastructure Manager may then take appropriate actions in order to keep the datacentre properly cooled, wherein unnecessary cooling may be avoided, thereby saving energy, and possible overheating may be avoided.
- the informing 150 of the Infrastructure Manager about the predicted compute and/or memory resources needed may comprise requesting increase or decrease of manageable site infrastructure resources, including but not limited to power distribution units, air conditioning and/or water cooling flow, with regard to one or more servers on which the one or more virtual machines is/are running.
- the power manager may be aware of the present resource usage level (level of compute and/or memory resources) of the datacentre as a whole and of individual servers. Comparing that information with the predicted compute and/or memory resources needed, the power manager may determine increase or decrease of manageable site infrastructure resources.
- the informing 150 of the Infrastructure Manager about the predicted compute and/or memory resources needed comprises requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more servers on which the one or more virtual machines is/are running.
- DVFS Dynamic Voltage and Frequency Scaling
- DVFS enables the power manager to e.g. lower the voltage and/or the frequency of a CPU of a server in case that CPU could still fulfil the requirements of the predicted compute and/or memory resources needed even with a reduced voltage and/or frequency.
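A simple DVFS decision consistent with this description is to pick the lowest available CPU frequency that still covers the predicted load. The assumption that predicted load is expressed relative to the current frequency, and the frequency values themselves, are illustrative:

```python
def choose_dvfs_frequency(available_mhz, predicted_load, current_mhz):
    """Pick the lowest available frequency that still fulfils the predicted
    compute demand (predicted_load is relative to current_mhz, an assumption)."""
    required_mhz = predicted_load * current_mhz
    candidates = [f for f in sorted(available_mhz) if f >= required_mhz]
    return candidates[0] if candidates else max(available_mhz)
```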
- the determining 110 of characteristics of incoming traffic to the datacentre may comprise determining which application instance is the destination address of the incoming traffic.
- different application instances may require different amounts of processing resources, e.g. with regard to number of compute cycles and/or level of memory/buffer usage.
- the different application instances may be identified by their address. Consequently, the power manager may determine the characteristics of incoming traffic to the datacentre by determining which application instance is the destination address of the incoming traffic.
- the predicting 120 of compute and/or memory resources needed comprises determining a type of application instance to which the incoming traffic is addressed.
- a first type may require two compute cycles
- a second type may require five compute cycles
- a third type may require eight compute cycles
- a fourth type may require one compute cycle.
- the power manager may predict compute and/or memory resources needed by determining the type of application instance to which the incoming traffic is addressed.
- the predicting 120 of compute and/or memory resources needed further comprises mapping 125 the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
- the power manager may relatively accurately predict compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- a first type of application instances requires a first power consumption per incoming packet
- a second type of application instances requires a second power consumption per incoming packet
- a third type of application instances requires a third power consumption per incoming packet
- a fourth type of application instances requires a fourth power consumption per incoming packet. Assume the five incoming packets comprise two packets addressed to the second type and two to the third type; then the five incoming packets tell the power manager that 2 * second power consumption + 2 * third power consumption, plus the power consumption corresponding to the fifth packet's type, will be required.
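The per-packet arithmetic above amounts to a weighted sum over packet counts per application-instance type; the type names and per-packet figures below are made up for illustration:

```python
def total_power_per_packets(counts, power_per_packet):
    """Total power implied by packet counts per application-instance type."""
    return sum(counts[t] * power_per_packet[t] for t in counts)
```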
- the predicting 120 of compute and/or memory resources needed may further comprise also using statistical data together with the determined characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
- Statistical data may provide useful information to the power manager about how the traffic characteristics statistically changes over time. Of course there may be deviations from the statistics but still valuable information may be obtained from statistical data.
- the traffic may vary according to the same pattern over most Mondays, over most Tuesdays etc., wherein the pattern comprises peaks which usually occur at the same point in time on most Mondays, Tuesdays, Wednesdays, Sundays and so on. Similar models may be obtained over time for the datacentre.
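Such a weekday pattern could be accumulated as a running mean per (weekday, hour) slot, giving the power manager a statistical prior for the expected rate. The class shape is an assumption:

```python
from collections import defaultdict

class WeeklyTrafficStats:
    """Running mean packet rate per (weekday, hour) slot. Actual traffic may
    deviate, but the historical mean is a useful prediction input."""

    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, weekday, hour, rate):
        self.sums[(weekday, hour)] += rate
        self.counts[(weekday, hour)] += 1

    def expected(self, weekday, hour):
        n = self.counts[(weekday, hour)]
        return self.sums[(weekday, hour)] / n if n else None
```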
- Figure 2a is an illustration of an example of a system architecture of the datacentre and how it is connected to the Internet. External traffic, e.g. from the Internet, comes in to the datacentre via a datacentre gateway 270, which is part of a layer 3 (L3) network.
- the datacentre comprises in this illustrative example three servers 240a, 240b and 240c and optionally also the gateway 270.
- the datacentre also comprises a layer 2 (L2) network 250 for communication within the datacentre and datacentre infrastructure resources 260.
- Figure 2a also illustrates a prediction engine 210, the power manager 200, Cloud Resource Orchestration (CRO) 220 and Infrastructure Management System (IMS) 230.
- CRO Cloud Resource Orchestration
- IMS Infrastructure Management System
- the datacentre also comprises in this illustrative example a cloud monitoring bus, which connects respective Governors of the gateway 270, the servers 240a, 240b and 240c, the L2 network 250, the datacentre infrastructure resources 260, the prediction engine 210 and the power manager 200.
- the datacentre also comprises in this illustrative example a cloud management bus, which connects the gateway 270, the servers 240a, 240b and 240c, the L2 network 250, the datacentre infrastructure resources 260, and the CRO 220.
- the datacentre also comprises in this illustrative example an infrastructure management bus, which connects the IMS 230 and the datacentre infrastructure resources 260.
- the prediction engine 210 may comprise various functions and/or units as illustrated in figure 2b, e.g. traffic feature extraction, VNF/application resource prediction model, prediction model and resource power consumption model.
- the traffic feature extraction function determines which VNF or application instance is the destination of the traffic (for example, by examining the IP address). It might also aggregate information from several traffic samples received (such as to provide a statistic of the packet sizes received during a particular time interval).
- the VNF/Application resource prediction model may be pre-configured in the prediction engine 210 for each type of VNF or application instance that needs to be supported. Such models may be determined for example by offline machine learning techniques.
- the Resource Power Consumption model may be pre-configured in the prediction engine 210 for each type of resource (for example, CPU) that may be managed in terms of power consumption in the datacentre.
- Parametric models are available, either based on direct measurements or through references from academic literature or manufacturer datasheets. In its simplest form, such a model is a lookup table with two columns (one for load, another one for energy consumption).
- the prediction model function is optional; it takes the output of the resource prediction model and tries to determine how many resources may be used in the near future (equivalent to predicting network traffic based on history). As stated above, it is optional, and it may help by enabling pro-active rather than reactive management.
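A minimal stand-in for this optional short-horizon prediction is a moving average over the recent history of resource usage, forecasting the next interval so that management can act before the load arrives rather than after. The window size is an illustrative assumption:

```python
def predict_next(history, window=3):
    """Forecast next-interval resource usage as a moving average of the most
    recent `window` observations (pro-active rather than reactive input)."""
    recent = history[-window:]
    return sum(recent) / len(recent)
```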
- the prediction engine 210 may proactively send power consumption estimates to a Power Management System, PoMS, which defines a particular power budget allocation, redistribution or capping for VNF/applications executed in the datacentre.
- PoMS Power Management System
- the PoMS may interact with both the Cloud resource Orchestration and the datacentre Infrastructure Management system.
- the interaction with the Cloud Resource Orchestration may be made in terms of: (1) requesting migration of a VNF or application instance such that overall power consumption is optimised either on the source or destination servers, and (2) requesting provisioning of a new instance of a VNF or application instance in case the power consumption budget on a given server is about to be fully utilised, but overall the VNF is within the allowed power budget.
- the PoMS may also interact with the datacentre Infrastructure Management system (that controls the air conditioning or water cooling, for example). It may request the datacentre Infrastructure Management to increase the air conditioning or water cooling flow in certain locations, in case a significant number of VNFs or application instances are using a lot of power (thus creating significant amounts of heat from the CPU). Or it may request the datacentre Infrastructure Management to decrease the air conditioning or water cooling in certain locations where VNFs or application instances are using significantly less power than before, thus reducing the amount of heat from the CPU.
- the PoMS may interact with the Governors of these capabilities and configure policies that determine how the power will be consumed. For example, for DVFS on a server, the PoMS may configure the DVFS Governor with a certain value for the maximum and minimum frequency of the processor, as well as maximum and minimum voltage. These values need to be in the range offered by the manufacturer, but the entire operating range may be restricted through the values configured by the PoMS. For the Energy Efficient Ethernet, the PoMS may configure the associated Governor with permitted data rates (out of the possible 10, 100, 1000 or 10000 in case the NICs and cabling allow this) as well as values for transition timeouts.
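A sketch of such Governor policies as data, with the validation the paragraph implies: DVFS limits must stay within the manufacturer's operating range, and Energy Efficient Ethernet rates must come from the possible set. The hardware range constants are illustrative assumptions:

```python
HW_FREQ_RANGE_MHZ = (800, 3200)        # assumed manufacturer frequency range
EEE_RATES_MBIT = {10, 100, 1000, 10000}  # possible Ethernet data rates

def make_dvfs_policy(min_mhz, max_mhz):
    """Restrict the DVFS Governor's operating range within manufacturer limits."""
    lo, hi = HW_FREQ_RANGE_MHZ
    if not (lo <= min_mhz <= max_mhz <= hi):
        raise ValueError("DVFS policy outside manufacturer range")
    return {"min_mhz": min_mhz, "max_mhz": max_mhz}

def make_eee_policy(permitted_rates, transition_timeout_ms):
    """Configure the Energy Efficient Ethernet Governor with permitted rates."""
    if not set(permitted_rates) <= EEE_RATES_MBIT:
        raise ValueError("unsupported data rate")
    return {"rates": sorted(permitted_rates), "timeout_ms": transition_timeout_ms}
```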
- Servers may offer Green Capabilities such as DVFS (allowing changing the frequency and/or CPU voltage depending on load and other policies) or Energy Efficient Ethernet, and some of these capabilities are exposed to the Virtualisation Management platform (be it Kernel-based Virtual Machine for virtual machines or Docker for containers).
- Embodiments herein also relate to a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
- the power manager has the same technical features, objects and advantages as the method performed by the power manager described above. The power manager will therefore be described only in brief in order to avoid unnecessary repetition. The power manager will be described with reference to figures 3 and 4.
- Figure 3 illustrates the power manager 300, 400 being configured for determining characteristics of incoming traffic to the datacentre; and predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the power manager 300, 400 is further configured for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- FIG. 3 illustrates the power manager 300 comprising a processor 321 and memory 322, the memory comprising instructions, e.g. by means of a computer program 323, which when executed by the processor 321 causes the power manager 300 to determine characteristics of incoming traffic to the datacentre; and to predict compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the memory further comprises instructions which, when executed by the processor 321, cause the power manager 300 to determine a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- Figure 3 also illustrates the power manager 300 comprising a memory 310. It shall be pointed out that figure 3 is merely an exemplifying illustration and memory 310 may be optional, be a part of the memory 322 or be a further memory of the power manager 300.
- the memory may for example comprise information relating to the power manager 300, to statistics of operation of the power manager 300, just to give a couple of illustrating examples.
- Figure 3 further illustrates the power manager 300 comprising processing means 320, which comprises the memory 322 and the processor 321.
- figure 3 illustrates the power manager 300 comprising a communication unit 330.
- the communication unit 330 may comprise an interface through which the power manager 300 communicates with other nodes or entities of the datacentre as well as other communication units.
- Figure 3 also illustrates the power manager 300 comprising further functionality 340.
- the further functionality 340 may comprise hardware or software necessary for the power manager 300 to perform different tasks that are not disclosed herein.
- FIG 4 illustrates the power manager 400 comprising a determining unit 403 for determining characteristics of incoming traffic to the datacentre; and a predicting unit 404 for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the determining unit 403 of the power manager 400 is also used for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the power manager 400 is also illustrated comprising a communication unit 401. Through this unit, the power manager 400 is adapted to communicate with other nodes and/or entities in the datacentre or associated therewith.
- the power manager 400 is further illustrated comprising a memory 402 for storing data.
- the power manager 400 may comprise a control or processing unit (not shown) which in turn is connected to the different units 403-404. It shall be pointed out that this is merely an illustrative example and the power manager 400 may comprise more, less or other units or modules which execute the functions of the power manager 400 in the same manner as the units illustrated in figure 4.
- figure 4 merely illustrates various functional units in the power manager 400 in a logical sense.
- the functions in practice may be implemented using any suitable software and hardware means/circuits etc.
- the embodiments are generally not limited to the shown structures of the power manager 400 and the functional units.
- the previously described exemplary embodiments may be realised in many ways.
- one embodiment includes a computer-readable medium having instructions stored thereon that are executable by the control or processing unit for executing the method actions or steps in the power manager 400.
- the instructions executable by the computing system and stored on the computer-readable medium perform the method actions or steps of the power manager 400 as set forth in the claims.
- the power manager has the same possible advantages as the method performed by the power manager.
- One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre.
- Another possible advantage is that it allows for the operator to reduce the carbon emissions associated with electricity consumption.
- the power manager 300, 400 is further configured for informing a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
- the power manager 300, 400 is further configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting migration of an application instance from a first server to a second server.
- the power manager 300, 400 is further configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting provisioning a new instance of an application instance when power consumption budget on a first server is not predicted to meet a threshold.
- the power manager 300, 400 is further configured for informing an Infrastructure Manager about the predicted compute and/or memory resources needed.
- the power manager 300, 400 is further configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting increase or decrease of manageable site infrastructure resources including but not limited to power distribution units, air conditioning and/or water cooling flow with regard to one or more server on which the one or more virtual machines is/are running.
- the power manager 300, 400 is further configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more server on which the one or more virtual machines is/are running.
- the power manager 300, 400 is further configured for determining characteristics of incoming traffic to the datacentre by determining which application instance is the destination address of the incoming traffic.
- the power manager 300, 400 is further configured for predicting compute and/or memory resources needed by determining a type of application instance to which the incoming traffic is addressed.
- the power manager 300, 400 is further configured for predicting of compute and/or memory resources needed by mapping the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
- the power manager 300, 400 is further configured for predicting compute and/or memory resources needed further by also using statistical data together with the determined characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
- FIG. 5 schematically shows an embodiment of an arrangement 500 in a power manager 400.
- a processing unit 506 e.g. with a Digital Signal Processor, DSP.
- the processing unit 506 may be a single unit or a plurality of units to perform different actions of procedures described herein.
- the arrangement 500 of the power manager 400 may also comprise an input unit 502 for receiving signals from other entities, and an output unit 504 for providing signal(s) to other entities.
- the input unit and the output unit may be arranged as an integrated entity or as illustrated in the example of figure 4, as one or more interfaces 401.
- the arrangement 500 in the power manager 400 comprises at least one computer program product 508 in the form of a non-volatile memory, e.g. an Electrically Erasable Programmable Read-Only Memory, EEPROM.
- the computer program product 508 comprises a computer program 510, which comprises code means, which when executed in the processing unit 506 in the arrangement 500 in the power manager 400 causes the power manager to perform the actions e.g. of the procedure described earlier in conjunction with figures 1a-1d.
- the computer program 510 may be configured as a computer program code structured in computer program modules 510a-510e.
- the code means in the computer program of the arrangement 500 in the power manager 400 comprises a determining unit, or module, for determining characteristics of incoming traffic to the datacentre; and a predicting unit, or module, for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
- the determining unit, or module, of the power manager 400 is also used for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
- the computer program modules could essentially perform the actions of the flow illustrated in figures 1a-1d, to emulate the power manager 400.
- when the different computer program modules are executed in the processing unit 506, they may correspond to the units 403-406 of figure 4.
- although the code means in the embodiments disclosed above in conjunction with figure 4 are implemented as computer program modules which when executed in the respective processing unit cause the power manager to perform the actions described above in conjunction with the figures mentioned above, at least one of the code means may in alternative embodiments be implemented at least partly as hardware circuits.
- the processor may be a single Central Processing Unit, CPU, but could also comprise two or more processing units.
- the processor may include general purpose microprocessors; instruction set processors and/or related chips sets and/or special purpose microprocessors such as Application Specific Integrated Circuits, ASICs.
- the processor may also comprise board memory for caching purposes.
- the computer program may be carried by a computer program product connected to the processor.
- the computer program product may comprise a computer readable medium on which the computer program is stored.
- the computer program product may be a flash memory, a Random-Access Memory, RAM, a Read-Only Memory, ROM, or an EEPROM, and the computer program modules described above could in alternative embodiments be distributed on different computer program products in the form of memories within the power manager.
Abstract
A power manager and a method performed thereby for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances are provided. The method (100) comprises determining (110) characteristics of incoming traffic to the datacentre; and predicting (120) compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic. The method further comprises determining (130) a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
Description
POWER MANAGER AND METHOD PERFORMED THEREBY FOR MANAGING
POWER OF A DATACENTRE
Technical field
[0001] The present disclosure relates to power management and in particular to power management of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances.
Background
[0002] Reducing power consumption of datacentre infrastructures has become a key concern for datacentre operators. As a growing number of enterprises move services and processes onto cloud-based resources, the power demand is growing, and the management of power consumption is becoming equally significant. Simultaneously, there is an ongoing transition towards Software-Defined Infrastructures, SDIs, in order to decouple hardware from software, replacing dedicated hardware with generic servers and extending management capabilities of the datacentre infrastructure. This means that power consumption in the network will be more intimately related to the power consumption on compute resources within the datacentre or in a distributed telecommunication cloud.
[0003] Due to the pervasive diffusion of broadband access, increasing performance of chipsets and IT hardware, and a widespread availability of open source software, Software-Defined Networks, SDN, and Network Functions Virtualisation, NFV, are two main paradigms gaining attention towards the establishment of SDIs. While SDN is fundamentally based on decoupling software from hardware, consolidating the control plane so that single software controls multiple data plane elements, NFV regards the virtualisation of network functions and their dynamic allocation and execution on generic servers. Aligned, SDN and NFV offer agile and programmable network infrastructures toward generic network hardware deployed on open software, in which functions of the centralised control can be performed through virtualised functions and capabilities from NFV.
[0004] Several problems exist and several issues need to be addressed. For example, solutions have to take both the compute infrastructure and the power consumed by physical and virtual network resources into account. The unpredictable nature of the traffic makes it difficult for application developers to present precise demands to a power management module. Employing Virtualised Power Shifting, VPS, requires a hierarchy of controllers to be implemented, two of them being potentially specific to applications. In an environment shared by multiple applications, such a solution would require encoding the application-specific behaviour through plugins, which means significant costs for system integration and delays for introducing new applications.
Summary
[0005] The object is to obviate at least some of the problems outlined above. In particular, it is an object to provide a power manager and a method performed thereby for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances. These objects and others may be obtained by providing a power manager and a method performed by a power manager according to the independent claims attached below.
[0006] According to an aspect a method performed by a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances is provided. The method comprises determining characteristics of incoming traffic to the datacentre; and predicting compute and/or memory resources needed based on the
determined characteristics of incoming traffic to the datacentre and the
applications that will process the traffic. The method further comprises determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[0007] According to an aspect a power manager for managing power of a datacentre comprising compute and network resources and implementing
virtualisation and executing application instances is provided. The power manager is configured for determining characteristics of incoming traffic to the datacentre; and for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the
applications that will process the traffic. The power manager is further configured for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[0008] The power manager and the method performed by the power manager have several advantages. One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the operator of the datacentre. Another possible advantage is that it allows the operator to reduce the carbon emissions associated with electricity consumption.
Brief description of drawings
[0009] Embodiments will now be described in more detail in relation to the accompanying drawings, in which:
[00010] Figure 1a is a flowchart of a method performed by a power manager according to an exemplifying embodiment.
[00011] Figure 1b is a flowchart of a method performed by a power manager according to another exemplifying embodiment.
[00012] Figure 1c is a flowchart of a method performed by a power manager according to yet another exemplifying embodiment.
[00013] Figure 1d is a flowchart of a method performed by a power manager according to still another exemplifying embodiment.
[00014] Figure 2a is an example of a system architecture of a datacentre connected to the Internet.
[00015] Figure 2b is a block diagram of an exemplifying implementation of a prediction engine.
[00016] Figure 2c is a sequence diagram of steps for configuring a server's green capabilities.
[00017] Figure 2d is a sequence diagram of steps towards service provisioning.
[00018] Figure 2e is a sequence diagram of steps to coordinate datacentre infrastructure capabilities.
[00019] Figure 2f is an illustration of dependencies between CPU consumption and network traffic.
[00020] Figure 2g is an illustration of dependency between CPU power consumption and CPU frequency (load).
[00021] Figure 3 is a block diagram of a power manager according to an exemplifying embodiment.
[00022] Figure 4 is a block diagram of a power manager according to another exemplifying embodiment.
[00023] Figure 5 is a block diagram of an arrangement in a power manager according to an exemplifying embodiment.
Detailed description
[00024] Briefly described, a power manager and a method performed thereby for managing power of a datacentre comprising compute, network and infrastructure resources and implementing virtualisation and executing application instances are provided. By sampling incoming traffic to the datacentre, the power manager may determine characteristics of the incoming traffic to the datacentre. Depending on the incoming traffic, a certain amount of processing is required and thus a certain amount of power is consumed by the datacentre in order to perform the required processing. The power manager thus uses the characteristics of the incoming traffic in order to predict how much resources of the datacentre are required for performing the required processing. Based on that, the power manager may determine a power consumption for individual machines and may further based on that determine to e.g. migrate one or more application instances between virtual
machines and/or servers in order to optimise (usually minimise) the power consumption of the datacentre.
[00025] In this disclosure, an application instance may be e.g. a Virtual Network Function, VNF. It is observed that there are direct dependencies between network traffic and the CPU and memory resources used by VNFs.
[00026] Embodiments of such a method performed by a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances will now be described with reference to figures 1a-1d.
[00027] Figure 1a illustrates the method 100 comprising determining 110 characteristics of incoming traffic to the datacentre; and predicting 120 compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic. The method further comprises determining 130 a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[00028] The traffic coming in to the datacentre may vary over time, wherein the characteristics of the incoming traffic to the datacentre vary over time. The characteristics may for example be the number of packets per time unit, the type of packets, the packet size, the target address of the packet(s), etc. The target address of the packet(s) may be one or more applications; the target address may e.g. be an Internet Protocol, IP, address.
[00029] The power manager may determine the characteristics of the incoming traffic to the datacentre itself, e.g. by sampling the incoming traffic. The power manager may also receive the characteristics from a prediction engine, which may determine the characteristics e.g. based on samples of the incoming traffic.
[00030] Using the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic, the power manager predicts the compute and/or memory resources needed. Different applications may require different amounts of processing, i.e. compute resources, e.g. different numbers of compute cycles. Also, different types of packets, which may relate to different types of services associated with the packets, may require different amounts of processing, e.g. different numbers of compute cycles. Different applications and/or different types of packets may also require different amounts of memory resources. Based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic, the power manager may thus make a good prediction of the total amount of compute and/or memory resources needed to meet the demands of the incoming traffic. The power manager may make use of one or more Application Resource Prediction models, which may be pre-configured in the power manager or in the prediction engine for each type of application that needs to be supported. Such models could be determined for example by machine learning techniques offline (an example of such a model is shown in figure 2f).
[00031] The power manager may then, based on the predicted compute and/or memory resources needed, determine the power consumption for individual one or more virtual machines. The datacentre may comprise one or more physical servers, each of which may execute one or more virtual machines. A virtual machine may execute one or more application instances, e.g. in the form of Virtual Network Function(s), VNF(s). The power manager may make use of one or more Resource Power Consumption models, which may be pre-configured in the power manager or the prediction engine for each type of resource (for example, CPU) that may be managed in terms of power consumption in the datacentre. Parametric models are available, either based on direct measurements as illustrated in figure 2g or through references from academic literature or manufacturer datasheets. In its simplest form, such a model is a lookup table with two columns (one for load, another one for energy consumption). More elaborate forms and expressions may exist as well.
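Merely as an illustrative sketch of the lookup-table form described above (the load and wattage figures are invented, not taken from any datasheet), such a Resource Power Consumption model may be a two-column table with linear interpolation between measured points:

```python
# Illustrative Resource Power Consumption model: a two-column lookup
# table (load fraction -> watts) with linear interpolation between
# measured points. All figures are hypothetical examples.
CPU_POWER_TABLE = [
    (0.0, 10.0),   # idle
    (0.5, 45.0),   # half load
    (1.0, 95.0),   # full load
]

def cpu_power(load: float) -> float:
    """Interpolate power consumption (W) for a CPU load in [0, 1]."""
    load = max(0.0, min(1.0, load))
    for (l0, p0), (l1, p1) in zip(CPU_POWER_TABLE, CPU_POWER_TABLE[1:]):
        if l0 <= load <= l1:
            frac = (load - l0) / (l1 - l0)
            return p0 + frac * (p1 - p0)
    return CPU_POWER_TABLE[-1][1]

print(cpu_power(0.25))  # halfway between 10 W and 45 W -> 27.5
```

More elaborate parametric expressions would replace the table and the interpolation with a fitted function, as noted above.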
[00032] The method performed by the power manager has several possible advantages. One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditures for the
operator of the datacentre. Another possible advantage is that it allows the operator to reduce the carbon emissions associated with electricity consumption.
[00033] The method may further comprise, as illustrated in figure 1b, informing 140 a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
[00034] By informing the Cloud Resource Orchestrator, the Cloud Resource Orchestrator is enabled to take appropriate actions, as will be described in more detail below. The Cloud Resource Orchestrator is responsible for e.g. which virtual machines are to be executed and on which physical server they should be executed. The Cloud Resource Orchestrator is also responsible for which virtual machine shall execute which application(s), or VNF(s).
[00035] Merely as an illustrative example, assume that the incoming traffic only requires one physical server to be powered in order to execute the necessary virtual machine(s) and/or application(s)/VNF(s). Assume further that there are at least two physical servers up and running. Based on the determined power consumption for the individual one or more virtual machines, based on the predicted compute and/or memory resources needed for the incoming traffic to the datacentre, the Cloud Resource Orchestrator may determine to execute all the one or more virtual machines on one physical server only, wherein the other physical server(s) that are currently up and running could be put in a power saving mode.
[00036] The informing 140 of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed may comprise requesting migration of an application instance from a first server or virtual machine to a second server or virtual machine.
[00037] There are various ways of informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed. In one example, the power manager determines that at least the first virtual machine needs not be running, but instead the second virtual machine has capacity to execute the
application instance (e.g. a VNF) in question. The power manager may then request the Cloud Resource Orchestrator to migrate the application instance from the first to the second virtual machine.
[00038] A physical server executing both the first and the second virtual machine may consume more energy than one just executing one virtual machine, as compute and memory resources need to be assigned to both the first and the second virtual machine. Compute and memory resources may also be saved by just running one server instead of two.
[00039] The informing 140 of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed comprises requesting provisioning of a new instance of an application instance when the power consumption budget on a first server is not predicted to meet a threshold.
[00040] When the power consumption budget on the first server meets the threshold, the power consumption budget on the first server is almost fully utilised. Each server may be associated with an individual power consumption budget, or the power consumption budget may be the same for all servers of the datacentre. The power consumption budget of a server may be associated with a respective usage level of compute and memory resources of that server. The more heavily loaded a Central Processing Unit, CPU, is, and the fuller memories and/or buffers are, the higher the power consumption.
[00041] Thus, when the power consumption budget on the first server meets the threshold, the CPU may be operating close to its maximum capacity and/or memories and buffers are getting to be almost full, wherein it may be necessary to start a second server and/or to migrate application instances to a virtual machine on the second server.
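The budget check described above could, purely as an illustrative sketch (function name, budget and threshold values are hypothetical, not part of the disclosure), look as follows:

```python
# Hypothetical sketch of the power budget check: if the predicted power
# draw nears a fraction (threshold) of the server's budget, a new
# application instance should be provisioned elsewhere.
def needs_scale_out(predicted_power_w: float,
                    budget_w: float,
                    threshold: float = 0.9) -> bool:
    """True when predicted draw approaches the server's power budget."""
    return predicted_power_w > threshold * budget_w

# Server with a 400 W budget and a 90 % threshold:
print(needs_scale_out(350.0, 400.0))  # 350 W < 360 W -> False
print(needs_scale_out(380.0, 400.0))  # 380 W > 360 W -> True
```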
[00042] In an example illustrated in figure 1c, the method further comprises informing 150 an Infrastructure Manager about the predicted compute and/or memory resources needed.
[00043] The Infrastructure Manager of the datacentre controls e.g. cooling of the datacentre, which may be done by air conditioning or water cooling. Depending on how many servers are powered and operating, and/or the resource usage level of individual servers, more or less energy is consumed by the datacentre. Generally, the higher the resource usage level of individual servers, the more heat is generated, and hence the more cooling of the datacentre may be required.
[00044] By informing 150 the Infrastructure Manager about the predicted compute and/or memory resources needed, the Infrastructure Manager may determine how much cooling the datacentre needs, which servers require cooling, etc. The Infrastructure Manager may then take appropriate actions in order to keep the datacentre properly cooled, wherein unnecessary cooling may be avoided, thereby saving energy, and possible overheating may be avoided.
[00045] The informing 150 of the Infrastructure Manager about the predicted compute and/or memory resources needed may comprise requesting increase or decrease of manageable site infrastructure resources including but not limited to power distribution units, air conditioning and/or water cooling flow with regard to one or more servers on which the one or more virtual machines is/are running.
[00046] There are various ways for the power manager to inform the Infrastructure Manager about the predicted compute and/or memory resources needed. As just described above, different resource usage levels generally require different amounts of cooling of the datacentre.
[00047] The power manager may be aware of the present resource usage level (level of compute and/or memory resources) of the datacentre as a whole and of individual servers. Comparing that information with the predicted compute and/or memory resources needed, the power manager may determine an appropriate increase or decrease of manageable site infrastructure resources.
[00048] The informing 150 of the Infrastructure Manager about the predicted compute and/or memory resources needed comprises requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more server on which the one or more virtual machines is/are running.
[00049] This is another example of how to manage the datacentre with regard to power consumption. DVFS enables the power manager to e.g. lower the voltage and/or the frequency of a CPU of a server in case that CPU could still fulfil the requirements of the predicted compute and/or memory resources needed even with a reduced voltage and/or frequency.
[00050] Merely as an illustrative example, assume one server is operating in the datacentre executing one or more virtual machines, wherein one or more CPUs of the server is operating at reduced voltage and/or frequency. Assume further in this illustrative example that the predicted compute and/or memory resources needed entail an increase, wherein the power manager has the option of either starting another server in order to cope with the increase of compute and/or memory resources or increasing the voltage and/or the frequency of the one or more CPUs of the server that is currently operating. In this illustrative example, the datacentre will consume less energy with the second option, and consequently the power manager requests an increase in DVFS for at least one of the one or more CPUs of the server that is currently operating.
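The decision in the example above could be sketched as follows; this is purely illustrative, and the wattage figures (extra draw at a higher DVFS state versus the idle draw of waking a second server) are invented for the sake of the example:

```python
# Illustrative sketch of the decision in the example: compare the extra
# power of (a) raising DVFS on the running server versus (b) waking a
# second server, and pick the cheaper option. All figures hypothetical.
def choose_action(extra_load: float,
                  headroom_at_max_freq: float,
                  high_freq_extra_w: float = 30.0,
                  second_server_idle_w: float = 120.0) -> str:
    """Return which option adds less power draw to the datacentre."""
    fits = extra_load <= headroom_at_max_freq
    if fits and high_freq_extra_w < second_server_idle_w:
        return "increase DVFS"
    return "start second server"

print(choose_action(extra_load=0.2, headroom_at_max_freq=0.3))
# -> "increase DVFS": 30 W extra beats the 120 W idle draw of a new server
```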
[00051] The determining 110 of characteristics of incoming traffic to the datacentre may comprise determining which application instance is the destination address of the incoming traffic.
[00052] Different application instances may require different amounts of processing resources, e.g. with regard to the number of compute cycles and/or level of memory/buffer usage.
[00053] The different application instances may be identified by their address. Consequently, the power manager may determine the characteristics of incoming traffic to the datacentre by determining which application instance is the
destination address of the incoming traffic.
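Purely as an illustrative sketch of this determination (the address-to-instance table and instance names are hypothetical examples, normally provided by the datacentre's configuration), sampled packets could be counted per destination application instance as follows:

```python
# Sketch: map each sampled packet's destination IP address to the
# application instance (e.g. VNF) that will process it, and count
# sampled packets per instance. Mapping and names are hypothetical.
from collections import Counter

APP_BY_ADDRESS = {
    "10.0.0.1": "firewall-vnf",
    "10.0.0.2": "video-transcoder",
}

def traffic_per_app(sampled_dst_addresses):
    """Count sampled packets per destination application instance."""
    apps = (APP_BY_ADDRESS.get(a, "unknown") for a in sampled_dst_addresses)
    return Counter(apps)

sample = ["10.0.0.1", "10.0.0.2", "10.0.0.1"]
print(traffic_per_app(sample))  # firewall-vnf: 2, video-transcoder: 1
```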
[00054] The predicting 120 of compute and/or memory resources needed comprises determining a type of application instance to which the incoming traffic is addressed.
[00055] There are different types of application instances, wherein e.g. a first type may require two compute cycles, a second type may require five compute cycles, a third type may require eight compute cycles and a fourth type may require one compute cycle.
[00056] In this manner, the power manager may predict compute and/or memory resources needed by determining the type of application instance to which the incoming traffic is addressed.
[00057] Merely as an illustrative and non-limiting example, assume sampling the incoming traffic to the datacentre tells the power manager that currently five packets are coming in per time unit, wherein of the five packets, two are addressed to the second type of application instances requiring five compute cycles, resulting in 10 compute cycles. Two packets are addressed to the third type of application instances requiring eight compute cycles, resulting in 16 compute cycles. One packet is addressed to the fourth type of application instances requiring only one compute cycle. The total required compute cycles associated with these five packets are 10+16+1 = 27 compute cycles.
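The arithmetic of this example can be made explicit in a short sketch; the per-type cycle costs are the illustrative figures given in the text above, nothing more:

```python
# The example's arithmetic: illustrative compute-cycle cost per packet
# for each application-instance type, summed over the sampled traffic.
CYCLES_PER_PACKET = {"type1": 2, "type2": 5, "type3": 8, "type4": 1}

def total_cycles(packet_counts: dict) -> int:
    """Sum compute cycles over packets addressed to each instance type."""
    return sum(CYCLES_PER_PACKET[t] * n for t, n in packet_counts.items())

# Two packets to type 2, two to type 3, one to type 4:
print(total_cycles({"type2": 2, "type3": 2, "type4": 1}))  # 10+16+1 = 27
```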
[00058] In an example, the predicting 120 of compute and/or memory resources needed further comprises mapping 125 the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
[00059] By mapping the type of application instance to which the incoming traffic is addressed to the power consumption model for that type of application instance, the power manager may relatively accurately predict compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic.
[00060] Reverting to the illustrative and non-limiting example right above, assume that according to the power consumption model: a first type of application instances requires a first power consumption per incoming packet; a second type of application instances requires a second power consumption per incoming packet; a third type of application instances requires a third power consumption per incoming packet; and a fourth type of application instances requires a fourth power consumption per incoming packet. Then the five incoming packets tell the power manager that 2 × the second power consumption + 2 × the third power consumption + 1 × the fourth power consumption in total are needed, by mapping the type of application instance to which the incoming traffic is addressed to the power consumption model.
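One way to read this mapping, as an illustrative sketch only (the per-packet energy costs below are invented placeholders for the abstract first-to-fourth power consumptions in the text), is a weighted sum over the sampled traffic mix:

```python
# Sketch of the mapping: each instance type has a (hypothetical)
# per-packet energy cost; the prediction is the weighted sum over the
# sampled traffic mix of one interval.
POWER_PER_PACKET_MJ = {"type1": 0.5, "type2": 1.0, "type3": 1.5, "type4": 0.25}

def predicted_power(packet_counts: dict) -> float:
    """Energy (millijoules) predicted for one sampling interval."""
    return sum(POWER_PER_PACKET_MJ[t] * n for t, n in packet_counts.items())

# Two packets to type 2, two to type 3, one to type 4:
print(predicted_power({"type2": 2, "type3": 2, "type4": 1}))
# 2*1.0 + 2*1.5 + 1*0.25 = 5.25
```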
[00061 ] The predicting 120 of compute and/or memory resources needed may further comprise also using statistical data together with the determined
characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
[00062] Statistical data may provide useful information to the power manager about how the traffic characteristics statistically changes over time. Of course there may be deviations from the statistics but still valuable information may be obtained from statistical data.
[00063] Generally in telecommunication networks, the traffic varies according to the same pattern over most Mondays, over most Tuesdays etc., wherein the pattern comprises peaks which usually occur at the same point in time most Mondays, Tuesdays, Wednesdays, Sundays. Similar models may be obtained over time for the datacentre.
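A minimal sketch of exploiting such weekly patterns (the class, the slot granularity and the load figures are hypothetical, chosen only to illustrate the idea of a statistical baseline) could average the observed load per weekday-and-hour slot:

```python
# Sketch: keep a running average of observed load per (weekday, hour)
# slot and use it as a statistical baseline for prediction, alongside
# the live traffic characteristics. All figures are hypothetical.
from collections import defaultdict

class WeeklyBaseline:
    def __init__(self):
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def record(self, weekday: int, hour: int, load: float) -> None:
        """Record one observed load sample for a weekly slot."""
        self._sums[(weekday, hour)] += load
        self._counts[(weekday, hour)] += 1

    def predict(self, weekday: int, hour: int) -> float:
        """Average historical load for the slot (0.0 if no history)."""
        n = self._counts[(weekday, hour)]
        return self._sums[(weekday, hour)] / n if n else 0.0

b = WeeklyBaseline()
b.record(0, 9, 100.0)   # two past Mondays at 09:00
b.record(0, 9, 140.0)
print(b.predict(0, 9))  # -> 120.0
```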
[00064] Figure 2a is an illustration of an example of a system architecture of the datacentre and how it is connected to the Internet. External traffic, e.g. from the Internet is coming to the datacentre via a datacentre gateway 270, which is part of a layer 3 (L3) network. The datacentre comprises in this illustrative example three servers 240a, 240b and 240c and optionally also the gateway 270. The datacentre also comprises a layer 2 (L2) network 250 for communication within the datacentre
and datacentre infrastructure resources 260. Figure 2a also illustrates a prediction engine 210, the power manager 200, Cloud Resource Orchestration (CRO) 220 and Infrastructure Management System (IMS) 230. Although these are all illustrated as separate entities being part of a cloud, e.g. a power aware cloud management system, the separate entities may be only one and/or may be part of the datacentre. The datacentre also comprises in this illustrative example a cloud monitoring bus, which connects respective Governors of the gateway 270, the servers 240a, 240b and 240c, the L2 network 250, the datacentre infrastructure resources 260, the prediction engine 210 and the power manager 200. The datacentre also comprises in this illustrative example a cloud management bus, which connects the gateway 270, the servers 240a, 240b and 240c, the L2 network 250, the datacentre infrastructure resources 260, and the CRO 220. The datacentre also comprises in this illustrative example an infrastructure
management bus, which connects the IMS 230 and the datacentre infrastructure resources 260.
[00065] The prediction engine 210 may comprise various functions and/or units as illustrated in figure 2b, e.g. traffic feature extraction, VNF/application resource prediction model, prediction model and resource power consumption model.
[00066] The traffic feature extraction function determines which VNF or application instance is the destination of the traffic (for example, by examining the IP address). It might also aggregate information from several traffic samples received (such as to provide a statistic of the packet sizes received during a particular time interval).
[00067] The VNF/Application resource prediction model may be pre-configured in the prediction engine 210 for each type of VNF or application instance that needs to be supported. Such models may be determined for example by offline machine learning techniques.
[00068] The Resource Power Consumption model may be pre-configured in the prediction engine 210 for each type of resource (for example, CPU) that may be managed in terms of power consumption in the datacentre. Parametric models
are available, either based on direct measurements or through references from academic literature or manufacturer datasheets. In its simplest form, this is a lookup table with two columns (one for load, another one for energy
consumption). More elaborate forms and expressions may exist as well.
[00069] The prediction model function is optional; it takes the output of the resource prediction model and tries to determine how many resources may be used in the near future (equivalent to predicting network traffic based on its history). As stated above, it is optional, and it may help by enabling pro-active rather than reactive management.
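Purely as an illustrative sketch, and not a method the disclosure mandates, such a near-future prediction could be as simple as an exponentially weighted moving average of recent resource usage:

```python
# Sketch of an optional prediction model: an exponentially weighted
# moving average of recent usage; the final smoothed value serves as
# the near-term forecast. The smoothing factor is an assumption.
def ewma(history, alpha: float = 0.5) -> float:
    """Smooth a usage history; the last value is the forecast."""
    forecast = history[0]
    for x in history[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

print(ewma([40.0, 60.0, 50.0]))  # 0.5*50 + 0.5*(0.5*60 + 0.5*40) = 50.0
```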
[00070] The prediction engine 210 may proactively send power consumption estimates to a Power Management System, PoMS, which defines a particular power budget allocation, redistribution or capping for VNF/applications executed in the datacentre.
[00071] The PoMS may interact with both the Cloud Resource Orchestration and the datacentre Infrastructure Management system. The interaction with the Cloud Resource Orchestration may be made in terms of: (1) requesting migration of a VNF or application instance such that overall power consumption is optimised either on the source or destination servers, and (2) requesting provisioning of a new instance of a VNF or application in case the power consumption budget on a given server is about to be fully utilised, but overall the VNF is within the allowed power budget.
[00072] The PoMS may also interact with the datacentre Infrastructure Management system (which controls, for example, the air conditioning or water cooling). It may request the datacentre Infrastructure Management to increase the air conditioning or water cooling flow at certain locations, in case a significant number of VNFs or application instances are using a lot of power (thus generating significant amounts of heat from the CPUs). Conversely, it may request the datacentre Infrastructure Management to decrease the air conditioning or water cooling at locations where VNFs or application instances are using significantly less power than before, thus reducing the amount of heat from the CPUs.
[00073] As an option, in case the servers and network nodes are equipped with Green Capabilities such as DVFS (allowing the frequency and/or CPU voltage to be changed depending on load and other policies) or Energy Efficient Ethernet, and some of these capabilities are exposed to the Virtualisation Management platform (be it Kernel-based Virtual Machine for virtual machines or Docker for containers), the PoMS may interact with the Governors of these capabilities and configure policies that determine how power will be consumed. For example, for DVFS on a server, the PoMS may configure the DVFS Governor with values for the maximum and minimum frequency of the processor, as well as the maximum and minimum voltage. These values need to be within the range offered by the manufacturer, but the operating range may be restricted further through the values configured by the PoMS. For Energy Efficient Ethernet, the PoMS may configure the associated Governor with the permitted data rates (out of the possible 10, 100, 1000 or 10000 Mbit/s, in case the NICs and cabling allow this) as well as values for transition timeouts.
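Validating a PoMS-supplied DVFS and Energy Efficient Ethernet configuration against hardware limits could look like the following sketch. The frequency range, rate set and function names are assumptions for illustration, not taken from any real driver or governor interface:

```python
# Assumed manufacturer frequency limits (kHz) and EEE data rates (Mbit/s).
HW_FREQ_RANGE_KHZ = (800_000, 3_500_000)
EEE_RATES_MBPS = {10, 100, 1000, 10000}

def clamp_dvfs(min_khz, max_khz):
    """Restrict a requested DVFS operating range to lie within the
    manufacturer-offered range, as a PoMS configuration must."""
    lo, hi = HW_FREQ_RANGE_KHZ
    min_khz = max(lo, min(min_khz, hi))
    max_khz = max(lo, min(max_khz, hi))
    if min_khz > max_khz:
        raise ValueError("min frequency exceeds max frequency")
    return min_khz, max_khz

def permitted_rates(requested_mbps):
    """Keep only the data rates that the NICs and cabling support."""
    return sorted(rate for rate in requested_mbps if rate in EEE_RATES_MBPS)
```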
[00074] Embodiments herein also relate to a power manager for managing power of a datacentre comprising compute and network resources and
implementing virtualisation and executing application instances. The power manager has the same technical features, objects and advantages as the method performed by the power manager described above. The power manager will therefore be described only in brief in order to avoid unnecessary repetition. The power manager will be described with reference to figures 3 and 4.
[00075] Figure 3 illustrates the power manager 300, 400 being configured for determining characteristics of incoming traffic to the datacentre; and predicting compute and/or memory resources needed based on the determined
characteristics of incoming traffic to the datacentre and the applications that will process the traffic. The power manager 300, 400 is further configured for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[00076] The power manager 300, 400 may be realised or implemented in different ways. A first exemplifying implementation or realisation is illustrated in
figure 3. Figure 3 illustrates the power manager 300 comprising a processor 321 and memory 322, the memory comprising instructions, e.g. by means of a computer program 323, which when executed by the processor 321 cause the power manager 300 to determine characteristics of incoming traffic to the datacentre; and to predict compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic. The memory further comprises instructions, which when executed by the processor 321 cause the power manager 300 to determine a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[00077] Figure 3 also illustrates the power manager 300 comprising a memory 310. It shall be pointed out that figure 3 is merely an exemplifying illustration and memory 310 may be optional, be a part of the memory 322 or be a further memory of the power manager 300. The memory may for example comprise information relating to the power manager 300 or to statistics of the operation of the power manager 300, just to give a couple of illustrative examples. Figure 3 further illustrates the power manager 300 comprising processing means 320, which comprise the memory 322 and the processor 321. Still further, figure 3 illustrates the power manager 300 comprising a communication unit 330. The communication unit 330 may comprise an interface through which the power manager 300 communicates with other nodes or entities of the datacentre as well as with other communication units. Figure 3 also illustrates the power manager 300 comprising further functionality 340. The further functionality 340 may comprise hardware or software necessary for the power manager 300 to perform different tasks that are not disclosed herein.
[00078] An alternative exemplifying implementation of the power manager 300, 400 is illustrated in figure 4. Figure 4 illustrates the power manager 400 comprising a determining unit 403 for determining characteristics of incoming traffic to the datacentre; and a predicting unit 404 for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic. The
determining unit 403 of the power manager 400 is also used for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[00079] In figure 4, the power manager 400 is also illustrated comprising a communication unit 401. Through this unit, the power manager 400 is adapted to communicate with other nodes and/or entities in the datacentre or associated therewith. The power manager 400 is further illustrated comprising a memory 402 for storing data. Further, the power manager 400 may comprise a control or processing unit (not shown) which in turn is connected to the different units 403-404. It shall be pointed out that this is merely an illustrative example and the power manager 400 may comprise more, fewer or other units or modules which execute the functions of the power manager 400 in the same manner as the units illustrated in figure 4.
[00080] It should be noted that figure 4 merely illustrates various functional units in the power manager 400 in a logical sense. The functions in practice may be implemented using any suitable software and hardware means/circuits etc. Thus, the embodiments are generally not limited to the shown structures of the power manager 400 and the functional units. Hence, the previously described exemplary embodiments may be realised in many ways. For example, one embodiment includes a computer-readable medium having instructions stored thereon that are executable by the control or processing unit for executing the method actions or steps in the power manager 400. The instructions executable by the computing system and stored on the computer-readable medium perform the method actions or steps of the power manager 400 as set forth in the claims.
[00081] The power manager has the same possible advantages as the method performed by the power manager. One possible advantage is that the power consumption of the datacentre may be reduced, which in turn reduces operational expenditure for the operator of the datacentre. Another possible advantage is that it allows the operator to reduce the carbon emissions associated with electricity consumption.
[00082] According to an embodiment, the power manager 300, 400 is further configured for informing a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
[00083] According to yet an embodiment, the power manager 300, 400 is further configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting migration of an application instance from a first server to a second server.
[00084] According to still an embodiment, the power manager 300, 400 is further configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting provisioning of a new application instance when the power consumption budget on a first server is not predicted to meet a threshold.
[00085] According to another embodiment, the power manager 300, 400 is further configured for informing an Infrastructure Manager about the predicted compute and/or memory resources needed.
[00086] According to a further embodiment, the power manager 300, 400 is further configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting an increase or decrease of manageable site infrastructure resources, including but not limited to power distribution units, air conditioning and/or water cooling flow, with regard to one or more servers on which the one or more virtual machines is/are running.
[00087] According to an embodiment, the power manager 300, 400 is further configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more servers on which the one or more virtual machines is/are running.
[00088] According to yet an embodiment, the power manager 300, 400 is further configured for determining characteristics of incoming traffic to the datacentre by
determining which application instance is the destination of the incoming traffic.
[00089] According to still an embodiment, the power manager 300, 400 is further configured for predicting compute and/or memory resources needed by
determining a type of application instance to which the incoming traffic is addressed.
[00090] According to another embodiment, the power manager 300, 400 is further configured for predicting compute and/or memory resources needed by mapping the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
[00091] According to a further embodiment, the power manager 300, 400 is further configured for predicting compute and/or memory resources needed by also using statistical data together with the determined characteristics of incoming traffic to the datacentre.
[00092] Figure 5 schematically shows an embodiment of an arrangement 500 in a power manager 400. Comprised in the arrangement 500 in the power manager 400 is a processing unit 506, e.g. with a Digital Signal Processor, DSP. The processing unit 506 may be a single unit or a plurality of units performing different actions of the procedures described herein. The arrangement 500 of the power manager 400 may also comprise an input unit 502 for receiving signals from other entities, and an output unit 504 for providing signal(s) to other entities. The input unit and the output unit may be arranged as an integrated entity or, as illustrated in the example of figure 4, as one or more interfaces 401.
[00093] Furthermore, the arrangement 500 in the power manager 400 comprises at least one computer program product 508 in the form of a non-volatile memory, e.g. an Electrically Erasable Programmable Read-Only Memory, EEPROM, a flash memory or a hard drive. The computer program product 508 comprises a computer program 510, which comprises code means which, when executed in the processing unit 506 in the arrangement 500 in the power manager 400, causes the power manager to perform the actions, e.g. of the procedure described earlier in conjunction with figures 1a-1d.
[00094] The computer program 510 may be configured as a computer program code structured in computer program modules 510a-510e. Hence, in an
exemplifying embodiment, the code means in the computer program of the arrangement 500 in the power manager 400 comprises a determining unit, or module, for determining characteristics of incoming traffic to the datacentre; and a predicting unit, or module, for predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic. The determining unit, or module, of the power manager 400 is also used for determining a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
[00095] The computer program modules could essentially perform the actions of the flow illustrated in figures 1a-1d, to emulate the power manager 400. In other words, when the different computer program modules are executed in the processing unit 506, they may correspond to the units 403-404 of figure 4.
[00096] Although the code means in the embodiments disclosed above in conjunction with figure 4 are implemented as computer program modules which, when executed in the respective processing unit, cause the power manager to perform the actions described above in conjunction with the figures mentioned above, at least one of the code means may in alternative embodiments be implemented at least partly as hardware circuits.
[00097] The processor may be a single Central Processing Unit, CPU, but could also comprise two or more processing units. For example, the processor may include general purpose microprocessors, instruction set processors and/or related chip sets and/or special purpose microprocessors such as Application Specific Integrated Circuits, ASICs. The processor may also comprise board memory for caching purposes. The computer program may be carried by a computer program product connected to the processor. The computer program product may comprise a computer readable medium on which the computer program is stored. For example, the computer program product may be a flash memory, a Random-Access Memory, RAM, a Read-Only Memory, ROM, or an EEPROM, and the computer program modules described above could in alternative embodiments be distributed on different computer program products in the form of memories within the power manager.
[00098] It is to be understood that the choice of interacting units, as well as the naming of the units within this disclosure, are only for exemplifying purposes, and nodes suitable to execute any of the methods described above may be configured in a plurality of alternative ways in order to be able to execute the suggested procedure actions.
[00099] It should also be noted that the units described in this disclosure are to be regarded as logical entities and not necessarily as separate physical entities.
[000100] While the solution has been described in terms of several embodiments, it is contemplated that alternatives, modifications, permutations and equivalents thereof will become apparent upon reading of the specification and study of the drawings. It is therefore intended that the appended claims include such alternatives, modifications, permutations and equivalents as fall within the scope of the embodiments as defined by the appended claims.
Claims
1. A method (100) performed by a power manager for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances, the method comprising:
- determining (110) characteristics of incoming traffic to the datacentre,
- predicting (120) compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic,
- determining (130) a power consumption for individual one or more virtual machines based on the predicted compute and/or memory resources needed.
2. The method (100) according to claim 1, further comprising informing (140) a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
3. The method (100) according to claim 2, wherein the informing (140) of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed comprises requesting migration of an application instance from a first server to a second server.
4. The method (100) according to claim 2 or 3, wherein the informing (140) of the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed comprises requesting provisioning of a new application instance when the power consumption budget on a first server is not predicted to meet a threshold.
5. The method (100) according to any of claims 1-4, further comprising informing (150) an Infrastructure Manager about the predicted compute and/or memory resources needed.
6. The method (100) according to claim 5, wherein the informing (150) of the Infrastructure Manager about the predicted compute and/or memory resources needed comprises requesting an increase or decrease of manageable site infrastructure resources, including but not limited to power distribution units, air conditioning and/or water cooling flow, with regard to one or more servers on which the one or more virtual machines is/are running.
7. The method (100) according to claim 5 or 6, wherein the informing (150) of the Infrastructure Manager about the predicted compute and/or memory resources needed comprises requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more servers on which the one or more virtual machines is/are running.
8. The method (100) according to any of claims 1-7, wherein the determining (110) of characteristics of incoming traffic to the datacentre comprises determining which application instance is the destination of the incoming traffic.
9. The method (100) according to any of claims 1-8, wherein the predicting (120) of compute and/or memory resources needed comprises determining a type of application instance to which the incoming traffic is addressed.
10. The method (100) according to claim 9, wherein the predicting (120) of compute and/or memory resources needed further comprises mapping (125) the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
11. The method (100) according to any of claims 1-10, wherein the predicting (120) of compute and/or memory resources needed further comprises also using statistical data together with the determined characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
12. A power manager (300, 400) for managing power of a datacentre comprising compute and network resources and implementing virtualisation and executing application instances, the power manager (300, 400) being configured for:
- determining characteristics of incoming traffic to the datacentre,
- predicting compute and/or memory resources needed based on the determined characteristics of incoming traffic to the datacentre and the applications that will process the traffic,
- determining a power consumption for individual one or more virtual
machines based on the predicted compute and/or memory resources needed.
13. The power manager (300, 400) according to claim 12, further being configured for informing a Cloud Resource Orchestrator about the predicted compute and/or memory resources needed.
14. The power manager (300, 400) according to claim 13, wherein the power manager (300, 400) is configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting migration of an application instance from a first server to a second server.
15. The power manager (300, 400) according to claim 13 or 14, wherein the power manager (300, 400) is configured for informing the Cloud Resource Orchestrator about the predicted compute and/or memory resources needed by requesting provisioning of a new application instance when the power consumption budget on a first server is not predicted to meet a threshold.
16. The power manager (300, 400) according to any of claims 12-15, further being configured for informing an Infrastructure Manager about the predicted compute and/or memory resources needed.
17. The power manager (300, 400) according to claim 16, wherein the power manager (300, 400) is configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting an increase or decrease of manageable site infrastructure resources, including but not limited to power distribution units, air conditioning and/or water cooling flow, with regard to one or more servers on which the one or more virtual machines is/are running.
18. The power manager (300, 400) according to claim 16 or 17, wherein the power manager (300, 400) is configured for informing the Infrastructure Manager about the predicted compute and/or memory resources needed by requesting a change in Dynamic Voltage and Frequency Scaling, DVFS, with regard to the one or more servers on which the one or more virtual machines is/are running.
19. The power manager (300, 400) according to any of claims 12-18, wherein the power manager (300, 400) is configured for determining characteristics of incoming traffic to the datacentre by determining which application instance is the destination of the incoming traffic.
20. The power manager (300, 400) according to any of claims 12-19, wherein the power manager (300, 400) is configured for predicting compute and/or memory resources needed by determining a type of application instance to which the incoming traffic is addressed.
21. The power manager (300, 400) according to claim 20, wherein the power manager (300, 400) is configured for predicting compute and/or memory resources needed by mapping the type of application instance to which the incoming traffic is addressed to a power consumption model for that type of application instance.
22. The power manager (300, 400) according to any of claims 12-21, wherein the power manager (300, 400) is configured for predicting compute and/or memory resources needed further by also using statistical data together with the determined characteristics of incoming traffic to the datacentre in order to predict compute and/or memory resources needed.
23. A computer program (510), comprising computer readable code means, which when run in a processing unit (506) comprised in an arrangement (500) in a power manager (400) according to any of claims 12-22, causes the power manager (400) to perform the corresponding method according to any of claims 1-11.
24. A computer program product (508) comprising the computer program (510) according to claim 23.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/SE2016/050686 WO2018009103A1 (en) | 2016-07-05 | 2016-07-05 | Power manager and method performed thereby for managing power of a datacentre |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018009103A1 true WO2018009103A1 (en) | 2018-01-11 |
Family
ID=56418582
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111722907A (en) * | 2020-05-20 | 2020-09-29 | 中天通信技术有限公司 | DVFS-based data center mapping method, device and storage medium |
| CN114064282A (en) * | 2021-11-23 | 2022-02-18 | 北京百度网讯科技有限公司 | Resource mining method and device and electronic equipment |
| WO2022048674A1 (en) * | 2020-09-07 | 2022-03-10 | 华为云计算技术有限公司 | Server cabinet-based virtual machine management method and apparatus |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090293022A1 (en) * | 2008-05-22 | 2009-11-26 | Microsoft Corporation | Virtual Machine Placement Based on Power Calculations |
| WO2010057775A2 (en) * | 2008-11-20 | 2010-05-27 | International Business Machines Corporation | Method and apparatus for power-efficiency management in a virtualized cluster system |
| US20100235840A1 (en) * | 2009-03-10 | 2010-09-16 | International Business Machines Corporation | Power management using dynamic application scheduling |
| US20130190899A1 (en) * | 2008-12-04 | 2013-07-25 | Io Data Centers, Llc | Data center intelligent control and optimization |
| US20130339759A1 (en) * | 2012-06-15 | 2013-12-19 | Infosys Limted | Method and system for automated application layer power management solution for serverside applications |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16739598; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16739598; Country of ref document: EP; Kind code of ref document: A1 |