
WO2019066982A1 - Aging constrained operation for power plants - Google Patents

Aging constrained operation for power plants

Info

Publication number
WO2019066982A1
Authority
WO
WIPO (PCT)
Prior art keywords
complex system
maintenance
maintenance cost
aging
inputs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2017/054665
Other languages
French (fr)
Inventor
Ulrich Münz
Michael Jäntsch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Siemens Corp
Original Assignee
Siemens AG
Siemens Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG, Siemens Corp filed Critical Siemens AG
Priority to PCT/US2017/054665 priority Critical patent/WO2019066982A1/en
Publication of WO2019066982A1 publication Critical patent/WO2019066982A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0243 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
    • G05B23/0245 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a qualitative model, e.g. rule based; if-then decisions
    • G05B23/0251 Abstraction hierarchy, e.g. "complex systems", i.e. system is divided in subsystems, subsystems are monitored and results are combined to decide on status of whole system
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0243 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
    • G05B23/0254 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method for aging constrained operation of a complex system includes obtaining one or more maintenance cost estimators for one or more components of a particular complex system. The maintenance cost estimators are used to determine inputs to the system for modifying operating points. The operating points of the particular complex system are then modified using these inputs.

Description

AGING CONSTRAINED OPERATION FOR POWER PLANTS
TECHNICAL FIELD
[0001] The present invention relates generally to methods, systems, and apparatuses for aging constrained operation for power plants and other complex systems. The technology described herein may be applied, for example, to optimize ramp-up procedures.
BACKGROUND
[0002] Power generating devices such as gas turbines are used to provide stability to the electric grid by ramping power output up or down as demand and system loads fluctuate. In general, ramping down may be performed relatively quickly; however, power generating devices typically take a long period of time to ramp up. Demand and system loads can fluctuate frequently throughout the day, and the fluctuation is not always predictable. Thus, to ensure that the power output remains stable, there is a need to optimize the ramp-up of power generating devices to allow them to quickly provide the needed output.
[0003] Various conventional technical solutions exist to optimize the ramp-up of power generating devices. For example, some conventional solutions use model-predictive control, while other systems utilize machine learning. In each case, the ramp-up of the gas turbine is optimized over a short horizon of a few hours. However, the resulting aging and maintenance costs are only taken into account heuristically (e.g., by taking fixed temperature or stress constraints of certain parts into account). These constraints result from the experience of the company's engineers; they are not derived from maintenance cycle predictions or costs. On the other hand, maintenance cycles, such as when to exchange the blades of a gas turbine, are already predicted today based on continuous measurements during operation as well as comparisons of these measurements across large fleets of gas turbines using cloud-based data centers and machine learning methods.
SUMMARY
[0004] Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks by providing methods, systems, and apparatuses related to aging constrained operation for power plants and other complex systems.
[0005] According to some embodiments, a method for aging constrained operation of a complex system includes obtaining one or more maintenance cost estimators for one or more components of a particular complex system. The one or more maintenance cost estimators are used to determine inputs to the system for modifying operating points. The operating points of the particular complex system are then modified using these inputs.
[0006] Various techniques may be used for implementing each maintenance cost estimator in the aforementioned method. In some embodiments, each maintenance cost estimator is trained based on (i) operational data collected over a plurality of operation cycles of at least one other complex system and (ii) maintenance data corresponding to the at least one other complex system. In some embodiments, the maintenance cost estimator estimates the remaining time until a next maintenance cycle of at least one component. In other embodiments, the maintenance cost estimator is updated based on continuously collected operational data from the particular complex system over a time interval. In other embodiments, the maintenance cost estimator estimates an incremental maintenance cost at a specific operation point.
[0007] In some embodiments of the aforementioned method, the inputs to the particular complex system are determined by forming a model-predictive control (MPC) cost function comprising a summation of an aging rate for each of the components. Then, the inputs are determined for a control horizon by minimizing the MPC cost function over a prediction horizon, where the prediction horizon is less than or equal to the control horizon. In one embodiment, each aging rate is weighted in the summation based on a cost of maintenance for the component corresponding to the aging rate.
[0008] According to other embodiments of the present invention, a method for aging constrained operation of a complex system includes collecting operational data from the complex system over a time interval. The maintenance cost estimators are used to predict maintenance intervals associated with components in the complex system based on the operational data. System inputs for modifying operating points of the complex system are determined based on the maintenance intervals associated with the components in the complex system. Then, the operating points of the complex system may be modified using the system inputs.
[0009] According to another aspect of the present invention, a non-transitory computer readable medium stores computer program instructions for operating a complex system with aging constraints. The computer program instructions, when executed by a processor, cause the processor to perform one or more of the methods discussed above.
[0010] Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there are shown in the drawing exemplary embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
[0012] FIG. 1 provides a conceptual overview of how the techniques described herein aid in the optimization of ramp-up procedures;
[0013] FIG. 2 illustrates a workflow for implementing a neural network-based aging constrained MPC, according to some embodiments;
[0014] FIG. 3 illustrates a method for aging constrained operation of a power plant, according to some embodiments; and
[0015] FIG. 4 provides an example of a parallel processing platform that may be utilized to implement the machine learning models and other aspects of the machine learning models and various workflows discussed herein.
DETAILED DESCRIPTION
[0016] The following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses related to aging-constrained modification of operating points of power plants such as gas, coal, or nuclear power plants. The techniques described herein allow short-term profits to be traded off explicitly against long-term costs. Additionally, these techniques automatically integrate long-term maintenance costs into short-term operation decisions. For example, the disclosed techniques allow one to trade off the maintenance costs against the revenue a company gains when ramping up a power generating device. If the maintenance costs are excessive in comparison to the revenue, the company may choose not to ramp up in a given situation. Moreover, these techniques are applicable to many different maintenance interval/cost estimators as well as to many different operation methods.
[0017] The technology described herein is generally applicable to optimizing any modification of operating points associated with a complex system. In this context, the term "operating point" refers to the operating state of the system at a specific point in time. Thus, a modification of an operating point is a change from one operating state to another. For the purposes of illustration, the following disclosure considers one example of a modification of operating points: the fast ramp-up procedure.
[0018] Fast ramp-up is required by customers and grid operators, especially for gas power plants, to compensate for volatile renewable generation and/or to make higher profits on short-term reserve power markets. Fast ramp-up of these power plants increases aging, wear, and fatigue of crucial parts of the power plant (e.g., the blades of the gas and steam turbines). Therefore, fast ramp-up also increases maintenance costs because these critical parts have to be replaced earlier. The techniques described herein combine aging or maintenance costs and fast ramp-up profits into a unified operation approach that maximizes profit relative to cost.
[0019] The techniques described herein are presented in the context of the fast ramp-up of a gas turbine. However, it should be noted that the gas turbine is only one example of a complex system to which the techniques described herein may be applied. Similar approaches can be applied to other power plants, such as coal or nuclear power plants, as well as wind turbines or battery storage systems. Additionally, the technology can be applied to fast changes of operating points (e.g., if a gas power plant changes from 60% to 90% nominal load). In all these cases, fast profits have to be traded against long-term maintenance costs during operation.
[0020] For purposes of illustration, assume the following correlations between operation cycles and maintenance intervals. If the gas turbine is ramped up 100 times from zero to 100% load within 90 minutes, maintenance is required. However, if the gas turbine is ramped up 1000 times from zero to 80% load within 120 minutes, maintenance is required. Assuming that the maintenance costs are $10M, one ramp-up from zero to 100% load within 90 minutes costs $100k, whereas one ramp-up from zero to 80% within 120 minutes costs only $10k in maintenance. During operation, the plant operator can decide whether to ramp up the gas turbine to 100% in 90 minutes, to 80% within 120 minutes, or not at all, depending on the profit obtained from the reserve market as well as other economic considerations.
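The arithmetic behind this decision is worth making concrete. The following Python sketch simply reuses the hypothetical figures from the preceding illustration ($10M per maintenance event, 100 fast ramps or 1000 slower ramps per maintenance cycle); the profit values and the simplifying assumption that the expected reserve-market profit is the same for every ramp profile are illustrative only.

```python
# Minimal sketch of the ramp-up decision described above, using the
# hypothetical numbers from the illustration (not measured fleet data).

MAINTENANCE_COST = 10_000_000  # assumed cost of one maintenance event ($10M)

# Ramp profile -> number of such ramps tolerated before maintenance is required.
RAMP_PROFILES = {
    "0-100% in 90 min": 100,    # 100 fast ramps -> maintenance
    "0-80% in 120 min": 1000,   # 1000 slower ramps -> maintenance
}

def incremental_maintenance_cost(profile: str) -> float:
    """Maintenance cost attributed to a single ramp-up of the given profile."""
    return MAINTENANCE_COST / RAMP_PROFILES[profile]

def best_action(expected_profit: float) -> str:
    """Pick the profile with the largest profit margin, or do not ramp at all.
    In practice the expected profit would itself depend on the profile."""
    best, best_margin = "do not ramp", 0.0
    for profile in RAMP_PROFILES:
        margin = expected_profit - incremental_maintenance_cost(profile)
        if margin > best_margin:
            best, best_margin = profile, margin
    return best

print(incremental_maintenance_cost("0-100% in 90 min"))  # 100000.0 ($100k per fast ramp)
print(incremental_maintenance_cost("0-80% in 120 min"))  # 10000.0  ($10k per slow ramp)
print(best_action(expected_profit=50_000))               # slower ramp is worthwhile
print(best_action(expected_profit=5_000))                # not worth ramping at all
```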
[0021] FIG. 1 provides a conceptual overview of how the techniques described herein aid in the optimization of ramp-up procedures. Briefly, module 120 analyzes the trade-off between fast ramp-up profits and the maintenance costs associated with a fast ramp-up, which are estimated using a maintenance cost estimator 105. Based on this trade-off, the operation cycles 110 are adjusted accordingly. In turn, this affects the maintenance intervals 115 that define the effect of the operation cycles 110 on the overall aging of the component(s) being used for the ramp-up procedure.
[0022] The maintenance cost estimator 105 predicts correlations between operation cycles and maintenance intervals or aging based on large amounts of data retrieved either locally or from a remote source (e.g., a cloud-based server). This data includes (i) operational data collected over a plurality of operation cycles of a plurality of power plants and (ii) maintenance data corresponding to the plurality of power plants. The plurality of power plants providing the data are referred to herein as a "fleet." In some embodiments, this maintenance cost estimator 105 is implemented as a machine learning model such as, for example, a neural network. A fast ramp-up method is implemented by the trade-off module based on the correlation between operation cycles and maintenance intervals in order to trade short-term profits from fast ramp-up against long-term costs for maintenance.
[0023] Note that FIG. 1 does not specify the operation method that is used during operation. In some embodiments, fast ramp-up is performed with a model-predictive control (MPC) approach that uses a physical model of the dynamics of the gas turbine. As is generally understood in the art, MPC is a planning algorithm that determines an optimal open-loop plan from the current state of the system to a finite time horizon. A relatively small portion of the plan is executed and the optimal open-loop plan is recalculated using the updated state. This process is repeated, with re-planning occurring at each decision point.
[0024] Alternatively, the maintenance costs could also be integrated into another operational method. For example, in some embodiments, a neural network (or another method for numerical regression) may be trained to predict the maintenance intervals of a number of components of the power plant. One can decide whether to use a single estimator for each component or a larger estimator covering several components together. These estimators can take a number of parameters as input. Examples include temperatures, temperature rates of change, load characteristics, rotational speed, etc. Training of the estimators can be performed either offline or online (i.e., during operation).
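As one possible (but by no means prescribed) realization of such an estimator, the sketch below trains a small neural-network regressor on the kinds of parameters listed above using scikit-learn. The feature set, data shapes, and synthetic values are assumptions made purely for illustration.

```python
# Illustrative maintenance-interval estimator; features and data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row: [mean temperature, temperature rate of change, load factor, rotor speed]
# Each target: remaining operation cycles until the next maintenance of a component.
rng = np.random.default_rng(0)
X_fleet = rng.uniform(size=(500, 4))            # placeholder fleet operational data
y_fleet = rng.uniform(100.0, 1000.0, size=500)  # placeholder maintenance intervals

estimator = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
estimator.fit(X_fleet, y_fleet)                 # "offline" training

# Online training during operation could use incremental updates instead
# (MLPRegressor exposes partial_fit for this purpose).
x_now = np.array([[0.7, 0.2, 0.9, 0.5]])        # current operating conditions
print(estimator.predict(x_now))                 # predicted cycles to maintenance
```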
[0025] For fast ramp-up of such a system, the cost of maintenance can be considered in an MPC by changing the cost function, such that - apart from following a fast ramp-up - the cost of maintenance is added as an additional term, as follows:
$$J(u) \;=\; J_{\text{ramp-up}}(u) \;+\; \sum_{i} w_i\, a_i(u) \qquad \text{(Equation 1)}$$
where $u$ denotes the system inputs, $J_{\text{ramp-up}}(u)$ is the cost term for following the fast ramp-up trajectory, $a_i$ is the aging rate of the $i$-th component, and $w_i$ is a weighting factor which could, for example, reflect the cost of maintenance for this component. The term "aging rate," as used herein, refers to the rate at which the useful lifespan of a particular component changes over time. In general, the lifespan of each component is a known quantity that may be defined, for example, based on the number of operational cycles or hours that the component may be used. Thus, under normal operating conditions, the aging rate would correspond to the time that has elapsed since component installation. However, factors such as subjecting the component to high levels of stress for long time periods may accelerate aging; thus, as procedures such as fast ramp-up are performed, the aging rate may increase.
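A minimal sketch of how this augmented cost could be assembled is given below. The ramp-tracking term, the stand-in aging-rate functions a_i(u), and the weights w_i are placeholders chosen for illustration, not plant-specific models.

```python
# Sketch of the cost of Equation 1: ramp-tracking term plus weighted aging rates.
import numpy as np

def ramp_tracking_cost(u: np.ndarray, load_target: np.ndarray) -> float:
    """Penalty for deviating from the desired fast ramp-up trajectory (placeholder)."""
    return float(np.sum((u - load_target) ** 2))

def aging_rates(u: np.ndarray) -> np.ndarray:
    """Stand-in aging-rate models a_i(u) for two components; in practice these
    would be the trained maintenance cost estimators."""
    load_increase = np.maximum(np.diff(u, prepend=u[0]), 0.0)
    a_blades = float(np.sum(load_increase ** 2))   # fast load increases age the blades
    a_rotor = float(np.sum(u ** 2)) * 1e-3
    return np.array([a_blades, a_rotor])

def mpc_cost(u: np.ndarray, load_target: np.ndarray, weights: np.ndarray) -> float:
    """J(u) = ramp-tracking cost + sum_i w_i * a_i(u)."""
    return ramp_tracking_cost(u, load_target) + float(weights @ aging_rates(u))

u = np.linspace(0.0, 1.0, 12)                      # candidate input trajectory
print(mpc_cost(u, load_target=np.ones(12), weights=np.array([0.5, 0.1])))
```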
[0026] FIG. 2 illustrates a workflow 200 for implementing a neural network-based aging constrained MPC, according to some embodiments. In order to efficiently solve the optimization problem, the optimizer 215 is a fast non-linear solver which determines the inputs to a gas turbine 225 for a given control horizon. These inputs are determined using a plant model 235 which describes the behavior of the gas turbine 225. Based on information from the plant model 235, the aforementioned cost function 205 is minimized for the control horizon. The optimizer 215 may additionally be subject to constraints 230 on certain parameters such as temperature.
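The sketch below shows what such a constrained minimization over a control horizon might look like with a generic non-linear solver (here scipy.optimize.minimize with SLSQP). The horizon length, cost terms, weighting, and the per-step load-increase limit standing in for a temperature constraint are all assumptions for illustration.

```python
# Sketch of the optimizer: minimize the MPC cost over the control horizon
# subject to a stand-in constraint (a crude proxy for a temperature limit).
import numpy as np
from scipy.optimize import minimize

HORIZON = 12                                   # control horizon length (assumed)
load_target = np.linspace(0.0, 1.0, HORIZON)   # desired fast ramp to full load
w_aging = 0.5                                  # assumed maintenance weight w_i

def mpc_cost(u: np.ndarray) -> float:
    tracking = np.sum((u - load_target) ** 2)                       # follow the ramp
    aging = np.sum(np.maximum(np.diff(u, prepend=0.0), 0.0) ** 2)   # stand-in aging term
    return float(tracking + w_aging * aging)

bounds = [(0.0, 1.0)] * HORIZON                # load commands between 0 and 100%
constraints = [{
    "type": "ineq",                            # load increase per step <= 0.15
    "fun": lambda u: 0.15 - np.diff(u, prepend=0.0),
}]

result = minimize(mpc_cost, x0=np.zeros(HORIZON), method="SLSQP",
                  bounds=bounds, constraints=constraints)
u_opt = result.x    # inputs applied to the gas turbine over the control horizon
print(u_opt)
```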
[0027] The neural network predictor model 210 supports the fast solution of the optimization problem because the gradient of the aging rate $a_i$ with respect to each input of the neural network can be computed efficiently. As noted above, these inputs may be retrieved from a remote cloud-based server system 220. For a simple Multilayer Perceptron (MLP) neural network with a single hidden layer, the gradient is
$$\frac{\partial a_i}{\partial x_k} \;=\; \sum_{j} v_j\, f'(h_j)\, w_{jk}, \qquad h_j = \sum_{k} w_{jk}\, x_k,$$
where $a_i$ is the output of the neural network, $x_k$ is the $k$-th input, $w_{jk}$ and $v_j$ are the input-layer and output-layer weights, respectively, $h_j$ is the pre-activation of hidden neuron $j$, and $f(\cdot)$ is the non-linear activation function of the hidden-layer neurons, with $f'(\cdot)$ its derivative. The terms are summed over the hidden-layer neurons indexed by $j$. For simple regression problems, the structure of an MLP with a single hidden layer is often sufficient. However, the gradient computation can be easily extended to more complex neural network structures, as the gradient is also used for training such networks.
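The expression above is straightforward to verify numerically. The sketch below evaluates the analytic gradient of a single-hidden-layer MLP and checks it against a finite-difference approximation; the weight shapes and the tanh activation are assumptions, and any differentiable activation behaves analogously.

```python
# Analytic gradient of an MLP output with respect to its inputs, plus a check.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 4, 8
W = rng.normal(size=(n_hidden, n_inputs))   # input-layer weights w_jk
v = rng.normal(size=n_hidden)               # output-layer weights v_j
f = np.tanh
f_prime = lambda h: 1.0 - np.tanh(h) ** 2   # derivative of the activation

def aging_rate(x: np.ndarray) -> float:
    """a_i(x) = sum_j v_j f(sum_k w_jk x_k), the MLP output."""
    return float(v @ f(W @ x))

def aging_rate_gradient(x: np.ndarray) -> np.ndarray:
    """d a_i / d x_k = sum_j v_j f'(h_j) w_jk, with h_j the hidden pre-activation."""
    h = W @ x
    return (v * f_prime(h)) @ W              # shape (n_inputs,)

x = rng.uniform(size=n_inputs)
grad = aging_rate_gradient(x)

eps = 1e-6                                   # finite-difference sanity check
fd = np.array([(aging_rate(x + eps * np.eye(n_inputs)[k]) - aging_rate(x)) / eps
               for k in range(n_inputs)])
assert np.allclose(grad, fd, atol=1e-4)
print(grad)
```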
[0028] FIG. 3 illustrates a method 300 for aging constrained operation of a power plant, according to some embodiments. Starting at step 305, operational data and maintenance data corresponding to a plurality of power plants are retrieved, either from a local storage source or from a remote source (e.g., a cloud-based server computing system). The operational data will depend on the type of components present in the power plant. Examples of the types of operational data include temperature readings, temperature rates of change, force, stress or strain readings, vibration or load characteristics, rotational speed or acceleration, etc. The maintenance data details information such as, for example, when particular components were repaired or replaced or when routine maintenance was performed on components. In some embodiments, the maintenance data includes characteristics of the components that may be indicative of aging. For example, the maintenance data may provide diameter measurements from a particular subcomponent known to decrease over time as the component ages.
[0029] Continuing with reference to FIG. 3, at step 310, the retrieved data is used to train one or more maintenance cost estimators. In some embodiments, each maintenance cost estimator is a neural network; however, various machine learning-based techniques generally known in the art may be used for creating the estimators. In some embodiments, a single estimator is used to predict values for an entire plant. In other embodiments, multiple estimators can be used for the components in the plant. For example, in one embodiment, a distinct maintenance cost estimator is used for each critical component in the plant. As noted above, training of each maintenance cost estimator may be performed offline or online (i.e., during operation of the plant).
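A sketch of steps 305 and 310, under the assumption of one neural-network estimator per critical component, is given below. The component list, the feature columns, and the synthetic fleet data stand in for real retrieved data and are assumptions for illustration only.

```python
# Sketch of steps 305-310: one maintenance cost estimator per critical component.
import numpy as np
from sklearn.neural_network import MLPRegressor

COMPONENTS = ["turbine_blades", "combustor", "rotor"]   # assumed critical parts

def load_fleet_data(seed: int):
    """Placeholder for step 305: retrieve fleet operational and maintenance data
    (here synthetic) from local storage or a cloud-based server."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(1000, 4))            # e.g. temperature, dT/dt, load, speed
    y = 500.0 - 300.0 * X[:, 1] + 10.0 * rng.normal(size=1000)  # cycles to maintenance
    return X, y

# Step 310: train each estimator offline; online updates would refine them later.
estimators = {}
for seed, component in enumerate(COMPONENTS):
    X, y = load_fleet_data(seed)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    estimators[component] = model.fit(X, y)

print({name: est.predict(np.array([[0.7, 0.2, 0.9, 0.5]]))[0]
       for name, est in estimators.items()})
```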
[0030] Once the maintenance cost estimators are trained, new operational data is collected at step 315 from a power plant over a time interval. This particular power plant may be one of the plants that generated the data used for training (see steps 305, 310), or it may be a new power plant. Next, at step 320, the maintenance cost estimators are used to predict maintenance intervals associated with components in the power plant based on the new operational data.
[0031] At step 325, system inputs for performing an optimized fast ramp-up procedure are determined based on the maintenance intervals associated with the components in the power plant. Then, at step 330, the optimized fast ramp-up procedure is performed in the particular power plant using the system inputs. In some embodiments, optimization may be performed as described above with reference to FIG. 2 using the MPC cost function. As shown above in Equation 1, the cost function comprises a summation of the aging rate for each of the
components (as determined from the maintenance cost estimators). Each aging rate may be weighted in some embodiments. For example, as in Equation 1, in some embodiments, each aging rate is weighted in the summation based on a cost of maintenance for the component corresponding to the aging rate. Once the cost function is determined, the system inputs for a control horizon may be determined by using a fast non-linear solver to minimize the MPC cost function over a prediction horizon (where the prediction horizon is less than or equal to the control horizon). The prediction horizon generally refers to the time up to which the system runs the prediction, while the control horizon refers to the time up to which the system actually computes system inputs. The timing of each horizon may be configured by a user or automatically based on one or more system settings. The length of each horizon can vary depending on factors such as the desired fidelity of the data, the type of components being analyzed, and the computational resources available for performing the optimization method.
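Taken together, steps 315 through 330 amount to a receding-horizon loop: collect fresh operational data, update the maintenance-based weighting, minimize the cost over the prediction horizon, apply only the first portion of the resulting plan, and then re-plan. The sketch below illustrates that loop with placeholder models; the horizon lengths, the plant interface, and the mapping from predicted maintenance intervals to a cost weight are assumptions.

```python
# Receding-horizon sketch of method 300 (steps 315-330) with placeholder models.
import numpy as np
from scipy.optimize import minimize

PREDICTION_HORIZON = 8     # steps over which the cost is evaluated (assumed)
APPLY_STEPS = 1            # portion of each plan actually executed before re-planning

def collect_operational_data(plant_load: float) -> dict:           # step 315
    return {"load": plant_load}                                     # placeholder

def maintenance_weight(data: dict) -> float:                        # step 320
    # A trained maintenance cost estimator would map the data to a weight here.
    return 0.5 + data["load"]

def mpc_cost(u: np.ndarray, weight: float, u_prev: float) -> float:
    tracking = np.sum((u - 1.0) ** 2)                               # ramp to full load
    aging = np.sum(np.maximum(np.diff(u, prepend=u_prev), 0.0) ** 2)
    return float(tracking + weight * aging)

plant_load, applied = 0.0, []
for _ in range(10):                                                 # re-plan each step
    data = collect_operational_data(plant_load)
    w = maintenance_weight(data)
    res = minimize(mpc_cost, x0=np.full(PREDICTION_HORIZON, plant_load),
                   args=(w, plant_load), method="SLSQP",
                   bounds=[(0.0, 1.0)] * PREDICTION_HORIZON)
    plan = res.x
    applied.extend(plan[:APPLY_STEPS])                              # steps 325-330
    plant_load = float(plan[APPLY_STEPS - 1])

print(np.round(applied, 3))
```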
[0032] Various computing platforms generally known in the art may be used for
implementing the techniques described herein. However, if the optimization problem is more complex or has to be solved faster (i.e., more often), a parallel processing platform may be utilized to distribute computational tasks over multiple processing resources. Both a long control horizon and a long prediction horizon lead to a more complex optimization problem, and the complexity of the models also plays a role.
[0033] FIG. 4 provides an example of a parallel processing platform 400 that may be utilized to implement the machine learning models and other aspects of the machine learning models and various workflows discussed herein. This platform 400 may be used in
embodiments of the present invention where NVIDIA CUDA™ (or a similar parallel computing platform) is used. The architecture includes a host computing unit ("host") 405 and a graphics processing unit (GPU) device ("device") 410 connected via a bus 415 (e.g., a PCIe bus). The host 405 includes the central processing unit, or "CPU" (not shown in FIG. 4), and host memory 425 accessible to the CPU. The device 410 includes the graphics processing unit (GPU) and its associated memory 420, referred to herein as device memory. The device memory 420 may include various types of memory, each optimized for different memory usages. For example, in some embodiments, the device memory includes global memory, constant memory, and texture memory.
[0034] Parallel portions of a big data platform and/or big simulation platform may be executed on the platform 400 as "device kernels" or simply "kernels." A kernel comprises parameterized code configured to perform a particular function. The parallel computing platform is configured to execute these kernels in an optimal manner across the platform 400 based on parameters, settings, and other selections provided by the user. Additionally, in some embodiments, the parallel computing platform may include additional functionality to allow for automatic processing of kernels in an optimal manner with minimal input provided by the user.
[0035] The processing required for each kernel is performed by a grid of thread blocks (described in greater detail below). Using concurrent kernel execution, streams, and
synchronization with lightweight events, the platform 400 of FIG. 4 (or similar architectures) may be used to parallelize portions of the model-based operations performed in training or utilizing the machine learning models and workflows discussed herein. For example, in embodiments where a convolutional neural network is used as the machine learning model, the platform 400 can be used to perform operations such as forward and backward convolution, pooling, normalization, etc. Additionally, the parallel processing platform 400 may be used to execute multiple instances of a machine learning model in parallel. For example, where each component has a different maintenance cost estimator, the estimators may be executed in parallel. Then, the results of the estimators can be aggregated to give an overall understanding of the system.
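One way to picture this parallel execution is sketched below, using Python threads as a simple stand-in for the GPU kernels of platform 400. The component list and the placeholder per-component estimators are assumptions for illustration.

```python
# Run one maintenance cost estimator per component in parallel, then aggregate.
import concurrent.futures

def estimate_interval(component: str, conditions: dict) -> tuple:
    """Placeholder per-component estimator; a trained model would be used here."""
    base = {"turbine_blades": 400.0, "combustor": 900.0, "rotor": 1500.0}[component]
    return component, base * (1.0 - 0.5 * conditions["load"])

conditions = {"load": 0.9}
components = ["turbine_blades", "combustor", "rotor"]

with concurrent.futures.ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda c: estimate_interval(c, conditions), components))

# Aggregation: the component with the shortest predicted interval drives maintenance.
next_due = min(results, key=results.get)
print(results, next_due)
```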
[0036] The device 410 includes one or more thread blocks 430 which represent the computation unit of the device 410. The term thread block refers to a group of threads that can cooperate via shared memory and synchronize their execution to coordinate memory accesses. For example, in FIG. 4, threads 440, 445 and 450 operate in thread block 430 and access shared memory 435. Depending on the parallel computing platform used, thread blocks may be organized in a grid structure. A computation or series of computations may then be mapped onto this grid. For example, in embodiments utilizing CUDA, computations may be mapped on one-, two-, or three-dimensional grids. Each grid contains multiple thread blocks, and each thread block contains multiple threads. For example, in FIG. 4, the thread blocks 430 are organized in a two-dimensional grid structure with m+1 rows and n+1 columns. Generally, threads in different thread blocks of the same grid cannot communicate or synchronize with each other. However, thread blocks in the same grid can run on the same multiprocessor within the GPU at the same time. The number of threads in each thread block may be limited by hardware or software constraints.
[0037] Continuing with reference to FIG. 4, registers 455, 460, and 465 represent the fast memory available to thread block 430. Each register is only accessible by a single thread. Thus, for example, register 455 may only be accessed by thread 440. Conversely, shared memory is allocated per thread block, so all threads in the block have access to the same shared memory. Thus, shared memory 435 is designed to be accessed, in parallel, by each thread 440, 445, and 450 in thread block 430. Threads can access data in shared memory 435 loaded from device memory 420 by other threads within the same thread block (e.g., thread block 430). The device memory 420 is accessed by all blocks of the grid and may be implemented using, for example, Dynamic Random-Access Memory (DRAM).
[0038] Each thread can have one or more levels of memory access. For example, in the platform 400 of FIG. 4, each thread may have three levels of memory access. First, each thread 440, 445, 450, can read and write to its corresponding registers 455, 460, and 465. Registers provide the fastest memory access to threads because there are no synchronization issues and the register is generally located close to a multiprocessor executing the thread. Second, each thread 440, 445, 450 in thread block 430, may read and write data to the shared memory 435 corresponding to that block 430. Generally, the time required for a thread to access shared memory exceeds that of register access due to the need to synchronize access among all the threads in the thread block. However, like the registers in the thread block, the shared memory is typically located close to the multiprocessor executing the threads. The third level of memory access allows all threads on the device 410 to read and/or write to the device memory. Device memory requires the longest time to access because access must be synchronized across the thread blocks operating on the device. Thus, in some embodiments, operational data can be divided into segments based on time or using data locality techniques generally known in the art. Then, each segment can be processed in parallel using register memory, with shared and device memory only being used as necessary to combine the results to provide the results for the complete dataset.
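The segment-and-combine pattern described above can be illustrated without any GPU-specific code. The sketch below splits operational data into time segments, processes each segment in a separate worker process (standing in for a thread block), and combines the partial results at the end; the data and the per-segment computation are placeholders.

```python
# Split operational data into segments, process in parallel, combine the results.
import numpy as np
from multiprocessing import Pool

def process_segment(segment: np.ndarray) -> tuple:
    """Per-segment work kept local, analogous to register/shared memory usage."""
    return float(segment.sum()), len(segment)

if __name__ == "__main__":
    operational_data = np.random.default_rng(0).uniform(size=10_000)
    segments = np.array_split(operational_data, 8)   # e.g. one segment per hour

    with Pool() as pool:
        partials = pool.map(process_segment, segments)

    total, count = map(sum, zip(*partials))          # combining step
    print(total / count)                             # statistic over the full dataset
```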
[0039] The embodiments of the present disclosure may be implemented with any combination of hardware and software. For example, aside from the parallel processing architecture presented in FIG. 4, standard computing platforms (e.g., servers, desktop computers, etc.) may be specially configured to perform the techniques discussed herein. In addition, the embodiments of the present disclosure may be included in an article of manufacture (e.g., one or more computer program products) having, for example, computer-readable, non-transitory media. The media may have embodied therein computer readable program code for providing and facilitating the mechanisms of the embodiments of the present disclosure. The article of manufacture can be included as part of a computer system or sold separately.
[0040] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and
embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
[0041] An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
[0042] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
[0043] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.
[0044] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be
implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."

Claims

We claim:
1. A method for aging constrained operation of a complex system, the method comprising:
obtaining one or more maintenance cost estimators for one or more components of a particular complex system;
using the one or more maintenance cost estimators associated with the components in the particular complex system to determine inputs to the particular complex system for modifying operating points of the particular complex system; and
modifying the operating points of the particular complex system using the inputs to the particular complex system.
2. The method of claim 1, wherein the one or more maintenance cost estimators are trained based on (i) operational data collected over a plurality of operation cycles of at least one other complex system and (ii) maintenance data corresponding to the at least one other complex system.
3. The method of claim 1, wherein the one or more maintenance cost estimators estimate a remaining time until a next maintenance cycle of at least one component.
4. The method of claim 1, wherein the one or more maintenance cost estimators are updated based on continuously collected operational data from the particular complex system over a time interval.
5. The method of claim 1, wherein the one or more maintenance cost estimators estimate an incremental maintenance cost at a specific operating point.
6. The method of claim 1, wherein the one or more maintenance cost estimators comprise a plurality of maintenance cost estimators and each maintenance cost estimator corresponds to a distinct component in the particular complex system.
7. The method of claim 1, wherein the operational data comprises one or more of temperature readings, temperature rates of change, force, stress or strain readings, vibration or load characteristics, and rotational speed or acceleration corresponding to components of the plurality of complex systems.
8. The method of claim 1, wherein the inputs to the particular complex system are determined by:
forming a model-predictive control (MPC) cost function comprising a summation of an aging rate for each of the components; and
determining the inputs to the particular complex system for a control horizon by minimizing the MPC cost function over a prediction horizon, wherein the prediction horizon is less than or equal to the control horizon.
9. The method of claim 8, wherein each aging rate is weighted in the summation based on a cost of maintenance for the component corresponding to the aging rate.
10. A method for aging constrained operation of a complex system, the method comprising:
collecting operational data from the complex system over a time interval;
using one or more maintenance cost estimators to predict maintenance intervals associated with a plurality of components in the complex system based on the operational data;
determining system inputs for modifying operating points of the complex system based on the maintenance intervals associated with the components in the complex system; and
modifying the operating points of the complex system using the system inputs.
11. The method of claim 10, wherein each maintenance cost estimator is a neural network.
12. The method of claim 10, wherein the one or more maintenance cost estimators comprise a plurality of maintenance cost estimators and each maintenance cost estimator corresponds to a distinct component of the complex system.
13. The method of claim 10, further comprising:
retrieving maintenance data corresponding to a plurality of complex systems from one or more remote servers over a computer network; and
training the one or more maintenance cost estimators based on (i) operational data collected over a plurality of operation cycles of the plurality of complex systems and (ii) the maintenance data corresponding to the plurality of complex systems.
14. The method of claim 13, wherein the training is performed offline from operation of the complex system.
15. The method of claim 13, wherein the training is performed during operation of the complex system.
16. The method of claim 10, wherein the system inputs for modifying the operating points of the complex system are determined by:
forming a model-predictive control (MPC) cost function comprising a summation of an aging rate for each of the components; and
determining the system inputs for a control horizon by minimizing the MPC cost function over a prediction horizon, wherein the prediction horizon is less than or equal to the control horizon.
17. The method of claim 16, wherein each aging rate is weighted in the summation based on a cost of maintenance for the component corresponding to the aging rate.
18. A non-transitory computer readable medium storing computer program instructions for operating a complex system with aging constraints, the computer program instructions, when executed by a processor, causing the processor to perform operations comprising:
retrieving maintenance data corresponding to a plurality of complex systems from one or more remote servers over a computer network;
training one or more maintenance cost estimators based on (i) operational data collected over a plurality of operation cycles of the plurality of complex systems and (ii) the maintenance data corresponding to the plurality of complex systems; and
using the one or more maintenance cost estimators to modify operating points of a particular complex system.
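By way of illustration of the model-predictive control formulation recited in claims 8, 9, 16, and 17, the following Python sketch minimizes a cost function formed as a summation of weighted per-component aging rates over a prediction horizon and applies the first element of the resulting input sequence. The plant model plant_step, the aging-rate functions, the maintenance-cost weights, the horizon length, and the input bounds are hypothetical placeholders chosen only to make the sketch runnable; they are not taken from the disclosure.

```python
# Minimal MPC sketch: cost = sum over the horizon of weighted aging rates.
import numpy as np
from scipy.optimize import minimize

N_PRED = 10  # prediction horizon in steps (assumed)

# Hypothetical maintenance-cost weights, one per component (cf. claims 9 and 17).
maintenance_weight = np.array([5.0, 2.0, 1.0])

def plant_step(x, u):
    """Placeholder plant model x_{k+1} = f(x_k, u_k)."""
    return 0.9 * x + 0.1 * u

def aging_rates(x, u):
    """Placeholder per-component aging rates as a function of state and input."""
    return np.array([abs(u - x), u ** 2, abs(x)])

def mpc_cost(u_seq, x0):
    """Summation of weighted aging rates over the prediction horizon."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += maintenance_weight @ aging_rates(x, u)
        x = plant_step(x, u)
    return cost

def solve_mpc(x0, u_init):
    # Minimize the MPC cost over the prediction horizon; the leading inputs of
    # the optimal sequence are then applied to move the operating point.
    res = minimize(mpc_cost, u_init, args=(x0,), method="SLSQP",
                   bounds=[(0.0, 1.0)] * N_PRED)  # assumed input limits
    return res.x

x0 = 0.5                                   # current operating point (assumed)
u_opt = solve_mpc(x0, np.full(N_PRED, 0.5))
next_input = u_opt[0]                      # first input applied to the plant
```

A production controller would likely combine such aging terms with power-output tracking objectives; the sketch isolates the aging summation of claims 8 and 16 for clarity.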
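Similarly, for the neural-network maintenance cost estimators of claims 11 and 13, the following sketch trains a small regressor on fleet data and evaluates candidate operating points. The synthetic features and targets, the use of scikit-learn's MLPRegressor, and the network size are assumptions made for illustration; in practice the features (e.g., the temperature, stress, and vibration readings of claim 7) and targets would come from the operational and maintenance data of the plurality of complex systems.

```python
# Minimal sketch: train a neural-network maintenance cost estimator on
# (synthetic) fleet data, then score candidate operating points.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical fleet data: rows are operation cycles of comparable plants;
# columns stand in for, e.g., peak temperature, temperature ramp rate, load.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
# Hypothetical target: incremental maintenance cost per cycle (claim 5), or
# equivalently remaining time to the next maintenance cycle (claim 3).
y = 100.0 * X[:, 0] + 20.0 * X[:, 1] + rng.normal(size=500)

estimator = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=0)
estimator.fit(X, y)   # offline training (claim 14); could be refreshed with
                      # newly collected data during operation (claim 15)

# During operation: compare candidate operating points by predicted cost.
candidate_points = np.array([[0.7, 0.10, 0.9],
                             [0.5, 0.05, 0.8]])
predicted_cost = estimator.predict(candidate_points)
```

A separate estimator of this kind could be trained per component, as in claims 6 and 12, with the predictions feeding the weighted aging terms of the MPC sketch above.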