WO2019240607A1 - Industrial data stream processing system and method - Google Patents

Info

Publication number
WO2019240607A1
Authority
WO
WIPO (PCT)
Prior art keywords
computational
computing
site
computation
computing resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/RU2018/000394
Other languages
French (fr)
Inventor
Egor Sergeevich GOLOSHCHAPOV
Artem Vladimirovich OZHIGIN
Sergey Valeryevich VINOGRADOV
Alexey Yurevich TCYMBAL
Sergey Sergeevich ZOBNIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Siemens Corp
Original Assignee
Siemens AG
Siemens Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG, Siemens Corp filed Critical Siemens AG
Priority to PCT/RU2018/000394 priority Critical patent/WO2019240607A1/en
Publication of WO2019240607A1 publication Critical patent/WO2019240607A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5044 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B 19/4185 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by the network communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 Routing a service request depending on the request content or context

Definitions

  • the present invention relates to the field of industrial sensor data processing, and, more particularly, to an industrial data stream processing system and method.
  • server and/or cloud-based stream processing systems are increasingly used to acquire sensor data at an industrial facility, process the sensor data using a server farm or the like, and forward the processed sensor data stream to an operator for monitoring, visualization and the like.
  • the server farm, the industrial facility and the operator accessing the processed sensor data may each be located at different sites remote from each other.
  • Processing the data stream may involve a number of computational tasks, such as aggregating, sorting, trending, analyzing, FFT (Fast Fourier Transform), visualization and the like, which may be deployed not only on the server farm, but also on various computing resources available at each of the sites.
  • a suitable placement of each computational task in terms of speed and/or cost associated with processing and transfer of the data stream may depend on computational capacities available at each of the sites, bandwidth available between the sites and the like.
  • an architecture of a stream processing system is manually designed so as to fulfil a specific requirement for a specific job at hand.
  • the proposed system advantageously enables automated determination of a suitable placement of each of the computational tasks registered in the registry device on the plurality of distributed computing resources.
  • a data stream may refer to a sequence of digitally encoded signals used to represent information in transmission. More particularly, the digital encoded signals may comprise a plurality of data packets.
  • a computational task may refer to a sequence of instructions configured to, when executed on a computing resource, cause the computing resource to process a data stream.
  • a computing resource may refer to any computational device capable of processing a data stream when a corresponding computational task is executed thereon.
  • "Processing a data stream”, or “executing a computational task with the data stream” may refer to one or more of : receiving the data stream, or packets thereof, as input, decoding the information represented by the data stream, performing processing on the information represented by the data stream so as to compute processed information, encoding the processed information, and outputting the data stream, or packets thereof, representing the processed information.
  • the processing performed on information represented by a data stream in response to a computational task being executed with the data stream may include, as non- limiting examples, one or more of: filtering, shaping, sorting, ordering, indexing, qualifying, transforming, such as computing a Fast Fourier Transform (FFT) , averaging, smoothing, trending, weighting, application of a filter, analyzing, visualizing or generating visualization data, and the like.
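  • As an illustration only (not part of the disclosure), the receive/decode/process/encode/output cycle described above can be sketched as a minimal computational task. The JSON packet layout and the averaging operation are illustrative assumptions standing in for filtering, FFT, trending, etc.

```python
import json
import statistics

def process_packet(raw_packet: bytes) -> bytes:
    """One pass of the decode -> process -> encode cycle for a single
    data-stream packet (hypothetical JSON packet format)."""
    # Decode the information represented by the packet.
    record = json.loads(raw_packet.decode("utf-8"))
    samples = record["samples"]
    # Process: a simple averaging/smoothing step stands in for the
    # filtering, transforming and trending operations named above.
    record["mean"] = statistics.fmean(samples)
    # Encode the processed information and output it as a new packet.
    return json.dumps(record).encode("utf-8")
```

  Feeding a packet such as `{"samples": [1.0, 2.0, 3.0]}` yields an outgoing packet carrying the same samples plus the computed mean; in a real deployment the processing step could also reduce the bandwidth of the outgoing stream, as discussed below.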
  • a site may refer to a physical location.
  • a first site may refer to a site where information to be represented by a respective data stream is acquired. More specifically, the first site may be a site where an industrial facility is installed from which raw sensor data is acquired.
  • a second site may refer to a site where the data stream is received and processed information is consumed. More particularly, the second site may refer to a site where remote operation, monitoring, analysis and the like is performed with respect to the industrial facility.
  • a computing site may refer to a site where a number of computing resources is arranged. The plurality of computing sites may include the first site and/or the second site.
  • raw data may be acquired at the first site and may be transmitted as a data stream from the first site, via the plurality of computing resources, to the second site. While passing through each of the computing resources, the data stream may be processed by a number of computational tasks placed on a respective computing resource. Processing the data stream may include altering the information represented by the data stream. Said altering may in particular comprise altering a bandwidth required for transmission of the data stream to a next computing resource.
  • a data stream may pass directly from one of the computing resources to another of the computing resources located at a same computing site.
  • a data stream may be transmitted over a communication link linking the one site and the other site.
  • a communication link may be characterized by having a predetermined capacity.
  • the capacity may refer, for example, to a maximum bandwidth and/or a certain cost associated with transmission of a data stream having a certain bandwidth thereover.
  • a computing resource may be characterized by having a predetermined capacity.
  • the capacity may refer, for example, to a maximum processing power and/or a certain cost associated with processing a certain processing load thereon.
  • the registry device may implement a registry database. Registering a computational task and a requirements specification thereof in the registry device may comprise storing the computational task and the requirements specification thereof in the registry database.
  • the registry device may be configured to enable computational tasks and requirements specifications thereof to be registered or stored, changed, and deleted in the registry database by communicating with the registry device using an application programming interface or API, a user interface or UI, or the like.
  • a requirements specification may refer to information or data specifying requirements with respect to a computing resource on which the corresponding computational task may be placed and/or with respect to a communication link over which an incoming data stream input to the computational task and an outgoing data stream output by the computational task may be transmitted.
  • a respective of the requirements may be related to a minimum capacity of the computing resource and/or communication link and/or may be related to a maximum load placed on the respective computing resource and/or communication link.
  • a respective requirements specification may be registered in the registry device in association with a corresponding computational task.
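  • As a sketch only (field names and the dictionary-backed store are illustrative assumptions, not taken from the claims), a registry device holding computation definitions that can be registered, changed and deleted might look as follows:

```python
from dataclasses import dataclass

@dataclass
class ComputationDefinition:
    """A computational task bundled with its requirements specification."""
    task_id: str
    task_code: str     # the task's instructions, or a reference thereto
    requirements: dict # e.g. {"platform": "x86", "min_power": 1e9}

class Registry:
    """Minimal registry database: computation definitions may be
    registered or stored, changed, and deleted, as via an API or UI."""
    def __init__(self):
        self._db = {}

    def register(self, definition: ComputationDefinition) -> None:
        # Store the computational task together with its requirements
        # specification, keyed by task identifier.
        self._db[definition.task_id] = definition

    def change(self, task_id: str, **updates) -> None:
        current = self._db[task_id]
        self._db[task_id] = ComputationDefinition(
            task_id=current.task_id,
            task_code=updates.get("task_code", current.task_code),
            requirements=updates.get("requirements", current.requirements),
        )

    def delete(self, task_id: str) -> None:
        del self._db[task_id]

    def definitions(self) -> list:
        return list(self._db.values())
```

  A change to the registry (register, change, delete) is exactly the kind of event that, per the embodiments below, may trigger re-running the placement steps a), b) and c).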
  • a capacity specification may refer to information or data specifying capacities of a respective computing resource and/or a respective communication link.
  • a respective capacity specification may be pre-stored in the computation placement control device.
  • the computation placement control device may possess knowledge about a hardware configuration of the industrial data stream processing system.
  • the computation placement control device may be communicably connected to the registry device and to each of the computing resources .
  • the computation placement control device may automatically create a working deployment of the plurality of computational tasks on the computing resources of the industrial data stream processing system in consideration of the respective requirements specifications and capacity specifications.
  • the proposed industrial data stream processing system may reduce a burden of an operator tasked with designing computational tasks to be executed on the data streams.
  • an operator may concentrate on programming a required computational task and specifying the requirements of the computational task.
  • the operator may register the computational task and its requirements specifications with the registry device.
  • the computation placement control device may handle proper placement of the computational task on one of the computing resources.
  • the operator may be relieved from the burden of considering the requirements of other computational tasks and the available capacities and determining a suitable placement of the computational task on one of the plurality of the computing resources available in the industrial data stream processing system.
  • Placing a computational task on a determined computing resource may include one or more of: transmitting the computational task to, storing the computational task on, and/or installing the computational task on the determined computing resource.
  • Causing the determined computing resource to execute the placed computational task may include transmitting an instruction to the determined computing resource.
  • the determined computing resource may, for example, start executing the placed computational task in a continuous loop polling for data packets to arrive, or may register the placed computational task for event-based execution.
  • a scheduler unit comprised by the computation resource may spawn or execute the computational task in response to arrival of a data packet of a respective data stream.
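  • As an illustration only, the event-based alternative might be sketched as a scheduler unit that keeps placed computational tasks registered per data stream and executes each registered task in response to arrival of a data packet (the stream-keyed dispatch table is an illustrative assumption):

```python
class SchedulerUnit:
    """Event-based execution: placed computational tasks are registered
    per data stream, and an arriving packet spawns the matching tasks."""
    def __init__(self):
        self._handlers = {}

    def register_task(self, stream_id: str, task) -> None:
        # Register a placed computational task for event-based execution.
        self._handlers.setdefault(stream_id, []).append(task)

    def on_packet(self, stream_id: str, packet):
        """Execute every task registered for the stream with the packet."""
        return [task(packet) for task in self._handlers.get(stream_id, [])]
```

  A polling-loop variant would instead call `on_packet` from a loop that blocks on the stream's input queue; the dispatch logic is the same.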
  • the registry device is further configured to register the plurality of capacity specifications of the plurality of computing resources and/or the plurality of communication links.
  • the requirements specifications may be registered or stored, changed and deleted in the registry database.
  • the computation placement control device is configured to repeat the steps a), b) and c) in response to a change of the registry device.
  • a change of the registry device may refer to a change of information registered in the registry device, such as change of a computational task, a requirements specification and/or a capacity specification registered in the registry database.
  • repeating steps a), b) and c) in response to a change of the registry device may include repeating steps a), b) and c) for each of the computational tasks registered in the registry device after the change of the registry device.
  • an already placed computational task may be moved from its current computing resource to a different computing resource.
  • the industrial data stream processing system may advantageously enable dynamic reconfiguration whenever a computational task is registered, changed, added or deleted and/or whenever a computing resource or a communication link is added, changed or deleted.
  • the plurality of computing resources is a plurality of heterogeneous computing resources implementing a variety of computing platforms.
  • a computing platform may refer to one or more of: a hardware architecture of the computing resource, an operating system of the computing resource, a number of runtime environments available on the computing resource, and the like.
  • a computational task may be placed on one or more of a PLC installed at an industrial facility at the first site, a personal computer used by an operator at the second site, a high-performance computer installed in a server farm at a third site, etc.
  • the computation placement control device may decide which computational task to place on which computing platform in consideration of the requirement specifications, the capacity specifications, the computing platforms and the like.
  • step c) includes natively executing the placed computational task on the determined computing resource if a respective of the computational tasks is native to the computing platform of the determined computing resource.
  • “Native/natively” may refer to a case wherein instructions comprised by the computational task are directly compatible with properties of the determined computing resource.
  • the instructions may correspond to an instruction set of a processing unit of the computing resource and may use an application programming interface of an operating system of the computing resource.
  • Native execution may advantageously maximize leverage of the performance offered by the determined computing resource.
  • step b) includes installing, on the determined computing resource, a runtime environment configured to enable execution of the computational task by the determined computing resource; and
  • step c) includes launching the runtime environment on the determined computing resource and causing the determined computing resource to execute the placed computational task using the runtime environment.
  • a runtime environment may comprise instructions native to the determined computing resource and configured to enable execution of non-native instructions comprised by the computational task.
  • the computational task may comprise platform-independent instructions, such as Java bytecode instructions, Python instructions and the like.
  • a corresponding runtime environment may comprise a Java Runtime Environment, a Python interpreter, and the like.
  • the computational task may comprise platform-dependent instructions native to a different one of the computing platforms present in the industrial data stream processing system.
  • a corresponding runtime environment may comprise an emulator emulating the different one of the computing platforms on the computing platform of the determined computing resource.
  • a respective requirements specification specifies at least one of a required computing platform, a required latency, a required upstream bandwidth, a required downstream bandwidth, a required computational power, an expected computational load, an allowable computational cost and an allowable bandwidth-related cost;
  • a respective capacity specification of a respective computing resource specifies at least one of a computing platform, a computational power and a computational cost of the computing resource; and a respective capacity specification of a respective communication link specifies at least one of a latency, a bandwidth and a bandwidth-related cost of the communication link.
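  • As a sketch only (the chosen subset of fields and the matching rule are illustrative assumptions), the requirements and capacity specifications above might be modelled as plain records, with a check that a requirement is met by corresponding capacities:

```python
from dataclasses import dataclass

@dataclass
class RequirementsSpec:
    """Per-task requirements (a subset of the fields listed above)."""
    required_platform: str
    required_power: float      # e.g. operations per second
    required_bandwidth: float  # downstream bandwidth, bytes per second

@dataclass
class ResourceCapacity:
    """Per-resource capacities."""
    platform: str
    power: float

@dataclass
class LinkCapacity:
    """Per-link capacities."""
    bandwidth: float
    latency: float

def requirement_met(req: RequirementsSpec,
                    res: ResourceCapacity,
                    link: LinkCapacity) -> bool:
    """A requirement is met when the computing resource offers the
    required platform and at least the required power, and the outgoing
    communication link offers at least the required bandwidth."""
    return (res.platform == req.required_platform
            and res.power >= req.required_power
            and link.bandwidth >= req.required_bandwidth)
```

  The placement step a) described below amounts to finding an assignment of tasks to resources for which such a check holds for every task.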
  • latency may refer to a time allowed for processing and/or transmission.
  • Upstream bandwidth may refer to a bandwidth of an incoming data stream used as an input to the computational task; and
  • downstream bandwidth may refer to a bandwidth of an outgoing data stream provided as an output of the computational task.
  • Bandwidth may refer to an amount of data to be transmitted per time unit and may be specified in units of bytes per second or the like.
  • computational power may refer to a processing power specified in oscillations per second, floating point operations per second or the like.
  • Computational load may refer to an amount of operations executed per time unit, an amount of memory allocated or the like.
  • Bandwidth-related cost may refer to a cost associated with transmitting a predetermined amount of data over a respective communication link used to transmit the incoming and/or the outgoing data stream.
  • Computational cost may refer to a cost associated with performing a predetermined amount of processing, specified as a number of operations, as an amount of reserved processing time or reserved memory, or the like.
  • a "cost” may refer to any parameter related to execution of the plurality of computational tasks that is desired to be minimized.
  • the computation placement control device may advantageously be configured to consider the above-mentioned requirement specifications of the plurality of computational tasks and the above-mentioned capacity specifications of the plurality of computing resources and/or the number of communication links when determining a suitable placement of each of the plurality of computational tasks.
  • step a) includes determining the respective computing resources to execute the respective computational tasks such that the requirements specified by the plurality of computation definitions are met with corresponding capacities specified by the plurality of capacity specifications.
  • step a) includes determining the respective computing resources to execute the respective computational tasks such that a total cost parameter associated with executing the plurality of computational tasks with the number of data streams is reduced.
  • the total cost parameter may be one of a total runtime, a total latency, and/or a total cost incurred while processing a predetermined portion of each of the number of data streams.
  • the computation placement control device may be configured to calculate the total cost parameter for a plurality of candidate deployments and determine a deployment in which the desired parameter is reduced.
  • the computation placement control device may employ a method such as iterating through all candidate deployments, using a method of steepest descent, or the like.
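  • As an illustration only, the "iterating through all candidate deployments" method might be sketched as follows; the additive cost model and the per-task load/capacity dictionaries are illustrative assumptions, not part of the disclosure:

```python
from itertools import product

def best_deployment(tasks, resources, load, capacity, cost_per_load):
    """Enumerate all candidate deployments (one resource per task) and
    return the feasible deployment with the lowest total cost parameter.

    tasks            -- list of task names
    resources        -- list of resource names
    load[t]          -- computational load of task t
    capacity[r]      -- maximum load resource r can carry
    cost_per_load[r] -- cost of one unit of load on resource r
    """
    best, best_cost = None, float("inf")
    for placement in product(resources, repeat=len(tasks)):
        # Aggregate the load placed on each resource by this candidate.
        used = {r: 0.0 for r in resources}
        for task, resource in zip(tasks, placement):
            used[resource] += load[task]
        # Feasibility: no capacity specification may be exceeded.
        if any(used[r] > capacity[r] for r in resources):
            continue
        total_cost = sum(load[t] * cost_per_load[r]
                         for t, r in zip(tasks, placement))
        if total_cost < best_cost:
            best, best_cost = dict(zip(tasks, placement)), total_cost
    return best, best_cost
```

  Exhaustive enumeration grows exponentially in the number of tasks, which is why gradient-style methods such as steepest descent are named as alternatives for larger deployments.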
  • a deployment, as used herein, may refer to a specific placement of each of the plurality of computational tasks.
  • the industrial data stream processing system may advantageously be capable of automatically determining an optimized deployment.
  • the computation placement control device is configured to monitor an actual load experienced by each of the plurality of computing resources and/or an actual load experienced by each of the plurality of communication links.
  • a respective computing resource may be configured to measure a load experienced by the computing resource and/or a load experienced by a communication link coupled to the computing resource and may be configured to transmit the measured loads to the computation placement control device.
  • load may refer to a relative load, such as a percentage utilization of an available processing power of a respective computing resource and/or a percentage utilization of an available bandwidth of a respective communication link, and/or to an absolute load, such as a number of operations carried out per time unit by a processing unit of a respective computing resource and/or an amount of data transmitted per time unit over a respective communication link.
  • step a) includes determining the respective computing resources to execute the respective computational tasks further based on the monitored actual loads .
  • the industrial data stream processing system may be configured to determine an optimum placement of the computational tasks by trial and error by adapting the deployment until all of the monitored loads are within an acceptable range.
  • the computation placement control device is configured to repeat steps a), b) and c) in response to a monitored actual load exceeding a predetermined threshold for a predetermined amount of time.
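  • As a sketch only (threshold semantics and timestamp handling are illustrative assumptions), the "load above a threshold for a predetermined amount of time" trigger might be implemented as:

```python
class OverloadTrigger:
    """Signals a re-deployment (repeat of steps a, b, c) when the
    monitored actual load stays above a threshold for a hold time."""
    def __init__(self, threshold: float, hold_time: float):
        self.threshold = threshold  # e.g. 0.8 = 80% utilization
        self.hold_time = hold_time  # seconds the overload must persist
        self._over_since = None     # timestamp when overload began

    def observe(self, timestamp: float, load: float) -> bool:
        """Feed one load measurement; return True when steps a), b)
        and c) should be repeated."""
        if load <= self.threshold:
            self._over_since = None  # load recovered; reset the timer
            return False
        if self._over_since is None:
            self._over_since = timestamp
        return timestamp - self._over_since >= self.hold_time
```

  Requiring the overload to persist for a hold time avoids re-deploying the computational tasks on every short load spike.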
  • the industrial data stream processing system may advantageously be able to adapt itself to changing loads experienced by the plurality of computing resources and/or the number of communication links.
  • the industrial data stream processing system may dynamically reconfigure the placement of the respective computational tasks. Overloads and bottlenecks may be favorably avoided.
  • a first site is a site where data of the number of data streams is acquired from an industrial facility
  • the second site is a site where data of the number of data streams is presented in a user interface
  • a third site is a data center including a number of computing resources; and the plurality of computing sites includes the third site and at least one of the first site and the second site.
  • the industrial facility may be a gas turbine, a wind turbine, a power plant, a steel mill, or any other type of industrial facility in which at least one sensor provides a data stream of sensor data.
  • a programmable logic controller, an industrial PC or the like may be located at the first site and may be configured to acquire the sensor data.
  • the programmable logic controller, industrial PC or the like may constitute a computing resource that may be used to execute one or more of the plurality of computational tasks. For example, raw sensor data preprocessing may be advantageously executed on the programmable logic controller or the industrial PC at the first site.
  • the second site may be an office where a personal computer, such as a laptop PC, a desktop PC or a workstation, of an operator is located.
  • the user interface may be displayed to the operator of the personal computer, for example in a web client executed on the personal computer.
  • the personal computer may comprise a computing resource that may be used to execute one or more of the plurality of computational tasks. For example, visualization processing may be advantageously executed on the personal computer at the second site.
  • the third site comprising the data center may be located remotely from the first site and/or the second site.
  • the data center may provide a high processing capability; however, a cost of transferring data to and from the data center and/or of performing computation on the data center may likewise be high.
  • the industrial data stream processing system may determine, for each of the computational tasks, whether to place the respective computational task on a computing resource of the first site, the second site or the third site.
  • the system may advantageously reduce a cost incurred for transferring data streams to the third site and processing data streams by computing resources of the third site, while at the same time preventing an overload from occurring on the computing resources of the first site and the second site.
  • the system may flexibly respond to changing loads experienced in the system.
  • the system may flexibly respond to changes of its configuration, such as the registration of additional computational tasks, changes or deletion of existing computational tasks, and the addition, change or removal of computing resources and/or communication links, by redeploying the plurality of computational tasks in response to such changes.
  • the respective entity, e.g. the registry device and the computation placement control device, may be implemented in hardware and/or in software.
  • Any embodiment of the first aspect may be combined with any embodiment of the first aspect to obtain another embodiment of the first aspect.
  • an industrial data stream processing method for processing a number of data streams flowing from a first site to a second site by executing a number of computational tasks with the number of data streams using a plurality of computing resources distributed over a plurality of computing sites linked by a number of communication links comprises: registering, in a registry device, a plurality of computation definitions each including a computational task and a requirements specification of the computational task; for each of the computational tasks, performing, using a computation placement control device, the steps of: a) determining one of the computing resources to execute the computational task based on the plurality of requirements specifications included in the plurality of computation definitions registered in the registry device and based on a plurality of capacity specifications of the plurality of computing resources and/or the number of communication links; b) placing the computational task on the determined computing resource; and c) causing the determined computing resource to execute the placed computational task.
  • a computer program product comprises a program code for executing the above-described industrial data stream processing method when run on at least one computer.
  • a computer program product such as a computer program means, may be embodied as a memory card, USB stick, CD-ROM, DVD or as a file which may be downloaded from a server in a network.
  • a file may be provided by transferring the file comprising the computer program product from a wireless communication network.
  • Fig. 1 shows a block diagram of an industrial data stream processing system according to a first exemplary embodiment
  • Fig. 2 shows a flow chart of an industrial data stream processing method according to the first exemplary embodiment
  • Fig. 3 shows a block diagram of an industrial data stream processing system according to a second exemplary embodiment.
  • Fig. 1 shows a block diagram of an industrial data stream processing system 1 according to the first exemplary embodiment.
  • the industrial data stream processing system 1 of Fig. 1 comprises a first computing resource 31 located at a first computing site (first site) 21 and a second computing resource 32 located at a second computing site (second site) 22.
  • the first site 21 and the second site 22 are linked by a communication link 41.
  • a data stream 11 is shown flowing from the first site 21 through the first computing resource 31, the communication link 41 and the second computing resource 32 to the second site 22.
  • the industrial data stream processing system 1 further comprises a registry device 2 and a computation placement control device 3.
  • a plurality of computation definitions 51, 52 are registered in the registry device 2.
  • Each computation definition 51, 52 includes a respective computational task 61, 62 and a corresponding requirements specification 71, 72.
  • a capacity specification 81 of the first computing resource 31, a capacity specification 82 of the second computing resource 32 and a capacity specification 91 of the communication link 41 are pre-stored in the computation placement control device 3.
  • the computation placement control device 3 is communicably connected to the registry device 2 and to each of the computing resources 31, 32.
  • the industrial data stream processing system 1 shown in Fig. 1 is configured to execute the industrial data stream processing method visualized in Fig. 2, which will now be described with reference to Fig. 2 and Fig. 1.
  • in step S1, the computational task 61 and the requirements specification 71 of the computational task 61 are combined to form the computation definition 51.
  • the computational task 62 and the requirements specification 72 thereof are combined to form the computation definition 52.
  • the computation definitions 51 and 52 are registered in the registry device 2.
  • Fig. 2 schematically shows step S2 being repeated for each of the computational tasks 61, 62. That is, in step S2, the computation placement control device 3 determines a respective one of the computing resources 31, 32 to execute each of the computational tasks 61, 62. In doing so, for each of the computational tasks 61, 62, the computation placement control device 3 considers the requirements specifications 71, 72 of both computational tasks 61, 62, the capacity specifications 81, 82 of both computing resources 31, 32 and the capacity specification 91 of the communication link 41.
  • the computation placement control device 3 may determine the computing resource 31 as a computing resource to execute the computational task 61, and may determine the computing resource 32 as a computing resource to execute the computational task 62.
  • Fig. 2 schematically shows steps S3 and S4 being repeated for each of the computational tasks 61, 62. That is, in a first iteration, in step S3, the computation placement control device 3 places the first computational task 61 on the first computing resource 31, and in step S4, the computation placement control device 3 causes the first computing resource 31 to execute the placed first computational task 61. Likewise, in the second iteration, in step S3, the computation placement control device 3 places the second computational task 62 on the second computing resource 32, and in step S4, the computation placement control device 3 causes the second computing resource 32 to execute the placed second computational task 62.
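The iteration of steps S2 to S4 over the registered computational tasks can be sketched as follows. The sketch uses a deliberately simple placement policy (first computing resource with sufficient computational power); all names and the policy itself are illustrative assumptions, not the claimed algorithm.

```python
# Illustrative sketch of steps S2-S4; names and the placement policy
# are assumptions, not part of the claimed embodiment.

def determine(task, requirements, resources):
    """Step S2: pick the first computing resource whose capacity meets
    the task's required computational power (deliberately simple)."""
    for name, capacity in resources:
        if capacity["power"] >= requirements["power"]:
            return name
    raise RuntimeError("no suitable computing resource for " + task)

def run_placement(tasks, resources):
    """Steps S2-S4 repeated for each registered computational task."""
    placement = {}
    for task, requirements in tasks:
        resource = determine(task, requirements, resources)  # step S2
        placement[task] = resource                           # step S3: place
        # step S4 would send an execute instruction to `resource`
    return placement

# Two tasks and two resources, loosely mirroring tasks 61/62 and
# resources 31/32 of the first embodiment.
tasks = [("task61", {"power": 10}), ("task62", {"power": 50})]
resources = [("resource31", {"power": 20}), ("resource32", {"power": 100})]
placement = run_placement(tasks, resources)
# -> {'task61': 'resource31', 'task62': 'resource32'}
```

A real computation placement control device would weigh all requirement and capacity specifications, including bandwidth and cost, rather than computational power alone.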
  • the data stream 11 is processed by executing the computational task 61 with the data stream 11 on the first computing resource 31 at the first site 21. After that, the data stream 11 flows through the communication link 41 and arrives at the second site 22. At the second site 22, the data stream 11 is processed by executing the computational task 62 with the data stream 11 on the second computing resource 32. Thereby, the data stream 11 is processed using the plurality of computing resources 31, 32 distributed over the plurality of computing sites 21, 22.
  • step S3 depends on the specific contents of the registered requirements specifications 71, 72 and the pre-stored capacity specifications 81, 82, 91.
  • the computation placement control device 3 may also determine the first computing resource 31 as a computing resource to execute both the first and second computational tasks 61, 62 and may determine that no computational task is to be executed on the second computing resource 32.
  • the computation placement control device 3 may also determine the second computing resource 32 as a computing resource to execute the first and second computational tasks 61, 62 and may determine that no computational task is to be executed on the first computing resource 31.
  • the computation placement controller 3 may determine the computational task 61 to be favorably executed by the computing resource 31 so as to reduce the bandwidth of the data stream transmitted over the communication link 41.
  • the computation placement controller 3 may determine the computational task 61 to be favorably executed by the second computing resource 32 so as to avoid an overload situation on the first computing resource 31.
  • Fig. 3 shows a block diagram of an industrial data stream processing system 1 according to a second exemplary embodiment.
  • the industrial stream processing system 1 of Fig. 3 comprises a total of seven computing resources 31 to 37 distributed over a first site 21, a second site 22 and a third site 23.
  • the first site 21 includes three computing resources 31, 32, 33.
  • the first computing resource 31 is an embedded computer installed at the gas turbine 5 and configured to acquire data from a sensor (not shown) installed in the gas turbine 5 and form the data stream 11, which is representative of data acquired from the sensor installed in the gas turbine 5.
  • the second computing resource 32 is an embedded computer installed at the gas turbine 6 and configured to acquire data from a sensor (not shown) installed in the gas turbine 6 and form the data stream 12, which is representative of data acquired from the sensor installed in the gas turbine 6.
  • the third computing resource 33 is a workstation configured to act as a gateway proxy for relaying network traffic from the first site 21 via the communication link 41 to the third site 23.
  • the second site 22 includes a computing resource 37, which is, for example, a workstation.
  • the workstation 37 receives the data stream 11, 12, after the data stream 11, 12 has been processed by the computational tasks 61, 62 executed on the computing resources 31-36, via the second communication link 42 and presents the received data stream 11, 12 in a user interface on a display device or the like.
  • the third site 23 includes three computing resources 34, 35, 36.
  • the fourth computing resource 34 and the sixth computing resource 36 are workstations configured to act as respective gateway proxies for relaying network traffic from the third site 23 via respective communication links 41, 42 to the first site 21 and the second site 22, respectively.
  • the fifth computing resource 35, in the present example, is a server farm, which is conceptually visualized as one fifth computing resource 35, but may also be embodied as a plurality of server computers, as a cloud or the like.
  • the third site 23 also includes a registry device 2 and a computation placement control device 3. Although not shown in Fig. 3, the computation placement control device 3 is communicably connected to the registry device 2 and each of the computing resources 31-37.
  • a plurality of computation definitions 51, 52, ... each comprising a computational task 61, 62, ... and a corresponding requirements specification 71, 72, ... are registered in a registration database (not shown) implemented by the registry device 2.
  • a plurality of capacity specifications 81, 82, ... relating to the plurality of computing resources 31-37 and a plurality of capacity specifications 91, 92 relating to the plurality of communication links 41, 42 are not pre-stored in the computation placement control device 3, but are also registered in the registration database implemented by the registry device 2.
  • the registry device 2 is connected to an assistance device 4 and is configured to allow the assistance device 4, or an operator thereof to add, modify and delete entries of the registration database implemented by the registry device 2, such as the computation definitions 51, 52, ... and/or the capacity specifications 81, 82, ..., 91, 92, ....
  • upon detecting a change of an entry 51, 52, 81, 82, 91, 92 of the registration database, the registry device 2 notifies the computation placement control device 3 of the change.
  • upon being notified of the change, the computation placement control device 3 performs processing for determining a suitable placement of each of the computational tasks 61, 62, ..., placing each of the computational tasks 61, 62, ... on one of the computing resources 31-37, and causing each of the computing resources 31-37 to execute a respective computational task 61, 62, ... placed thereon.
  • the industrial data stream processing system 1 may advantageously be programmable in a simple manner.
  • an operator may cause deployment of a respective computational task 61, 62, ... on the industrial data stream processing system 1 by using the assistance device 4 to register a corresponding computation definition 51, 52 with the registry device 2.
  • the computation placement control device 3 may then automatically deploy the newly registered computational task on a suitable one of the computing resources 31-37.
  • a respective requirements specification 71, 72, ... may specify at least one of a required computing platform, a required latency, a required upstream bandwidth, a required downstream bandwidth, a required computational power, an expected computational load, an allowable computational cost, and an allowable bandwidth-related cost in relation to a corresponding computational task 61, 62.
  • a respective capacity specification 81, 82 of a respective computing resource 31-37 may specify at least one of a computing platform, a computational power, and a computational cost of the computing resource 31-37.
  • a respective capacity specification 91, 92 of a respective communication link 41, 42 may specify at least one of a latency, a bandwidth and a bandwidth-related cost of the communication link 41, 42.
  • the computation placement controller 3 determines the respective computing resources 31-37 to execute the respective computational tasks 61, 62 such that the requirements specified by the plurality of requirements specifications 71, 72 are met with corresponding capacities specified by the plurality of capacity specifications 81, 82, 91, 92.
  • the computation placement controller 3 determines respective computing resources 31-37 to execute the respective computational tasks 61, 62 such that a total cost parameter associated with executing the plurality of computational tasks 61, 62 with the number of data streams 11, 12 is reduced.
  • computational task 61 may be a data preprocessing task.
  • the corresponding requirements specification 71 may specify that a required downstream bandwidth of the computational task 61 is lower than a required upstream bandwidth, or in other words, the requirements specification 71 may state that execution of the preprocessing task 61 with a data stream 11, 12 reduces the bandwidth of the data stream 11, 12.
  • the capacity specification 91 relating to the first communication link 41 may specify a bandwidth-related cost of the communication link 41.
  • a cost of transmission incurred due to transmission of data over the communication link 41 may contribute to a total cost associated with executing the plurality of computational tasks 61, 62 with the number of data streams 11, 12.
  • the computation placement controller 3 may determine a respective instance of the data preprocessing task 61 to be placed on and executed by a respective embedded computer 31, 32.
  • the computation placement controller 3 may determine that the data preprocessing task 61 is to be placed on and executed by the proxy gateway 33 and/or on the server farm 35 and not by the respective embedded computer 31, 32. Thereby, it is ensured that requirements specified by the requirements specifications 71, 72, ... are met with corresponding capacities specified by the capacity specifications 81, 82.
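The trade-off described in the preceding bullets — placing the bandwidth-reducing preprocessing task 61 before the communication link versus on a cheaper computing resource after it — can be illustrated with a toy cost model. All numbers and names below are invented for the sketch; they are not taken from the embodiment.

```python
# Toy cost model: the preprocessing task can run at the edge (before the
# communication link, shrinking the stream) or on the cheaper server farm
# (after the link, with the raw stream). All numbers are invented.
COMPUTE_COST = {"edge": 5.0, "server": 1.0}   # cost of running the task
LINK_COST_PER_UNIT = 2.0                       # bandwidth-related cost
RAW_BANDWIDTH = 10.0
PREPROCESSED_BANDWIDTH = 2.0                   # preprocessing reduces bandwidth

def total_cost(placement):
    """Total cost parameter: computational cost plus transmission cost."""
    bandwidth_over_link = (PREPROCESSED_BANDWIDTH if placement == "edge"
                           else RAW_BANDWIDTH)
    return COMPUTE_COST[placement] + LINK_COST_PER_UNIT * bandwidth_over_link

def best_placement():
    """Choose the placement that minimizes the total cost parameter."""
    return min(("edge", "server"), key=total_cost)

# total_cost("edge") == 9.0, total_cost("server") == 21.0 -> "edge" wins
```

With these made-up figures the higher computational cost at the edge is outweighed by the saved transmission cost, matching the qualitative argument above.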
  • computational task 62 may be a visualization task that is configured to generate graphical visualization data from time-series data.
  • the corresponding requirements specification 72 may specify that a required downstream bandwidth of the computational task 62 is higher than a required upstream bandwidth, or in other words, the requirements specification 72 may state that execution of the visualization task 62 with a data stream 11, 12 increases the bandwidth of the data stream 11, 12.
  • the computation placement controller 3 may determine the visualization task 62 to be placed on and executed by the workstation 37.
  • the computation placement controller 3 may determine the data preprocessing task 61 to be placed on and executed by the server farm 35 and not by the workstation 37.
  • the embedded computer 31, the server farm 35 and the workstation 37 may implement a variety of computing platforms.
  • data preprocessing task 61 may be implemented using machine code for an embedded processor such as an ARM processor of the embedded computer 31.
  • the computation placement control device 3 may cause the embedded computer 31 to natively execute the data preprocessing task 61.
  • the computation placement control device 3 may install a simulation unit configured to simulate an embedded environment of the embedded computer 31 on the server farm 35 and may cause the server farm 35 to launch the simulation unit and use the simulation unit to execute the computational task 61 therein.
  • a computing platform, such as a processor type and operating system, of computing engine 31 may be specified in the corresponding capacity specification 81, and a computing platform of computing engine 35 may be specified in a corresponding capacity specification (not shown) of the computing engine 35.
  • a computing platform on which the computational task 61 may or may not be executed natively may be specified by the corresponding requirements specification 71.
  • visualization task 62 may be implemented using platform independent code, such as Java code.
  • in order to place the visualization task 62, the computation placement control device 3 may communicate with the respective computing engine 34, 35, 36, 37.
  • the computation placement control device 3 may install a corresponding Java Runtime Environment (JRE) on the respective computing engine 34, 35, 36, 37 and subsequently launch the JRE on the computing engine 34, 35, 36, 37 and cause the launched JRE to execute the placed visualization task 62.
  • the computation placement control device 3 communicates with the computing units 31-37 to receive information about an actual computation load experienced by a respective computing unit 31-37. Likewise, the computation placement control device 3 communicates with at least one of computing units 33, 34 and with at least one of computing units 36, 37 to receive information about an actual transmission load experienced by the communication links 41 and 42.
  • the computation placement control device 3 may monitor and respond to loads in the industrial stream processing system 1.
  • the computation placement control device 3 may repeat steps S2 to S4 shown in Fig. 2 in consideration of the monitored actual loads. For example, the computation placement control device 3 may decide to move one of the computational tasks 61, 62 from a computing engine 31-37 for which the monitored actual load has exceeded a predetermined threshold, such as 90%, for a predetermined amount of time, such as 120 seconds, to a different computing engine 31-37 for which the monitored actual load has not exceeded the predetermined threshold.
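A minimal sketch of the load-based migration trigger described above, assuming loads are sampled at a fixed interval with the most recent sample last; the sampling interval and the helper name are assumptions for illustration.

```python
def should_migrate(load_samples, threshold=0.9, duration=120, interval=10):
    """True if the monitored actual load exceeded `threshold` for the
    whole of the last `duration` seconds, assuming one load sample is
    taken every `interval` seconds (most recent sample last)."""
    needed = duration // interval
    recent = load_samples[-needed:]
    return len(recent) == needed and all(s > threshold for s in recent)
```

With the example values from the text (90% for 120 seconds), a computing engine reporting twelve consecutive 10-second samples above 0.9 would trigger a re-run of steps S2 to S4.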
  • an industrial stream processing system 1 including a registry device 2 and a computation placement device 3 which may advantageously automate the deployment, placement and/or execution of computational tasks registered in the registry device over a plurality of computing engines.
  • a cost and effort of designing, operating and modifying an industrial stream processing system may be significantly reduced.

Abstract

A system for processing data streams flowing from a first site to a second site comprises: multiple computing resources distributed over multiple computing sites linked by communication links; a registry configured to store multiple computation definitions each including a computational task and a requirements specification thereof; and a control device configured to, for each computational task: determine one of the computing resources to execute the computational task based on the plurality of requirement specifications included in the plurality of computation definitions registered in the registry and on multiple capacity specifications of the plurality of computing resources and/or the communication links; place the computational task on the determined computing resource; and cause the determined computing resource to execute the placed computational task. Ease and flexibility of performing industrial data stream processing are increased. A corresponding method and a computer program product are also proposed.

Description

INDUSTRIAL DATA STREAM PROCESSING SYSTEM AND METHOD
The present invention relates to the field of industrial sensor data processing, and, more particularly, to an industrial data stream processing system and method.
In recent years, server- and/or cloud-based stream processing systems have increasingly been used to acquire sensor data at an industrial facility, process the sensor data using a server farm or the like, and forward the processed sensor data stream to an operator for monitoring, visualization and the like. The server farm, the industrial facility and the operator accessing the processed sensor data may each be located at different sites remote from each other.
Processing the data stream may involve a number of computational tasks, such as aggregating, sorting, trending, analyzing, FFT (Fast Fourier Transform), visualization and the like, which may be deployed not only on the server farm, but also on various computing resources available at each of the sites. A suitable placement of each computational task in terms of speed and/or cost associated with processing and transfer of the data stream may depend on computational capacities available at each of the sites, bandwidth available between the sites and the like.
Therefore, traditionally, an architecture of a stream processing system is manually designed so as to fulfil a specific requirement for a specific job at hand.
It is one object of the present invention to increase the flexibility of performing industrial data stream processing.
Accordingly, an industrial data stream processing system for processing a number of data streams flowing from a first site to a second site by executing a number of computational tasks with the number of data streams comprises: a plurality of computing resources distributed over a plurality of computing sites linked by a number of communication links; a registry device configured to register a plurality of computation definitions each including a computational task and a requirements specification of the computational task; and a computation placement control device configured to, for each of the computational tasks, perform the steps of: a) determining one of the computing resources to execute the computational task based on the plurality of requirement specifications included in the plurality of computation definitions registered in the registry device and based on a plurality of capacity specifications of the plurality of computing resources and/or the number of communication links; b) placing the computational task on the determined computing resource; and c) causing the determined computing resource to execute the placed computational task.
The proposed system advantageously enables automated determination of a suitable placement of each of the computational tasks registered in the registry device on the plurality of distributed computing resources.
A data stream may refer to a sequence of digitally encoded signals used to represent information in transmission. More particularly, the digital encoded signals may comprise a plurality of data packets.
A computational task, as used herein, may refer to a sequence of instructions configured to, when executed on a computing resource, cause the computing resource to process a data stream.
A computing resource, as used herein, may refer to any computational device capable of processing a data stream when a corresponding computational task is executed thereon. "Processing a data stream", or "executing a computational task with the data stream", may refer to one or more of: receiving the data stream, or packets thereof, as input, decoding the information represented by the data stream, performing processing on the information represented by the data stream so as to compute processed information, encoding the processed information, and outputting the data stream, or packets thereof, representing the processed information.
The processing performed on information represented by a data stream in response to a computational task being executed with the data stream may include, as non-limiting examples, one or more of: filtering, shaping, sorting, ordering, indexing, qualifying, transforming, such as computing a Fast Fourier Transform (FFT), averaging, smoothing, trending, weighting, application of a filter, analyzing, visualizing or generating visualization data, and the like.
A site may refer to a physical location. In particular, a first site may refer to a site where information to be represented by a respective data stream is acquired. More specifically, the first site may be a site where an industrial facility is installed from which raw sensor data is acquired. A second site may refer to a site where the data stream is received and processed information is consumed. More particularly, the second site may refer to a site where remote operation, monitoring, analysis and the like is performed with respect to the industrial facility. A computing site may refer to a site where a number of computing resources is arranged. The plurality of computing sites may include the first site and/or the second site.
It is noted that the phrase "a number" shall be construed as a number of one or more. That is, raw data may be acquired at the first site and may be transmitted as a data stream from the first site, via the plurality of computing resources, to the second site. While passing through each of the computing resources, the data stream may be processed by a number of computational tasks placed on a respective computing resource. Processing the data stream may include altering the information represented by the data stream. Said altering may in particular comprise altering a bandwidth required for transmission of the data stream to a next computing resource.
A data stream may pass directly from one of the computing resources to another of the computing resources located at a same computing site.
In order to pass from one of the computing resources located at one site to another of the computing resources located at another site, a data stream may be transmitted over a communication link linking the one site and the other site.
It will be appreciated that a communication link may be characterized by having a predetermined capacity. Herein, the capacity may refer, for example, to a maximum bandwidth and/or a certain cost associated with transmission of a data stream having a certain bandwidth thereover. Likewise, a computing resource may be characterized by having a predetermined capacity. Herein, the capacity may refer, for example, to a maximum processing power and/or by a certain cost associated with processing a certain processing load thereon.
Therefore, a suitable placement of a respective of the plurality of computational tasks on the plurality of computing resources may depend on the specific requirements of the respective computational task and the specific capacities of the plurality of computing resources and/or the number of communication links. The registry device may implement a registry database. Registering a computational task and a requirements specification thereof in the registry device may comprise storing the computational task and the requirements specification thereof in the registry database. The registry device may be configured to enable computational tasks and requirements specifications thereof to be registered or stored, changed, and deleted in the registry database by communicating with the registry device using an application programming interface or API, a user interface or UI, or the like .
A requirements specification may refer to information or data specifying requirements with respect to a computation device on which the corresponding computational task may be placed and/or with respect to a communication link over which an incoming data stream input to the computational task and an outgoing data stream output by the computational task may be transmitted. A respective of the requirements may be related to a minimum capacity of the computing resource and/or communication link and/or may be related to a maximum load placed on the respective computing resource and/or communication link.
A respective requirements specification may be registered in the registry device in association with a corresponding computational task.
Likewise, a capacity specification may refer to information or data specifying capacities of a respective computing resource and/or a respective communication link.
A respective capacity specification may be pre-stored in the computation placement control device. Thereby, the computation placement control device may possess knowledge about a hardware configuration of the industrial data stream processing system. The computation placement control device may be communicably connected to the registry device and to each of the computing resources.
By executing steps a), b) and c) for each of the computational tasks registered in the registry device, the computation placement control device may automatically create a working deployment of the plurality of computational tasks on the computing resources of the industrial data stream processing system in consideration of the respective requirements specifications and capacity specifications.
Thereby, the proposed industrial data stream processing system may reduce a burden of an operator tasked with designing computational tasks to be executed on the data streams.
Specifically, an operator may concentrate on programming a required computational task and specifying the requirements of the computational task. The operator may register the computational task and its requirements specifications with the registry device. The computation placement control device may handle proper placement of the computational task on one of the computing resources. The operator may be relieved from the burden of considering the requirements of other computational tasks and the available capacities and determining a suitable placement of the computational task on one of the plurality of the computing resources available in the industrial data stream processing system.
Thereby, ease and flexibility of creating computational tasks for industrial data stream processing may be enhanced.
Placing a computational task on a determined computing resource may include one or more of: transmitting the computational task to, storing the computational task on, and/or installing the computational task on the determined computing resource. Causing the determined computing resource to execute the placed computational task may include transmitting an instruction to the determined computing resource.
In response to receiving the instruction, the determined computing resource may, for example, start executing the placed computational task in a continuous loop polling for data packets to arrive, or may register the placed computational task for event-based execution. For example, a scheduler unit comprised by the computing resource may spawn or execute the computational task in response to arrival of a data packet of a respective data stream.
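The event-based variant described above — a scheduler unit invoking a placed computational task upon arrival of each data packet — might be sketched as follows; the class and method names are illustrative assumptions.

```python
class Scheduler:
    """Toy scheduler unit: a placed computational task registered for
    event-based execution is invoked on arrival of each data packet."""

    def __init__(self):
        self._tasks = {}

    def register(self, stream_id, task):
        """Register a placed computational task for a given data stream."""
        self._tasks[stream_id] = task

    def on_packet(self, stream_id, packet):
        """Spawn/execute the registered task when a packet arrives."""
        task = self._tasks.get(stream_id)
        return task(packet) if task is not None else None

scheduler = Scheduler()
scheduler.register("stream11", lambda packet: packet * 2)  # trivial "task"
```

Packets on streams with no registered task simply pass through unprocessed in this sketch; a real implementation would forward them toward the next computing resource.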
According to one embodiment, the registry device is further configured to register the plurality of capacity specifications of the plurality of computing resources and/or the plurality of communication links.
That is, not only the requirements specifications, but also the capacity specifications may be registered or stored, changed and deleted in the registry database.
Thus, advantageously, flexibility in reconfiguring the hardware, or the computing resources and their arrangement, within the industrial data stream processing system is provided.
According to a further embodiment, the computation placement control device is configured to repeat the steps a), b) and c) in response to a change of the registry device.
A change of the registry device may refer to a change of information registered in the registry device, such as a change of a computational task, a requirements specification and/or a capacity specification registered in the registry database.
In particular, repeating steps a), b) and c) in response to a change of the registry device may include repeating steps a), b) and c) for each of the computational tasks registered in the registry device after the change of the registry device.
Therein, for example, an already placed computational task may be moved from its current computing resource to a different computing resource.
Thereby, the industrial data stream processing system may advantageously enable dynamic reconfiguration whenever a computational task is registered, changed, added or deleted and/or whenever a computing resource or a computation link is added, changed or deleted.
According to a further embodiment, the plurality of computing resources is a plurality of heterogeneous computing resources implementing a variety of computing platforms.
Herein, a computing platform may refer to one or more of: a hardware architecture of the computing resource, an operating system of the computing resource, a number of runtime environments available on the computing resource, and the like.
Thereby, advantageously, different kinds of computing platforms may be involved in processing the number of data streams. For example, a computational task may be placed on one or more of a PLC installed at an industrial facility at the first site, a personal computer used by an operator at the second site, a high-performance computer installed in a server farm at a third site, etc. The computation placement control device may decide which computational task to place on which computing platform in consideration of the requirement specifications, the capacity specifications, the computing platforms and the like.
According to a further embodiment, if a respective of the computational tasks is native to the computing platform of the determined computing resource, step c) includes natively executing the placed computational task on the determined computing resource .
"Native/natively" , herein, may refer to a case wherein instructions comprised by the computational task are directly compatible with properties of the determined computing resource. For example, the instructions may correspond to an instruction set of a processing unit of the computing resource and may use an application programming interface of an operating system of the computing resource.
Native execution may advantageously maximize leverage of the performance offered by the determined computing resource.
According to a further embodiment, if a respective of the computational tasks is non-native to the computing platform of the determined computing resource, step b) includes installing, on the determined computing resource, a runtime environment configured to enable execution of the computational task by the determined computing resource, and step c) includes launching the runtime environment on the determined computing resource and causing the determined computing resource to execute the placed computational task using the runtime environment.
A runtime environment may comprise instructions native to the determined computing resource and configured to enable execution of non-native instructions comprised by the computational task.
For example, the computational task may comprise platform-independent instructions, such as Java bytecode instructions, Python instructions and the like. A corresponding runtime environment may comprise a Java Runtime Environment, a Python interpreter, and the like.
Alternatively, the computational task may comprise platform-dependent instructions native to a different one of the computing platforms present in the industrial data stream processing system. A corresponding runtime environment may comprise an emulator emulating the different one of the computing platforms on the computing platform of the determined computing resource.
Thereby, flexibility of placement of the computational tasks on the plurality of computing resources may be further increased.
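The decision among native execution, a runtime environment for platform-independent code, and an emulator for foreign native code might be sketched as follows; the platform labels and the function name are illustrative assumptions.

```python
def execution_mode(task_platform, resource_platform,
                   interpreted=("java", "python")):
    """Decide how a determined computing resource runs a placed task:
    natively when the platforms match, via an installed runtime
    environment for platform-independent code, and via an emulator
    for code native to a different computing platform."""
    if task_platform == resource_platform:
        return "native"
    if task_platform in interpreted:
        return "runtime-environment"
    return "emulator"
```

For example, an ARM machine-code task placed on an ARM embedded computer runs natively, a Java task on an x86 server runs in a JRE, and the same ARM task placed on the x86 server would require an emulator.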
According to a further embodiment, a respective requirements specification specifies at least one of a required computing platform, a required latency, a required upstream bandwidth, a required downstream bandwidth, a required computational power, an expected computational load, an allowable computational cost, an allowable bandwidth-related cost; a respective capacity specification of a respective computing resource specifies at least one of a computing platform, a computational power, and a computational cost of the computing resource; and a respective capacity specification of a respective communication link specifies at least one of a latency, a bandwidth and a bandwidth-related cost of the communication link.
Specifically, herein, "latency" may refer to a time allowed for processing and/or transmission. "Upstream bandwidth" may refer to a bandwidth of an incoming data stream used as an input to the computational task; and "downstream bandwidth" may refer to a bandwidth of an outgoing data stream provided as an output of the computational task. "Bandwidth" may refer to an amount of data to be transmitted per time unit and may be specified in units of byte per second or the like. "Computational power" may refer to a processing power specified in oscillations per second, floating point operations per second or the like. "Computational load" may refer to an amount of operations executed per time unit, an amount of memory allocated or the like. "Bandwidth-related cost" may refer to a cost associated with transmitting a predetermined amount of data over a respective communication link used to transmit the incoming and/or the outgoing data stream. "Computational cost" may refer to a cost associated with performing a predetermined amount of processing, specified as a number of operations, as an amount of reserved processing time or reserved memory, or the like. Herein, a "cost" may refer to any parameter related to execution of the plurality of computational tasks that is desired to be minimized.
The computation placement control device may advantageously be configured to consider the above-mentioned requirement specifications of the plurality of computational tasks and the above-mentioned capacity specifications of the plurality of computing resources and/or the number of communication links when determining a suitable placement of each of the plurality of computational tasks.
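Merely as an illustrative sketch of how such requirement and capacity specifications might be represented in software (all field names, types and units below are hypothetical examples, not part of the claimed system), the specifications could be modeled as simple records:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequirementsSpecification:
    """Requirements of one computational task; None means 'unspecified'."""
    required_platform: Optional[str] = None       # e.g. "jvm", "arm-embedded"
    required_latency_ms: Optional[float] = None   # time allowed for processing
    upstream_bandwidth: Optional[float] = None    # bytes/s of incoming stream
    downstream_bandwidth: Optional[float] = None  # bytes/s of outgoing stream
    required_power: Optional[float] = None        # e.g. FLOP/s needed
    expected_load: Optional[float] = None         # operations per time unit
    allowable_comp_cost: Optional[float] = None   # max computational cost
    allowable_bw_cost: Optional[float] = None     # max bandwidth-related cost

@dataclass
class ResourceCapacity:
    """Capacity specification of one computing resource."""
    platform: str     # computing platform of the resource
    power: float      # available computational power
    comp_cost: float  # cost per unit of processing on this resource

@dataclass
class LinkCapacity:
    """Capacity specification of one communication link."""
    latency_ms: float  # transmission latency
    bandwidth: float   # bytes per second
    bw_cost: float     # cost per byte transmitted
```

Such records would then be registered in the registry device and consulted by the computation placement control device when determining a placement.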
According to a further embodiment, step a) includes determining the respective computing resources to execute the respective computational tasks such that the requirements specified by the plurality of computation definitions are met with corresponding capacities specified by the plurality of capacity specifications.
Thereby, a case where overload, stalling, delays, etc. are experienced on one or more of the computing resources and/or one or more of the number of communication links may advantageously be avoided.
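As a non-limiting sketch of the feasibility check just described (function and dictionary names are hypothetical; only the computational-power requirement is checked here, for brevity), a candidate placement may be validated by summing, per computing resource, the required powers of the tasks placed on it:

```python
from collections import defaultdict

def placement_is_feasible(placement, required_power, resource_power):
    """Return True if, for every computing resource, the summed required
    computational power of the tasks placed on it does not exceed the
    power stated in that resource's capacity specification.

    placement:      {task_id: resource_id}
    required_power: {task_id: power required by the task}
    resource_power: {resource_id: power available on the resource}
    """
    used = defaultdict(float)
    for task, resource in placement.items():
        used[resource] += required_power[task]
    return all(used[r] <= cap for r, cap in resource_power.items())
```

Analogous checks could be applied per communication link for bandwidth requirements.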
According to a further embodiment, step a) includes determining the respective computing resources to execute the respective computational tasks such that a total cost parameter associated with executing the plurality of computational tasks with the number of data streams is reduced.
The total cost parameter may be one of a total runtime, a total latency, and/or a total cost incurred while processing a predetermined portion of each of the number of data streams.
For example, the computation placement control device may be configured to calculate the total cost parameter for a plurality of candidate deployments and determine a deployment in which the desired parameter is reduced. In particular, the computation placement control device may employ a method such as iterating through all candidate deployments, using a method of steepest descent, or the like.
A deployment, herein, may refer to a specific placement of each of the plurality of computational tasks.
That is, the industrial data stream processing system may advantageously be capable of automatically determining an optimized deployment.
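One possible, greatly simplified realization of the candidate-deployment search mentioned above is exhaustive iteration over all placements (the callables and their signatures are assumptions for illustration; a steepest-descent method would replace the loop for larger systems):

```python
import itertools

def best_deployment(tasks, resources, total_cost, is_feasible):
    """Iterate through all candidate deployments (one resource per task)
    and return the feasible deployment with the lowest total cost
    parameter, together with that cost.

    total_cost(deployment)  -> float: cost of a candidate deployment
    is_feasible(deployment) -> bool:  capacity constraints satisfied?
    """
    best, best_cost = None, float("inf")
    for combo in itertools.product(resources, repeat=len(tasks)):
        deployment = dict(zip(tasks, combo))
        if not is_feasible(deployment):
            continue  # skip deployments that would overload a resource
        cost = total_cost(deployment)
        if cost < best_cost:
            best, best_cost = deployment, cost
    return best, best_cost
```

Note that the search space grows as |resources|^|tasks|, which is why the text also mentions gradient-style methods as an alternative.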
According to a further embodiment, the computation placement control device is configured to monitor an actual load experienced by each of the plurality of computing resources and/or an actual load experienced by each of the plurality of communication links.
For example, a respective computing resource may be configured to measure a load experienced by the computing resource and/or a load experienced by a communication link coupled to the computing resource and may be configured to transmit the measured loads to the computation placement control device.
Herein, load may refer to a relative load, such as a percentage utilization of an available processing power of a respective computing resource and/or a percentage utilization of an available bandwidth of a respective communication link, and/or to an absolute load, such as a number of operations carried out per time unit by a processing unit of a respective computing resource and/or an amount of data transmitted per time unit over a respective communication link.
Thereby, for example, an overload situation, a bottleneck and the like may favorably be detected.
According to a further embodiment, step a) includes determining the respective computing resources to execute the respective computational tasks further based on the monitored actual loads.
That is, for example, the industrial data stream processing system may be configured to determine an optimum placement of the computational tasks by trial and error, by adapting the deployment until all of the monitored loads are within an acceptable range.
According to a further embodiment, the computation placement control device is configured to repeat steps a), b) and c) in response to a monitored actual load exceeding a predetermined threshold for a predetermined amount of time.
That is, the industrial data stream processing system may advantageously be able to adapt itself to changing loads experienced by the plurality of computing resources and/or the number of communication links. In response to a change in the monitored loads, the industrial data stream processing system may dynamically reconfigure the placement of the respective computing resources. Overloads and bottlenecks may be favorably avoided.
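The threshold-for-duration trigger described in this embodiment may be sketched as follows (the class, its interface, and the injectable clock are hypothetical; the 90% and 120 s defaults mirror the example values given later in the description):

```python
import time

class OverloadDetector:
    """Signal redeployment when a monitored load stays above a threshold
    for at least a minimum hold time."""

    def __init__(self, threshold=0.9, hold_seconds=120.0, clock=time.monotonic):
        self.threshold = threshold
        self.hold_seconds = hold_seconds
        self.clock = clock       # injectable for testing
        self._since = None       # time the load first exceeded the threshold

    def update(self, load):
        """Feed one load sample; return True when steps a)-c) should be repeated."""
        now = self.clock()
        if load <= self.threshold:
            self._since = None   # load recovered: reset the timer
            return False
        if self._since is None:
            self._since = now    # threshold just crossed: start the timer
        return now - self._since >= self.hold_seconds
```

The computation placement control device would keep one such detector per monitored computing resource and communication link.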
According to a further embodiment, a first site is a site where data of the number of data streams is acquired from an industrial facility, the second site is a site where data of the number of data streams is presented in a user interface, and a third site is a data center including a number of computing resources; and the plurality of computing sites includes the third site and at least one of the first site and the second site.
The industrial facility may be a gas turbine, a wind turbine, a power plant, a steel mill, or any other type of industrial facility in which at least one sensor provides a data stream of sensor data. Herein, a programmable logic controller, an industrial PC or the like may be located at the first site and may be configured to acquire the sensor data. The programmable logic controller, industrial PC or the like may constitute a computing resource that may be used to execute one or more of the plurality of computational tasks. For example, raw sensor data preprocessing may be advantageously executed on the programmable logic controller or the industrial PC at the first site.
The second site may be an office where a personal computer, such as a laptop PC, a desktop PC or a workstation, of an operator is located. The user interface may be displayed to the operator of the personal computer, for example in a web client executed on the personal computer. The personal computer may comprise a computing resource that may be used to execute one or more of the plurality of computational tasks. For example, visualization processing may be advantageously executed on the personal computer at the second site.
The third site comprising the data center may be located remotely from the first site and/or the second site. The data center may provide a high processing capability; however, a cost of transferring data to and/or from the data center and/or of performing computation on the data center may likewise be high.
According to the present embodiment, the industrial data stream processing system may determine, for each of the computational tasks, whether to place the respective computational task on a computing resource of the first site, the second site or the third site. Herein, for example, the system may advantageously reduce a cost incurred for transferring data streams to the third site and processing data streams by computing resources of the third site, while at the same time preventing an overload from occurring on the computing resources of the first site and the second site. The system may flexibly respond to changing loads experienced in the system. The system may flexibly respond to changes of its configuration, such as the registration of additional computational tasks, changes or deletion of existing computational tasks, and the addition, change or removal of computing resources and/or communication links, by redeploying the plurality of computational tasks in response to such changes.
The respective entity, e.g. the registry device and the computation placement control device, may be implemented in hardware and/or in software. If said entity is implemented in hardware, it may be embodied as a device, e.g. as a computer or as a processor or as a part of a system, e.g. a computer system. If said entity is implemented in software it may be embodied as a computer program product, as a function, as a routine, as a program code or as an executable object.
Any embodiment of the first aspect may be combined with any embodiment of the first aspect to obtain another embodiment of the first aspect.
According to a further aspect, an industrial data stream processing method for processing a number of data streams flowing from a first site to a second site by executing a number of computational tasks with the number of data streams using a plurality of computing resources distributed over a plurality of computing sites linked by a number of communication links comprises: registering, in a registry device, a plurality of computation definitions each including a computational task and a requirements specification of the computational task; for each of the computational tasks, performing, using a computation placement control device, the steps of: a) determining one of the computing resources to execute the computational task based on the plurality of requirement specifications included in the plurality of computation definitions registered in the registry device and based on a plurality of capacity specifications of the plurality of computing resources and/or the number of communication links; b) placing the computational task on the determined computing resource; and c) causing the determined computing resource to execute the placed computational task.
The embodiments and features described with reference to the proposed industrial data stream processing system apply mutatis mutandis to the proposed industrial data stream processing method.
According to a further aspect, a computer program product comprises a program code for executing the above-described industrial data stream processing method when run on at least one computer.
A computer program product, such as a computer program means, may be embodied as a memory card, USB stick, CD-ROM, DVD or as a file which may be downloaded from a server in a network. For example, such a file may be provided by transferring the file comprising the computer program product from a wireless communication network.
Further possible implementations or alternative solutions of the invention also encompass combinations - that are not explicitly mentioned herein - of features described above or below with regard to the embodiments. The person skilled in the art may also add individual or isolated aspects and features to the most basic form of the invention. Further embodiments, features and advantages of the present invention will become apparent from the subsequent description and dependent claims, taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a block diagram of an industrial data stream processing system according to a first exemplary embodiment;
Fig. 2 shows a flow chart of an industrial data stream processing method according to the first exemplary embodiment; and
Fig. 3 shows a block diagram of an industrial data stream processing system according to a second exemplary embodiment.
In the Figures, like reference numerals designate like or functionally equivalent elements, unless otherwise indicated.
Fig. 1 shows a block diagram of an industrial data stream processing system 1 according to the first exemplary embodiment .
The industrial data stream processing system 1 of Fig. 1 comprises a first computing resource 31 located at a first computing site (first site) 21 and a second computing resource 32 located at a second computing site (second site) 22. The first site 21 and the second site 22 are linked by a communication link 41. A data stream 11 is shown flowing from the first site 21 through the first computing resource 31, the communication link 41 and the second computing resource 32 to the second site 22.
The industrial data stream processing system 1 further comprises a registry device 2 and a computation placement control device 3. A plurality of computation definitions 51, 52 are registered in the registry device 2. Each computation definition 51, 52 includes a respective computational task 61, 62 and a corresponding requirements specification 71, 72. A capacity specification 81 of the first computing resource 31, a capacity specification 82 of the second computing resource 32 and a capacity specification 91 of the communication link 41 are pre-stored in the computation placement control device 3.
The computation placement control device 3 is communicably connected to the registry device 2 and to each of the computing resources 31, 32.
The industrial data stream processing system 1 shown in Fig. 1 is configured to execute the industrial data stream processing method visualized in Fig. 2, which will now be described with reference to Fig. 2 and Fig. 1.
In step S1, the computational task 61 and the requirements specification 71 of the computational task 61 are combined to form the computation definition 51. Likewise, the computational task 62 and the requirements specification 72 thereof are combined to form the computation definition 52. The computation definitions 51 and 52 are registered in the registry device 2.
Fig. 2 schematically shows step S2 being repeated for each of the computational tasks 61, 62. That is, in step S2, the computation placement control device 3 determines a respective one of the computing resources 31, 32 to execute each of the computational tasks 61, 62. In doing so, for each of the computational tasks 61, 62, the computation placement control device 3 considers the requirements specifications 71, 72 of both computational tasks 61, 62, the capacity specifications 81, 82 of both computing resources 31, 32 and the capacity specification 91 of the communication link 41.
In this way, a suitable deployment (distribution, placement) of the computational tasks 61, 62 over the computing resources 31, 32 is determined. For example, the computation placement control device 3 may determine the computing resource 31 as a computing resource to execute the computational task 61, and may determine the computing resource 32 as a computing resource to execute the computational task 62.
Fig. 2 schematically shows steps S3 and S4 being repeated for each of the computational tasks 61, 62. That is, in a first iteration, in step S3, the computation placement control device 3 places the first computational task 61 on the first computing resource 31, and in step S4, the computation placement control device 3 causes the first computing resource 31 to execute the placed first computational task 61. Likewise, in the second iteration, in step S3, the computation placement control device 3 places the second computational task 62 on the second computing resource 32, and in step S4, the computation placement control device 3 causes the second computing resource 32 to execute the placed second computational task 62.
In this way, the data stream 11 is processed by executing the computational task 61 with the data stream 11 on the first computing resource 31 at the first site 21. After that, the data stream 11 flows through the communication link 41 and arrives at the second site 22. At the second site 22, the data stream 11 is processed by executing the computational task 62 with the data stream 11 on the second computing resource 32. Thereby, the data stream 11 is processed using the plurality of computing resources 31, 32 distributed over the plurality of computing sites 21, 22.
It is noted that a result of the determination of step S2 depends on the specific contents of the registered requirements specifications 71, 72 and the pre-stored capacity specifications 81, 82, 91. Depending on what is registered in the registry device 2, for example, the computation placement control device 3 may also determine the first computing resource 31 as a computing resource to execute both the first and second computational tasks 61, 62 and may determine that no computational task is to be executed on the second computing resource 32. The computation placement control device 3 may also determine the second computing resource 32 as a computing resource to execute the first and second computational tasks 61, 62 and may determine that no computational task is to be executed on the first computing resource 31.
Merely as an example, if a bandwidth of the communication link 41 indicated by the capacity specification 91 is low and the requirements specification 71 indicates that the computational task 61 has a high input bandwidth but a low output bandwidth, the computation placement controller 3 may determine the computational task 61 to be favorably executed by computing resource 31 so as to reduce a bandwidth transmitted over the communication link 41.
However, again, merely as an example, if a bandwidth of the communication link 41 indicated by the capacity specification 91 is high and the requirements specification 71 indicates that the computational task 61 has a high input bandwidth but also causes a high computational load, and if the capacity specification 82 indicates that the second computing resource 32 has a high computational power, the computation placement controller 3 may determine the computational task 61 to be favorably executed by the second computing resource 32 so as to avoid an overload situation on the first computing resource 31.
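The two examples above suggest a simple rule of thumb, sketched below as a toy helper (purely illustrative, not the claimed placement logic, and ignoring the computational-load consideration of the second example): a task that reduces the bandwidth of its stream is preferably placed before the communication link, a task that increases it after the link.

```python
def prefer_site(upstream_bw, downstream_bw):
    """Return which side of a costly communication link a task should
    preferably run on, based only on its bandwidth requirements:
    'source' if the task shrinks its data stream, 'sink' if it grows it."""
    return "source" if downstream_bw < upstream_bw else "sink"
```

In the full system this preference would be weighed against computational-power capacities, as the second example shows.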
Fig. 3 shows a block diagram of an industrial data stream processing system 1 according to a second exemplary embodiment. The industrial stream processing system 1 of Fig. 3 comprises a total of seven computing resources 31 to 37 distributed over a first site 21, a second site 22 and a third site 23.
At the first site 21, two industrial facilities 5, 6, such as gas turbines and the like, are installed. The first site 21 includes three computing resources 31, 32, 33. The first computing resource 31 is an embedded computer installed at the gas turbine 5 and configured to acquire data from a sensor (not shown) installed in the gas turbine 5 and form the data stream 11, which is representative of data acquired from the sensor installed in the gas turbine 5. The second computing resource 32 is an embedded computer installed at the gas turbine 6 and configured to acquire data from a sensor (not shown) installed in the gas turbine 6 and form the data stream 12, which is representative of data acquired from the sensor installed in the gas turbine 6. The third computing resource 33 is a workstation configured to act as a gateway proxy for relaying network traffic from the first site 21 via the communication link 41 to the third site 23.
The second site 22 includes a computing resource 37, which is, for example, a workstation. The workstation 37 receives the data stream 11, 12, after the data stream 11, 12 has been processed by the computational tasks 61, 62 executed on the computing resources 31-36, via the second communication link 42 and presents the received data stream 11, 12 in a user interface on a display device or the like.
The third site 23 includes three computing resources 34, 35, 36. The fourth computing resource 34 and the sixth computing resource 36 are workstations configured to act as respective gateway proxies for relaying network traffic from the third site 23 via respective communication links 41, 42 to the first site 21 and the second site 22, respectively. The fifth computing resource 35, in the present example, is a server farm 35, which is conceptually visualized as one fifth computing resource 35, but may also be embodied as a plurality of server computers, as a cloud or the like.
Further, the third site 23 also includes a registry device 2 and a computation placement control device 3. Although not shown in Fig. 3, the computation placement control device 3 is communicably connected to the registry device 2 and each of the computing resources 31-37.
Similar to the first exemplary embodiment, also in the second exemplary embodiment, a plurality of computation definitions 51, 52, ... each comprising a computational task 61, 62, ... and a corresponding requirements specification 71, 72, ... are registered in a registration database (not shown) implemented by the registry device 2.
Unlike in the first exemplary embodiment, in the second exemplary embodiment, a plurality of capacity specifications 81, 82, ... relating to the plurality of computing resources 31-37 and a plurality of capacity specifications 91, 92 relating to the plurality of communication links 41, 42 are not pre-stored in the computation placement control device 3, but are likewise registered in the registration database implemented by the registry device 2.
The registry device 2 is connected to an assistance device 4 and is configured to allow the assistance device 4, or an operator thereof, to add, modify and delete entries of the registration database implemented by the registry device 2, such as the computation definitions 51, 52, ... and/or the capacity specifications 81, 82, ..., 91, 92, .... Upon detecting a change of an entry 51, 52, 81, 82, 91, 92 of the registration database, the registry device 2 notifies the computation placement control device 3 of the change.
In response to being notified by the registry device 2 of the change, the computation placement control device 3 executes (in case of a follow-up change: repeats execution of) the steps S2, S3 and S4 of the method shown in Fig. 2, which has been described in connection with the first exemplary embodiment, for each of the computational tasks 61, 62, ... registered in the registry device 2 after the change.
In other words, whenever the registry device 2 detects a change, the computation placement control device 3 performs processing for determining a suitable placement of each of the computational tasks 61, 62, ..., placing each of the computational tasks 61, 62, ... on one of the computing resources 31-37, and causing each of the computing resources 31-37 to execute a respective computational task 61, 62, ... placed thereon.
That is, the industrial data stream processing system 1 may advantageously be programmable in a simple manner.
Specifically, an operator may cause deployment of a respective computational task 61, 62, ... on the industrial data stream processing system 1 by using the assistance device 4 to register a corresponding computation definition 51, 52 with the registry device 2. The computation placement control device 3 may deploy the newly registered computational task 61, 62, ... on a suitable one of the computing resources 31-37 and may further reconfigure the industrial data stream processing system 1 (such as move an already deployed computational task 61, 62, ... to a different computing resource 31-37) as suitably needed. Various further developments of the second exemplary embodiment will now be described with reference to Fig. 3.
According to one further development, a respective requirements specification 71, 72, ... may specify at least one of a required computing platform, a required latency, a required upstream bandwidth, a required downstream bandwidth, a required computational power, an expected computational load, an allowable computational cost, and an allowable bandwidth-related cost in relation to a corresponding computational task 61, 62. A respective capacity specification 81, 82 of a respective computing resource 31-37 may specify at least one of a computing platform, a computational power, and a computational cost of the computing resource 31-37. A respective capacity specification 91, 92 of a respective communication link 41, 42 may specify at least one of a latency, a bandwidth and a bandwidth-related cost of the communication link 41, 42.
According to one further development, the computation placement controller 3 determines the respective computing resources 31-37 to execute the respective computational tasks 61, 62 such that the requirements specified by the plurality of requirements specifications 71, 72 are met with corresponding capacities specified by the plurality of capacity specifications 81, 82, 91, 92.
According to one further development, the computation placement controller 3 determines respective computing resources 31-37 to execute the respective computational tasks 61, 62 such that a total cost parameter associated with executing the plurality of computational tasks 61, 62 with the number of data streams 11, 12 is reduced.
Specifically, as one example, computational task 61 may be a data preprocessing task. The corresponding requirements specification 71 may specify that a required downstream bandwidth of the computational task 61 is lower than a required upstream bandwidth, or in other words, the requirements specification 71 may state that execution of the preprocessing task 61 with a data stream 11, 12 reduces the bandwidth of the data stream 11, 12.
The capacity specification 91 relating to the first communication link 41 may specify a bandwidth-related cost of the communication link 41.
A cost of transmission incurred due to transmission of data over the communication link 41 may contribute to a total cost associated with executing the plurality of computational tasks 61, 62 with the number of data streams 11, 12. In order to reduce the cost of transmission, the computation placement controller 3 may determine a respective instance of the data preprocessing task 61 to be placed on and executed by a respective embedded computer 31, 32.
However, if a sum of a required computational power of the computational task 61 specified by the corresponding requirements specification 71 and required computational powers of further computational tasks (not shown) determined to be executed on the embedded computers 31, 32 exceeds (is not met by) a computational power specified by the capacity specifications 81, 82 of the embedded computers 31, 32, the computation placement controller 3 may determine that the data preprocessing task 61 is to be placed on and executed by the gateway proxy 33 and/or on the server farm 35 and not by the respective embedded computer 31, 32. Thereby, it is ensured that requirements specified by the requirements specifications 71, 72, ... are met with corresponding capacities specified by the capacity specifications 81, 82, ....
Further, according to an example, computational task 62 may be a visualization task that is configured to generate graphical visualization data from time-series data. The corresponding requirements specification 72 may specify that a required downstream bandwidth of the computational task 62 is higher than a required upstream bandwidth, or in other words, the requirements specification 72 may state that execution of the visualization task 62 with a data stream 11, 12 increases the bandwidth of the data stream 11, 12.
In order to reduce a cost of transmission incurred due to transmission of data over the communication link 42 and/or to avoid incurring a cost associated with processing computational tasks on the server farm 35, the computation placement controller 3 may determine the visualization task 62 to be placed on and executed by the workstation 37.
However, if a sum of an expected load caused by the computational task 62 specified by the corresponding requirements specification 72 and expected loads of further computational tasks (not shown) determined to be executed on the workstation 37 exceeds an acceptable load level specified by a capacity specification (not shown) of the workstation 37, the computation placement controller 3 may determine the visualization task 62 to be placed on and executed by the server farm 35 and not by the workstation 37.
According to one preferred development, the embedded computer 31, the server farm 35 and the workstation 37 may implement a variety of computing platforms.
For example, data preprocessing task 61 may be implemented using machine code for an embedded processor such as an ARM processor of the embedded computer 31. In a case where the computation placement control device 3 decides to place the data preprocessing task 61 on the embedded computer 31, the computation placement control device 3 may cause the embedded computer 31 to natively execute the data preprocessing task 61. However, in a case where the computation placement control device 3 decides to place the data preprocessing task 61 on the server farm 35, the computation placement control device 3 may install a simulation unit configured to simulate an embedded environment of the embedded computer 31 on the server farm 35 and may cause the server farm 35 to launch the simulation unit and use the simulation unit to execute the computational task 61 therein.
Herein, a computing platform, such as a processor type and operating system, of computing engine 31 may be specified in the corresponding capacity specification 81, and a computing platform of computing engine 35 may be specified in a corresponding capacity specification (not shown) of the computing engine 35. A computing platform on which computational task 61 may or may not be executed natively may be specified by the corresponding requirements specification 71.
As a further example, visualization task 62 may be implemented using platform-independent code, such as Java code. When the computation placement control device 3 decides to place the visualization task 62 on one of the computing engines 34, 35, 36, 37, the computation placement control device 3 may communicate with the respective computing engine 34, 35, 36, 37 and determine whether a Java Runtime Environment, JRE, is available on the determined computing engine 34, 35, 36, 37. If no JRE is available, the computation placement control device 3 may install a corresponding JRE on the respective computing engine 34, 35, 36, 37 and subsequently launch the JRE on the computing engine 34, 35, 36, 37 and cause the launched JRE to execute the placed visualization task 62.
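The check-install-launch sequence just described may be sketched as follows (the methods on `resource` and the `required_runtime` attribute on `task` are hypothetical placeholders for whatever management interface a computing engine exposes):

```python
def ensure_runtime_and_launch(resource, task):
    """Place a non-native task on a computing resource: verify that the
    runtime environment the task needs (e.g. a JRE for Java bytecode) is
    present, install it if missing, then launch the task inside it."""
    runtime = task.required_runtime          # e.g. "jre-11" (illustrative)
    if not resource.has_runtime(runtime):
        resource.install_runtime(runtime)    # one-time installation step b)
    resource.launch(runtime, task)           # execution step c)
```

On subsequent placements of tasks needing the same runtime, the installation step is skipped and the already-installed runtime is reused.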
According to one further development, the computation placement control device 3 communicates with the computing units 31-37 to receive information about an actual computation load experienced by a respective computing unit 31-37. Likewise, the computation placement control device 3 communicates with at least one of computing units 33, 34 and with at least one of computing units 36, 37 to receive information about an actual transmission load experienced by the communication links 41 and 42.
That is, in a situation where it may not be easy to determine a load expected to be experienced by the computation devices 31-37 and/or the communication links 41, 42 in advance based on the requirements specifications 71, 72, ... and the capacity specifications 81, 82, ..., 91, 92, ... registered in the registry device 2, the computation placement control device 3 may monitor and respond to loads in the industrial stream processing system 1.
For example, when one of the monitored actual loads exceeds a predetermined threshold, such as 90%, for a predetermined amount of time, such as 120 seconds, the computation placement control device 3 may repeat steps S2 to S4 shown in Fig. 2 in consideration of the monitored actual loads. For example, the computation placement control device 3 may decide to move one of the computational tasks 61, 62 from a computing engine 31-37 for which the monitored actual load has exceeded the predetermined threshold for the predetermined amount of time, to a different computing engine 31-37 for which the monitored actual load has not exceeded the predetermined threshold.
Although the present invention has been described in accordance with preferred embodiments, it is obvious for the person skilled in the art that modifications are possible in all embodiments. Any of the described preferred developments of the second exemplary embodiment may be combined with each other and/or with the first exemplary embodiment. Numbers, such as the number of computing engines 31-37, communication links 41, 42, computation definitions 51, 52, and capacity specifications 81, 82, 91, 92 illustrated in Fig. 1 and Fig. 3, are merely examples, and it is understood that in an actual use case, higher numbers of the respective entities may be used to advantage.
It is understood that the present disclosure relates to an industrial stream processing system 1 including a registry device 2 and a computation placement control device 3 which may advantageously automate the deployment, placement and/or execution of computational tasks registered in the registry device 2 over a plurality of computing engines. The cost and effort of designing, operating and modifying an industrial stream processing system may thereby be significantly reduced.
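For illustration of the placement determination of step a), a greedy matching of requirements specifications against capacity specifications may be sketched as follows; the field names and data model are illustrative assumptions, since the application does not prescribe a concrete format:

```python
# Hypothetical sketch of step a): for each computational task, select a
# computing resource whose capacity specification satisfies the task's
# requirements specification. Field names are illustrative only.

def satisfies(requirements, capacity):
    """True if a capacity specification meets a requirements specification."""
    return (capacity["platform"] == requirements["platform"]
            and capacity["power"] >= requirements["power"]
            and capacity["cost"] <= requirements["max_cost"])


def place(tasks, resources):
    """Greedy placement: first resource whose capacities meet the requirements."""
    placement = {}
    for task, req in tasks.items():
        for name, cap in resources.items():
            if satisfies(req, cap):
                placement[task] = name
                break
    return placement


tasks = {"task-61": {"platform": "x86", "power": 4, "max_cost": 10}}
resources = {
    "engine-31": {"platform": "arm", "power": 8, "cost": 5},
    "engine-34": {"platform": "x86", "power": 8, "cost": 7},
}
print(place(tasks, resources))  # {'task-61': 'engine-34'}
```

A production system would typically solve this as a constraint-optimization problem (also reducing a total cost parameter, cf. claim 9) rather than greedily, but the sketch shows the requirements-versus-capacity matching on which step a) is based.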
Reference Numerals:
1 industrial data stream processing system
2 registry device
3 computation placement control device
4 assistance device
5, 6 industrial facilities
11, 12 data streams
21, 22, 23 sites
31-37 computing resources
41, 42 communication links
51, 52 computation definitions
61, 62 computational tasks
71, 72 requirements specifications
81, 82 computing resource capacity specifications
91, 92 communication link capacity specifications
S1-S4 method steps

Claims

1. An industrial data stream processing system (1) for processing a number of data streams (11, 12) flowing from a first site (21) to a second site (22) by executing a number of computational tasks (61, 62) with the number of data streams
(11, 12), the system (1) comprising:
a plurality of computing resources (31-37) distributed over a plurality of computing sites (21-23) linked by a number of communication links (41, 42);
a registry device (2) configured to register a plurality of computation definitions (51, 52) each including a computational task (61, 62) and a requirements specification
(71, 72) of the computational task (61, 62);
a computation placement control device (3) configured to, for each of the computational tasks (61, 62) , perform the steps of:
a) determining one of the computing resources (31-37) to execute the computational task (61, 62) based on the plurality of requirement specifications (71, 72) included in the plurality of computation definitions (51, 52) registered in the registry device (2) and based on a plurality of capacity specifications (81, 82, 91, 92) of the plurality of computing resources (31-37) and/or the number of communication links (41, 42) ;
b) placing the computational task (61, 62) on the determined computing resource (31-37); and
c) causing the determined computing resource (31-37) to execute the placed computational task (61, 62) .
2. The system of claim 1, wherein the registry device (2) is further configured to register the plurality of capacity specifications (81, 82, 91, 92) of the plurality of computing resources (31-37) and/or the plurality of communication links (41, 42) .
3. The system of claim 1 or 2, wherein the computation placement control device (3) is configured to repeat the steps a), b) and c) in response to a change of the registry device (2).
4. The system of any of claims 1 - 3, wherein the plurality of computing resources (31-37) is a plurality of heterogeneous computing resources (31-37) implementing a variety of computing platforms.
5. The system of claim 4, wherein, if a respective one of the computational tasks (61, 62) is native to the computing platform of the determined computing resource (31-37), step c) includes natively executing the placed computational task (61, 62) on the determined computing resource (31-37).
6. The system of claim 4 or 5, wherein, if a respective one of the computational tasks (61, 62) is non-native to the computing platform of the determined computing resource (31-37), step b) includes installing, on the determined computing resource (31-37), a runtime environment configured to enable execution of the computational task (61, 62) by the determined computing resource (31-37), and step c) includes launching the runtime environment on the determined computing resource (31-37) and causing the determined computing resource (31-37) to execute the placed computational task (61, 62) using the runtime environment.
7. The system of any of claims 1 - 6, wherein
a respective requirements specification (71, 72) specifies at least one of a required computing platform, a required latency, a required upstream bandwidth, a required downstream bandwidth, a required computational power, an expected computational load, an allowable computational cost, and an allowable bandwidth-related cost;
a respective capacity specification (81, 82) of a respective computing resource specifies at least one of a computing platform, a computational power, and a computational cost of the computing resource; and
a respective capacity specification (91, 92) of a respective communication link (41, 42) specifies at least one of a latency, a bandwidth and a bandwidth-related cost of the communication link (41, 42) .
8. The system of any of claims 1 - 7, wherein step a) includes determining the respective computing resources (31-37) to execute the respective computational tasks (61, 62) such that the requirements specified by the plurality of requirements specifications (71, 72) are met with corresponding capacities specified by the plurality of capacity specifications (81, 82, 91, 92).
9. The system of any of claims 1 - 8, wherein step a) includes determining the respective computing resources (31-37) to execute the respective computational tasks (61, 62) such that a total cost parameter associated with executing the plurality of computational tasks (61, 62) with the number of data streams (11, 12) is reduced.
10. The system of any of claims 1 - 9, wherein the computation placement control device (3) is configured to monitor an actual load experienced by each of the plurality of computing resources (31-37) and/or an actual load experienced by each of the plurality of communication links (41, 42) .
11. The system of claim 10, wherein step a) includes determining the respective computing resources (31-37) to execute the respective computational tasks (61, 62) further based on the monitored actual loads.
12. The system of claim 10 or 11, wherein the computation placement control device (3) is configured to repeat steps a), b) and c) in response to a monitored actual load exceeding a predetermined threshold for a predetermined amount of time.
13. The system of any of claims 1 - 12, wherein the first site (21) is a site where data of the number of data streams (11, 12) is acquired from an industrial facility (5, 6), the second site (22) is a site where data of the number of data streams (11, 12) is presented in a user interface, and a third site (23) is a data center including a number of computing resources (34, 35, 36); and the plurality of computing sites (21, 22, 23) includes the third site (23) and at least one of the first site (21) and the second site (22).
14. An industrial data stream processing method for processing a number of data streams (11, 12) flowing from a first site
(21) to a second site (22) by executing a number of computational tasks (61, 62) with the number of data streams
(11, 12) using a plurality of computing resources (31-37) distributed over a plurality of computing sites (21-23) linked by a number of communication links (41, 42) ; the method comprising :
registering (S1), in a registry device (2), a plurality of computation definitions (51, 52) each including a computational task (61, 62) and a requirements specification
(71, 72) of the computational task (61, 62) ;
for each of the computational tasks (61, 62) , performing, using a computation placement control device (3) , the steps of :
a) determining (S2) one of the computing resources (31-37) to execute the computational task (61, 62) based on the plurality of requirement specifications (71, 72) included in the plurality of computation definitions (51, 52) registered in the registry device (2) and based on a plurality of capacity specifications (81, 82, 91, 92) of the plurality of computing resources (31-37) and/or the number of communication links (41, 42) ;
b) placing (S3) the computational task (61, 62) on the determined computing resource (31-37); and
c) causing (S4) the determined computing resource (31-37) to execute the placed computational task (61, 62) .
15. A computer program product comprising a program code for executing the method of claim 14 when run on at least one computer.
PCT/RU2018/000394 2018-06-14 2018-06-14 Industrial data stream processing system and method Ceased WO2019240607A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/RU2018/000394 WO2019240607A1 (en) 2018-06-14 2018-06-14 Industrial data stream processing system and method


Publications (1)

Publication Number Publication Date
WO2019240607A1 true WO2019240607A1 (en) 2019-12-19

Family

ID=63080452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2018/000394 Ceased WO2019240607A1 (en) 2018-06-14 2018-06-14 Industrial data stream processing system and method

Country Status (1)

Country Link
WO (1) WO2019240607A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015977A1 (en) * 2002-07-19 2004-01-22 International Business Machines Corporation Employing a resource broker in managing workloads of a peer-to-peer computing environment
US20130031545A1 (en) * 2011-07-28 2013-01-31 International Business Machines Corporation System and method for improving the performance of high performance computing applications on cloud using integrated load balancing
US8412822B1 (en) * 2004-01-27 2013-04-02 At&T Intellectual Property Ii, L.P. Optimized job scheduling and execution in a distributed computing grid
US20140173612A1 (en) * 2012-12-13 2014-06-19 Telefonaktiebolaget L M Ericsson (Publ) Energy Conservation and Hardware Usage Management for Data Centers



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18749599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18749599

Country of ref document: EP

Kind code of ref document: A1