
HK40011781B - Systems and methods for aggregating, filtering, and presenting streaming data - Google Patents

Systems and methods for aggregating, filtering, and presenting streaming data

Info

Publication number
HK40011781B
HK40011781B
Authority
HK
Hong Kong
Prior art keywords
data
client
service
query
snapshot
Prior art date
Application number
HK62020001323.8A
Other languages
Chinese (zh)
Other versions
HK40011781A (en)
Inventor
I‧斯拉文
M‧A‧莱格
R‧阿尔珀特
J‧V‧汤姆
Original Assignee
摩根大通国家银行
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 摩根大通国家银行 filed Critical 摩根大通国家银行
Publication of HK40011781A publication Critical patent/HK40011781A/en
Publication of HK40011781B publication Critical patent/HK40011781B/en


Description

System and method for aggregating, filtering, and presenting streaming data
Technical Field
The present disclosure relates generally to systems and methods for aggregating, filtering, and presenting streaming data.
Background
Both public and private clouds have become increasingly popular computing environments. However, these environments do not support streaming data well due to variable latency between the host system and the remote operating environment. Furthermore, the cost of pushing terabytes of streaming market data into various clouds can result in very high "transmission" fees from the provider, thereby reducing the value proposition of using the cloud for data-driven applications.
Disclosure of Invention
Systems and methods for aggregating, filtering, and presenting streaming data are disclosed. In one embodiment, a method for presenting streaming data may include: (1) receiving, at a web service layer for a server comprising at least one computer processor, a query from a client, wherein the query comprises a plurality of parameters; (2) receiving, at a data cache layer for the server, streaming data from at least one predefined streaming data source; (3) the data cache layer merging the streaming data for each of the plurality of parameters; (4) the data cache layer aggregating the merged data; (5) the data cache layer generating a snapshot of the merged data by running the query against the merged data simultaneously; and (6) outputting the snapshot to the client.
In one embodiment, the parameters may include a specific descriptor for at least one of securities and investments.
In one embodiment, the query may also include an identification of the source of the streaming data.
In one embodiment, the stream data may include market data.
In one embodiment, the web services layer may output a snapshot delayed by a predetermined amount of time, wherein the time period is based on one or more rules associated with the streaming data.
In one embodiment, the method may further comprise: the rights service layer for the server verifies that the client is authorized to access information responsive to the query.
In one embodiment, the method may further comprise: the client is authenticated based on at least one client credential received from the client.
In one embodiment, the snapshot may be accurate for at least one of securities and investments for a particular time.
In one embodiment, the snapshot may include appropriate status for at least one of securities and investments for a particular time.
According to another embodiment, a system for presenting streaming data may include: a plurality of streaming data sources; a data loader for each stream data source, the data loader receiving stream data from the stream data source; a data cache layer that receives streaming data from the data loader; and a web services layer including at least one computer processor and in communication with the data cache layer. The web services layer may receive a query from a client, wherein the query includes a plurality of parameters; the data cache layer may combine the streaming data for each of a plurality of parameters; the data cache layer may aggregate the merged data; the data cache layer may generate a snapshot of the merged data by running a query against the merged data simultaneously; and the snapshot may be output to the client.
In one embodiment, the parameters may include a specific descriptor for at least one of securities and investments.
In one embodiment, the query may include an identification of the source of the streaming data.
In one embodiment, the stream data may include market data.
In one embodiment, the web services layer may output a snapshot that is delayed for a predetermined amount of time.
In one embodiment, the time period may be based on one or more rules associated with the stream data.
In one embodiment, the query may be received from at least one of a cloud application and a local application.
In one embodiment, the system may further include a rights service layer for the server that verifies that the client is authorized to access information responsive to the query. The rights service layer may also authenticate the client based on at least one client credential received from the client.
In one embodiment, the snapshot may be accurate for at least one of securities and investments for a particular time.
In one embodiment, the snapshot may represent an appropriate state for at least one of a security and an investment for a particular time.
Drawings
For a more complete understanding of the present invention, its objects and advantages, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 depicts a system for aggregating, filtering, and presenting streaming data, according to one embodiment;
FIG. 2 depicts a method for aggregating, filtering, and presenting streaming data, according to one embodiment;
FIG. 3 depicts a provisioning policy flow according to an embodiment; and
FIG. 4 depicts an exemplary process flow according to one embodiment.
Detailed Description
Several embodiments of the present invention and their advantages may be understood by reference to FIGS. 1-4.
Embodiments relate to systems and methods for aggregating, filtering, and presenting streaming data.
In one embodiment, data from internal and external sources may be pushed to a high-speed data aggregation engine. The client may make a simple web service request specifying its credentials, the information source, and the list of requested information. For example, in a financial institution, a client may submit a list of securities and fields. The client then receives a single response with the information it requires. This reduces or eliminates the need for every application to support high-speed stream management, along with the associated infrastructure and development.
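For illustration only, a minimal sketch of such a web service request is shown below; the endpoint URL, payload field names, and response shape are assumptions for this example and are not specified by the disclosure.

import requests

# Hypothetical endpoint and payload shape; the disclosure does not define a wire format.
ENDPOINT = "https://streaming-data.example.com/api/v1/snapshot"

payload = {
    "credentials": {"user": "client-app", "token": "example-token"},
    "source": "equities-direct-feed",        # requested information source
    "symbols": ["JPM.N", "CLJ7.NYM"],        # list of requested securities
    "fields": ["wBidPrice", "wAskPrice"],    # list of requested fields
}

response = requests.post(ENDPOINT, json=payload, timeout=5)
response.raise_for_status()
snapshot = response.json()                   # single response containing the requested data
print(snapshot)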
In one embodiment, any suitable aggregation engine may be used as needed and/or desired. In one embodiment, the aggregation engine may allow for structured query language ("SQL") style content filtering of individual fields within the data payload, allowing for near real-time analysis of multiple products.
Embodiments may provide some or all of the following benefits: (1) increasing the price advantage of cloud-based applications through lower data transfer costs; (2) reducing technology cost at scale by reducing infrastructure requirements and shortening the development cycle for data-driven applications; (3) a broader pool of potential developers, due to the simplified API and integration with spreadsheets; (4) advanced content filtering across disparate sources, allowing development of real-time application features that were previously cost prohibitive; (5) facilitating distributed analysis by fusing external and internal data; and (6) the ability to provide aged, or delayed, delivery of real-time data to individuals who do not need real-time data, in order to reduce the cost of the data. Other benefits may also be provided.
In the financial industry, because market data and messaging involve large amounts of data, clients are expected to do significant work processing payloads from financial information distribution platforms (ticker plants). They often have to deal with the hardware and software challenges associated with large amounts of information, consolidate it for the benefit of the user, aggregate across many threads and systems to support portfolio management, and so on. Embodiments address these and other challenges while giving end users and developers an interface for filtering.
Referring to FIG. 1, a system for aggregating, filtering, and presenting streaming data is disclosed according to one embodiment. System 100 may include a plurality of client access points 110, a web services layer 120, a data cache layer 130, data loaders 140₁, 140₂, ... 140ₙ, and data sources 150₁, 150₂, ... 150ₙ. The data sources 150 may receive data, such as internal and external market data, industry news, etc., from one or more streaming data sources.
Although three data loaders 140 and data sources 150 are shown in fig. 1, it should be appreciated that a greater or lesser number of data loaders 140 and data sources 150 may be provided as needed and/or desired.
The data loader 140 may receive data from a data source 150. In one embodiment, each data loader 140 may "feed" streaming data from one or more data sources 150 into the data cache layer 130. Each data loader 140 may also be in communication with the web services layer 120. For example, each data loader 140 may communicate with the control plane service 126, and the control plane service 126 may instruct one or more data loaders 140 to create new subscriptions, retry failures, and so forth.
The data cache layer 130 may service requests from the web services layer 120. In one embodiment, the data cache layer 130 may serve as both a database and a message bus.
In one embodiment, the data cache layer 130 may also filter the data and may provide a "snapshot" of the requested data.
One example of a suitable data cache layer 130 is the Advanced Message Processing System (AMPS) from 60East Technologies.
The web services layer 120 may interface with clients using one or more client access points 110. In one embodiment, the web services layer may provide services such as configuration management service 122, monitoring service 124, control plane service 126, and rights service 128. Other services may be provided as needed and/or desired.
In one embodiment, configuration management service 122 may provide configuration data to one or more data loaders 140, such as which data sources 150 to connect to, how to connect to data sources 150, and so forth. In one embodiment, configuration management service 122 may provide runtime configuration information, such as connection information (e.g., host: port) for data loader 140, web services layer 120, rights service 128, which data feeds to load, which web connections to support (http or https), SSL certificate locations, connection and thread pool sizing, logging intervals, environment (dev/test/prod), and the like.
In one embodiment, the monitoring service 124 may monitor the status of one or more data sources 150 and may reroute the request if the data sources become unavailable.
The control plane service 126 may interface with one or more data loaders 140. In one embodiment, the control plane service 126 may pass instructions between services, allowing for both automatic instructions (e.g., the monitoring service 124 notices a failure of a data loader and instructs the web services layer 120 to switch servers within the data cache layer 130) and manual instructions (e.g., an operations team wants the service to be taken offline).
In one embodiment, rights service 128 may verify that the client is allowed to access the requested data.
In one embodiment, client access point 110 may include, for example, a cloud application, a local application, a user interface, an API, and the like. In one embodiment, the client access point 110 may provide, for example, a spreadsheet as its output to the client.
In one embodiment, the system 100 may include the ability to provide a "subscription" for clients to receive updates. In one embodiment, an update to a complex query may be smaller in size than resubmitting the query and receiving the complete result set multiple times per second.
Referring to fig. 2, a method for aggregating, filtering, and presenting streaming data is disclosed in accordance with one embodiment.
In step 210, the client may request information of interest via an interface such as a cloud application, a local application, an upload, and the like. In one embodiment, the client may also provide credentials and identify the information source and the list of requested information. In another embodiment, machine learning may be used to identify sources based on past queries.
In one embodiment, the client may request information using a web services protocol. The web services protocol may use an SQL-like syntax, for example, to filter information. One example query that may be submitted is "(/BID / /ASK) >= (0.05 * /TRDPRC_1)", which means "where the value of /BID divided by /ASK is greater than or equal to 5% of the value of /TRDPRC_1".
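As a rough illustration only (not the engine's actual filter parser), the effect of such a content filter on a single message might be expressed as follows; the dictionary field names are assumptions.

def matches_filter(msg: dict) -> bool:
    """Hypothetical equivalent of the content filter (/BID / /ASK) >= (0.05 * /TRDPRC_1)
    applied to a single message."""
    bid = float(msg["BID"])
    ask = float(msg["ASK"])
    trdprc_1 = float(msg["TRDPRC_1"])
    return (bid / ask) >= 0.05 * trdprc_1

# A quote passes the filter only if BID divided by ASK is at least 5% of TRDPRC_1.
quote = {"BID": 87.79, "ASK": 87.80, "TRDPRC_1": 19.50}
print(matches_filter(quote))  # True for this example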
In a financial system, a query may include identification of stocks, industries, areas/regions of interest, etc. The query may be used as a filter.
In one embodiment, the request may be a subscription (e.g., to a stock, industry, etc.). In one embodiment, the request may specify a period or condition for each snapshot to be provided.
In step 220, the client may be authorized to access the requested data. In one embodiment, this may involve a permission check to determine whether the client is allowed to access the requested information.
In one embodiment, the client may be further authenticated based on credentials that may be provided as part of the request.
In step 230, the request may be provided to a data cache layer, which may filter data received from the streaming data source in step 240. The data cache layer may also "snapshot" the data, i.e., provide the data state at a particular moment in time.
In one embodiment, the business logic may reduce the number of data fields presented to the client.
In step 250, the filtered data may be merged. For example, filtered data from more than one source may be consolidated. For example, within one minute, the price of a stock may change from 70.00 to 70.05, back to 69.90, and up to 70.01. With a consolidated source, the client would receive only 70.01, as this is the appropriate price state at the end of that minute. However, the consolidated value is not necessarily just the last value received, because the last value is not always an appropriate data point, especially for data sets with multiple fields, such as Bid and Ask.
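A minimal sketch of this kind of per-symbol merging (conflation) appears below; the class, field names, and update sequence are assumptions used only to illustrate keeping the latest appropriate state.

from collections import defaultdict

class Conflator:
    """Keeps only the latest appropriate state per symbol, merging partial
    field updates so that, e.g., a Bid-only tick does not discard the last Ask."""

    def __init__(self):
        self.state = defaultdict(dict)

    def update(self, symbol: str, fields: dict) -> None:
        # Merge incoming fields into the existing state rather than replacing it.
        self.state[symbol].update(fields)

    def snapshot(self, symbol: str) -> dict:
        return dict(self.state[symbol])

c = Conflator()
for tick in [{"TRDPRC_1": 70.00}, {"TRDPRC_1": 70.05}, {"BID": 69.89},
             {"TRDPRC_1": 69.90}, {"TRDPRC_1": 70.01}]:
    c.update("JPM.N", tick)
print(c.snapshot("JPM.N"))  # {'TRDPRC_1': 70.01, 'BID': 69.89}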
In step 260, a data snapshot of the data may be generated. In one embodiment, the snapshot may be for a collection of securities, and may be accurate across the collection at that point in time. For example, queries may be run simultaneously, rather than sequentially, across already aggregated data (e.g., securities of interest).
In step 270, the snapshot may be output to the client via a web application, a local application, an API, a spreadsheet, or the like.
In one embodiment, the delivery of the snapshot may be delayed by a predetermined amount of time in order to "age" the data. For example, the timing for data delivery may determine whether the data may be delivered free of charge or whether there is a fee for the data. Not all customers need real-time data; for example, an investment manager may not need real-time data when meeting potential customers.
Thus, in one embodiment, the system may determine how long to delay outputting the snapshot, which may be based on rules associated with the data and/or the data source, and the system may delay the snapshot by at least that amount of time. In one embodiment, an auditable record of the age of the data in the snapshot may be maintained.
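For illustration, the delay-and-audit behavior could be sketched as follows; the rule table, its values, and the audit record layout are assumptions, not part of the disclosure.

import time

DELAY_RULES = {"direct-feed": 900, "vendor-feed": 0}  # assumed delay in seconds, per source

audit_log = []

def deliver_snapshot(snapshot: dict, source: str, taken_at: float) -> dict:
    """Hold a snapshot until it has aged per the source's rule, then record its age."""
    required_delay = DELAY_RULES.get(source, 0)
    age = time.time() - taken_at
    if age < required_delay:
        time.sleep(required_delay - age)             # age the data before release
    audit_log.append({"source": source,
                      "taken_at": taken_at,
                      "delivered_at": time.time()})  # auditable record of the data's age
    return snapshot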
Embodiments disclosed herein may separate the mechanics of the various processes performed by multiple lines of business from the content relevant to solving the problem. The content (e.g., the various lots in a customer's portfolio, changes caused by transactions, transfers, additions, withdrawals, etc.) can be treated as generic, internally published data, anonymized, and provided to one or more systems/methods for aggregating, filtering, and presenting streaming data. Examples of such systems/methods are disclosed in U.S. patent application Ser. No. 15/378,501, the disclosure of which is incorporated herein by reference in its entirety.
In embodiments, market data from various venues may be consolidated within one or more systems/methods for aggregating, filtering, and presenting streaming data, and may be fused with portfolio information. As the data goes through multiple fusion phases, each lot is priced against real-time market data and then aggregated into a real-time portfolio view.
In an embodiment, a customer may use a custom rules engine to subscribe to "alerts" related to a business. For example, the system may send a notification to the client when a particular stock reaches a price threshold, when the concentration risk of a particular position violates a predetermined level, when the valuation of a lot or portfolio rises/falls by a given percentage, and so on.
Referring to FIG. 3, a policy flow is provided according to one embodiment. The policy flow may include customers, a financial institution interface, gateway services, one or more graphical user interfaces, peripheral gateways, a Complex Event Processor (CEP) engine, internal data platforms (e.g., an internal data platform that may provide secure and reliable messaging between publishers and consumers, with access provided using the OpenMAMA API), one or more market data feeds (e.g., NASDAQ Basic, which carries NASDAQ and NYSE market data and for which commercial feed handlers may be used; OPRA; Pink Sheets; TradeWeb; etc.), an anonymizer, data injectors, a security master, portfolio management systems, daily activity monitors (e.g., for daily activities, order management systems, and cash flow systems), an alert engine, and a notification platform.
In one embodiment, the CEP engine can process data for applications that require market data and meet one or more of the following: (1) they run in configurations that cannot support high-speed streaming data; (2) they need only a few prices per day, but require those prices with very little lag; (3) they need to submit a large number of symbols at a time (as opposed to stream subscriptions that each use a single symbol); (4) they need aggregation of market and analytics data from multiple sources (e.g., Reuters RMDS, direct feeds, internal data, LOB market analytics); or (5) they need to filter data with dynamic queries over the values of one or more fields, or they can use market data that has been delayed by a determined amount of time to reduce its cost.
In one embodiment, the anonymizer component may strip the identity of the customer from the portfolio content prior to injection into the CEP engine. A reverse lookup allows a consuming process to find the ID associated with the portfolio of the particular customer it is interested in.
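A minimal sketch of the anonymization mapping is shown below; the identifier scheme and interface are assumptions.

import uuid

class Anonymizer:
    """Strips customer identity before injection into the CEP engine and keeps a
    private map so an authorized caller can reverse the lookup."""

    def __init__(self):
        self._forward = {}   # customer_id  -> anonymous_id
        self._reverse = {}   # anonymous_id -> customer_id

    def anonymize(self, customer_id: str) -> str:
        if customer_id not in self._forward:
            anon = uuid.uuid4().hex
            self._forward[customer_id] = anon
            self._reverse[anon] = customer_id
        return self._forward[customer_id]

    def deanonymize(self, anonymous_id: str) -> str:
        return self._reverse[anonymous_id]

a = Anonymizer()
anon = a.anonymize("customer-12345")
assert a.deanonymize(anon) == "customer-12345"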
The data injector may provide a single interface to one or more technical assets. It may map reference data (e.g., symbology) between systems and communicate with the internal data platforms (e.g., to send portfolio information) and the CEP engine (e.g., to ensure that instruments of interest are on a watch list).
In one embodiment, the portfolio management system may be the source of portfolio content data for a customer's holdings. It can publish daily updates of portfolios of interest, publish portfolios on demand, and maintain "official" start-of-day prices for all lots/portfolios.
In one embodiment, the daily activity monitor may monitor activity such as that originating from an order management system, cash flow engine, or the like. It may publish portfolio changes since the start of the day into the CEP engine so that real-time valuations reflect those changes.
The alert engine may translate client instructions from the financial institution's website into subscriptions to the CEP engine and other backend systems. An alert may persist for several days, listening for a signal from the CEP engine that a condition has been satisfied, performing the desired action, disabling the active alert, and so on.
FIG. 4 depicts an exemplary process flow according to one embodiment. In one embodiment, a data source layer may receive market data (e.g., market data streams) and may provide the data to a data loader layer, where the data may be loaded into a data cache layer using one or more data loaders. The CEP engine can access data from the data cache layer and can provide configuration management, data filtering processes, monitoring services, control plane services, entitlement services, and the like. In one embodiment, the CEP engine can provide services to customers via a customer access layer.
Example
Non-limiting examples are provided below.
Data set 1: Equity instrument from a direct feed:
<snapSymbol>JPM.N</snapSymbol>
<snapStatus>ok</snapStatus>
<wBidPrice>87.790000</wBidPrice>
<wBidSize>18</wBidSize>
<wAskPrice>87.800000</wAskPrice>
<wAskSize>4</wAskSize>
<wInstrumentType>Stock</wInstrumentType>
<wOpenPrice>90.360000</wOpenPrice>
<wAdjPrevClose>90.070000</wAdjPrevClose>
<wPrevClosePrice>90.070000</wPrevClosePrice>
<wBidOpen>90.340000</wBidOpen>
<wAskOpen>90.400000</wAskOpen>
The instrument price is the average of wBidPrice and wAskPrice.
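As a small illustration, the mid-price calculation for Data set 1 could be sketched as follows; the wrapping <snap> element is added here only so the fragment parses as a single XML document.

import xml.etree.ElementTree as ET

snap = """<snap>
  <snapSymbol>JPM.N</snapSymbol>
  <wBidPrice>87.790000</wBidPrice>
  <wAskPrice>87.800000</wAskPrice>
</snap>"""

root = ET.fromstring(snap)
bid = float(root.findtext("wBidPrice"))
ask = float(root.findtext("wAskPrice"))
price = (bid + ask) / 2                      # instrument price = average of bid and ask
print(root.findtext("snapSymbol"), price)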
Data set 2: Commodity future from a direct feed:
<snapSymbol>CLJ7.NYM</snapSymbol>
<snapStatus>ok</snapStatus>
<wIssueSymbol>CLJ7.NYM</wIssueSymbol>
<wBidHigh>48.620000</wBidHigh>
<wAskLow>47.400000</wAskLow>
<wAskPrice>47.410000</wAskPrice>
<wAskSize>5</wAskSize>
<wBidPrice>47.390000</wBidPrice>
<wBidSize>52</wBidSize>
<wBidLow>47.340000</wBidLow>
<wAskHigh>48.750000</wAskHigh>
<wOpenPrice>47.930000</wOpenPrice>
<wOpenTime>2017-03-21 17:45:27.229603</wOpenTime>
<wCfiCode>FCMXSX</wCfiCode>
<wCurrency>USD</wCurrency>
<wEntryStatus>0</wEntryStatus>
The same price rule applies: the instrument price is the average of wBidPrice and wAskPrice.
Data set 3: bond instrument from vendor feed:
<snapSymbol>ISIN_USN82008AK46_MODEL.FUSE</snapSymbol>
<snapStatus>ok</snapStatus>
<CLIENT_OFFER>0.000000</CLIENT_OFFER>
<CLIENT_BID>0.000000</CLIENT_BID>
<POSITION>0.000000</POSITION>
<COUPON_RATE>2.000000</COUPON_RATE>
<STRING2>HG-Industrials/Defense/Chemicals</STRING2>
<MKT_PRICE_2>1000000.000000</MKT_PRICE_2>
<MKT_PRICE_1>1000000.000000</MKT_PRICE_1>
<MATURDATE>15Sep 2023</MATURDATE>
<ISIN_BENCHMARK>US912828W556</ISIN_BENCHMARK>
<CURVE_TYPE>EMEA</CURVE_TYPE>
<UPDATE_TM>12:31:15.000</UPDATE_TM>
<ITEM_ID/>
The price is in the field MKT_PRICE_1.
The data sets need not reside in the same CEP instance. For example, data set 1 and data set 3 may exist on the same CEP instance that creates the portfolio view, while data set 2 may reside on another instance due to the volume requirements of its ingestion.
An example portfolio design is provided. Portfolios can be stored in their own table, called "Portfolio" below.
For example, portfolio pricing can use the following pseudocode:
In other words, a single value representing the real-time price of the portfolio can be returned, and this value changes as the underlying data changes.
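The pseudocode itself is not reproduced in this text. A minimal sketch of the weighted-sum calculation it appears to describe is given below; the row and column layout is an assumption based on the Portfolio and AllPrices tables described later.

def price_portfolio(rows, prices):
    """rows:   iterable of (portfolio_id, security, weight) rows from the Portfolio table
    prices: mapping of security -> current price from AllPrices
    Returns a single real-time value for one portfolio."""
    return sum(weight * prices[security] for _, security, weight in rows)

portfolio = [("P1", "JPM.N", 100), ("P1", "ISIN_USN82008AK46", 50)]
prices = {"JPM.N": 87.795, "ISIN_USN82008AK46": 101.25}
print(price_portfolio(portfolio, prices))   # recomputed whenever a price changes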
In one embodiment, four tiers of tables may be defined across two instances.
The first tier may receive data from the market data publishers. For example, there may be one table for each data feed/source on both the PortfolioPricing and CommodityFutures instances. On PortfolioPricing these may be named "Equity" and "Bond", while on CommodityFutures one may be named "Comm". These tables are neither journaled nor replicated, and the "shape" of the data in these tables is whatever the data source defines.
At the next tier is a copy table, which may be referred to as "AllPrices", acting as a normalized union of the price data from all sources. Data may be normalized and published from the Comm, Equity, and Bond tables into AllPrices. In an embodiment, this information may be merged in order to reduce the maximum data rate.
The first step in the action chain may be to parse the underlying data and extract the values of interest into variables. The extraction of values may be specified as a projection expression. For example, when processing a message from the Equity table, the row <Value>price = (/wBidPrice + /wAskPrice)/2</Value> calculates the average of /wBidPrice and /wAskPrice and places the result in the variable {price}.
The next step in the action is to publish a new message containing {symbol}, the normalized {price}, and a "source" field indicating the origin of the pricing data. This may be published to the AllPrices table and may be formatted as JSON.
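A rough sketch of this normalize-and-publish step follows; the publish call and message layout are assumptions and do not represent the engine's actual action syntax.

import json

def normalize_equity(msg: dict) -> dict:
    """Project an Equity message into the normalized AllPrices shape."""
    price = (msg["wBidPrice"] + msg["wAskPrice"]) / 2
    return {"symbol": msg["snapSymbol"], "price": price, "source": "Equity"}

def publish(table: str, message: dict) -> None:
    # Placeholder for the engine's publish call; here the JSON is simply printed.
    print(table, json.dumps(message))

publish("AllPrices", normalize_equity(
    {"snapSymbol": "JPM.N", "wBidPrice": 87.79, "wAskPrice": 87.80}))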
At this point, the incoming market data has been normalized and merged into a single AllPrices table. Merging and replication can be used to minimize the total volume of messages that the PortfolioPricing instance must handle, while still providing PortfolioPricing with a complete view of all securities prices required to price a portfolio.
Next, the portfolios can be modeled as a table "Portfolio", with one row per security per portfolio. Each message in Portfolio may have a portfolio ID, a security, and the weight of that security in the portfolio. This table is essentially the input data for the portfolios of interest, and the system can dynamically recalculate portfolio prices as portfolio contents are adjusted, added, and/or removed.
The calculation is done in the next tier (aggregation). This can be modeled as two views. The first view is the join between Portfolio and AllPrices, called "PricedPortfolio". There may be one message for each message in Portfolio, but, because of the join, each message also carries the current price of its security.
On top of PricedPortfolio may be the second view, "PricedPortfolioAgg", which calculates the portfolio price. This essentially converts the pseudocode (above) into a view in which each portfolio has one message containing the portfolio ID, the count of securities in the portfolio, and the sum of weight × price across the portfolio's securities.
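A rough Python rendering of the two views is shown below; the actual CEP views would be declarative, and the table shapes here are assumptions consistent with the description above.

from collections import defaultdict

# Portfolio table: one row per (portfolio_id, security, weight).
portfolio_rows = [("P1", "JPM.N", 100), ("P1", "ISIN_USN82008AK46", 50),
                  ("P2", "JPM.N", 10)]
# AllPrices table: latest normalized price per security.
all_prices = {"JPM.N": 87.795, "ISIN_USN82008AK46": 101.25}

# View 1, "PricedPortfolio": join Portfolio with AllPrices; one row per input row,
# enriched with the current price of its security.
priced_portfolio = [(pid, sec, weight, all_prices[sec])
                    for pid, sec, weight in portfolio_rows]

# View 2, "PricedPortfolioAgg": group by portfolio ID, keeping a security count
# and the sum of weight * price.
priced_portfolio_agg = defaultdict(lambda: {"count": 0, "value": 0.0})
for pid, sec, weight, price in priced_portfolio:
    priced_portfolio_agg[pid]["count"] += 1
    priced_portfolio_agg[pid]["value"] += weight * price

print(dict(priced_portfolio_agg))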
Thus, in an embodiment, the system may receive portfolio data (e.g., by push) and then accept queries/subscriptions against PricedPortfolio and PricedPortfolioAgg in order to see dynamically updated portfolio prices as the underlying market data changes.
In an embodiment, the official start-of-day valuation for each portfolio may initially be submitted to a PortfolioRef table as part of the daily load. This allows the PricedPortfolioAgg view to include a calculation of the unrealized start-of-day gain/loss by comparing the current valuation with the reference data submitted to the PortfolioRef table. In one embodiment, a percentage change from the start-of-day portfolio valuation can also be calculated.
In an embodiment, the alert engine may establish subscriptions to the CEP engine that map human-recognizable concepts to the underlying data structures. Arithmetic expressions and SQL-style formulas can be used to define when the alert engine should receive event notifications. Once data is received on a particular subscription, the alert engine maps it back to the desired action (e.g., sending a message such as an email or SMS, popping up a window on the advisor's user interface, etc.) and can perform that action. It may be further configured to keep an alert active or to disable itself by terminating the subscription.
Non-limiting examples of alerts include a portfolio gain/loss above/below a dollar amount or percentage, the value of a particular lot exceeding a particular threshold, etc.
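For illustration only, an alert subscription and its action mapping could be sketched as follows; the subscribe/notify interfaces, thresholds, and row shape are assumptions.

class AlertEngine:
    """Maps a human-readable condition to a filter over aggregated view rows and an action."""

    def __init__(self):
        self.alerts = []

    def add_alert(self, name, condition, action, one_shot=True):
        # condition: callable over an aggregated view row; action: callable that notifies.
        self.alerts.append({"name": name, "condition": condition,
                            "action": action, "active": True, "one_shot": one_shot})

    def on_update(self, row: dict) -> None:
        for alert in self.alerts:
            if alert["active"] and alert["condition"](row):
                alert["action"](alert["name"], row)
                if alert["one_shot"]:
                    alert["active"] = False   # disable by "terminating the subscription"

engine = AlertEngine()
engine.add_alert("P1 value above 15,000",
                 lambda row: row["portfolio_id"] == "P1" and row["value"] > 15_000,
                 lambda name, row: print(f"ALERT {name}: {row}"))
engine.on_update({"portfolio_id": "P1", "value": 15_842.0})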
A consultant representing multiple customers may maintain alerts covering multiple portfolios of interest in order to understand their customers' holdings as a whole.
At the organizational level, all holdings may be analyzed in aggregate in the CEP engine to facilitate hedging, cross-account stock lending, risk management, and the like.
In one embodiment, cash may be represented in a portfolio as multiple units of base currency (e.g., dollars). The market data for currency may be static (e.g., dollars for dollar-priced accounts), or representative of the current foreign exchange rate.
In another embodiment, a similar mechanism may be used for hypothesis modeling of portfolio changes. A separate CEP engine or side table(s) may be used to store the final representation of the hypothetical content of the portfolio, while the same aggregation mechanism may be used to price the portfolio.
In one embodiment, a separate table in the CEP engine can store stocks to sector (sector) relationships that can be used to build dynamic risk models to help customers manage risk. Multiple risk models may be defined to facilitate different risk preferences.
In one embodiment, individual subscriptions may be established on a table containing aggregated views, which may then be extracted and saved in a database. The history may be used as an audit trail or to train a machine learning system to generate automated investment advice to customers and consultants.
In one embodiment, the CEP engine may be used for real-time margin management, as it knows the current valuation of the portfolio holdings and the availability of funding facilities.
Although several embodiments have been disclosed, it should be appreciated that these embodiments are not mutually exclusive.
Hereinafter, general aspects of embodiments of the systems and methods of the present invention will be described.
The system of the invention or portions of the system of the invention may be in the form of a "processing machine", such as a general purpose computer. As used herein, the term "processing machine" shall be understood to include at least one processor that uses at least one memory. At least one memory stores a set of instructions. The instructions may be stored permanently or temporarily in a memory or memories of a processing machine. The processor executes instructions stored in the memory or memories in order to process the data. The instruction set may include various instructions to perform a particular task or tasks, such as those described above. Such a set of instructions for performing a particular task may be characterized as a program, a software program, or simply software.
In one embodiment, the processing machine may be a special purpose processor.
As described above, a processing machine executes instructions stored in a memory or memories to process data. For example, the data processing may be in response to a command of one or more users of the processing machine, in response to a prior process, in response to a request of another processing machine, and/or any other input.
As described above, the processing machine for implementing the present invention may be a general-purpose computer. However, the processing machines described above may also utilize any of a variety of other technologies, including special purpose computers, computer systems including, for example, microcomputers, mini-or mainframe computers, programmed microprocessors, microcontrollers, peripheral integrated circuit elements, CSICs (customer specific integrated circuits) or ASICs (application specific integrated circuits) or other integrated circuits, logic circuits, digital signal processors, programmable logic devices (e.g., FPGA, PLD, PLA or PALs), or any other devices or arrangements of devices capable of implementing the steps of the processes of the present invention.
The processing machine used to implement the present invention may use a suitable operating system. Accordingly, embodiments of the present invention may include a processing machine running the iOS operating system, the OS X operating system, the Android operating system, the Microsoft Windows™ operating system, the Unix operating system, the Linux operating system, the Xenix operating system, the IBM AIX™ operating system, the Hewlett-Packard UX™ operating system, the Novell Netware™ operating system, the Sun Microsystems Solaris™ operating system, the OS/2™ operating system, the BeOS™ operating system, the Macintosh operating system, the Apache operating system, the OpenStep™ operating system, or another operating system or platform.
It should be appreciated that the processor and/or memory of the processing machine need not be physically located in the same geographic location in order to practice the method of the present invention as described above. That is, each of the processors and memory used by the processing machine may be located in geographically disparate locations and connected to communicate in any suitable manner. In addition, it should be understood that each of the processor and/or memory may be comprised of different pieces of physical equipment. Thus, it is not necessary that the processor be a single piece of equipment in one location and that the memory be another single piece of equipment in another location. That is, it is contemplated that the processor may be two pieces of equipment in two different physical locations. The two different pieces of equipment may be connected in any suitable manner. In addition, the memory may include two or more memory portions in two or more physical locations.
For further explanation, the processes described above are performed by various components and various memories. However, it should be understood that according to another embodiment of the present invention, the processing performed by two different components as described above may be performed by a single component. Furthermore, the processing performed by one different component as described above may be performed by two different components. In a similar manner, according to another embodiment of the invention, memory storage performed by two different memory portions as described above may be performed by a single memory portion. Furthermore, the memory storage performed by one different memory portion as described above may be performed by two memory portions.
Further, various techniques may be used to provide communication between various processors and/or memories, as well as to allow the processors and/or memories of the present invention to communicate with any other entity; i.e. for example in order to obtain further instructions or to access and use remote memory storage. For example, these techniques for providing such communication may include a network, the Internet, an intranet, an extranet, a LAN, an Ethernet, wireless communication via a cellular tower or satellite, or any client server system providing communication. Such communication techniques may use any suitable protocol, such as TCP/IP, UDP, or OSI.
As described above, an instruction set can be used in the process of the present invention. The instruction set may be in the form of a program or software. For example, the software may be in the form of system software or application software. For example, the software may also be in the form of a collection of separate programs, program modules within a larger program, or portions of program modules. The software used may also include modular programming in the form of object-oriented programming. The software tells the processing machine how to process the data being processed.
Furthermore, it is to be understood that the instructions or sets of instructions used in the embodiments and operations of the present invention may be in a suitable form so that the processing machine may read the instructions. For example, the instructions forming the program may be in the form of a suitable programming language, which is converted to machine language or object code to allow a processor or processors to read the instructions. That is, written lines of programming code or source code in a particular programming language are converted to machine language using a compiler, assembler, or interpreter. Machine language is a binary coded machine instruction that is specific to a particular type of processing machine, i.e., a particular type of computer, for example. The computer knows the machine language.
Any suitable programming language may be used in accordance with various embodiments of the invention. Illustratively, the programming languages used may include assembly language, Ada, APL, Basic, C, C++, COBOL, dBase, Forth, Fortran, Java, Modula-2, Pascal, Prolog, REXX, Visual Basic, and/or JavaScript. Furthermore, a single type of instruction or a single programming language need not be used in connection with the operation of the systems and methods of the present invention. Rather, any number of different programming languages may be utilized as needed and/or desired.
Further, the instructions and/or data used in the practice of the present invention may use any compression or encryption techniques or algorithms as desired. The encryption module may be used to encrypt data. Further, for example, a suitable decryption module may be used to decrypt the file or other data.
As described above, the present invention may illustratively be embodied in the form of a processing machine (which includes a computer or computer system, for example, including at least one memory). It should be appreciated that the set of instructions (i.e., software, for example) that enable the computer operating system to perform the operations described above may be embodied on any of a variety of one or more media, as desired. Furthermore, data processed by the instruction set may also be embodied on any of a variety of one or more media. That is, the particular medium used to store the instruction set and/or data used in the present invention, i.e., the memory in the processing machine, may take any of a variety of physical forms or transmissions, for example. Illustratively, the medium may be of the form: paper, paper transparencies, compact discs, DVDs, integrated circuits, hard disks, floppy disks, optical disks, magnetic tapes, RAM, ROM, PROM, EPROM, wires, cables, optical fibers, communication channels, satellite transmissions, memory cards, SIM cards or other remote transmissions, and any other medium or data source readable by the processor of the present invention.
Furthermore, the memory or memories used in the processing machine embodying the invention may be in any of a variety of forms to allow the memory to hold instructions, data, or other information as desired. Thus, the memory may be in the form of a database for holding data. For example, the database may use any desired file arrangement, such as a flat file arrangement or a relational database arrangement.
In the system and method of the present invention, various "user interfaces" may be utilized to allow a user to interface with a processing machine or machines for implementing the present invention. As used herein, a user interface includes any hardware, software, or combination of hardware and software used by a processing machine that allows a user to interact with the processing machine. For example, the user interface may be in the form of a dialog screen. The user interface may also include any of the following: a mouse, touch screen, keyboard, keypad, voice reader, voice recognizer, dialog screen, menu box, list, check box, toggle switch, button, or any other device that allows a user to receive information regarding the operation of a processing machine when the processing machine processes a set of instructions and/or to provide information to the processing machine. Thus, a user interface is any device that provides communication between a user and a processing machine. The information provided to the processing machine by the user through the user interface may be in the form of, for example, commands, data selections, or some other input.
As described above, a processing machine executing a set of instructions utilizes a user interface to cause the processing machine to process data of a user. User interfaces are typically used by processing machines to interact with users to convey information or receive information from users. However, it should be understood that according to some embodiments of the system and method of the present invention, a human user does not actually have to interact with the user interface used by the processing machine of the present invention. Rather, it is also contemplated that the user interface of the present invention may interact with another processing machine, other than a human user, i.e., transmit and receive information. Thus, other processing machines may be characterized as users. Furthermore, it is contemplated that the user interface used in the systems and methods of the present invention may interact partially with another processing machine or machines while also interacting partially with a human user.
Those skilled in the art will readily appreciate that the present invention is susceptible to a wide variety of uses and applications. Many embodiments and adaptations of the present invention other than those herein described, as well as many variations, modifications and equivalent arrangements, will be apparent from or reasonably suggested by the present invention and the foregoing description thereof, without departing from the substance or scope of the present invention.
Thus, while the present invention has been described in detail herein with respect to the exemplary embodiments thereof, it should be understood that this disclosure is only illustrative and exemplary of the present invention and is for the purpose of providing a practical disclosure of the present invention. Thus, the foregoing disclosure is not intended to be interpreted to exclude or limit the invention to exclude or otherwise exclude any other such embodiments, adaptations, variations, modifications or equivalent arrangements.

Claims (20)

1. A method for presenting streaming data, comprising:
receiving a query from a client at a web service layer for a server comprising at least one computer processor, wherein the query comprises a plurality of parameters, and wherein the web service layer further comprises:
a control plane service;
a rights service;
a monitoring service; and
a configuration management service;
a data cache layer for the server receives stream data from at least one predefined stream data source via a corresponding data loader, wherein:
the control plane service is configured to provide instructions to the data loader, wherein the instructions include one or more of: creating a new subscription and retrying a failure;
the configuration management service providing configuration data to the data loader, wherein the configuration data includes one or more of information about a data source to be connected to and how to connect to the specified data source;
the monitoring service monitors the status of the at least one predefined streaming data source and reroutes client queries if the data source becomes unavailable; and is also provided with
The rights service verifies that a client is allowed to access the stream data corresponding to the query;
the data cache layer merging the stream data for each of the plurality of parameters;
the data cache layer aggregates the merged data;
the data cache layer generates a snapshot of the merged data by running the query against the merged data simultaneously; and
outputting the snapshot to the client.
2. The method of claim 1, wherein the parameters include specific descriptors for at least one of securities and investments.
3. The method of claim 1, wherein the query further comprises an identification of a streaming data source.
4. The method of claim 1, wherein the stream data comprises market data.
5. The method of claim 1, wherein the web services layer outputs the snapshot delayed by a predetermined amount of time, wherein the period of time is based on one or more rules associated with the stream data.
6. The method of claim 1, further comprising:
the rights service layer for the server verifies that the client is authorized to access information responsive to the query.
7. The method of claim 1, further comprising:
the client is authenticated based on at least one client credential received from the client.
8. The method of claim 2, wherein the snapshot is accurate for at least one of the securities and the investments for a particular time.
9. The method of claim 2, wherein the snapshot includes an appropriate status for the at least one of the securities and the investments for a particular time.
10. A system for presenting streaming data, comprising:
a plurality of streaming data sources;
a data loader for each stream data source, the data loader receiving stream data from the stream data source;
a data cache layer that receives the stream data from the data loader; and
a web services layer comprising at least one computer processor and in communication with the data cache layer, and further comprising:
a control plane service in communication with the data loader;
a rights service;
a monitoring service; and
a configuration management service;
wherein:
the web service layer receives a query from a client, wherein the query includes a plurality of parameters;
the data cache layer merging the stream data for each of the plurality of parameters;
the data cache layer aggregates the merged data;
the data cache layer generates a snapshot of the merged data by running the query against the merged data simultaneously;
the snapshot is output to the client;
the control plane service provides instructions to the data loader, wherein the instructions include one or more of: creating a new subscription and retrying a failure;
the configuration management service providing configuration data to the data loader, wherein the configuration data includes one or more of information about a data source to be connected to and how to connect to the specified data source;
the monitoring service monitors the status of the at least one predefined streaming data source and reroutes client queries if the data source becomes unavailable; and is also provided with
The rights service verifies that a client is allowed to access the stream data corresponding to the query.
11. The system of claim 10, wherein the parameters include a specific descriptor for at least one of securities and investments.
12. The system of claim 10, wherein the query further comprises an identification of a streaming data source.
13. The system of claim 10, wherein the stream data comprises market data.
14. The system of claim 10, wherein the web services layer outputs the snapshot delayed by a predetermined amount of time.
15. The system of claim 14, wherein the time period is based on one or more rules associated with the stream data.
16. The system of claim 10, wherein the query is received from at least one of a cloud application and a local application.
17. The system of claim 10, further comprising:
a rights service layer for a server that verifies that the client is authorized to access information responsive to the query.
18. The system of claim 17, wherein the rights service layer further authenticates the client based on at least one client credential received from the client.
19. The system of claim 11, wherein the snapshot is accurate for the at least one of the securities and the investments for a particular time.
20. The system of claim 11, wherein the snapshot includes an appropriate status for the at least one of the securities and the investments for a particular time.
HK62020001323.8A 2016-12-14 2017-12-13 Systems and methods for aggregating, filtering, and presenting streaming data HK40011781B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/378,501 2016-12-14
US62/534,749 2017-07-20

Publications (2)

Publication Number Publication Date
HK40011781A HK40011781A (en) 2020-07-17
HK40011781B true HK40011781B (en) 2023-10-06
