HK1177272B - Solver-based visualization framework
Background
Information is often most effectively conveyed to a person visually. Accordingly, millions of people process a wide variety of visual items in order to convey or receive information, and in order to collaborate. Such visual items may include, for example, conceptual diagrams, engineering drawings, bills of materials, three-dimensional models depicting various structures such as buildings or molecular structures, training materials, illustrated installation instructions, planning drawings, and so forth.
More recently, such visual items have been constructed electronically using, for example, Computer Aided Design (CAD) and solid modeling applications. These applications often allow authors to attach data and constraints to geometric shapes. For example, an application for building a bill of materials may allow attributes such as part number and supplier to be associated with each part, constraints such as the maximum angle between two components to be specified, and so forth. An application for building an electronic version of a carousel may have tools for specifying a minimum gap between seats, and so on.
Such applications have contributed significantly to advances in design and technology. However, any given application has limits on the type of information that may be visually conveyed, how that information is visually conveyed, and the scope of data and behavior that may be attributed to the various visual representations. If those limits are to be exceeded, a computer programmer must typically write a new application that extends the capabilities of the existing application, or provide an entirely new application. Likewise, there are limits on how many users (beyond the actual author of the model) can manipulate the model to test various scenarios.
Disclosure of Invention
Embodiments described herein relate to a visualization framework in which solvers may be used to determine properties of view components. In some cases, a solver may be written explicitly using a relational structure, such as a dependency tree. In other cases, a solver may be expressed implicitly through property-setters whose solvers call other property-setters that in turn have solvers. This may allow authors to create and modify view compositions more quickly.
This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Brief Description of Drawings
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of various embodiments will be rendered by reference to the appended drawings. Understanding that these drawings depict only example embodiments and are therefore not to be considered limiting of the invention's scope, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
FIG. 1 illustrates an environment in which the principles of the present invention may be employed, including a data-driven composition framework that constructs a view composition that depends on input data;
FIG. 2 illustrates a pipeline environment representing one example of the environment of FIG. 1;
FIG. 3 schematically illustrates an embodiment of a data portion of the pipeline of FIG. 2;
FIG. 4 schematically illustrates an embodiment of an analytics portion of the pipeline of FIG. 2;
FIG. 5 schematically illustrates an embodiment of a view portion of the pipeline of FIG. 2;
FIG. 6 schematically illustrates a data stream object capable of enumerating all or a subset of elements of a data stream;
FIG. 7 shows a rendering of a view composition that may be constructed by the pipeline of FIG. 2;
FIG. 8 illustrates a flow diagram of a method for generating a view composition using the pipeline environment of FIG. 2;
FIG. 9 illustrates a flow diagram of a method for regenerating a view composition in response to user interaction with the view composition using the pipeline environment of FIG. 2;
FIG. 10 schematically illustrates, in further detail, the solvers of the analytics portion of FIG. 4, including a collection of specialized solvers;
FIG. 11 shows a flow diagram of a method by which the solver of FIG. 10 solves for unknown model parameters by coordinating the actions of a collection of specialized solvers;
FIG. 12 schematically illustrates a solver environment that may represent an example of the solver of FIG. 10;
FIG. 13 illustrates a flow diagram of a method for solving an analytical model using the solver environment of FIG. 12;
FIG. 14 illustrates a flow diagram of a method for solving for model variables using the solver environment of FIG. 10;
FIG. 15 schematically illustrates an embodiment of a solver environment;
FIG. 16 shows a flow diagram of a method that may be performed by the solver environment shown in FIG. 15;
FIG. 17 shows a rendering of an integrated view composition that extends the example of FIG. 7;
FIG. 18A illustrates a view composition in which multiple visual items are adorned with visual cues indicating that the corresponding visual items can be interacted with by scrolling;
FIG. 18B illustrates the view composition of FIG. 18A after a scroll interaction;
FIG. 19A illustrates a view composition in which zoom interactivity is enabled;
FIG. 19B illustrates the view composition of FIG. 19A after the zoom interaction;
FIG. 20 illustrates a view composition in the form of a United States map in which data for each state is represented as height, with some state visual items including visual cues indicating possible interactivity with the state visual item;
FIG. 21 illustrates a view composition in the form of combined, interrelated pie and bar charts that include visual cues illustrating possible interactivity;
FIG. 22 illustrates a view composition in the form of a hurricane track progression diagram, in which various visual cues identify interactivity;
FIG. 23 shows a flow diagram of a method for providing interactivity in a view composition;
FIG. 24 illustrates a user interface in which various visual items may be integrated and merged;
FIG. 25 shows a first stage of integration, in which only the spiral is shown as the shape onto which the data is to be mapped;
FIG. 26 shows a second stage of integration, in which the spiral of FIG. 25 is bound to one data series;
FIG. 27 shows a third stage of integration, in which the spiral of FIG. 25 is bound to two data series;
FIG. 28 shows the final stage of integration, in which the spiral of FIG. 25 is bound to the chart shown;
FIG. 29 illustrates a visualization of a shelf layout and represents only one of a myriad of applications to which the principles described herein may be applied;
FIG. 30 shows a visualization of urban planning to which the principles described herein may also be applied;
FIG. 31 illustrates a conventional visualization comparing children's education, to which the principles of the present invention may be applied to create a more dynamic learning environment;
FIG. 32 illustrates a conventional visualization of comparative population density to which the principles of the present invention may be applied to create a more dynamic learning environment;
FIG. 33 shows a visualization of a view component applied to a first parameter target;
FIG. 34 illustrates a visualization of the view component shown in FIG. 33 applied to a second parameter target;
FIG. 35 illustrates a classification environment in which the classification component of FIG. 2 may operate;
FIG. 36 illustrates an example of a classification of the member items of FIG. 35;
FIGS. 37A to 37C show three examples of classification of related categories;
FIG. 38 is a diagram showing a member item including a plurality of attributes;
FIG. 39 illustrates a domain-specific classification and represents one example of the domain-specific classification of FIG. 35;
FIG. 40 shows a flow diagram of a method for navigating and using analytics;
FIG. 41 shows a flow diagram of a method for searching using analytics; and
FIG. 42 illustrates a computing system that represents an environment in which the composition framework of FIG. 1 (or portions thereof) may be implemented.
Detailed Description
FIG. 1 shows a visual composition environment 100 that uses data-driven analytics and visualization of the analysis results. The environment 100 (also referred to below as a "pipeline") includes a composition framework 110, which executes logic independently of the problem domain of the view construction 130. For example, the same composition framework 110 may be used to compose interactive view compositions for city planning, molecular models, grocery store shelf layouts, machine performance or assembly analysis, or other domain-specific presentations. As described below, the analytics can be used to search and explore in various scenarios. First, however, the basic composition framework 110 will be described in detail.
The composition framework 110 performs analytics 121 using domain-specific data 120, which is taxonomically organized in a domain-specific manner, to construct the actual domain-specific view construction 130 (also referred to herein as the "view composition"). Thus, by changing the domain-specific data 120, view compositions for any number of different domains can be constructed using the same composition framework 110, rather than having to re-code the composition framework 110 itself. As such, by changing data rather than re-coding and re-compiling, the composition framework 110 of the pipeline 100 can potentially be adapted to an unlimited number of problem domains, or at least to a wide variety of problem domains. The view composition 130 may then be provided as instructions to an appropriate 2-D or 3-D rendering module. The architecture described herein also allows pre-existing view compositions to be conveniently incorporated as building blocks into new view compositions. In one embodiment, multiple view compositions may be included in an integrated view composition to facilitate comparison between two possible solutions of a model.
FIG. 2 illustrates an example architecture of the composition framework 110 in the form of a pipeline environment 200. The pipeline environment 200 includes, among other things, the pipeline 201 itself. The pipeline 201 includes a data portion 210, an analytics portion 220, and a view portion 230, which are described in detail with reference to FIGS. 3 through 5, respectively, and the accompanying description that follows. At a general level, the data portion 210 of the pipeline 201 can accept a variety of different types of data and provide that data to the analytics portion 220 of the pipeline 201 in a canonical form. The analytics portion 220 binds the data to various model parameters and solves for the unknowns among the model parameters using model analytics. The various parameter values are then provided to the view portion 230, which uses those model parameter values to construct the composite view. The pipeline environment 200 also includes an authoring component 240 that allows an author or other user of the pipeline 201 to formulate and/or select data to provide to the pipeline 201. For example, the authoring component 240 may be used to provide data to each of the data portion 210 (represented by the input data 211), the analytics portion 220 (represented by the analytics data 221), and the view portion 230 (represented by the view data 231). The various data 211, 221, and 231 represent examples of the domain-specific data 120 of FIG. 1 and are described in more detail below. The authoring component 240 supports providing various data including, for example, data patterns, actual data for use by the model, locations or ranges of possible locations of data brought in from external sources, visual (graphical or animated) objects, user interface interactions that can be performed on a screen, modeling statements (e.g., views, equations, constraints), bindings, and so forth.
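By way of illustration only, the following Python sketch (all names are hypothetical and invented for exposition; this is not the implementation of any disclosed embodiment) shows the three-stage flow just described: the data portion normalizes heterogeneous input, the analytics portion binds fields to model parameters and solves, and the view portion emits rendering instructions.

```python
# Minimal sketch of the three-stage pipeline described above (illustrative
# names only). Each stage receives input data, transforms it, and passes
# the result to the next stage.

def data_portion(raw_input):
    """Normalize heterogeneous input into a canonical form (here, a dict)."""
    return {k.lower(): v for k, v in raw_input.items()}

def analytics_portion(canonical):
    """Bind canonical fields to model parameters and solve for unknowns."""
    params = dict(canonical)
    # Trivial example model: profit = revenue - cost
    params["profit"] = params["revenue"] - params["cost"]
    return params

def view_portion(params):
    """Produce rendering instructions from solved model parameters."""
    return [f"draw bar '{name}' height={value}" for name, value in params.items()]

# Driving the pipeline end to end: changing only the data (not the code)
# changes the resulting view composition.
instructions = view_portion(analytics_portion(data_portion(
    {"Revenue": 120, "Cost": 80})))
for line in instructions:
    print(line)
```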
In one embodiment, the authoring component is only part of the functionality provided by an overall manager component (not shown in FIG. 2, but represented by the composition framework 110 of FIG. 1). The manager is the overall director that controls and sequences the operation of all the other components (such as data connectors, solvers, viewers, and so forth) in response to events, such as user interaction events, external data events, and events from any of the other components, such as the solvers, the operating system, and so forth.
The authoring component 240 also includes a search tool 242 that allows searches to be performed. The search tool 242 can take advantage of the data-driven analysis capabilities of the pipeline 201 in order to perform complex search operations. For example, in some cases, one or more parameters of the search may need to be solved first in order to complete the search.
As an example, assume that the data-driven analytics relate to a city map, and that the analytics are capable of solving for the typical noise level at specific coordinates in the city. In that case, a person searching for real estate can search not only on typical search parameters such as square footage, price range, number of rooms, and so forth, but also on analytics-driven parameters. For example, although there are unlimited ways in which the principles may be applied, several different examples of how a flexible real estate search may be implemented will now be provided.
As part of specifying search parameters, the user may indicate that any one or more of the following or other search terms are desired:
1) only those houses that experience an average noise level below a certain upper limit;
2) only those houses for which the accumulated commute time to the user's workplace and gym, Monday through Thursday, is 300 minutes or less;
3) only those houses that experience less than a certain threshold of traffic flow on any road within a fifth of a mile, and that are predicted to remain below that traffic flow level for the next 10 years;
4) only those houses that are not in the shadow of a mountain at any time of year after 9:15 am;
5) only those houses that have enough existing trees on the property that, within 10 years, the trees will cover at least 50% of the area of the roof;
6) and so on.
Such real estate searches cannot easily be accomplished using conventional techniques. In each of these examples, the requested house search data may not exist; however, the principles described herein allow search data for various custom parameters to be generated at runtime, while the search is being performed. Likewise, the search may also utilize any search data that has been solved for in advance. This provides the user with a variety of search and exploration capabilities and opens up entirely new ways of exploring a problem space. The underlying data-driven analytics mechanisms that support these types of searches will now be described.
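A minimal sketch of such an analytics-backed search follows (the solve_noise_level model, the listing data, and the threshold values are all invented for exposition): filter values that are absent from the stored data are solved for at query time.

```python
# Hypothetical sketch of an analytics-backed search: some filter values do
# not exist in the stored listing data and are computed when the query runs.

def solve_noise_level(house):
    # Stand-in for a real acoustic model solved over city data.
    return 40 + 2 * house["road_proximity"]

listings = [
    {"id": 1, "price": 450_000, "road_proximity": 3},
    {"id": 2, "price": 500_000, "road_proximity": 9},
]

# Search: price under 480k AND solved noise level under 50 dB.
results = [h for h in listings
           if h["price"] < 480_000 and solve_noise_level(h) < 50]
print(results)  # only house 1 qualifies
```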
Traditionally, the lifecycle of an interactive view composition application involves two critical times: authoring time and use time. At authoring time, the functionality of the interactive view composition application is coded by a programmer to provide the desired domain-specific interactive view composition. For example, the author of an interior design application (typically a computer programmer) may write code for an application that allows a user to perform a limited set of actions specific to interior design.
At use time, a user (perhaps a homeowner or a professional interior designer) may use the application to perform any one or more of the limited set of actions hard-coded into the application. In the interior design application example, the user may specify the dimensions of the virtual room being displayed, add furniture and other interior design components to the room, perhaps rotate the view to obtain various angles of the room, set the color of each item, and so on. However, unless the user is a programmer who does not mind reverse-engineering and modifying the interior design application, the user is limited to the set of actions allowed by the application author. For example, unless the capability is provided by the application, the user would not be able to use the application to automatically figure out which window layout would minimize ambient noise, how the room layout fares according to "Feng Shui" rules, or how to minimize solar contribution.
In the pipeline environment 200 of FIG. 2, by contrast, the authoring component 240 is used to provide data to the existing pipeline 201, and it is this data that drives the entire process, from defining the input data, to defining the analytical model, to defining how the results of the analysis are visualized in the view composition. Thus, the pipeline 201 can be adapted to any of a variety of domains and problems without any code having to be written. Only the data provided to the pipeline 201 needs to change in order to apply the pipeline 201 to visualize a different view composition from a different problem domain, or perhaps to adjust how a problem is solved in an existing domain. The pipeline environment 200 may also include a classification component 260 that organizes, classifies, and associates the data provided to the pipeline 201. The classification component 260 may be domain-sensitive. As such, each of an interior design domain, a highway design domain, an architectural domain, and a "Feng Shui" domain may have a different classification that can be used to navigate through the data. This may be useful because, as described below, there may be a large amount of data to sift through, since the data sets available to the pipeline environment 200 may continue to increase as compositions are built.
Furthermore, since the data can change at use time (i.e., runtime) as well as at authoring time, the model can be modified and/or extended at runtime. As such, there is little, if any, distinction between authoring the model and running the model. All authoring involves editing data items, and the software derives all of its behavior from the data, so every change to the data immediately affects behavior without any need for recoding and recompilation.
The pipeline environment 200 also includes a user interaction response module 250 that detects when a user has interacted with the displayed view composition and then determines what operation to perform in response. For example, some types of interactions may require no change in the data provided to the pipeline 201, and thus no change to the view composition. Other types of interactions may change one or more of the data 211, 221, or 231. In that case, the new or modified data may result in new input data being provided to the data portion 210, may require reanalysis of the input data by the analytics portion 220, and/or may require re-visualization of the view composition by the view portion 230.
Thus, the pipeline 201 can be used to extend data-driven analytical visualization to perhaps an unlimited number of problem domains, or at least to a wide variety of problem domains. Furthermore, the view composition can be changed to address various problems without one necessarily being a programmer. Each of the data portion 210, the analytics portion 220, and the view portion 230 of the pipeline 201 will now be described in that order, with reference to the data portion 300 of FIG. 3, the analytics portion 400 of FIG. 4, and the view portion 500 of FIG. 5. In addition, the classification of data is domain-specific, allowing the organization of the data to be more intuitive to those operating in the domain. As will be apparent from FIGS. 3 through 5, the pipeline 201 may be constructed as a series of transformation components, where each of these components 1) receives some appropriate input data, 2) performs some action in response to that input data (such as performing a transformation on the input data), and 3) outputs data that then serves as input data to the next transformation component.
The pipeline 201 may be implemented on a client, on a server, or even distributed between a client and a server, without restriction. For example, the pipeline 201 may be implemented on a server and provide rendering instructions as output. A browser on the client side can then perform rendering simply by following the rendering instructions received from the server. Alternatively, the pipeline 201 can reside on the client on which authoring and/or use is performed. Even when the pipeline 201 is entirely on the client, the pipeline 201 can still search data sources external to the client for appropriate information (e.g., models, connectors, normalizers, schemas, and others). There are also embodiments that provide a hybrid of these two approaches. For example, in one such hybrid approach, the model is hosted on a server, but a web browser module is dynamically loaded on the client so that some of the model's interaction and viewing logic runs on the client (thus allowing richer and faster interaction and views).
FIG. 3 shows just one of many possible embodiments of a data portion 300 of the pipeline 201 of FIG. 2. One of the functions of the data portion 300 is to provide data in a canonical format that conforms to a schema understood by the analytics portion 400 of the pipeline, discussed with reference to FIG. 4. The data portion includes a data access component 310 that accesses heterogeneous data 301. The input data 301 may be "heterogeneous" in the sense that the data may (but need not) be presented to the data access component 310 in a canonical form. In fact, the data portion 300 is structured such that the heterogeneous data can be in a variety of formats. Examples of the different kinds of domain data that can be accessed and operated on by models include text and XML documents, tables, lists, hierarchies (trees), SQL database query results, BI (business intelligence) cube query results, graphical information such as 2-D drawings and 3-D visual models in various formats, and combinations thereof (i.e., composites). Further, the kinds of data that can be accessed can be declaratively extended by providing a definition (e.g., a schema) for the data to be accessed. Thus, the data portion 300 permits a wide variety of heterogeneous input into the model, and also supports runtime, declarative extension of the accessible data types.
In one embodiment, the data portion 300 includes a number of connectors for obtaining data from a number of different data sources. Since one of the primary functions of a connector is to place the corresponding data into canonical form, such connectors are often referred to hereinafter, and in the drawings, as "normalizers". Each normalizer may understand the specific Application Programming Interface (API) of its corresponding data source. The normalizer may also include corresponding logic for interfacing with that API to read and/or write data from and to the data source. Thus, normalizers act as a bridge between external data sources and the in-memory image of the data.
The data access component 310 evaluates the input data 301. If the input data is already canonical and, therefore, can be processed by the analytics portion 400, the input data may be provided directly as canonical data 340 for input to the analytics portion 400.
However, if the input data 301 is not canonical, the appropriate data normalization component 330 can convert the input data 301 into the canonical format. The data normalization component 330 is actually a collection of data normalization components 330, each capable of converting input data having particular characteristics into canonical form. The collection 330 is illustrated as including four normalization components 331, 332, 333, and 334. However, the ellipsis 335 indicates that there may be other numbers of normalization components, perhaps even fewer than the four illustrated.
The input data 301 may even include an identification of a normalizer itself, along with the associated data characteristics. The data portion 300 can then register the associated data characteristics and add the provided normalization component to the collection of data normalization components 330, where it becomes one of the available normalization components. If input data having those characteristics is later received, the data access component 310 can assign that input data to the corresponding normalization component. Normalization components can also be looked up dynamically from external sources, such as from a library of defined components on the web. For example, if the schema of a given data source is known but the required normalizer does not exist locally, the normalizer may be located in an external component library, assuming such a library can be found and contains the required component. The pipeline may also parse data whose schema is not yet known and compare the results of the parse against schema information in a library of known components, in an attempt to dynamically determine the type of the data and thereby locate the required normalizer component.
Alternatively, rather than the input data including the entire normalization component, the input data may instead provide a transformation definition that defines the normalizing transformation. The collection 330 may then be configured to convert the transformation definition into a corresponding normalization component that implements the transformation, along with zero or more standard default normalizing transformations. This represents an example of a case in which the data portion 300 consumes the input data without providing corresponding normalized data further down the pipeline. In perhaps the most common case, however, the input data 301 results in the generation of corresponding normalized data 340.
In one embodiment, the data access component 310 may be configured to assign input data to a data normalization component based on the file type and/or format type of the input data. Other characteristics may include, for example, the source of the input data. A default normalization component may be assigned to input data for which no corresponding normalization component is specified. The default normalization component can apply a set of rules in an attempt to normalize the input data. If the default normalization component is not able to normalize the data, it can trigger the authoring component 240 of FIG. 2 to prompt the user to provide a schema definition for the input data. If a schema definition does not already exist, the authoring component 240 may present a schema definition assistant to help the author generate a corresponding schema definition that can be used to convert the input data into canonical form. Once the data is in canonical form, the schema that accompanies the data provides enough of a description of the data that the rest of the pipeline 201 does not need new code to interpret the data. Instead, the pipeline 201 includes code capable of interpreting data according to any schema expressible in an accessible schema declaration language.
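The dispatch-and-default behavior described above can be sketched as follows (illustrative Python; the registry keyed by file extension and all component names are assumptions for exposition, not the disclosed design):

```python
# Sketch of normalizer dispatch: normalizers are registered against a data
# characteristic (here, a file extension) and may be added at runtime; a
# default normalizer handles otherwise-unmatched input.
import csv, io, json

normalizers = {}

def register_normalizer(characteristic, fn):
    normalizers[characteristic] = fn  # dynamic, declarative extension

register_normalizer(".json", lambda text: json.loads(text))
register_normalizer(".csv", lambda text: list(csv.DictReader(io.StringIO(text))))

def default_normalizer(text):
    # A real default would apply heuristic rules, then prompt for a schema.
    raise ValueError("no normalizer matched; a schema definition is required")

def to_canonical(filename, text):
    ext = filename[filename.rfind("."):]
    return normalizers.get(ext, default_normalizer)(text)

print(to_canonical("parts.csv", "part,supplier\nbolt,Acme"))
```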
One type of data is a data stream object, such as the data stream object 600 shown and described with reference to FIG. 6. The data stream object 600 includes an enumeration module 601 that is capable of enumerating a range of the data stream 602 associated with the data stream object. That range may encompass virtually the entire data stream 602. Alternatively, the data stream 602 may be a "pseudo-infinite" or "partially pseudo-infinite" data stream. In this description and in the claims, a "pseudo-infinite" data stream is a data stream that is too large to fit entirely within the volatile memory of the computing system that is processing the data stream object 600. A "partially pseudo-infinite" data stream is defined as a data stream that would occupy at least half of the volatile memory of the computing system that is processing the data stream object. The enumeration module 601 may enumerate only a portion of the data stream in response to a request from an external module, such as, for example, the data portion 210, the analytics portion 220, or the view portion 230 of FIG. 2. The enumeration module 601 may require enumeration to begin at the first element of the data stream. Alternatively, the enumeration module 601 may allow enumeration to begin at any other point in the data stream (i.e., may allow mid-stream enumeration) without first enumerating from the beginning of the data stream.
The data stream object 600 is capable of identifying a requested portion of the data stream based on attributes associated with the requested member elements of the data stream. In that case, the data stream object may include logic for processing the request and identifying all member elements of the data stream that have the requested attributes. For example, the request might be for all elements of a city that are visible looking down from an altitude of 1000 feet at particular longitude and latitude coordinates. Alternatively, the data stream object can also respond to explicit requests for member elements.
As an example, a pseudo-infinite data stream might be a description of a real or fictional city, including a description of every item within the city at every imaginable level of detail. This may be too much information to represent in memory all at once. As such, the data stream object 600 may provide only the relevant information needed to render the current view. For example, if the city is being viewed from an altitude of 100 miles, perhaps only the city's boundary information is relevant. If a portion of the city is being viewed from an altitude of 10 miles, perhaps only information about larger objects (such as airports, parking lots, reservoirs, and the like, and only if they are in view) is provided. If a portion of the city is being viewed from an altitude of one mile, the information needed to render the streets and buildings within the view may be provided by the data stream object. If a portion of the city is being viewed from an altitude of one thousand feet, the information needed to render more detail of the streets (i.e., number of lanes, street names, arrows, and so forth) may be provided, as well as more detail about the buildings (textures, window locations, and so forth). Naturally, as this zooming-in occurs, the information relates to a much smaller geographic area. In a one-hundred-foot view, the data stream may provide information about shrubs, manholes, gutters, ladders, vending machines, pedestrian crossings, and the like. For a conventional computer, it could be difficult to keep this amount of data in memory for the entire city.
As another example, a pseudo-infinite data stream may be literally infinite, as in the case of a fractal. A fractal is mathematically defined such that repeating shapes remain visible regardless of how far one magnifies any portion of the fractal. It is always possible to zoom in further, infinitely; the same is true of zooming out. Literally representing such fractal geometry would require an unlimited amount of information. However, a data stream object may embody the notion of how the fractal is mathematically defined. If the data stream object is asked to produce the fractal at a particular level of detail, the data stream object can compute the data applicable to that level of detail and provide that data.
In this manner, a pseudo-infinite data stream object is able to generate any requested range of data, whether by evaluating expressions that define the detail and/or by accessing data external to the data stream object. Such ranges are examples of the hierarchical data provided to the analytics portion 400 of the pipeline 201. In one embodiment, the protocol used to communicate with the data stream object is the same regardless of the size of the data stream itself. In this manner, an author may test a model against a data stream object associated with a smaller data stream. Then, once the author is confident in the model, the author can simply swap in a data stream object associated with an infinite, pseudo-infinite, or partially pseudo-infinite data stream without changing the model itself.
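A minimal sketch of such a data stream object follows (hypothetical names; the element function stands in for a fractal-style mathematical definition): elements are computed on demand, so a requested range can be produced without materializing, or even reading past, the beginning of the stream.

```python
# Sketch of a data stream object for a pseudo-infinite stream: elements are
# computed on demand, so only the requested range ever exists in memory, and
# enumeration may begin mid-stream without reading from the start.

class DataStreamObject:
    def __init__(self, element_fn):
        self.element_fn = element_fn  # defines element i mathematically

    def enumerate_range(self, start, count):
        """Enumerate `count` elements beginning at `start` (mid-stream OK)."""
        return (self.element_fn(i) for i in range(start, start + count))

# A literally infinite stream defined by an expression, as with a fractal:
squares = DataStreamObject(lambda i: i * i)
print(list(squares.enumerate_range(1_000_000, 3)))  # no earlier elements built
```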
Whatever form the canonical data 340 takes, the canonical data 340 is provided as output from the data portion 300 and as input to the analytics portion 400. The canonical data may include fields of various data types. For example, a field may contain simple data types such as integers, floating point numbers, strings, vectors, arrays, collections, hierarchies, text, XML documents, tables, lists, SQL database query results, BI (business intelligence) cube query results, graphical information such as 2-D drawings and 3-D visual models in various formats, or even complex combinations of these various data types. As another advantage, the canonicalization process can normalize a wide variety of input data. In addition, the kinds of input data that the data portion 300 can accept are extensible. This is helpful when combining multiple models, as discussed later in this description.
FIG. 4 shows an analytics portion 400 that represents an example of the analytics portion 220 of the pipeline 201 of FIG. 2. The data portion 300 provides the normalized data 401 to a data-model binding component 410. The normalized data 401 may have any canonical form, and any number of parameters, where the form and number of parameters may even differ from one piece of input data to another. For purposes of discussion, however, the normalized data 401 has fields 402A through 402H, which may be referred to collectively herein as "fields 402".
The analytics portion 400, for its part, includes a number of model parameters 411. The type and number of model parameters may vary from model to model. However, for purposes of discussing a particular example, the model parameters 411 will be discussed as including model parameters 411A, 411B, 411C, and 411D. In one embodiment, the identity of the model parameters and the analytical relationships among the model parameters may be defined declaratively, without the use of imperative coding.
The data-model binding component 410 intercedes between the normalized data fields 402 and the model parameters 411, thereby providing bindings between the fields and the parameters. In this case, the data field 402B is bound to the model parameter 411A, as represented by arrow 403A. In other words, the value from data field 402B is used to populate model parameter 411A. Likewise, in this example, data field 402E is bound to model parameter 411B (as represented by arrow 403B), and data field 402H is bound to model parameter 411C (as represented by arrow 403C).
The data fields 402A, 402C, 402D, 402F, and 402G are not shown as bound to any model parameter. This emphasizes that not all data fields in the input data need always be used as model parameters. In one embodiment, one or more of these data fields may instead be used to provide instructions to the data-model binding component 410 as to which fields of the normalized data (for this normalized data, or perhaps for any similar normalized data received in the future) are to be bound to which model parameters. This represents an example of the kind of analytics data 221 that may be provided to the analytics portion 220 of FIG. 2. The definition of which data fields of the normalized data are bound to which model parameters may be formulated in a number of ways. For example, the bindings may be 1) explicitly set by the author at authoring time, 2) explicitly set by the user at use time (subject to any restrictions imposed by the author), 3) automatically bound by the authoring component 240 based on algorithmic heuristics, and/or 4) prompted for, with the authoring component asking the author and/or the user to specify a binding when it is determined that the binding cannot be established algorithmically. The bindings may also be resolved as part of the model logic itself.
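A sketch of such a declarative binding follows (the field and parameter names are invented for exposition): because the binding is itself data, it can be set at authoring time, changed at use time, or produced by heuristics, all without code changes.

```python
# Sketch of a declarative field-to-parameter binding (illustrative names).
bindings = {"sqft": "Area", "asking": "Price"}   # data field -> model parameter

canonical_record = {"sqft": 1800, "asking": 450_000, "mls_id": "A17"}

model_params = {param: canonical_record[field]
                for field, param in bindings.items()}
print(model_params)           # {'Area': 1800, 'Price': 450000}
# 'mls_id' is simply unbound, and any model parameter absent here stays
# unknown until the solver determines it.
```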
The ability of the author to define which data fields map to which model parameters gives the author great flexibility to use whatever symbols the author finds comfortable in defining the model parameters. For example, if a model parameter represents pressure, the author may name that model parameter "Pressure", or "P", or any other symbol the author finds meaningful. The author may even rename a model parameter, which, in one embodiment, causes the data-model binding component 410 to update automatically so that bindings to the old parameter name become bindings to the new parameter name, thereby preserving the intended binding. This binding mechanism also allows bindings to be changed declaratively at runtime.
The model parameter 411D is shown with an asterisk to emphasize that the data-model binding component 410 has not assigned a value to the model parameter 411D. Accordingly, the model parameter 411D remains unknown; that is, no value has been assigned to it.
The modeling component 420 performs a number of functions. First, the modeling component 420 defines the analytical relationships 421 among the model parameters 411. The analytical relationships 421 fall into three general categories: equations 431, rules 432, and constraints 433. However, the list of solvers is extensible. For example, in one embodiment, one or more simulations may be included as part of the analytical relationships, provided that a corresponding simulation engine is supplied and registered as a solver.
The term "equation" as used herein is consistent with that term used in the field of mathematics.
The term "rule" as used herein denotes a conditional statement in which one or more actions (the result of the conditional statement (i.e., "then") portion) are taken if one or more conditions (the condition (i.e., "if") portion of the conditional statement) are satisfied. If one or more model parameters are expressed in a conditional statement or one or more model parameters are expressed in a result statement, a rule is applied to the model parameters.
The term "constraint," as used herein, means that a constraint is applied to one or more model parameters. For example, in a city planning model, a particular house element may be limited to being placed on a map location having a subset of the total possible partition destinations. The bridge elements may be limited below a certain maximum length or a certain number of lanes.
An author familiar with the model may provide the expressions for the equations, rules, and constraints that apply to the model. In the case of simulations, the author may provide an appropriate simulation engine that supplies the appropriate simulation relationships among the model parameters. The modeling component 420 may provide a mechanism for the author to provide a natural symbolic expression for each equation, rule, and constraint. For example, the author of a thermodynamics-related model may simply copy and paste equations from a thermodynamics textbook. The ability to bind model parameters to data fields allows the author to use whatever notation the author is familiar with (such as the notation used in the textbook the author relies on), or the exact notation the author prefers.
Prior to solving, the modeling component 420 also identifies which model parameters are to be solved for (hereinafter, the "output model variable" in the singular, "output model variables" in the plural, or "output model variable(s)" where there may be one or more). The output model variable(s) may be unknown parameters, or they may be known model parameters whose values are permitted to change during the solve operation. In the example of FIG. 4, after the data-model binding operation, the model parameters 411A, 411B, and 411C are known, and the model parameter 411D is unknown. Accordingly, the unknown model parameter 411D may be one of the output model variables. Alternatively or additionally, one or more of the known model parameters 411A, 411B, and 411C might be output model variables. The solver 440 then solves for the output model variable(s), if possible. In one embodiment described below, the solver 440 is able to solve for a variety of output model variables, even within a single model, so long as sufficient input model variables are provided to allow the solve operation to be performed. Input model variables may be, for example, known model parameters whose values are not permitted to change during the solve operation. For instance, in FIG. 4, if the model parameters 411A and 411D were input model variables, the solver might instead solve for the output model variables 411B and 411C. In one embodiment, the solver may output any of a number of different data types for a single model parameter. For example, certain equation operations (e.g., addition, subtraction, and the like) apply regardless of whether the operands are integers, floating point numbers, vectors, or matrices.
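Direction-agnostic solving of this kind can be sketched with the SymPy symbolic algebra library (an illustrative stand-in, not the disclosed solver 440; the toy equation and values are invented):

```python
# Direction-agnostic solving sketched with SymPy: the same declared equation
# is solved for whichever parameter is unknown, depending on which
# parameters arrive as inputs.
from sympy import symbols, Eq, solve

P, V, T = symbols("P V T")
equation = Eq(P * V, 10 * T)      # a toy model relating three parameters

# Inputs P and T known -> solve for V:
print(solve(equation.subs({P: 2, T: 300}), V))    # [1500]
# Same model, inputs V and T known -> solve for P instead:
print(solve(equation.subs({V: 1500, T: 300}), P))  # [2]
```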
While in no way a preferred embodiment, there is an embodiment in which the solver 440 is implemented by a spreadsheet program having multiple spreadsheets, whether those spreadsheets are multiple worksheets within a single spreadsheet file and/or spreadsheets in different spreadsheet files. In a typical spreadsheet there are multiple cells. Each cell may have a literal value, or an associated expression that can be resolved to a literal value. An expression may depend on values from other cells, and those values may themselves have been resolved from expressions associated with those other cells.
A spreadsheet works when solving in a single direction, in which the input model parameters are known and the output model parameters are to be solved for. However, one of the advantages of the solver 440 is that it permits different solve directions, depending on which model parameters are identified as input model parameters and which are identified as output model parameters, with the solve direction potentially changing from one solve to the next as the identities of the input and/or output model variables change. This can be handled in a spreadsheet program by dedicating a different spreadsheet to each possible solve direction. For example, if there are twenty possible solve directions, there may be a total of 20 spreadsheets, one for each direction.
Each such spreadsheet has appropriately linked cells with the appropriate expressions for solving in its direction. In this embodiment, a macro or other executable program, internal or external to the spreadsheet program, may perform the functions described for the modeling component 420: determine which parameters are input parameters and which parameters are to be solved for, select the appropriate spreadsheet for the given solve direction, and populate the appropriate spreadsheet fields for that direction. Once the input parameter fields are populated, the spreadsheet solves for the output model parameters using its linked expressions and populates the appropriate output model parameter fields with the resulting values. Recall, however, that a spreadsheet implementation is not a preferred way to implement the principles described herein. For example, if there are hundreds of possible solve directions, an embodiment that dedicates one spreadsheet to each direction would require hundreds of spreadsheets. If the analytics were then to change, this spreadsheet embodiment would involve manually sifting through each spreadsheet to see how its linked analytical expressions should be changed. This description now steps away from the particular spreadsheet embodiment and returns to a general discussion of the functionality of the solver 440.
In one embodiment, even where the solver 440 is unable to solve for a particular output model variable, the solver 440 may still present a partial solution for that output model variable, even though a complete solution to an actual numerical result (or whatever data type is being solved for) is not possible. This allows the pipeline to support incremental development by prompting the author as to what information is still needed to reach a complete solution. It also helps eliminate the distinction between authoring and use, since at least partial solutions are available throughout the various authoring stages. As an abstract example, suppose the analytical model includes the equation a = b + c + d. Now suppose that a, c, and d are output model variables, and b is an input model variable with a known value of 5 (an integer, in this case). During the solve, the solver 440 is able to solve for only one of the output model variables, "d", assigning it the value 6 (an integer), but the solver 440 cannot solve for "c". Because "a" depends on "c", the model parameter "a" also remains unknown and unsolved. In this case, instead of assigning an integer value to "a", the solver may perform a partial solve and output the string value "c + 11" for the model parameter "a". As previously mentioned, this can be particularly helpful while a domain expert is authoring the analytical model: it provides partial information about the content of the model parameter "a", and it prompts the author that some further model analytics must be provided before the "c" model parameter can be solved. This partial-solution result may be output in the view composition in some way, allowing the domain expert to see the partial result.
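The abstract example above can be reproduced with a symbolic algebra library such as SymPy (an illustrative stand-in for the partial-solve behavior, not the disclosed solver):

```python
# Partial solve sketched with SymPy: with b known and d solved but c still
# unknown, "a" resolves only partially, to a symbolic expression.
from sympy import symbols

b, c, d = symbols("b c d")
a = b + c + d                   # the model equation a = b + c + d
print(a.subs({b: 5, d: 6}))     # c + 11  -- a partial result, not a number
```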
The solver 440 is shown in simplified form in FIG. 4. However, the solver 440 may direct the operation of multiple constituent solvers, as described with reference to FIGS. 10 through 16. In FIG. 4, the modeling component 420 then makes the model parameters 411 (including the now-known and solved-for output model variables) available as output to be provided to the view portion 500 of FIG. 5.
FIG. 5 shows a view portion 500 that represents an example of the view portion 230 of FIG. 2. The view portion 500 receives the model parameters 411 from the analytics portion 400 of FIG. 4. The view portion also includes a view component repository 520 that contains a collection of view components. For example, the view component repository 520 in this example is illustrated as including view components 521 through 524, although the view component repository 520 may contain any number of view components. Each view component may have zero or more input parameters. For example, view component 521 has no input parameters, whereas view component 522 has two input parameters 542A and 542B. View component 523 has one input parameter 543, and view component 524 has one input parameter 544. Again, this is only an example. The input parameters may, but need not, affect how the visual item is rendered. The fact that view component 521 has no input parameters emphasizes that there can be views generated without reference to any model parameters. Consider, for example, a view that includes only constant, fixed (built-in) data. Such a view might, for example, constitute reference information for the user. Alternatively, consider a view that merely provides a way to browse a catalog, so that items can be selected from it for import into the model.
Each view component 521 through 524 includes or is associated with corresponding logic that, when executed by the view composition component 540 using the corresponding view component's input parameters (if any), causes a corresponding view item to be placed in virtual space 550. That view item may be a static image or object, or it may be a dynamic, animated item or object. For example, each of the view components 521 through 524 is associated with corresponding logic 531 through 534 that, when executed, causes the corresponding virtual items 551 through 554, respectively, to be rendered in virtual space 550. The virtual items are illustrated as simple shapes. However, virtual items may be quite complex in form, perhaps even including animation. In this description, when a view item is rendered in virtual space, this means that the view composition component has authored sufficient instructions that, when provided to a rendering engine, the rendering engine is able to display the view item at the specified location on the display and in the specified manner.
The view components 521 through 524 may be provided as view data to the view portion 500 using, for example, the authoring component 240 of FIG. 2. As described above with reference to FIG. 6, the view data may even be a range of a pseudo-infinite or partially pseudo-infinite data stream provided by a data stream object. For example, the authoring component 240 may provide a selector that allows the author to select from several geometric forms, or perhaps to construct other geometric forms. The author may also specify the types of the input parameters for each view component, while some input parameters may be default input parameters imposed by the view portion 500. The logic associated with each view component 521 through 524 may likewise be provided as view data, and/or may include some default functionality provided by the view portion 500 itself.
The view portion 500 includes a model-view binding component 510 configured to bind at least some of the model parameters to the corresponding input parameters of the view components 521 through 524. For example, model parameter 411A is bound to the input parameter 542A of view component 522, as indicated by arrow 511A. Model parameter 411B is bound to the input parameter 542B of view component 522, as indicated by arrow 511B. Likewise, model parameter 411D is bound to the input parameters 543 and 544 of view components 523 and 524, respectively, as indicated by arrows 511C. Model parameter 411C is not shown as bound to any view component parameter, emphasizing that not all model parameters need be used by the view portion of the pipeline, even if those model parameters were necessary in the analytics portion. Model parameter 411D is shown bound to two different view component input parameters, illustrating that a model parameter can be bound to multiple view component parameters. In one embodiment, the definition of the bindings between the model parameters and the view component parameters may be 1) explicitly set by the author at authoring time, 2) explicitly set by the user at use time (subject to any restrictions imposed by the author), 3) automatically bound by the authoring component 240 based on algorithmic heuristics, and/or 4) prompted for, with the author and/or the user asked to specify a binding when it is determined that the binding cannot be established algorithmically.
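A sketch of model-view binding follows (hypothetical names throughout): each binding maps a view component's input parameter to a model parameter, and model parameters left unbound are simply ignored by the view portion.

```python
# Sketch of model-to-view binding: model parameters feed the input
# parameters of view components, which emit rendering instructions.
def bar(height, color="blue"):
    return f"draw bar height={height} color={color}"

view_bindings = [
    # (view component, {view input parameter: model parameter})
    (bar, {"height": "profit"}),
    (bar, {"height": "revenue"}),
]

model_params = {"profit": 40, "revenue": 120, "cost": 80}  # 'cost' unbound

for component, binding in view_bindings:
    kwargs = {inp: model_params[mp] for inp, mp in binding.items()}
    print(component(**kwargs))
```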
Again, although not preferred, some or all of the view portion 500 may be implemented via a spreadsheet. For example, a single spreadsheet might serve as the basis for one or more view components, with the corresponding input parameters represented in corresponding spreadsheet cells. The associated view construction logic of a view component might be represented, at least in part, using linked expressions within the spreadsheet program. Rendering of the corresponding visual item could then be accomplished using the rendering capabilities of the spreadsheet program, a macro, or some other executable program. As mentioned above, however, this spreadsheet-based implementation is not the preferred embodiment, and this description now returns to the more general embodiment of the view portion 500.
As mentioned above, view items may include animation. As a simple example, consider a bar chart plotting a company's historical and projected revenue, advertising cost, and profit for each sales region at a given point in time (e.g., a given calendar quarter). A bar chart could be drawn for each calendar quarter in a desired span of time. Now imagine drawing one of these charts, say the one for the earliest time in the span, and then replacing it every half second with the chart for the next time span (e.g., the next quarter). The result would be that, as the animation progresses, the heights of the bars representing profit, sales, and advertising cost for each region change. In this example, the chart for each time period is a "cell" in the animation, where a cell shows a moment in the motion, and a set of cells shown in sequence simulates the motion. Conventional animation models allow such time-based animations to be generated using built-in, hard-coded chart types.
By contrast, using the pipeline 201, any kind of visual item can be animated, and the animation can be driven by changing any one or any combination of the visual component's parameters. Returning to the bar chart example above, imagine animating over advertising cost rather than over time. Each "cell" in this animation is a bar chart showing sales and profit over time for a given value of advertising cost. Thus, as advertising cost changes, the bars grow and shrink in response.
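A sketch of animating over a non-time parameter follows (the toy analytics and step values are invented for exposition): each animation cell re-solves the model for one value of the animation variable, so every bound visual responds to the sweep.

```python
# Sketch of animating over a model parameter rather than over time: each
# "cell" re-solves the model for one value of the animation variable.
def solve_model(ad_cost):
    sales = 200 + 3 * ad_cost          # toy analytics
    profit = sales - ad_cost - 100
    return {"sales": sales, "profit": profit}

for ad_cost in range(0, 50, 10):       # the animation variable's steps
    frame = solve_model(ad_cost)
    print(f"ad_cost={ad_cost:>2}  " +
          "  ".join(f"{k}: {'#' * (v // 20)}" for k, v in frame.items()))
```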
The power of animated data displays is that they make immediately obvious which parameters are most sensitive to changes in other parameters, because one sees at once how quickly, and by how much, each value changes in response to changes in the animation parameter.
The pipeline 201 is also distinctive in its animation capabilities due to the following features:
First, the sequence of steps of an animation variable can be computed by the analytics of the model, as opposed to being merely a fixed sequence of steps over a predefined range. For example, with advertising cost as the animation variable, one might specify "animate over advertising cost, with advertising cost increasing by 5% at each step" or "with advertising cost at each step being 10% of the total cost at that step". A much more complex example is "animate over advertising cost, where the advertising cost at each step is optimized to maximize the rate of change of sales over time". In other words, the solver would determine a sequence of advertising cost values over time (i.e., for each successive time period, such as a quarter) that maximizes the growth rate of sales. Here, the user presumably wants to see not only how quickly sales can be grown by varying advertising cost, but also the quarterly advertising budget needed to achieve that growth (the sequence of values can be plotted as part of the composite view).
Second, any kind of visual item can be animated, not just traditional data charts. For example, consider a Computer Aided Design (CAD) model of a jet engine that is 1) animated over a wind speed parameter, 2) where the rotational speed of the turbine is a function of wind speed, and 3) where the temperature of the turbine bearings is a function of wind speed. A jet engine has limits on how fast the turbine can rotate before the turbine blades fail or the bearings overheat. In this animation, then, we might specify that as wind speed changes, the color of the turbine blades and bearings should change from blue (safe) to red (critical). The "safe" and "critical" values of turbine RPM and bearing temperature can be computed by the model based on the physical characteristics of those components. Now, as the animation sweeps the wind speed across a defined range, we see both the turbine blades and the bearings change color. It is immediately interesting which reaches critical first, and whether either exhibits a sudden (runaway) approach to critical. Effects of this kind are difficult to discern by looking at a chart or a sequence of drawings, but become immediately apparent in an animation. This is just one example of animating an arbitrary visual item (a CAD model) by an arbitrary parameter (wind speed), with the animation in turn affecting other arbitrary parameters (turbine RPM and bearing temperature). Any parameter of any visual item can be animated according to any desired parameter serving as the animation variable.
Third, the pipeline 201 may be stopped mid-animation so that data and parameters may be modified by the user, and the animation then restarted or resumed. For example, in the jet engine example, if runaway heating is seen to begin at a given wind speed, the user may stop the animation at the onset of runaway, modify certain engine design criteria, such as the type of bearing or the bearing surface material, and then continue the animation to see the effect of the change.
As with other functions discussed herein, animations may be defined by the author and/or manipulated by the user to test various scenarios. For example, the model may be authored to allow certain visual items to be animated by the user according to user-selected parameters and/or within user-selected data ranges of the animation variables (including the capability of specifying that the range should be calculated). Such animations may also be displayed side-by-side, as in other "what-if" comparison displays. For example, a user may compare sales and profit animations over time under two scenarios with different future popularity rates or different advertising-cost slopes. In the jet engine example, the user may compare animations of the engine before and after changing the bearing design.
At this point, a specific example of how a view composition may actually be built using the composition framework will be described with reference to FIG. 7, which shows a 3-D view composition 700 that includes a room layout 701 (with furniture disposed within the room) and a "geomantic" meter 702. This example is provided merely to illustrate how the principles described herein may be applied to any arbitrary view composition, regardless of domain. Thus, the example of FIG. 7, as well as any other example view compositions described herein, should be viewed strictly as an example that allows the abstract concepts to be more fully understood by reference to non-limiting specific examples, and not as limiting the broader aspects of the invention. The principles described herein may be applied to construct an infinite variety of view compositions. Nevertheless, the broader abstract principles may be clarified with reference to specific examples.
FIG. 8 shows a flow diagram of a method 800 for generating a view composition. The method 800 may be performed by the pipeline environment 200 of FIG. 2, and thus will be described with frequent reference to the pipeline environment 200 of FIG. 2, as well as with reference to FIGS. 3 through 5, each of which shows specific portions of the pipeline of FIG. 2. Although method 800 may be performed to construct any view composition, method 800 will be described with reference to the view composition 700 of FIG. 7. Some of the acts of method 800 may be performed by the data portion 210 of FIG. 2 and are listed under the heading "data" in the left column of FIG. 8. Other acts of method 800 may be performed by the analytics portion 220 of FIG. 2 and are listed under the heading "analytics" in the second column from the left of FIG. 8. Other acts of the method may be performed by the view portion 230 of FIG. 2 and are listed under the heading "view" in the second column from the right of FIG. 8. One of the acts may be performed by the rendering module and is listed under the heading "other" in the right column of FIG. 8.
Referring to FIG. 8, the data portion accesses input data that at least collectively affects what visual items are to be displayed or how a given one or more of the visual items are to be displayed (act 811). For example, referring to FIG. 7, the input data may include a view component for each item of furniture. For example, each of a bed, a chair, a plant, a table, a flower, and even the room itself may be represented by a corresponding view component. The view component may have input parameters appropriate for the view component. For example, if animation is used, some input parameters may affect the flow of the animation. Some parameters may affect the display of the visual item, while some parameters may not.
For example, the room itself may be a view component. Some input parameters may include the size of the room, the orientation of the room, the wall color, the wall texture, the floor color, the floor type, the floor texture, the location and power of the light sources within the room, and so forth. There may also be room parameters that are not necessarily reflected in this view composition, but may be reflected in other views and uses of the room component. For example, a room parameter may specify the location of the room in degrees, minutes, and seconds of longitude and latitude. The room parameters may also include an identification of the author of the room component, as well as an average rent for the room.
The various components within the room may also be represented by corresponding parameterized view components. For example, each plant may be configured with input parameters specifying pot style, pot color, pot size, plant color, plant elasticity, plant dependence on sunlight, daily water uptake by the plant, daily oxygen production by the plant, plant location, and the like. Also, depending on the nature of what is being displayed, some of these parameters may affect how the display is presented, while others may not.
The "geomantic" meter 702 may also be a view component. The meter may include input parameters such as its diameter, the number of wedges included in the meter, the color of the text, and the like. The various wedges of the "geomantic" meter may also be view components. In this case, the input parameters to the view component may be a title (e.g., water, mountain, thunder, wind, fire, earth, lake, heaven), and perhaps a graphic, hue, etc. that appears in the wedge.
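To make the notion of a parameterized view component concrete, here is a minimal sketch; the structure and every parameter name are hypothetical illustrations of the components described above, not the framework's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class ViewComponent:
    name: str
    params: dict = field(default_factory=dict)   # input parameters

# Some parameters drive the display; others (e.g. latitude) may only be
# used by other views of the same component.
room = ViewComponent("room", {
    "size": (5.0, 4.0), "wall_color": "ivory", "floor_type": "oak",
    "latitude": "22d16m42s N",
})
chair = ViewComponent("chair", {"position": (1.2, 0.8), "color": "red"})
meter = ViewComponent("geomantic_meter", {"diameter": 1.0, "wedge_count": 8})
```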
The analytics portion binds the input data to the model parameters (act 821), determines the output model variables (act 822), and solves for the output model variables using the model-specific analytical relationships between the model parameters (act 823). The binding operation of act 821 has been discussed above; it basically allows the author the flexibility to define the model's analytical equations, rules, and constraints using symbols the author is comfortable with. The more complex solvers described with reference to FIGS. 10 through 16 may be employed to solve for the output model variables (act 823).
The identification of the output model variables may differ from one solve operation to the next. Even though the model parameters may remain the same, the identification of which model parameters are output model variables will depend on the availability of data to be bound to the particular model parameters. This has significant implications in allowing a user to perform "what-if" analysis in a given view composition.
For example, in the "geomantic" room example of FIG. 7, assume that a user purchases a new chair for their living room. The user may provide the design of the room as data to the pipeline. This may be facilitated by an authoring component that prompts the user to enter the room dimensions, and perhaps also provides a selection tool that allows the user to select virtual furniture to drag and drop into the virtual room at the locations where the actual furniture is placed in the actual room. The user may then select a piece of virtual furniture and edit it to have the characteristics of the new chair the user purchased. The user may then drag and drop the chair into the room. The "geomantic" meter 702 will be automatically updated. In this case, the position and other attributes of the chair are the input model variables, while the "geomantic" score is the output model variable. As the user drags the virtual chair to various locations, the "geomantic" score on the "geomantic" meter will be updated; in this way, the user may test the "geomantic" consequences of placing the virtual chair in various locations. To avoid the user having to drag and drop the chair to every possible location to see which location provides the best "geomancy," the user may be given local visual cues (such as, for example, gradient lines or arrows) that tell the user in which direction to move the chair from its current location so that the "geomancy" becomes better or worse, and by how much.
However, the user may also perform other operations unheard of in conventional view composition. The user may actually change the output model variables. For example, the user may indicate a desired "geomantic" score on the "geomantic" meter and let the position of the virtual chair become the output model variable. The solver will then solve for the output model variable and provide one or more suggested positions of the chair that would at least achieve the specified "geomantic" score. The user may choose to have multiple parameters be output model variables, and the system may provide multiple solutions for these output model variables. This is facilitated by the complex solvers described in more detail with reference to FIGS. 10 through 16.
Returning to FIG. 8, once the output model variables are solved for, the model parameters are bound to the input parameters of the parameterized view components (act 831). For example, in the "geomantic" example, after an unknown "geomantic" score is solved for, the score is bound as an input parameter to the "geomantic" meter view component, or perhaps to the appropriate wedges contained in the "geomantic" meter. Alternatively, if the "geomantic" score is an input model variable, the position of the virtual chair can be solved for and provided as an input parameter to the chair view component.
A simplified example will now be presented to illustrate how a solver rearranges equations as the designation of the input and output model variables of an analytical model changes; the user does not have to rearrange the equations himself. This simplified example may not accurately represent any "geomantic" rule, but it illustrates the principle. Suppose the total "geomantic" score (FS) of a room (FSroom) is equal to the FS of a chair (FSchair) plus the FS of a plant (FSplant). Suppose FSchair is equal to a constant A times the distance d of the chair from the wall. Suppose FSplant is a constant B. The total FS of the room is then FSroom = A × d + B. If d is an input model variable, FSroom is an output model variable whose value, displayed on the "geomantic" meter, changes as the user changes the position of the chair. Now assume that the user clicks on the "geomantic" meter to make it an input model variable, converting d into an unknown output model variable. In this case, the solver effectively and internally rewrites the above formula as d = (FSroom - B)/A. Now, as the user changes the desired value of FSroom on the "geomantic" meter, the view composition may move the chair around, changing its distance d from the wall.
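The rewrite the solver performs can be reproduced with an off-the-shelf symbolic solver. The following sketch uses SymPy (standing in for, not representing, the patent's solver) with the symbols from the example:

```python
from sympy import symbols, Eq, solve

A, B, d, FSroom = symbols("A B d FSroom")
eq = Eq(FSroom, A * d + B)

# d as input, FSroom as the unknown output:
print(solve(eq, FSroom))   # [A*d + B]

# The user pins the meter: FSroom becomes input, d the unknown output:
print(solve(eq, d))        # [(FSroom - B)/A]
```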
The view component may then drive the construction of a view item in the view composition by executing the construction logic associated with the view component, using the input parameters (if any), to construct a view of the visual item (act 832). The view construct may then be provided to a rendering module, which uses the view construct as rendering instructions (act 841).
In one embodiment, the process of building a view is itself treated as a data transformation performed by a solver. That is, for a given type of view (e.g., consider a bar chart), there is a model that includes the rules, equations, and constraints that generate the view by converting the input data into a displayable output data structure (referred to as a scene graph) that encodes all of the low-level geometry and associated properties required by the rendering software to drive the graphics hardware. In the bar chart example, the input data would be, for example, the series of data to be plotted along with attributes such as the chart title, axis labels, and the like. The model that generates this bar chart will have rules, equations, and constraints that perform the following operations: 1) count how many entries the data series includes to determine how many bars to draw, 2) calculate the range (minimum, maximum) the data series spans to calculate things such as the scale and starting/final values for each axis, 3) calculate the height of the bar for each data point in the series based on the previously calculated scaling factor, 4) count how many characters are in the chart title to calculate the starting position and size of the title so as to properly position and center the title in the chart, and so on. In summary, the model is designed to compute, based on the input data, a set of geometric shapes arranged within a hierarchical data structure of the "scene graph" type. In other words, the scene graph is the output variable that the model solves for based on the input data. In this manner, an author can design new kinds of views, customize existing views, and synthesize pre-existing views into composite views using the same framework the author uses to author, customize, and compose any type of model. In this manner, authors who are not programmers may create new views without writing new code.
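A minimal sketch of such a view-building model as a data transformation follows; the scene-graph node layout is an assumed toy structure, but the four numbered operations above map directly onto the code.

```python
def bar_chart_scene_graph(series, title, height=100.0):
    lo, hi = min(series), max(series)            # 2) range of the data series
    scale = height / hi if hi else 1.0           #    scaling factor for the bars
    bars = [{"kind": "rect", "x": i, "h": v * scale}   # 1), 3) one scaled bar per entry
            for i, v in enumerate(series)]
    title_node = {"kind": "text", "text": title,       # 4) roughly center the title
                  "x": len(series) / 2 - len(title) / 20}
    return {"kind": "scene", "children": bars + [title_node],
            "axis": {"min": lo, "max": hi}}

print(bar_chart_scene_graph([3, 7, 5], "Quarterly revenue"))
```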
Returning to FIG. 2, recall that the user interaction response module 250 detects when a user interacts with a view composition and causes the pipeline to respond appropriately. FIG. 9 illustrates a flow chart of a method 900 for responding to user interaction with a view composition. In particular, the user interaction response module determines which components of the pipeline should perform further work in order to regenerate the view, and also provides those pipeline components with data that is representative of, or at least dependent on, the user interaction. In one embodiment, this is done by a transformation pipeline running in the opposite (upstream) view/analytics/data direction, parallel to the (downstream) data/analytics/view pipeline.
The interaction is posted as an event into the upstream pipeline. Each transformer in the data/analytics/view pipeline provides an upstream transformer that processes incoming interaction data. These transformers may be no-ops (pass-throughs, which are optimized out of the path), or they may perform transformation operations on the interaction data to be fed further upstream. This benefits the performance and responsiveness of the pipeline in the following respects: 1) interactive behavior that has no upstream effect, such as view manipulation that has no effect on source data, can be handled at the most appropriate (least upstream) point in the pipeline, and 2) an intermediate transformer can optimize view-update performance by sending heuristically determined updates back downstream before the final update arrives from a more upstream transformer. For example, upon receiving a data-editing interaction, the view-level transformer may cause an immediate view update directly in the scene graph of the view (for the edits it knows how to interpret), with the final full update coming later from the upstream data transformer where the source data is actually edited.
When the semantics of a given view interaction have a non-trivial mapping to the required underlying data edits, an intermediate transformer may provide the required upstream mapping. For example, dragging a point on a graph of computed results may require a reverse solution computing new values for a number of source data items that feed the computed values on the graph. The solver-level upstream transformer will be able to invoke the required solution and propagate the required data edits upstream.
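A minimal sketch of this upstream pipeline follows; the event format and class names are hypothetical. It shows a no-op stage, and a view-level stage that applies a heuristic immediate update before propagating the edit upstream.

```python
class UpstreamTransformer:
    def handle(self, event):
        # Return the event to feed further upstream, or None to absorb it.
        return event                     # default: a pass-through (no-op) stage

class ViewLevelTransformer(UpstreamTransformer):
    def __init__(self, scene_graph):
        self.scene_graph = scene_graph
    def handle(self, event):
        if event["type"] == "edit":
            # Immediate update straight into the scene graph; the authoritative
            # update arrives later from the upstream data stage.
            self.scene_graph[event["node"]] = event["value"]
        return event                     # still propagate so the source data is edited

def post_upstream(event, stages):
    for stage in stages:                 # view -> analytics -> data direction
        event = stage.handle(event)
        if event is None:
            break                        # handled at the least upstream point

sg = {}
post_upstream({"type": "edit", "node": "bar0.height", "value": 42},
              [ViewLevelTransformer(sg), UpstreamTransformer()])
print(sg)   # {'bar0.height': 42}
```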
Referring to the flow chart of method 900 in FIG. 9, after detecting a user interaction with a presented view composition on a display (act 901), it is first determined whether the user interaction requires the view to be regenerated (decision block 902). This may be performed by the rendering engine raising an event to be interpreted by the user interaction response component 250 of FIG. 2. If the user interaction does not require the view to be regenerated ("no" in decision block 902), the pipeline performs no further action to reconstruct the view (act 903), although the rendering engine itself may perform some transformation on the view. Examples of such user interactions are the user increasing the contrast of the presented view composition or rotating the view composition. Since these actions may be performed by the rendering engine itself, the pipeline need not perform any operations to reconstruct the view in response to such user interaction.
On the other hand, if it is determined that the type of user interaction requires the view composition to be regenerated ("yes" in decision block 902), the view is reconstructed by the pipeline (act 904). This may involve some change to the data provided to the pipeline. For example, in the "geomantic" example, if the user moves the position of the virtual chair within the virtual room, the position parameters of the virtual chair component change accordingly. An event is raised informing the analytics portion that the corresponding model parameters representing the position of the virtual chair should also change. The analytics portion then re-solves for the "geomantic" score and repopulates the corresponding input parameters of the "geomantic" meter or its wedges, so that the "geomantic" meter is updated with the current "geomantic" score appropriate for the new position of the chair.
User interaction may require that previously known model parameters are now unknown, and that previously unknown parameters are now known. This is one of many possible examples that may require the specification of input and output model variables to be changed so that the previously specified input model variables may become output model variables and vice versa. In this case, the analytics portion will solve for the new output model variables, driving the reconstruction of view synthesis.
This type of reconstruction is helpful when building alternative view compositions from different data-driven views. However, it is also helpful in storytelling, by enabling a transition from one view composition to the next. Traditionally, such storytelling is accomplished by taking snapshots of multiple different configurations of the data within the visualization (meaning the visualization also differs), and then turning the snapshots into slides with simple transitions between them.
However, this approach does not work well for more complex visualizations, such as the visualization of a room in the "geomantic" example, or the visualization of a complex piece of machinery, such as a tractor. Unlike the simple visualizations used in charts, there are a large number of permutations of what can be changed in such real-world or otherwise complex visualizations. Further, there may be continuity in the possible changes of the visual appearance (e.g., a tractor arm may be moved continuously through multiple positions, and that arm movement may result in the repositioning, possibly continuous, of many other elements).
Another approach that may be taken is to script the visualization as a storyboard. The difficulty here is that it is hard to make scripts data-driven, and it is also hard for scripts to anticipate and allow for the many changes a user can make to the data and visual imagery, and the subsequent propagation of those changes through the visual imagery, in a manner that respects the constraints and other relationships within the data and visual imagery.
On the other hand, when constructed using the pipeline 201 described herein, very complex view compositions can be built. Each view composition may be one scene in a storyboard of complex scenes. A storyboard can change from one scene to another by changing the view composition in one of various possible ways. For example, the transition may involve one, some, or all of the following transformations:
1) Visual image transformation: in this transformation, the data driving the view composition may remain the same, but the set of view components used to construct the view composition may vary. In other words, the view on the data may change. For example, a set of data may include environmental data, including temperature, wind speed, and other data representing a sequence of environmental events such as lightning strikes. One scene may express the data in the context of an aircraft traversing the environment. The next scene in the storyboard may show another airplane subjected to exactly the same environmental conditions.
2) Data transformation: here, the view components remain the same. In other words, the views remain the same, but the data affecting the visual properties changes, thereby changing the way the views are presented. For example, the data may change, and/or the binding of the data to the model parameters may change.
3) Coordinate system transformation: here, the data and the set of view components may remain the same, but the coordinate system changes from one coordinate system to another.
4) Target world transformation: here, everything may remain the same, but the target virtual world changes. For example, one geometric shape may be superimposed on another geometric shape. Examples of geometries, as well as superimposed geometries, are provided below.
Solver framework
FIG. 10 illustrates a solver environment 1000 that may represent an example of the solver 440 of FIG. 4. The solver environment 1000 may be implemented in software, hardware, or a combination thereof. The solver environment 1000 includes a solver framework 1001 that manages and coordinates the operation of a set of specialized solvers 1010. The set 1010 is shown as including three specialized solvers 1011, 1012, and 1013, but the ellipsis 1014 indicates that there may be another number (i.e., more or fewer than three) of specialized solvers. Additionally, the ellipsis 1014 also indicates that the set of specialized solvers 1010 is extensible. As new specialized solvers that can aid in the model analytics are discovered and/or developed, they can be incorporated into the set 1010 to supplement the existing specialized solvers, or perhaps to replace one or more of them. For example, FIG. 10 shows a new solver 1015 being registered into the set 1010 using a solver registration module 1021. As one example, the new solver may be a simulation solver that accepts one or more known values and solves for one or more unknown values. Other examples include solvers for systems of linear equations, differential equations, polynomials, integrals, root finders, factorizers, optimizers, and the like. Each solver may operate in a numerical mode, a symbolic mode, or a hybrid numerical-symbolic mode. The numerical portion of the solution may drive the parameterized rendering downstream. The symbolic portion of the solution may drive the presentation of partial solutions.
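A minimal sketch of such an extensible, registration-based solver set follows; the API is hypothetical.

```python
class SolverFramework:
    def __init__(self):
        self._solvers = {}

    def register(self, kind, solver):
        # Add a new specialized solver, or replace an existing one of the same kind.
        self._solvers[kind] = solver

    def solve(self, kind, *args):
        return self._solvers[kind](*args)

framework = SolverFramework()
framework.register("linear_root", lambda a, b: -b / a)         # root of a*x + b = 0
framework.register("optimizer", lambda f, xs: max(xs, key=f))  # brute-force maximizer
print(framework.solve("linear_root", 2.0, -6.0))               # 3.0
print(framework.solve("optimizer", lambda x: -(x - 2) ** 2, range(5)))  # 2
```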
The set of specialized solvers may include any solver suitable for solving for the output model variables. If, for example, the model is to determine the braking of a bicycle, solving complex calculus equations may be warranted. In that case, a specialized calculus solver may be incorporated into the set 1010, perhaps to supplement or replace an existing equation solver. In one embodiment, each solver is designed to solve for one or more output model variables in a particular type of analytical relationship. For example, there may be one or more equation solvers configured to solve for the unknowns in equations. There may be one or more rule solvers configured to apply rules to solve for unknowns. There may be one or more constraint solvers configured to apply constraints to solve for unknowns. Other types of solvers may be, for example, simulation solvers that perform simulations using the input data to build corresponding output data.
The solver framework 1001 is configured to coordinate the processing of one or more, or all, of the specialized solvers in the set 1010 so that the one or more output model variables are solved for. The solver framework 1001 is also configured to provide the solved values to one or more other external components. For example, referring to FIG. 2, the solver framework 1001 may provide the model parameter values to the view portion 230 of the pipeline, such that the solving operation affects the manner in which the view components execute to render the view items, or affects other data associated with the view items. As another potential effect of the solving, the model analytics themselves may be changed. For example, as just one of many ways this may be accomplished, a model may be authored with a modifiable rule set such that, in a given solve, certain rules and/or constraints that were initially inactive become activated and certain rules and/or constraints that were initially active become inactive. Equations can also be modified in this way.
FIG. 11 illustrates a flow diagram of a method 1100 in which the solver framework 1001 coordinates processing among the specialized solvers of the set 1010. The method 1100 of FIG. 11 will now be described with frequent reference to the solver environment 1000 of FIG. 10.
The solver framework begins a solve operation by identifying which model parameters are input model variables (act 1101) and which model parameters are output model variables (act 1102), and by identifying the model analytics that define the relationships between the model parameters (act 1103). Given this information, the solver framework analyzes the dependencies among the model parameters (act 1104). Even given a fixed set of model parameters and a fixed set of model analytics, the dependencies can vary depending on which model parameters are input model variables and which are output model variables. Thus, whenever a solve operation is performed, the system can infer the dependency graph from the model analytics and the current identification of which model parameters are inputs. The user does not have to specify a dependency graph for each solve. By evaluating the dependencies for each solve operation, the solver framework has the flexibility to solve for one set of one or more model variables during one solve operation and for another set of one or more model variables in the next solve operation. In the context of FIGS. 2 through 5, this means greater flexibility for the user to specify what is input and what is output by interacting with the view composition.
In some solve operations, the model may not have any output model variables at all. In this case, the solve will verify that all the known model parameter values, when considered together, satisfy all the relationships expressed by the model's analytics. In other words, if any one data value were to be erased, changed to an unknown, and then solved for, the erased value would be recalculated by the model and would be the same as before. Thus, a loaded model already exists in solved form, and, of course, a model with unknowns for which a solution has been obtained also exists in solved form. Importantly, a user interacting with a view of a solved model can edit the view, which may have the effect of changing one or more data values and thereby cause a re-solve that attempts to recalculate the data values of the output model variables so that the new set of data values is consistent with the analytics. Which data values the user can edit (whether or not the model starts with output model variables) is controlled by the author, who defines which variables represent the allowed unknowns.
If there are expressions with one or more unknowns that can be solved independently, without first solving for other unknowns in other expressions ("yes" in decision block 1105), then these expressions can be solved at any time, even in parallel with other solving steps (act 1106). On the other hand, if there is an expression having an unknown that cannot be solved for without first solving for an unknown in another expression, a solving dependency has been found. In this case, the expression becomes part of a relational structure (e.g., a dependency tree) that defines a particular order of operations relative to the other expression.
In the case of expressions with interconnected solving dependencies on other expressions, the execution order of the specialized solvers is determined based on the analyzed dependencies (act 1107). The solvers then execute in the determined order (act 1108). In one example, where the model analytics are expressed as equations, constraints, and rules, the order of execution may be as follows: 1) equations that have dependencies or that cannot be fully solved as independent expressions are rewritten as constraints, 2) the constraints are solved, 3) the equations are solved, and 4) the rules are solved. Rule solving may result in the data being updated.
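The dependency analysis of act 1104 can be sketched with the standard-library graphlib module, assuming a toy representation in which each expression is keyed by the variable it produces and maps to the variables it needs (the symbols are from the "geomantic" example above):

```python
from graphlib import TopologicalSorter   # Python 3.9+

exprs = {
    "FSchair": {"d"},                    # FSchair depends on the chair distance d
    "FSplant": set(),                    # a constant: independently solvable,
                                         # even in parallel with other steps
    "FSroom":  {"FSchair", "FSplant"},   # depends on the two expressions above
}

known_inputs = {"d"}
order = [v for v in TopologicalSorter(exprs).static_order()
         if v not in known_inputs]
print(order)   # e.g. ['FSplant', 'FSchair', 'FSroom']
```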
Once the solvers have executed in the specified order, a determination is made as to whether the solving should stop (decision block 1109). The solving process should stop if, for example, all of the output model variables are solved for, or if it is determined that, although not all of the output model variables are solved for, the specialized solvers can do nothing further to solve for more of them. If the solving process should not end ("no" in decision block 1109), the process returns to the analysis of dependencies (act 1104). At this point, however, the identity of the input and output model variables may have changed due to one or more output model variables having been solved for. On the other hand, if the solving process should end ("yes" in decision block 1109), the solve ends (act 1110). Even if the model cannot be fully solved because there are too many output model variables, it may still succeed in generating a partial solution by assigning to the output model variables symbolic values that reflect how far the solution could be carried. For example, if a model has the equation A = B + C, where B is known to be "2" and is an input model variable, but C is an output model variable, and A is also an output model variable that needs to be solved for, the model solver cannot produce a numeric value for A because C is unknown; thus, instead of a full solution, the solver returns "2 + C" as the value of A. It is thus obvious to the author which additional variables need to become known, whether by providing a value for them or by adding further rules/equations/constraints or simulations that can successfully produce the required value from other input data.
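The partial-solution behavior is easy to reproduce with a symbolic library; here SymPy stands in for the solver framework. With B known and C unknown, A = B + C can only be reduced, not evaluated:

```python
from sympy import symbols

B, C = symbols("B C")
A_expr = B + C               # the analytic definition of A
partial = A_expr.subs(B, 2)  # substitute the known value of B
print(partial)               # C + 2  -- the symbolic value returned for A
```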
This method 1100 may be repeated whenever the solver framework detects a change in the value of any of the known model parameters, and/or whenever the solver framework determines that the identity of the known and unknown model parameters has changed. The solving can proceed in at least two ways. First, if the model can be solved completely symbolically (i.e., if all equations, rules, and constraints can be algorithmically rewritten so that there is a computable expression for each unknown), then this is done and the model is computed: a data value for each unknown is generated, and/or the data values that are allowed to adjust are adjusted. Second, if the model cannot be fully solved symbolically, it is solved partly symbolically, and it is then determined whether one or more numerical methods can be used to reach the required solution. Furthermore, an optimization step is performed, even in the first case, to determine whether a numerical method would be a faster way of computing the required values than performing the symbolic solving method. While the symbolic method can be faster, there are cases where symbolic solving would perform so many term rewrites and/or so many rewrite-rule searches that it is faster to forgo this method and solve numerically.
FIG. 12 shows a solver environment 1200 that represents an example of the solver environment 1000 of FIG. 10. In this case, a solver coordination module 1210 operates to receive the input model variables 1201 and coordinates the actions of a forward solver 1221, a symbolic solver 1222 (or "inverter"), and a numerical solver 1223 to generate the model variables 1202 (including the output model variables). The forward solver 1221, the symbolic solver 1222, and the numerical solver 1223 are examples of solvers that may be in the set of solvers 1010 of FIG. 10.
The solver coordination module 1210 maintains a dependency graph of the model analytics with the corresponding model variables. For each solve operation, the solver coordination module 1210 can determine which of the model variables are input model variables and which are output model variables and, as such, are to be solved for.
The forward solver 1221 solves model analytics that are already expressed so as to be forward solvable. For example, if the model analytics contain only the single formula A = B + C, and if B and C are input model variables, then A can be solved for using forward solving by inserting the values of B and C into the formula and determining the resulting value of A.
The symbolic solver 1222 rewrites model analytics to be forward solvable. For example, assume that in the formula A = B + C, the variables A and C are input variables, and the variable B is the output variable to be solved for. In this case, the model analytics cannot be solved forward without first inverting them (in this case, inverting the formula). Therefore, the symbolic solver 1222 rewrites the formula A = B + C as B = A - C. The inverted formula can now be solved forward by the forward solver 1221, with the input variables A and C inserted into the formula B = A - C to obtain the value of the variable B.
Some formulas are not mathematically invertible, or at least it has not yet been discovered how to invert some types of formulas. Furthermore, even if a formula is invertible, or it is known how to invert it, the symbolic solver 1222 may simply not be equipped to invert it, or inversion may be less efficient than other solution methods such as numerical solving. Thus, in the event that the model analytics cannot suitably be inverted (whether because inversion is impossible, unknown, or not enabled in the symbolic solver), a numerical solver 1223 is provided to solve the model analytics numerically.
The solver coordination module 1210 is configured to manage each solve operation. For example, FIG. 13 illustrates a flow diagram of a method 1300 for managing a solve operation so that the model analytics can be solved. The method 1300 may be managed by the solver environment 1200 under the direction of the solver coordination module 1210.
The solver coordination module 1210 identifies which of the model variables of the model analytics are input model variables for the particular solve and which are output model variables for the particular solve (act 1301). Since the input and output model variables are defined by, for example, the data-model binder component 410 of FIG. 4, the identity of the input and output model variables can change from one solve operation to the next, even given a constant set of model variables. Thus, the coordination of the solve operations may change from one solve operation to the next. For example, even given a constant set of model analytics, depending on the input model variables, forward solving may suffice for one solve operation, inversion followed by forward solving of the inverted analytics may suffice for another solve operation, and numerical solving may be needed for yet another solve operation.
Moreover, the model analytics themselves, when implemented in the context of the analytics portion 220 of the pipeline 201, may change as the model analytics are developed further or are combined with other model analytics, as previously described. The solver environment 1200 can accommodate these changes by identifying the input and output model variables each time there is a change, taking account of any altered model analytics, and solving accordingly.
For each solve, once the input and output model variables are identified (act 1301), the solver coordination module 1210 determines whether, given the input model variables, a forward solve for the output parameters can be performed without first inverting the model analytics (decision block 1302). If a forward solve is to be performed ("yes" in decision block 1302), the forward solver 1221 is caused to forward solve the model analytics (act 1303). This forward solve may be over the entire model analytics, or over only a portion of them. In the latter case, the method 1300 may be performed again, but this time with a fuller set of input model variables that includes the model variables solved for in the forward solve.
If it is determined that a forward solve for the output parameters cannot be performed for the particular solve, at least not without first inverting the model analytics ("no" in decision block 1302), then it is determined whether the model analytics are to be inverted for the particular solve so that a forward solve can solve for the output parameters (decision block 1304). If the model analytics (or at least a portion of them) are to be inverted ("yes" in decision block 1304), the model analytics are inverted by the symbolic solver (act 1305). Thereafter, the inverted model analytics may be solved using a forward solve (act 1303). Again, if only a portion of the model analytics is solved in this manner, the method 1300 may be performed again, but with an extended set of input model variables.
If it is determined that the model analytics are not to be inverted for the particular solve ("no" in decision block 1304), then the numerical solver may use a numerical method to solve for the output variables (act 1306). Again, if only a portion of the model analytics is solved in this manner, the method 1300 may be performed again, but with an extended set of input model variables.
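The cascade of decision blocks 1302 and 1304 and the numerical fallback of act 1306 can be sketched as follows; SymPy's solve stands in for the forward/symbolic solvers and nsolve for the numerical solver, as an illustration rather than the patent's implementation:

```python
from sympy import symbols, Eq, solve, nsolve, sin

def coordinate_solve(equation, unknown, known_values):
    concrete = equation.subs(known_values)
    try:
        roots = solve(concrete, unknown)   # forward solve, inverting symbolically if needed
        if roots:
            return roots[0]
    except NotImplementedError:
        pass                               # no symbolic inversion is available
    return nsolve(concrete, unknown, 1.0)  # numerical fallback (act 1306)

A, B, C = symbols("A B C")
print(coordinate_solve(Eq(A, B + C), B, {A: 5, C: 3}))   # 2, via symbolic inversion
print(coordinate_solve(Eq(A, B + sin(B)), B, {A: 1}))    # ~0.511, via numerical solving
```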
Thus, a flexible solver environment 1200 has been described in which various model analytics can be solved, regardless of which model variables are inputs and which are outputs from one solve operation to the next.
FIG. 14 shows a flowchart of another method 1400 that may be performed by the solver framework 1001 shown in FIG. 10. In particular, a solver solves for a model variable (act 1401), which may define a property of a view component of a view composition. For example, the solver may use known model variables to solve for an unknown model variable. In some cases, the known model variables may be outputs provided by another solver, such as when the solver forms part of a relational structure such as the dependency tree mentioned above with reference to FIGS. 10 and 11.
The solving operation results in an actual change in the underlying specification data (act 1402). After the solver solves for the model variable, the property of the view component of the view composition is set to the value of the solved model variable (act 1403). For example, the solved model variable may be provided as part of the model parameters 411 shown in FIG. 5, and the model parameters 411 may be bound to the input parameters 542 of the first view component 520. In some cases, the known model variables used to solve for the solved model variable may define another property of the first view component 520. In such a case, the known model variables and the solved model variable may be bound to various input parameters 542 of the first view component 520. In other cases, the known model variables may define properties of a second view component 520, such as when the first view component is a child or parent of the second view component. In these other cases, the solved model variable may be bound to the input parameters 542 of the first view component 520, while the known model variables may be bound to the input parameters 542 of the second view component 520.
After the properties of the view component are set to the values of the solved model variables, the view composition including the view component is rendered (act 1404).
There are various environments in which the method 1400 may be used. For example, FIG. 15 illustrates an environment 1500 that can be used with the method 1400 and that can be implemented in software, hardware, or a combination thereof. In particular, the environment 1500 may include one or more solvers 1501a, 1501b, 1501c that may form part of, and/or be invoked by, property-setters 1502a, 1502b, 1502c. A solver 1501 may be configured to solve for an unknown model variable (act 1401 of FIG. 14), for example by using known model variables. In addition, a property-setter 1502 may set a property of a view component to the value of the solved model variable (act 1403 of FIG. 14). However, a property-setter 1502 does not necessarily include a solver 1501; it may simply receive a known model variable and then set the property of the view component to the value of the received model variable.
FIG. 16 illustrates a flow diagram of another method 1600 that may be performed by the environment 1500 as shown in FIG. 15. In more detail, a first property-setter 1502a of environment 1500 is invoked (act 1601). If desired, the first property-setter 1502a and other property-setters 1502 may be invoked by the model-view binding component 510 as shown in FIG. 5, and/or the first property-setter 1502a and other property-setters 1502 may form part of the model-view binding component 510.
After being invoked, the first property-setter 1502a sets a first property of a view component of the view composition (act 1602). For example, the first property-setter 1502a may set the first property to the value of the model variable 1503a. In some cases, the first property-setter 1502a may simply receive the known model variable 1503a and then set the first property to the value of the received model variable. In other cases, the first property-setter 1502a may invoke the solver 1501a, which may solve for the model variable 1503a, and may then set the first property to the value of the solved model variable.
In addition to setting the first property, the first property-setter 1502a also calls the second property-setter 1502b (act 1603). The second property-setter 1502b then invokes a solver, such as the solver 1501b, that can be configured to solve for a model variable (act 1604). Specifically, as shown in FIG. 15, the solver 1501b may be called by the second property-setter 1502b, and/or the solver 1501b may form part of the second property-setter 1502b; when called, the solver 1501b may solve for the unknown model variable 1503b. In some cases, the solver 1501b may solve for the unknown model variable 1503b by using known model variables, such as the model variable 1503a. For example, when the first property-setter 1502a calls the second property-setter 1502b (act 1603), the first property-setter 1502a may pass the model variable 1503a to the second property-setter 1502b. The solver 1501b may then solve for the unknown model variable 1503b by using the model variable 1503a. Of course, the first property-setter 1502a need not pass the model variable 1503a to the second property-setter 1502b; the second property-setter 1502b may access the model variable 1503a in any other suitable manner.
The second property-setter 1502b then sets the second property of the view component of the view composition to, for example, the value of the solved model variable 1503b (act 1605). Of course, in some cases, the second property-setter 1502b may simply receive the known model variable 1503b and then set the second property to the value of the received model variable; thus, the second property-setter 1502b need not invoke a solver (act 1604). After the second property-setter 1502b sets the second property, the view composition including the view component is presented (act 1606).
In some cases, the first property and the second property may be properties of a single view component of the view composition. Thus, the property-setters 1502a, 1502b of the single view component may set (acts 1602, 1605) the first and second properties to the values of the model variables 1503a, 1503b, thus allowing the model variables 1503a, 1503b to define the first and second properties.
In other cases, the first property may be a property of a first view component and the second property may be a property of a second view component. Accordingly, the property-setter 1502a of the first view component may set (act 1602) the first property to the value of the model variable 1503a, thus allowing the model variable 1503a to define the first property. In addition, the property-setter 1502b of the second view component may set (act 1605) the second property to the value of the model variable 1503b, thus allowing the model variable 1503b to define the second property. If desired, the first view component may be a child or parent of the second view component, and vice versa.
As such, as described above with reference to FIG. 11, a plurality of solvers may be ordered explicitly (act 1107) and then executed according to that order (act 1108). For example, a solver may be written explicitly using a relational structure such as a dependency tree. In addition, as shown with reference to FIGS. 14 through 16, a plurality of solvers can be composed implicitly based on the ability of a property-setter 1502 having a solver 1501 to invoke other property-setters 1502 having solvers 1501, as sketched below.
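A minimal sketch of the implicit chaining follows, with hypothetical names taken from the jet-engine example; the first property-setter sets one property and calls a second property-setter whose embedded solver computes the value it sets:

```python
view_properties = {}

def solve_bearing_temp(rpm):
    return 300 + 0.02 * rpm                # assumed analytic relationship

def set_warning_label(rpm):                # second property-setter, with a solver
    temp = solve_bearing_temp(rpm)         # the setter invokes its solver
    view_properties["label"] = f"bearing {temp:.0f} K"

def set_color(rpm):                        # first property-setter
    view_properties["color"] = "red" if rpm > 9000 else "blue"
    set_warning_label(rpm)                 # ...and calls the next property-setter

set_color(9500)
print(view_properties)   # {'color': 'red', 'label': 'bearing 490 K'}
```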
Composite view composition
Referring to FIG. 2, the pipeline environment 200 also includes a model import mechanism 241, perhaps included as part of the authoring mechanism 240. The model import mechanism 241 provides a user interface or other assistance to allow the author to import at least a portion of a pre-existing analytics-driven model into the current analytics-driven model the author is building. Thus, when authoring a new analytical model, the author does not have to start from scratch each time. The import may be of an entire analytics-driven model or perhaps a portion of one. For example, the import may result in one or more, or all, of the following six potential effects.
As a first potential effect of importing, additional model input data may be added to the pipeline. For example, referring to FIG. 2, additional data may be added to the input data 211, the analytics data 221, and/or the view data 231. Additional model input data may also include additional connectors added to the data access component 310 of FIG. 3, or possibly a different normalization component 330.
As a second potential effect of import, there may be additional or modified bindings between the model input data and the model parameters. For example, referring to FIG. 4, data-model binder 410 may cause additional binding to occur between normalized data 401 and model parameters 411. This may cause the number of known model parameters to increase.
As a third potential effect of the import, there may be additional model parameters to generate a complementary set of model parameters. For example, referring to FIG. 4, model parameters 411 may be augmented due to the import of analytical behavior of the imported model.
As a fourth potential effect of import, there may be additional analytical relationships (e.g., equations, rules, and constraints) added to the model. Additional input data is generated by the first potential effect, additional bindings by the second, additional model parameters by the third, and additional analytical relationships by the fourth. Any one or more of these additional items may be considered additional data that affects the view composition. Further, any one or more of these effects may alter the behavior of the solver 440 of FIG. 4.
As a fifth potential effect of import, there may be additional or different bindings between the model parameters and the input parameters of the view. For example, referring to FIG. 5, the model-view binding component 510 binds the potentially augmented set of model parameters 411 to the potentially augmented set of view components in the view component repository 520.
As a sixth potential effect of import, there may be additional parameterized view components added to the view component repository 520 of FIG. 5, perhaps causing new view items to be added to the view composition.
Thus, by importing all or a portion of another model, data associated with that model is imported. Since view composition is data-driven, this means that the imported parts of the model are immediately merged into the current view composition.
When a portion of a pre-existing analytics-driven model is imported, the data provided to the pipeline 201 changes, causing the pipeline 201 to regenerate the view composition, either immediately or in response to some other event. Thus, in the case of what is basically a copy-and-paste operation from an existing model, the resulting composite model can be viewed on the display immediately, thanks to the solving operation.
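A minimal sketch of a merge that produces the six potential effects listed above follows; the dictionary representation of a model is an assumed simplification.

```python
def import_model(current, imported):
    current["input_data"].update(imported.get("input_data", {}))        # effect 1
    current["bindings"].update(imported.get("bindings", {}))            # effects 2, 5
    current["parameters"] |= imported.get("parameters", set())          # effect 3
    current["analytics"] += imported.get("analytics", [])               # effect 4
    current["view_components"] += imported.get("view_components", [])   # effect 6
    return current   # the data changed, so the pipeline regenerates the view

room_layout = {"input_data": {"room": (5, 4)}, "bindings": {},
               "parameters": {"d"}, "analytics": [], "view_components": ["room"]}
feng_shui_extras = {"parameters": {"FSroom"}, "analytics": ["FSroom = A*d + B"],
                    "view_components": ["geomantic_meter"]}
print(import_model(room_layout, feng_shui_extras))
```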
As an example of how this functionality is useful, consider the "geomantic" room view composition of FIG. 7. The author of this application may be a "geomantic" expert who wishes to start with a standard room-layout view composition model. Thus, by importing a pre-existing room layout model, the "geomantic" expert is able to see the room layout 701 displayed on the display relatively quickly, if not immediately, as shown in FIG. 7. Moreover, the furniture and room-item inventory that typically accompanies a standard room-layout view composition model has now become available to the "geomantic" application of FIG. 7.
Now, the "geomantic" expert may wish to import a basic pie chart element as a basis for constructing the "geomantic" meter element 702. The "geomantic" expert may specify certain fixed input parameters for the chart element, including perhaps a total of eight wedges, and perhaps a background image and title for each wedge. The "geomantic" expert then only needs to specify the analytical relationships that define how the model parameters relate to each other. In particular, the color, location, and type of furniture or other room items may have an impact on a particular "geomantic" score. The expert may simply write down these relationships to thereby interconnect the room layout 701 and the "geomantic" score through analytics. This ability to collaborate and build on the work of others unleashes tremendous creativity in creating applications that solve problems and allow visual analysis. This is in stark contrast to systems that merely allow a user to visually program a one-way data flow using a fixed dependency graph. Such systems can perform a one-way solve only in the direction, from input to output, in which they were originally programmed. The principles described herein allow solving in a variety of directions during an interactive session with a user, depending on what is known and what is unknown at any given time.
Visual interaction
Up to this point, the view composition process has been described as generating a single view composition for presentation. For example, FIG. 7 illustrates a single view composition generated from a set of input data. However, the principles described herein may be extended to integrated view compositions in which multiple constituent view compositions are included. This may be useful in many different situations.
For example, given a single set of input data, there may be multiple possible solutions when the solver mechanism solves for the output model variables. One constituent view composition may represent one of the possible solutions, while another constituent view composition represents another possible solution.
In another example, a user may simply wish to retain a previous view composition generated using a particular set of input data and then modify that input data to try a new situation to generate a new view composition. The user may also wish to retain the second view composition as well and try a third possible scenario by changing the input data again. The user may then view these three situations simultaneously, perhaps by side-by-side comparison, to obtain information that may be difficult to obtain by viewing only one view composition at a time.
FIG. 17 shows an integrated view composition 1700 extending the "geomantic" example of FIG. 7. In this integrated view composition, the first view composition 700 of FIG. 7 is represented again using elements 701 and 702, identical to FIG. 7. Here, however, there is also a second view composition, which is highlighted. The second view composition is similar to the first in that it has two elements: a room display and a "geomantic" meter. However, the input data for the second view composition differs from the input data for the first. For example, in this case, the location data for several items of furniture differ, causing their locations in the second view composition's room layout 1701 to differ from their locations in the first view composition's room layout 701. The different locations of the various items of furniture correlate to a different "geomantic" score, so the second view composition's "geomantic" meter 1702 shows a different score than the first view composition's "geomantic" meter 702.
The integrated view composition may also include a comparison element that visually represents a comparison of the values of at least one parameter across some or all of the previously created and currently displayed view compositions. For example, in FIG. 17, there might be a bar graph showing the cost and lead time for each displayed view composition. Such a comparison element may be an additional view component in the view component repository 520. Perhaps the comparison view element is only rendered when multiple view compositions are displayed. In that case, the comparison view component's input parameters may be mapped to model parameters from the different solve iterations of the model. For example, the comparison view component's input parameters may be mapped to the cost parameters and the delivery parameters resulting from the generation of the first and second view compositions of FIG. 17.
Referring again to FIG. 17, there is also a selection mechanism 1710 that allows the user to visually emphasize a selected subset of all the available previously constructed view compositions. The selection mechanism 1710 is shown as including three possible view compositions 1711, 1712, and 1713, shown in thumbnail form or in some other unobtrusive manner. Each thumbnail view composition 1711 through 1713 includes a corresponding checkbox 1721 through 1723. The user may check the checkbox corresponding to any view composition that is to be visually emphasized. In this case, checkboxes 1721 and 1723 are checked, causing larger versions of the corresponding view compositions to be displayed.
The integrated view composition, or even any single constituent view composition, may provide a mechanism for the user to interact with the view composition to specify which model parameters should be treated as unknown, thereby triggering another solve by the analytics solver mechanism. For example, in the room display 1701 of FIG. 17, the user might right-click a particular item of furniture and then a particular parameter (e.g., location), whereupon a drop-down menu appears that allows the user to specify that the parameter should be treated as an unknown. The user may then right-click the harmony percentage (e.g., the 95% score on the "geomantic" meter 1702), whereupon a slider (or a text box or other user input mechanism) appears that allows the user to specify a different harmony percentage. This causes a re-solve, since it changes the identity of the known and unknown parameters, and the item of furniture whose location was designated as unknown may appear at a new location.
Interactive visual cues
In one embodiment, the integrated view composition may also include visual cues or clues associated with visual items. Visual cues give the user a visual indication of: 1) that the associated visual item can be interacted with, 2) what types of interactions are possible with the visual item, 3) what the result will be if a particular interaction is made with the visual item, and/or 4) whether interaction with one or more other visual items is necessary to achieve the result.
As mentioned above with reference to FIG. 5, the view portion 500 includes a view component repository 520, the view component repository 520 including a plurality of view components 521, 522, 523, and 524. Some of the view components (522, 523, and 524) are driven by values populated from corresponding input parameters (parameters 542A and 542B of view component 522, parameter 543 of view component 523, and parameter 544 of view component 524). The data provided to the input parameters drives the execution logic of the corresponding view component, so that the data controls the construction of the visual item. For example, the structure of visual item 552 can depend on the data provided to input parameters 542A and/or 542B.
In one embodiment, one or more of the input parameters of a given view component may be interactivity parameters. An interactivity parameter may define whether a visual item is to be presented with interactivity at all. Alternatively or in addition, an interactivity parameter may cause a particular type of interactivity to be applied to the corresponding visual item that is constructed as a result of executing the execution logic of the corresponding view component. In the following, a view component containing at least one such interactivity parameter will be referred to as a "visually cued interactive" view component. In addition to enabling interactivity, the data provided to the interactivity parameters may also define how the corresponding visual item is visually cued when presented, to inform the user that the visual item is interactive and, potentially, of the type of interactivity. One, some, or even all of the presented visual items may be interactive and visually cued in this manner.
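By way of illustration only, the following TypeScript sketch suggests how an interactivity parameter might sit alongside the ordinary input parameters of a view component. The names (ViewComponent, Interactivity, VisualItem) are hypothetical and do not denote the framework's actual API.

```typescript
// Hypothetical types; the framework's real interfaces are not specified here.
type Interactivity = "none" | "scroll-left" | "scroll-right" | "edit" | "link";

interface VisualItem {
  id: string;
  cue?: Interactivity; // the visual cue rendered with the item, if interactive
}

interface ViewComponent<P> {
  params: P;               // input parameters drive the construction logic
  construct(): VisualItem; // execution logic that builds the visual item
}

// A "visually cued interactive" view component: the interactivity input
// parameter both enables the interaction and selects the rendered cue.
function makeBarComponent(
  params: { height: number; interactivity: Interactivity },
): ViewComponent<{ height: number; interactivity: Interactivity }> {
  return {
    params,
    construct: () => ({
      id: "bar-1",
      cue: params.interactivity === "none" ? undefined : params.interactivity,
    }),
  };
}
```

Under this sketch, changing only the data supplied to the interactivity parameter would enable, disable, or re-cue the visual item without altering the component's construction logic.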
There are many different types of interactivity that may be provided by a visual item. One type is referred to herein as navigation interactivity. As a user performs a navigation interaction with a visual item, the subset of view components used to construct the visual items for presentation may change.
For example, one type of navigation interactivity is referred to herein as "jurisdictional interactivity". Jurisdictional interactivity changes the subset of visual items presented, such that at least some visual items displayed immediately before the navigation interaction are also displayed immediately after it. For example, referring to FIG. 5, the virtual space 550 is shown including four visual items 551, 552, 553, and 554. If the visual item 552 is jurisdictionally interactive, the user can interact with the visual item 552 so that the visual items 551 and 554 are removed from the virtual space 550 and are no longer rendered. The visual items 552 and 553, however, may remain in the virtual space 550. Alternatively or additionally, further visual items may be constructed into the virtual space.
One type of jurisdictional interactivity is scrolling interactivity, which changes the scope of the view components used to generate the visual items for presentation. FIGS. 18A and 18B show an example of scrolling interactivity. FIG. 18A shows a virtual space 1800A prior to the scrolling interaction. Here, the view composition component (see FIG. 5) constructs six visual items 1811, 1812, 1813, 1814, 1815, and 1816. Visual item 1811 is adorned with a visual cue (represented by asterisk 1801) indicating that the visual item has scrolling interactivity to the left enabled. The visual item 1816 is adorned with a visual cue (represented by asterisk 1899) indicating that the visual item has scrolling interactivity to the right enabled. FIG. 18B represents the virtual space 1800B after the user interacts with the visual item 1816 to scroll to the right. Here, the visual item 1811 is removed from the virtual space, and a new visual item 1817 is added. In this example, the scrolling interaction also results in the visual item 1812 being adorned with a visual cue representing enabled left-scrolling interactivity. Furthermore, the visual item 1816 loses its visual cue, and its interactivity is instead provided to the visual item 1817. Visual cues and interactive functionality may thus be enabled and disabled for a given visual item simply by changing the data provided to populate the input parameters of the corresponding view component.
Another type of jurisdictional interactivity is "detail interactivity", which changes the level of detail of the view components used to generate the visual items for presentation. "Zooming in" interactivity may result in the appearance of finer-grained visual items, perhaps with some previous visual items disappearing from view. "Zooming out" interactivity may result in the appearance of coarser-grained visual items, perhaps with some previous finer-grained visual items disappearing. For example, if one zoomed in on a map of the visible universe, galaxy clusters may begin to appear. If one zoomed in on a galaxy cluster, individual galaxies may begin to take shape. If one of those individual galaxies, such as the Milky Way, is magnified, single stars may appear. If one of those stars, such as the sun, is magnified, the details of the sun may become more and more apparent, perhaps with planets beginning to appear. If one of those planets, such as the earth, is magnified, large-scale features such as continents may appear. Zooming further, country borders may appear, then towns, and then streets may take shape. This may continue all the way down to subatomic particles. The granularity of such a zoomable topology need not be physical, as in this model of the universe; detail interactivity may be used to navigate any other topology, resulting in the disappearance of previous visual items and the appearance of new ones. Again, the use of a pseudo-infinite series of data may facilitate this operation.
To illustrate, FIG. 19A shows an example virtual space 1900A in which only two visual items, 1911 and 1912, are initially displayed. As shown in the virtual space 1900B of FIG. 19B, after the user zooms in on visual item 1911, visual items 1921 and 1922 are now presented inside visual item 1911, and visual item 1912 is now out of view. This is an example of zooming-in interactivity. As an example of zooming-out interactivity, the virtual space 1900A may represent the state after a zooming-out interaction that started from the virtual space 1900B of FIG. 19B.
The type of interactivity may also be link interactivity, which causes at least one presented frame (and perhaps the entire display) to display visual items completely different from the previously displayed visual items. For example, in the "geomantic" room example of FIG. 7 above, clicking on a particular visual item (e.g., the "geomantic" meter 702) may cause a web page about "geomancy" to be displayed instead of the "geomantic" room view. Alternatively, there may be a visual item that, if interacted with, results in the appearance of a completely different room.
Yet another type of interactivity is external action interactivity, which results in some action being taken independently of the visual scene being rendered. For example, interaction with a visual item may result in an email being sent, an alert being set, a data backup being scheduled, a performance check being run, and so forth.
The type of interactivity may also be editing interactivity. Editing interactivity changes data in such a way that one or more input parameter values of one or more view components change. If those input parameters affect the way a visual item is constructed, the visual item will also change as a result. Changes in the data may also result in changes to the values of the input model parameters, or in changes to the identities of the input and/or output model parameters. As such, editing interactivity applied to a visual item may result in a complete re-solve by the analytics portion 400. Several examples of editing interactivity will now be provided.
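A minimal sketch of this behavior, assuming a hypothetical Model class (the actual solver machinery of the analytics portion 400 is not shown), might look as follows, using parameter names anticipating the FIG. 20 example below:

```typescript
// Hypothetical sketch: an edit to one model parameter triggers a re-solve of
// whatever parameters depend on it, which in turn repopulates view components.
class Model {
  private values = new Map<string, number>();
  private dependents = new Map<string, string[]>(); // param -> dependent params

  declareDependency(param: string, dependsOn: string): void {
    const list = this.dependents.get(dependsOn) ?? [];
    list.push(param);
    this.dependents.set(dependsOn, list);
  }

  // Editing interactivity lands here: setting a value re-solves its dependents.
  set(param: string, value: number): void {
    this.values.set(param, value);
    for (const dep of this.dependents.get(param) ?? []) {
      console.log(`re-solving ${dep} because ${param} changed`);
    }
  }
}

const model = new Model();
model.declareDependency("nevada.height", "newMexico.height");
model.declareDependency("florida.height", "newMexico.height");
model.set("newMexico.height", 80); // re-solves both dependent heights
```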
FIG. 20 shows a rendering 2000 of the continental United States. The United States visual item may be constructed from one view component, while the constituent states may be constructed from corresponding child view components. The height of the visual item corresponding to a state represents some parameter of that state (e.g., per-capita consumption of the particular product under evaluation). Here, the New Mexico visual item 2001 has a visual cue in the form of an upward-facing arrow 2011. This cues the user that the New Mexico visual item 2001 can be interacted with to change the height of the visual item. Likewise, the Nevada visual item 2002 and the Florida visual item 2003 have corresponding downward-facing arrows 2012 and 2013, respectively. These arrows may visually cue the user that an upward adjustment in the height of the New Mexico visual item 2001 would, after re-analysis of the model by the analytics portion 400, result in a downward adjustment in the heights of the Nevada visual item 2002 and the Florida visual item 2003. Alternatively or additionally, arrows 2012 and 2013 may indicate that if the heights of the Nevada visual item 2002 and the Florida visual item 2003 are both adjusted downward by the user, the height of the New Mexico visual item 2001 will be adjusted upward. To determine the consequences of a particular user interaction, the pipeline 201 may consider how the user might interact with the presented visual items and perform a dependency solve on the analytical model to determine what the consequences would be.
FIG. 21 shows a chart 2100 that includes an associated bar chart 2110 and pie chart 2120. The bar chart 2110 includes a plurality of bars. One of the bars, 2111A, is shown with an arrow 2111B indicating that its height can be adjusted vertically. Adjusting it may result in some reallocation among the various wedges of the pie chart 2120. The bar chart 2110 and the pie chart 2120 may also be visually merged. For example, the generated pie chart may include wedges whose thicknesses depend on the heights of the associated bars of the bar chart.
FIG. 22 shows a hurricane map chart 2200 in which a route 2201 of a hurricane 2211 is plotted. Visual cue 2220 indicates that the user may interact with the route 2201 to change it to, for example, route 2202. This allows the user to assess realistic alternatives for the route of the hurricane. Controls 2221, 2222, 2223, and 2224 also allow various parameters of the hurricane to be changed, such as, for example, wind speed, temperature, rotation, and the hurricane's speed of travel.
For example, in the "geomantic" example of FIG. 7, if a particular harmony score is specified as a known input parameter, various positions may be suggested for the items of furniture whose positions are specified as unknown. For example, perhaps multiple arrows emanate from an item of furniture, one suggesting a direction to move the furniture to achieve a higher harmony percentage, another suggesting a different direction to move to maximize the water score, and so forth. The view component might also show a shaded region into which a chair could be moved to increase a particular score. In this manner, the user may use these visual cues to refine the design around the particular parameters to be optimized. In another example, perhaps the user wishes to reduce cost. The user may then specify cost as an unknown to be minimized, resulting in a different set of suggested furniture selections.
FIG. 23 illustrates a flowchart of a method for interacting with a user interface that displays a plurality of visual items. As previously described, the computer renders the data-driven visual items on the display (act 2301). Recall that each data-driven visual item is formed by providing data to a parameterized view component. This data may in turn have been produced by the analytical model in response to data being provided to the analytics portion 400 of the pipeline 201, or in response to data being provided to the data portion 300 of the pipeline 201. Additionally, one, some, or all of the visual items have visual cues or other visual emphasis that conveys to the user that interaction is possible and/or the type of interactivity available for the visual item (act 2302). In one embodiment, the visual cue at first indicates only that the corresponding visual item is interactive; once the user selects the visual item (e.g., by hovering over it with a pointer), the visual cue is modified or supplemented to express the type of interactivity.
The computing system then detects that a predetermined physical interaction between the user and a visual item has occurred (act 2303). In response, interactivity is enabled or activated for that particular visual item (act 2304). The appropriate response will depend on whether the interactivity is navigation interactivity, link interactivity, or editing interactivity. In the case of editing interactivity, the result will also depend on the analytical relationships between the various visual items as defined by the analytics portion 400 of the pipeline.
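The acts of FIG. 23 might be sketched, again purely hypothetically, as the following event-handling routine; the type names and the dispatch logic are assumptions for illustration.

```typescript
// Hypothetical sketch of acts 2301-2304 of FIG. 23.
type InteractivityKind = "navigation" | "link" | "edit";

interface CuedVisualItem {
  id: string;
  kind: InteractivityKind; // the interactivity conveyed by the visual cue (act 2302)
}

function onUserInteraction(items: CuedVisualItem[], targetId: string): void {
  // Act 2303: detect the predetermined interaction with a visual item.
  const item = items.find(i => i.id === targetId);
  if (!item) return;

  // Act 2304: enable/activate the interactivity appropriate to the item.
  switch (item.kind) {
    case "navigation":
      console.log("changing the subset of view components that are rendered");
      break;
    case "link":
      console.log("replacing the frame with entirely different visual items");
      break;
    case "edit":
      console.log("updating input data and re-solving the analytics portion");
      break;
  }
}
```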
Interactivity may also be a combination of two or more of navigation, link, and editing interactivity. For example, in the United States example of FIG. 20, an upward adjustment of the height of the New Mexico visual item 2001 may result in a downward adjustment of the heights of the Nevada visual item 2002 and the Florida visual item 2003 (an example of editing interactivity). However, this may also result in the display being split into four frames, three of which each zoom in on one of the three states: Nevada, New Mexico, and Florida (an example of jurisdictional interactivity). In addition, the fourth frame may contain survey information related to household consumption preferences in New Mexico (an example of link interactivity).
FIG. 24 abstractly illustrates a user interface 2400 that represents another example application. This example describes a convenient user interface that allows a user to easily construct a data-driven visual scene using the principles described herein.
The user interface 2400 includes a first group 2410 of visual items. In the case shown here, the group 2410 includes only one visual item 2411. However, the principles described herein are not limited to only one visual item within the first group. The visual item 2411 has associated data 2412 and, as such, is sometimes referred to herein as the "data visual item" 2411.
Although not required, in this example the associated data is subdivided into a plurality of data sets; any number of data sets will suffice. Four data sets 2413A, 2413B, 2413C, and 2413D are shown included within the associated data 2412 of FIG. 24. In the illustrated associated data 2412, the data sets are organized in parallel, each having an associated plurality of data fields. In the illustrated case, each data set includes corresponding fields a, b, and c. Fields a, b, and c of data set 2413A will be referred to as data fields 2413Aa, 2413Ab, and 2413Ac, respectively. Fields a, b, and c of data set 2413B will be referred to as data fields 2413Ba, 2413Bb, and 2413Bc, respectively. Fields a, b, and c of data set 2413C will be referred to as data fields 2413Ca, 2413Cb, and 2413Cc, respectively. Finally, fields a, b, and c of data set 2413D will be referred to as data fields 2413Da, 2413Db, and 2413Dc, respectively.
The user interface 2400 also includes a second group 2420 of visual items. In the illustrated case, the group 2420 includes three visual items 2421, 2422, and 2423. However, there is no limit on the number of visual items in the second group 2420, nor any requirement that the visual items in the second group be of the same type. In the following, the visual items 2421, 2422, and 2423 may also be referred to as "element visual items". Each of the visual items 2421, 2422, and 2423 can be constructed, for example, by executing the construction logic of a respective view component using its input parameters. Such view components may be those described with reference to FIG. 5 as view components 521 through 524. As such, each of the visual items 2421, 2422, and 2423 is shown with input parameters. These may be the input parameters provided to the view component, or input parameters that otherwise drive the presentation of the visual item. For example only, each of the visual items 2421, 2422, and 2423 is shown having three input parameters. In particular, visual item 2421 is shown with input parameters 2421a, 2421b, and 2421c; visual item 2422 is shown with input parameters 2422a, 2422b, and 2422c; and visual item 2423 is shown with input parameters 2423a, 2423b, and 2423c.
The user interface 2400 also includes a user interaction mechanism 2440. The user interaction mechanism 2440 permits a user (via one or more user gestures) to cause the data 2412 of the data visual item 2411 to be applied to the input parameters of the element visual items 2420. In one embodiment, the user gesture may actually cause the associated data to be bound to the input parameters of the element visual items 2420. Such user gestures may be drag-and-drop operations, hover operations, drag-and-click operations, or any other user gesture or combination of gestures. As such, the user can apply or bind data from the data visual item 2411 to the input parameters of an element visual item, thereby changing the appearance of the element visual item, using simple gestures. In one embodiment, this does not even involve the user manually entering any of the associated data, and a single set of gestures may apply and/or bind data to multiple element visual items. Examples of user gestures that apply or bind data from one data visual item to multiple element visual items are described further below.
In one embodiment, the user interaction mechanism 2440 permits a user to apply the data 2412 of the data visual item 2411 to the input parameters of the element visual items 2420 on a per-data-set basis, such that: 1) each respective data set is applied to a set of one or more input parameters of a different one of the second group of visual items, and 2) a single field of the same type from each of the plurality of data sets is applied to a corresponding one of the input parameters of a different one of the element visual items.
As an example, referring to FIG. 24, a user may use a single set of gestures to simultaneously apply or bind data field 2413Aa of the data visual item 2411 to input parameter 2421b of element visual item 2421, data field 2413Ba to input parameter 2422b of element visual item 2422, and data field 2413Ca to input parameter 2423b of element visual item 2423. If there were a fourth element visual item, data field 2413Da could be applied to the corresponding input parameter of that fourth element visual item.
Further applications or bindings may be made as a result of this same set of gestures, or perhaps in response to additional sets of gestures. For example, in response to a user gesture, data field 2413Ab of the data visual item 2411 may be applied or bound to input parameter 2421a of element visual item 2421, data field 2413Bb to input parameter 2422a of element visual item 2422, and data field 2413Cb to input parameter 2423a of element visual item 2423. Additionally, data field 2413Ac may be applied or bound to input parameter 2421c of element visual item 2421, data field 2413Bc to input parameter 2422c of element visual item 2422, and data field 2413Cc to input parameter 2423c of element visual item 2423.
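The per-data-set binding just described might be sketched as follows. The DataSet and ElementItem shapes are invented for illustration and mirror fields a/b/c and parameters a/b/c of FIG. 24.

```typescript
// Hypothetical sketch of the FIG. 24 binding: one gesture applies the same
// field of every data set to the same input parameter of successive items.
interface DataSet { a: number; b: number; c: number }                     // 2413A..2413D
interface ElementItem { params: { a?: number; b?: number; c?: number } } // 2421..2423

function bindField(
  dataSets: DataSet[],
  field: keyof DataSet,               // e.g., "a": the single field of the same type
  elements: ElementItem[],
  param: keyof ElementItem["params"], // e.g., "b": the target input parameter
): void {
  elements.forEach((el, i) => {
    const dataSet = dataSets[i];
    if (dataSet !== undefined) el.params[param] = dataSet[field];
  });
}

// Field "a" of data sets 2413A..2413C lands in parameter "b" of items 2421..2423.
bindField(
  [{ a: 1, b: 2, c: 3 }, { a: 4, b: 5, c: 6 }, { a: 7, b: 8, c: 9 }],
  "a",
  [{ params: {} }, { params: {} }, { params: {} }],
  "b",
);
```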
The user interface 2400 also potentially includes a third visual item 2430 with an associated attribute 2431. A second user interaction mechanism 2450 permits the user to merge the second group 2420 of visual items into the third visual item 2430 using a set of gestures, such that: 1) one or more input parameters of each visual item of the second group 2420 are set using the associated attribute 2431 of the third visual item 2430, and/or 2) the attribute 2431 of the third visual item 2430 is used to change one or more input parameters of each visual item of the second group 2420. The gestures used to accomplish this may be drag-and-drop operations, hover operations, or any other user gestures, and the merge may even be accomplished, in whole or in part, using the same user gestures used to apply data from the data visual item 2411 to the element visual items 2420.
For example, if the user gestures (i.e., the second set of gestures) used to merge the second group 2420 with the third visual item 2430 at least partially overlap with the user gestures (i.e., the first set of gestures) used to apply or bind data from the data visual item 2411 to the input parameters of the element visual items 2420, then there is at least one gesture common to both the first and second sets of gestures. However, this is not required at all; in fact, there may be no common gestures between the first and second sets of gestures. As such, different user actions may be used to apply data from the data visual item 2411 to the element visual items 2420 than are used to merge the element visual items 2420 with the visual item 2430.
In one embodiment, the data visual item 2411 may be similar to the visual items constructed in FIG. 5 using the view components 521 through 524. In that case, the associated data 2412 may be the data provided to the input parameters of the corresponding view component. This associated data 2412 may even be surfaced as a visualization within the visual item itself, giving the user a view of the associated data. The visual item 2430 can likewise be similar to visual items built using the view components 521 through 524. In that case, perhaps one or more of the attributes 2431 of the visual item 2430 may be set using the input parameters of the corresponding view component.
Applying the associated data 2412 of the data visual item 2411 to the input parameters of the element visual items 2420 can result in the analytical model being re-solved. For example, FIG. 4 depicts the analytics portion 400 of the pipeline 201. By applying the associated data 2412, there may be data that needs to be recalculated in order to further repopulate the input parameters of the element visual items 2420. Additionally, merging the element visual items 2420 with the third visual item 2430 can also result in a re-solve by the analytics portion 400, thereby changing the input parameters of the visual items 2420 or 2430.
Of course, FIG. 24 is an abstract representation of a user interface. A more specific example of a user interface that permits a user to apply the associated data of a data visual item to element visual items, and that allows the element visual items to be merged with another, higher-level visual item, will now be presented with reference to FIGS. 25 through 29.
FIG. 25 shows a user interface 2500 in which a spiral 2530 is presented. The spiral 2530 is a specific example of the third visual item 2430. In this case, the attributes of the spiral 2530 (as examples of the attribute 2431) may be radius of curvature, pitch (or rise angle), color, line width, cross-section of the wire, winding length, start angle, and so forth. Each attribute may be, for example, an input parameter provided to a spiral view component having construction logic that, when executed, renders the spiral. The spiral shape type may be one of a variety of shape types that can be dragged and dropped onto the work surface 2501 of the user interface. After being dragged and dropped onto the work surface, the input parameters of the spiral are populated with default values; however, these default values may be changed.
The user may then have dragged a separate cube object 2521 onto the work surface. The cube object 2521 is an example of the element visual item 2421 of FIG. 24. A cube object ordinarily has six rectangular faces that are parallel or perpendicular to each other. Had the cube object 2521 been dragged to another portion of the work surface, away from the spiral 2530, the cube object 2521 might have retained these basic cube features. In this case, however, the user has gestured that the cube object 2521 and the spiral 2530 are to be merged (perhaps by dragging the cube 2521 over a portion of the spiral 2530 using the pointer 2550). Accordingly, the input parameters of the cube 2521 are adjusted so that its midline follows the spiral. This may also serve as the mechanism for defining the type of element visual item to be merged with the spiral visual item (in this case, a cube).
Given this merge operation, FIG. 26 shows another stage 2600 of this particular example user interface. Here, the user has acquired a data visual item 2610, in this case a spreadsheet document. The spreadsheet document 2610 is an example of the data visual item 2411 of FIG. 24. This spreadsheet document contains a table listing various psychiatric drugs. The data is entirely fictional and is provided only to illustrate this example. The fourth column 2614 lists the names of the drugs, one per row, although in this case those fields are empty because the data is fictional. The third column 2613 lists the category of the drug in each row. The first column 2611 lists the start date (expressed as a year) on which the drug in each row was approved for use as a prescription. The second column 2612 lists the length of time (in years) that the drug in each row has been approved for use. Start date, duration, category, and name are examples of the fields of the data sets of the associated data 2412 of FIG. 24. The rows of the spreadsheet 2610 are examples of the data sets of the associated data 2412 of FIG. 24.
Here, an input parameter table 2620 appears, showing the user which input parameters have been selected as targets for population. The user has selected the first column 2611 (i.e., the "start date" field) to populate the "position on spiral" input parameter. A cube visual item is generated for each data set (each row of the spreadsheet 2610). Preview 2630 shows the spiral 2530 together with a preview of what the merged version of the multiple cube elements and the spiral will look like.
FIG. 27 shows a user interface 2700. Here, the user has chosen that the various cubes will not be curved along the spiral, but will instead be stacked on top of one another, with the spiral (now not visible) serving as the base of the stack. The stacked cubes 2710 represent examples of element visual items. Note that the attributes of the spiral visual item have been inherited by the input parameters of each cube. In addition, data from the data visual item 2610 has been applied to each cube. For example, each cube in the stack is constructed using a position on the spiral corresponding to the start date of the corresponding drug, and the length of each cube is derived from the duration for the corresponding drug. In addition, each cube inherits the position and curvature attributes of the spiral.
FIG. 28 shows the complete visual scene. In this user interface 2800, a color is assigned to each curved cube according to the category of the corresponding drug. The start date of the corresponding drug thus defines the start position of the curved cube on the spiral. For example, the user may have used the following gestures to apply the start date data from the spreadsheet visual item 2610 to the cube visual items: 1) select the first column 2611 of the spreadsheet, and 2) drag that column to the "position on spiral" portion of the input parameter table 2620. In addition, the duration for the corresponding drug defines the length of the curved cube along the spiral. For example, the user may have used the following gestures to apply the duration data from the spreadsheet visual item 2610 to the cube visual items: 1) select the second column 2612 of the spreadsheet, and 2) drag that column to the "length" portion of the input parameter table 2620. Finally, the category of the corresponding drug defines the color of the curved cube on the spiral. For example, the user may have used the following gestures to apply the category data from the spreadsheet visual item 2610 to the cube visual items: 1) select the third column 2613 of the spreadsheet, and 2) drag that column to the "color" portion of the input parameter table 2620. The user can drag and drop different fields of the spreadsheet onto different portions of the input parameter table to see which visual representation of the data works best in a given situation.
Using the principles described herein, complex geometries can be constructed and composed with other visual elements and geometries. To understand the composition of geometries, four basic concepts will first be described: 1) data series, 2) shapes, 3) dimension sets, and 4) geometries.
First, data series will be described. A data series is a wrapper over data. An example of a data series object is shown and described with reference to FIG. 6. The data series object is not the data itself; rather, the data series knows how to enumerate the data. For example, referring to FIG. 6, the enumeration module 601 enumerates the data series corresponding to a data stream object. In addition, a data series has attributes declaring the range, quantization, and resolution of the mapping. Examples of the types of data that may be wrapped by a data series include table columns, repeating rows in a table, hierarchies, dimension hierarchies, and the like.
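As a hedged illustration, a data series wrapping a table column might be sketched as follows; the DataSeries interface and its fields are assumptions, not the framework's declared types.

```typescript
// Hypothetical sketch: a data series is a wrapper that can enumerate its
// underlying data without being the data itself.
interface DataSeries<T> {
  range?: [number, number]; // declared range of the mapping
  resolution?: number;      // declared resolution of the mapping
  enumerate(): Iterable<T>; // knows how to walk the wrapped data
}

// Wrapping one kind of data, a table column, in a data series.
function columnSeries(rows: Record<string, number>[], column: string): DataSeries<number> {
  return { enumerate: () => rows.map(row => row[column]) };
}

const series = columnSeries([{ year: 1952 }, { year: 1967 }], "year");
for (const value of series.enumerate()) console.log(value);
```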
A shape may be a canonical visual item built from a canonical view composition. Examples of such canonical visual items include points, lines, cylinders, prisms, bubbles, surface patches, image files, two- or three-dimensional shapes, and so forth. Alternatively, a shape may be a canonical construction from data; examples of such constructions include labels. A shape may also be the result of filling a geometry (the term "geometry" is described further below) with a data series and shapes.
Canonical shapes carry metadata that potentially allows the "binder-arrangers" (described below) of a geometry to be parameterized to handle multiple shapes. The metadata also provides hints to the "layout assistants" described below. The metadata represents aspects of a shape, such as whether the shape is rectilinear or curvilinear, whether the shape is planar or occupies a volume, whether the shape is symmetric with respect to a dimension, whether the shape needs to be read as text (e.g., a label), whether the shape can be colored, textured, or overlaid, and so forth.
Note that a shape has no absolute dimensions, but proportions among its dimensions may optionally be specified. For example, a prism may be specified to have base dimensions L and W that are some minimum percentage of its height H. A shape may also optionally specify constraints on how multiple instances of the shape are presented, such as, for example, a minimum distance between two bars.
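A sketch of the shape metadata and proportion constraints just described, with every field name assumed for illustration, might be:

```typescript
// Hypothetical sketch of canonical-shape metadata and proportion constraints.
interface ShapeMetadata {
  rectilinear: boolean;      // rectilinear vs. curvilinear
  occupiesVolume: boolean;   // planar vs. volumetric
  symmetricAbout?: string[]; // dimensions of symmetry, if any
  textual: boolean;          // must be readable as text (e.g., a label)
}

interface CanonicalShape {
  metadata: ShapeMetadata;
  // No absolute dimensions; only optional proportions among dimensions.
  proportions?: Array<{ dim: string; atLeastFractionOf: string; fraction: number }>;
  minInstanceSpacing?: number; // e.g., minimum distance between two bars
}

// A prism whose bases L and W must each be at least 10% of its height H.
const prism: CanonicalShape = {
  metadata: { rectilinear: true, occupiesVolume: true, textual: false },
  proportions: [
    { dim: "L", atLeastFractionOf: "H", fraction: 0.1 },
    { dim: "W", atLeastFractionOf: "H", fraction: 0.1 },
  ],
};
```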
A dimension set, in turn, is different from a coordinate system. The dimension set declares the dimensions along which the data is to be visualized. The dimensions may go beyond the ordinary Euclidean axes x, y, and z, and time t. In a dimension set, some dimensions may be Euclidean (e.g., Cartesian or polar x, y, z, or map coordinates with elevation z). Later, when used with geometries, the effect of these Euclidean axes is to delineate and control how the shapes within a series of shape instances are placed or transformed into particular coordinates, with appropriate scaling, translation, and distribution. Other dimensions are expressed through layout effects such as clustering and stacking. Still other dimensions may involve higher-order visual dimensions such as animation or motion speed.
As an example, consider a visualization that includes countries. The visual items may each correspond to a shape (describing the outline of a country) and the polar coordinates of the country's center. There may also be a data series that is presented via the color of each country. For example, blue may represent a small amount of resources spent on foreign aid in that country, while red may represent a greater amount of resources spent on foreign aid in that country.
Yet another dimension may be animation. For example, there may be a spinning top associated with each country. The top dimension for each country may be populated with the political stability of that country, however that political stability score is derived. The top for a country with less political stability is animated with slower motion and greater wobble, while the top for a country with greater political stability is animated with faster motion and less wobble. Other non-Euclidean dimensions may include texture.
A dimension set can also include declarations of visibility/conditional hiding, scrollability, titles, labels, subdivisions, drawable granularity, permitted range (e.g., whether negative values are allowed), and so forth.
"geometry" includes a container and one or more binder-arrangers.
As for the container, a geometry contains a description of the visual elements and arrangement of the container in which the data will be visualized via shapes. For example, the simplest possible bar chart specifies a bounding rectangle shape, proportions for the container's 2D/3D dimensions (or, alternatively, absolute dimensions), and a coordinate system (e.g., Cartesian). More complex visualizations may specify additional concepts. For example, a map may specify subdivisions (e.g., regions, streets) that constrain where shapes may be placed within the geometry's coordinate system. A quadrant chart may specify that its Cartesian axes accommodate negative values, colors, textures, transparency, and other controllable visual parameters.
The container may optionally specify metadata that potentially allows the generally applicable portion of the intelligence required in the corresponding binder-arranger to be factored out into a small set of layout assistants (often solver-based). The metadata represents aspects of the container, such as, for example: whether its Euclidean axes (or other axis types) are straight or curved, whether the container is planar or occupies a volume, whether the container is symmetric with respect to a certain dimension, and so forth.
Second, the geometry carries declarations of "binder-arrangers" that know how to: 1) generate a series of shape instances by applying an incoming data series and a primitive shape to DataShapeBindingParams that describe how data values map to dimensions (e.g., height) or visual attributes (e.g., color) of the shape, and/or how to select from a set of shapes based on the data values; 2) map an incoming dimension set to the container's coordinate system and to the container's other visual elements (stacking, clustering, color, motion, etc.); 3) arrange the series of shape instances, per ShapeAxisBindingParams, into one or more dimensions as mapped to the container; and 4) interpolate between the displayed shapes as necessary, e.g., with connecting lines, or by creating a continuous surface from small surface-lets or patches.
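Purely as an illustrative sketch, a binder-arranger declaration might be typed as follows. DataShapeBindingParams and ShapeAxisBindingParams echo the names above, but every field shown is an assumption.

```typescript
// Hypothetical sketch of a binder-arranger and its binding parameters.
interface ShapeInstance { attrs: Record<string, number | string> }

interface DataShapeBindingParams {
  // How a data value maps onto a dimension or visual attribute of the shape.
  bindings: Array<{ field: string; to: "height" | "width" | "color" }>;
}

interface ShapeAxisBindingParams {
  axis: string;                                  // a dimension of the dimension set
  arrangement: "stack" | "cluster" | "position"; // how instances are laid out
}

interface BinderArranger {
  // 1) Generate a series of shape instances from an incoming data series.
  bind(series: Iterable<Record<string, number>>, p: DataShapeBindingParams): ShapeInstance[];
  // 3) Arrange the shape instances into dimensions mapped to the container.
  //    (Responsibility 2, mapping the dimension set to the container's
  //    coordinate system, is omitted here for brevity.)
  arrange(instances: ShapeInstance[], p: ShapeAxisBindingParams): void;
  // 4) Optionally interpolate between displayed shapes (e.g., connecting lines).
  interpolate?(instances: ShapeInstance[]): ShapeInstance[];
}
```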
A populated geometry (i.e., a geometry instantiated with one or more data series and corresponding shapes) may itself be treated as a shape to be passed into another geometry. For example, in a stacked/clustered bar chart: the innermost geometry has a simple container in which bars can be stacked on one another. Filling the innermost geometry yields a shape that is fed into a second geometry, whose container is a cluster of bars. Finally, the shapes resulting from filling the second geometry are fed into the outermost geometry, which knows how to arrange the shapes along the horizontal axis while showing their heights on the y-axis.
In the stacked/clustered bar chart example above, the innermost geometry has only a height dimension, the second geometry has a (sorted or unsorted) clustering dimension, and the outermost geometry has a simple (i.e., non-stacking) vertical dimension and a simple (i.e., cluster-unaware) horizontal dimension. However, the composite stacked/clustered bar chart may also be viewed as a single geometry whose dimension set has three aspects: a stack-aware height dimension, a clustering dimension, and a horizontal x dimension.
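The composition of geometries in this example can be sketched, under assumed types, as three nested fill steps in which each populated geometry becomes a shape for the next:

```typescript
// Hypothetical sketch of the stacked/clustered bar chart composition.
interface CompositeShape { width: number; height: number }

// Innermost geometry: stack segment heights into a single bar.
function fillStack(segments: number[]): CompositeShape {
  return { width: 1, height: segments.reduce((sum, h) => sum + h, 0) };
}

// Second geometry: cluster stacked bars side by side.
function fillCluster(bars: CompositeShape[]): CompositeShape {
  return {
    width: bars.reduce((w, b) => w + b.width, 0),
    height: Math.max(...bars.map(b => b.height)),
  };
}

// Outermost geometry: arrange clusters along the horizontal axis.
function fillAxis(clusters: CompositeShape[]): Array<{ x: number; shape: CompositeShape }> {
  let x = 0;
  return clusters.map(shape => {
    const placed = { x, shape };
    x += shape.width;
    return placed;
  });
}

// Two clusters, each of two stacked bars: the output of each fill is a shape
// that is passed into the next geometry.
const chart = fillAxis([
  fillCluster([fillStack([3, 5]), fillStack([2, 2])]),
  fillCluster([fillStack([1, 4]), fillStack([6, 1])]),
]);
console.log(chart);
```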
Layout assistants rely on algorithms or solvers and help determine the "correct" or "optimal" positioning of shapes within the containing geometry, as well as the interpolation between shapes. As mentioned above, the idea of the layout assistant is to reduce specificity within the binder-arranger. There are two types of layout assistants: local layout assistants, which are invoked at the level of a particular binder-arranger, and global layout assistants, which examine the geometry's overall containment (including the intermediate results of the multiple binder-arrangers that place shapes within the geometry).
As such, the pipeline allows for complex interactions across a wide variety of domains.
Additional example applications
The architecture of FIGS. 1 and 2 may allow the construction of an unlimited variety of data-driven analytical models, regardless of domain, and those domains need not be similar at all. Wherever there is a problem to be solved, the principles described herein may be beneficial, so long as applying analytics to visual items is helpful. Thus far, only a few example applications have been described, including the "geomantic" layout application. To illustrate the broad applicability of the principles described herein, several additional broad and differing example applications will now be described.
Additional example #1: Retail shelf arrangement
Product salespeople often use 3-D visualizations to sell the placement of items on shelves, end displays, and new promotions. Using the pipeline 201, a salesperson would be able to perform "what-if" analyses on site. Given a certain product layout and a minimum threshold of daily sales per vertical linear foot, the salesperson can calculate and visualize the minimum required inventory on hand. Conversely, given a certain inventory on hand and a two-week restocking period, the salesperson can calculate a product layout that will yield the desired sales per vertical linear foot. The retailer would be able to visualize these effects, compare individual scenarios, and compare profits. FIG. 29 shows an example retail shelf arrangement visualization. The input data may include a visual image of each product, the quantity of each product, the linear footage assigned to each product, and the shelf number of each product, among other things. The example of FIG. 29 also shows the application of charts within a virtual world: here, the plan view 2901 and the shelf layout 2902 may be analytically related, such that changes in the shelf layout 2902 affect the plan view 2901, and vice versa.
Additional example #2: City planning
City planning mash-ups are becoming important. Using the principles described herein, analytics can be integrated into such solutions. A city planner might open a traffic model created by an expert and pull a bridge from a road-improvements gallery. The bridge would bring with it analytical behavior, such as length constraints and operational limits in high winds. With proper visualization, the planner could see and compare the impact of different bridge types and placements on traffic volume. The principles described herein may be applied to any map scenario, and a map may serve various purposes. A map can be used to understand features of the terrain and to find directions to a location. A map can also serve as a visual backdrop for comparing data across regions. More recently, maps have been used to create virtual worlds in which buildings, fixtures, and any other 2-D or 3-D objects can be overlaid or placed. FIG. 30 illustrates an example city planning map visualization. Note that charts can again be functionally and analytically integrated with the map of the virtual world (in this case, the city): as a chart changes, the virtual world also changes, and vice versa.
Additional example #3: Visual education
In areas such as science, medicine, and demographics, where complex data must be understood not only by domain practitioners but also by the public, authors can use the principles described herein to create data visualizations that arouse audience interest. They can use domain-specific metaphors and apply their own authorial style. FIG. 31 is an educational illustration regarding children, and FIG. 32 is a conventional illustration regarding population density. Typically, such visualizations are simply static illustrations. Using the principles described herein, they can become live, interactive experiences. For example, by providing geographically distributed growth patterns as input data, the user can see how peak populations change. Some visualizations supported by an authored model will let the user perform "what-if" analysis; that is, the user can change some values and see the effect of the change on other values.
Additional example #4: Application of view components to parameter targets
In some cases, it may be desirable to apply view components to various parameter targets (e.g., other view components), as shown, for example, in FIGS. 33 and 34. In particular, the heights of the panels in both FIGS. 33 and 34 represent the size of Napoleon's army during the ill-fated Russian campaign of 1812. In FIG. 33, the panels are applied to a parameter target representing the actual route taken by Napoleon's army. In FIG. 34, the panels are applied to a different parameter target: a spiral. In this manner, the view components (e.g., the panels) can be applied to different parameter targets (e.g., a view component representing the actual route taken by Napoleon's army, or a view component representing the spiral).
View components may be children of the parameter target to which they are applied. For example, the panels in FIG. 33 may be children of the army's route, while the panels in FIG. 34 may be children of the spiral. In addition, a label representing the size of the army may be a child of the panel whose height represents that size.
Solvers associated with the properties of child view components, and solvers associated with the properties of parent view components, can be written explicitly, through a dependency tree, or implicitly, through property-setters with solvers that call other property-setters with solvers.
As such, setting a property on a child may trigger a re-solve of a property on the parent (sometimes referred to as "raising"), while setting a property on the parent may trigger a re-solve of a property on the children (sometimes referred to as "delegation"). For example, referring to FIG. 33, a user may attempt to change the height of a panel, which may trigger that panel's property-setter to increase the scale of the panel. The panel's scale property-setter may in turn invoke the panel group's scale property-setter (raising). The panel group's scale property-setter may then call the individual scale property-setter of each panel (delegation).
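A minimal sketch of raising and delegation, assuming hypothetical Panel and PanelGroup classes (the framework's actual property-setter machinery is not shown), might be:

```typescript
// Hypothetical sketch of "raising" and "delegation" between property-setters.
class PanelGroup {
  panels: Panel[] = [];

  // Setting the parent's property delegates to each child's property-setter.
  setScale(scale: number): void {
    for (const panel of this.panels) panel.setScale(scale, /* fromParent */ true);
  }
}

class Panel {
  scale = 1;
  constructor(private group?: PanelGroup) {}

  // Setting the child's property raises to the parent's property-setter,
  // unless the call itself came from the parent (avoiding infinite recursion).
  setScale(scale: number, fromParent = false): void {
    this.scale = scale;
    if (!fromParent) this.group?.setScale(scale); // raise
  }
}

const group = new PanelGroup();
group.panels.push(new Panel(group), new Panel(group));
group.panels[0].setScale(2); // raises to the group, which delegates to every panel
```

In this sketch, a single user edit to one panel's scale propagates up to the group and back down to every sibling panel, mirroring the raise-then-delegate sequence described above.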
Note that FIG. 33 is, specifically, an example of applying a chart to a virtual world. A "virtual world" is a computerized map representation of some real or hypothetical area. The map may be two-dimensional or three-dimensional. For example, a city map may include streets, buildings, rivers, and so forth, shown in geographic relation. A "chart", on the other hand, represents a series of data applied to variables. A chart may be applied to a virtual world, using the principles described herein, by applying the data series represented in the chart to certain aspects of the virtual world. Such an aspect may be a horizontal or vertical surface, the external surface of a three-dimensional object, a route, and so forth. The virtual world may represent an actual physical space or a purely hypothetical area, and it may represent all or a portion of a town, city, state, province, region, building, neighborhood, planet, or any other physical region.
For example, in the example of FIG. 33, the chart data includes, for each of a number of time periods, the number of soldiers in Napoleon's army at that time. This information is applied to the route of Napoleon's army: the height of each bar represents the number of soldiers in Napoleon's army on a particular segment of the army's journey. Other geographic features may be included in the map as desired, such as rivers, cities, oceans, and so forth. A chart may be applied to a virtual world by applying chart features, such as coordinate systems, axes, or markers, to virtual world features such as surfaces.
As another example of applying a chart to a virtual world, assume that a company has data representing the morale of its employees. This chart data may be applied to a virtual world, such as a three-dimensional representation of the company's campus, by rendering each employee's window in a color representing that employee's morale. For example, a blue window may represent a happy employee, while a red window may represent an unhappy employee. Appropriate analyses can then be performed. For example, one might take a high-level view of the campus map and determine which buildings tend toward red. Perhaps one building is isolated from the others, resulting in fewer opportunities for interaction and lower morale. The company might then reconsider whether to lease that building in the future and attempt to obtain buildings for those employees that are closer to the rest of the campus.
Thus, the principles described herein represent a significant paradigm shift in the field of visual problem solving and analysis. The paradigm shift applies across all domains, as the principles described herein can be applied to any domain.
Domain-specific classification of data
Referring back to FIG. 2, the pipeline 201 is data-driven. For example, input data 211 is provided to the data portion 210, analytics data 221 is provided to the analytics portion 220, and view data 231 is provided to the view portion 230. Examples of each of these kinds of data have been described. Suffice it to say that the universe of data from which the authoring component 240 can select can be quite large, particularly given the ease of composition by which portions of one model can be imported into another to build increasingly complex models. To facilitate navigating through this data so that the appropriate data 211, 221, and 231 can be selected, the classification component 260 provides a number of domain-specific classifications of the input data.
FIG. 35 illustrates a classification environment 3500 in which the classification component 260 may operate. Classification involves assigning items to categories and relating the categories to one another. As such, the environment 3500 includes a collection of items 3510 that are subject to classification. In FIG. 35, the collection 3510 is shown including only a few items, namely items 3511A through 3511P (collectively, "member items 3511"). Although only a few member items 3511 are shown, there can be any number of items to be classified, perhaps hundreds, thousands, or even millions, as represented by the ellipsis 3511Q. The member items 3511 constitute the pool from which the authoring component 240 can select in order to provide the data 211, 221, and 231 to the pipeline 201.
The domain-sensitive classification component 3520 accesses all or a portion of the member items 3511 and can generate different classifications of the member items 3511. For example, the classification component 3520 generates domain-specific classifications 3521. In this case, there are five domain-specific classifications 3521A through 3521E, potentially with others as represented by the ellipsis 3521F. There can also be fewer than five domain-specific classifications created and maintained by the classification component 3520.
As an example, classification 3521A may classify member items suitable for the "geomantic" domain, classification 3521B may classify member items suitable for the motorcycle design domain, classification 3521C may be suitable for the city planning domain, classification 3521D may be suitable for the inventory management domain, and classification 3521E may be suitable for the abstract illustration domain. Of course, these are only five of the potentially countless domains that can be served by the pipeline 201. Each classification may use all or a subset of the available member items in its categories.
FIG. 36 shows a specific and simple example 3600 of a classification of member items. The classification may be, for example, the domain-specific classification 3521A of FIG. 35; subsequent figures will illustrate more complex examples. The classification 3600 includes a category node 3610, which includes all of the member items 3511 except member items 3511A and 3511E. The category node 3610 may be, for example, an object that includes pointers to its constituent member items; thus, in a logical sense, the member items may be considered "included within" the category node 3610. The category node 3610 also has an associated attribute association descriptor 3611, which describes membership in the category node 3610 in terms of the attributes of candidate member items. When determining whether a member item should be included in a category, the attributes of the member item can be evaluated against the descriptor.
Within one classification, two categories may be related to each other in many different ways. One common relationship is for one category to be a subset of another. For example, if there is a "vehicle" category containing all objects that represent vehicles, there may be a "car" category containing a subset of the vehicle category. The attribute association descriptors of the two categories may define the particular relationship. For example, the attribute association descriptor of the vehicle category may indicate that an object is to be included in the category if: 1) the object is movable, and 2) the object can carry a person. The car category's attribute association descriptor may include both of these attribute requirements, explicitly or implicitly, and may add the following requirements: 1) the object has at least three wheels that remain in contact with the ground while the object moves, 2) the object is an automobile, and 3) the object's height is no more than six feet. Based on the attribute association descriptors of each category, the classification component can assign an object to one or more categories of any given domain-specific classification, and can also understand the relationships between the categories.
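The vehicle/car descriptors might be sketched as simple admission predicates; every attribute name below is invented for illustration.

```typescript
// Hypothetical sketch of attribute association descriptors as predicates.
interface MemberItem { attrs: Record<string, unknown> }

type AttributeAssociationDescriptor = (item: MemberItem) => boolean;

const vehicleDescriptor: AttributeAssociationDescriptor = item =>
  item.attrs["movable"] === true && item.attrs["carriesPeople"] === true;

// The car descriptor implicitly includes the vehicle requirements, then adds its own.
const carDescriptor: AttributeAssociationDescriptor = item =>
  vehicleDescriptor(item) &&
  typeof item.attrs["groundWheels"] === "number" && (item.attrs["groundWheels"] as number) >= 3 &&
  item.attrs["isAutomobile"] === true &&
  typeof item.attrs["heightFeet"] === "number" && (item.attrs["heightFeet"] as number) <= 6;

const sedan: MemberItem = {
  attrs: { movable: true, carriesPeople: true, groundWheels: 4, isAutomobile: true, heightFeet: 4.8 },
};
console.log(vehicleDescriptor(sedan), carDescriptor(sedan)); // true true
```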
FIG. 36 also shows a second category node 3620 with another attribute association descriptor 3621. The category node 3620 logically includes all member items satisfying the attribute association descriptor 3621. In this case, the member items logically included in that category node are a subset of the member items included in the first category node 3610 (namely, member items 3511F, 3511J, 3511N, and 3511P). This may be because the attribute association descriptor 3621 of the second category node 3620 specifies the same attribute requirements as the descriptor 3611 of the first category node 3610, plus one or more additional attribute requirements. The relationship between the first category node 3610 and the second category node 3620 is represented logically by relationship 3615.
In the vehicle-car example, the relationship between the categories is a subset relationship; that is, one category (e.g., the car category) is a subset of another (e.g., the vehicle category). However, there are various other types of relationships, even new relationship types that may never have been identified or used before. For example, there may be a majority-inheritance relationship, in which, if most (or some specified percentage) of the objects in one category have a particular attribute value, then the objects in another category have that attribute and inherit its value. There may be a "similar color" relationship, in which, if the objects of one category have a primary color within a certain wavelength range of visible light, then the related category contains objects having a primary color within an adjacent wavelength range. There may be a "virus mutation" relationship, in which, if a category contains objects representing infectious diseases caused primarily by a particular virus, the related category may include objects representing infectious diseases caused by mutated forms of that virus. The examples could be multiplied. Upon reading this description, those skilled in the art will recognize that the types of relationships between categories are without limit.
Furthermore, a classification may include many different types of relationships. For clarity, several classifications will now be described abstractly; examples of abstractly represented classifications are shown in FIGS. 37A through 37C. Specific examples will then be described, with the understanding that the principles described herein permit countless applications of domain-specific classification in data-driven visualization.
The example of FIG. 36 is a simple two-category-node classification, whereas the examples of FIGS. 37A through 37C are more complex. Each node in the classifications 3700A through 3700C of FIGS. 37A through 37C represents a category node that contains zero or more member items, and each may have an associated attribute association descriptor, which is essentially an admission policy governing which member items may enter the category node. However, to avoid undue complexity, the member items and attribute association descriptors of the category nodes of classifications 3700A through 3700C are not shown. The lines between category nodes represent relationships between them; these may be subset relationships or any other type of relationship, without limitation. The exact nature of the relationships between the category nodes is not critical. Nonetheless, to emphasize that there may be various types of relationships between the category nodes in a classification, the relationships are labeled A, B, C, D, or E.
FIGS. 37A through 37C are provided only as examples. Not only is the exact structure of the classifications of FIGS. 37A through 37C not critical, but the principles described herein also permit great flexibility in which classifications are generated, even from the same group of input candidate member items. In these examples, classification 3700A includes category nodes 3701A through 3710A related to one another using relationship types A, B, and C. Classification 3700B includes category nodes 3701B through 3708B related using relationship types B, C, and D. Classification 3700C includes category nodes 3701C through 3712C related using relationship types C, D, and E. In this example, classifications 3700A and 3700B are hierarchical, while classification 3700C is more of a non-hierarchical network.
As new candidate member items become available, they may be evaluated against the attribute association descriptors of each category node in each classification. If a member item's attributes have values that satisfy the requirements of an attribute association descriptor (i.e., the admission policy), then the member item is absorbed into that category node; for example, a pointer to the member item may be added to the category node. Thus, if a new member item has a sufficient set of attributes, it can be automatically imported into the appropriate categories of all the classifications.
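This absorption step might be sketched as follows, reusing the predicate idea above; the CategoryNode shape is an assumption.

```typescript
// Hypothetical sketch: a new candidate member item is evaluated against every
// category node's admission policy and absorbed wherever it qualifies.
interface MemberItem { attrs: Record<string, unknown> }

interface CategoryNode {
  name: string;
  admits(item: MemberItem): boolean; // the attribute association descriptor
  members: MemberItem[];             // logically, pointers to member items
}

function absorb(item: MemberItem, nodes: CategoryNode[]): void {
  for (const node of nodes) {
    if (node.admits(item)) node.members.push(item); // e.g., add a pointer
  }
}
```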
FIG. 38 illustrates a member item 3800 that includes a plurality of attributes 3801. There may be a single attribute, but there may also potentially be thousands of attributes associated with the member item 3800. In FIG. 38, the member item 3800 is shown as including four attributes 3801A, 3801B, 3801C, and 3801D, potentially along with other attributes as represented by the ellipsis 3801F. There is no limitation on what these attributes may be. They may be anything that may be useful for classifying the member item into a classification.
In one embodiment, the potential data for each of the data portion 210, the analytics portion 220, and the view portion 230 may be classified. For example, consider a domain in which an author is writing a consumer application that allows an individual (such as a consumer or a resident) to interact with a map of a city.
In this consumer domain, there may be a classification of the view data 231 that may be selected. For example, there may be a building category that includes all buildings. There may be different types of buildings: government buildings, hospitals, restaurants, residences, and the like. There may also be a transportation category that includes rail, road, and canal sub-categories. The road category may include categories or objects representing streets, highways, bicycle lanes, overpasses, and the like. The street category may include objects or categories of visual representations of one-way streets, multi-lane streets, turning lanes, center lanes, and so forth. There may be a parking lot category containing visual representations of different types of parking lots, or other sub-categories of parking lots (e.g., multi-story parking lots, underground parking lots, street parking, etc.). Parking lots may also be sub-classified according to whether they are free or paid.
There may also be a classification of the input data in this consumer domain. For example, a multi-level parking lot may have data associated with it, such as, for example, 1) whether the parking lot offers valet parking, 2) the hourly parking rate, 3) the hours during which the parking lot is open, 4) whether the parking lot has a security patrol and, if so, how many security guards per unit area, 5) the number of levels in the parking lot, 6) the square footage of the parking lot if there is only one level or, for a multi-level parking lot, the square footage of each level, 7) the historical number of car thefts per year occurring in the parking lot, 8) the capacity of the parking lot, 9) whether parking is limited by one or more conditions (e.g., use by a nearby business, visiting a restaurant or mall, etc.), or any other data that may be useful. There may also be data associated with other visual items, as well as data that may in no way affect the way a visual item is presented but may be used for calculations at some point.
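Continuing the hypothetical sketch above (and reusing its CategoryNode and absorb helpers), a multi-level parking lot might be expressed as a member item whose attributes mirror the kinds of input data just listed; every attribute name and value below is invented for illustration:

```python
# A hypothetical multi-level parking lot as a member item, loosely
# following items 1) through 9) above. All values are made up.
parking_lot = {
    "valet_parking": True,
    "hourly_rate_usd": 2.50,
    "open_hours": "24/7",
    "security_patrol": True,
    "guards_per_1000_sq_m": 0.2,
    "levels": 4,
    "sq_m_per_level": [1200, 1200, 1100, 1100],
    "thefts_per_year": 3,
    "capacity": 320,
    "restrictions": ["visitors of the adjacent mall only"],
}

# Consumer-domain category nodes that could absorb it (free vs. paid):
free_lots = CategoryNode("free parking lots",
                         policy=lambda item: item["hourly_rate_usd"] == 0)
paid_lots = CategoryNode("paid parking lots",
                         policy=lambda item: item["hourly_rate_usd"] > 0)
absorb(parking_lot, [free_lots, paid_lots])  # absorbed into paid_lots
```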
There may also be a classification of the analytics data 221 that is specific to this consumer map domain. For example, the analytics may include cost-based analytics in one category, time-based analytics in another, distance-based analytics in yet another, catalog analytics in another, and route analytics in another. Here, the analytics are classified to help authors develop analytical models for the desired application. For example, the route analytics category may include a category of formulas for calculating routes, constraints specifying what restrictions may be placed on a route (e.g., shortest route, maximal use of freeways, avoidance of particular streets, etc.), and rules (e.g., the direction of traffic on a particular road). Similar sub-categories may also be included for the other categories.
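As a hedged illustration of how such a route analytics category might group formulas, constraints, and rules, consider the following sketch; the dictionary layout and every entry in it are assumptions for illustration, not the disclosure's own representation:

```python
# Hypothetical contents of the route analytics category.
route_analytics = {
    "formulas": {
        # Travel time over a road segment given its length and speed limit.
        "segment_time_h": lambda length_km, speed_kmh: length_km / speed_kmh,
    },
    "constraints": [
        "shortest route",
        "maximal use of freeways",
        "avoid designated streets",
    ],
    "rules": [
        "respect the direction of traffic on each one-way road",
    ],
}
```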
Consider now another domain that also deals with the layout of cities, but in this case the domain is city planning. Here, there are analytics of interest to the city planner that are of little or no interest to the consumer. For example, there may be analytics that calculate how wide a pedestrian walkway should be given a certain traffic flow, what the overall installation and maintenance cost per unit length of a certain road type placed in a certain area is, what the safety factor of a bridge is given predicted traffic flow patterns over the next 20 years, where the traffic flow bottlenecks are in the current city plan, what the environmental impact would be if a specific building were built at a specific location, what the impact would be if certain restrictions were imposed on the use of that specific building, and so forth. Here, the problems to be solved are different from those in the consumer domain. Thus, for the city planning domain, the classification of the analytics can differ considerably from that of the consumer domain, even though both deal with city topology.
On the other hand, the tractor design domain may be interested in a whole different set of analytics and would use different classifications. For example, the visual items in the tractor design domain may be completely different from those in the city planning domain. City-related visual items are no longer of concern. Instead, the various visual items that make up a tractor are categorized. As an example, relationships describing what can be connected to what may be used to classify visual items. For example, there may be categories such as "things that can be connected to a seat," "things that can be connected to a carburetor," "things that can be connected to a rear axle," and so on. There may also be different analytics classifications. For example, there may be constraints on tire tread depth, given that the tractor needs to navigate through wet soil. There may be an analytic that calculates the total weight of the tractor or of its sub-assemblies, and so forth.
FIG. 39 shows a domain-specific classification 3900, which represents one example of the domain-specific classification 3521 of FIG. 35. In one embodiment, the domain-specific classification includes a data classification 3901 in which at least some of the available data items are classified into corresponding associated categories, a view component classification 3903 in which at least some of the available view components are classified into corresponding associated view component categories, and an analytics classification 3902 in which at least some of the available analytics are classified into corresponding associated analytics categories. Examples of such domain-specific classifications have been described above, in which data, analytics, and view components are classified in a domain-specific manner.
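Continuing the earlier hypothetical Python sketch, the three-part structure of FIG. 39 might be grouped as follows; only the 3901/3902/3903 grouping comes from the text, while the type and field names are invented:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DomainSpecificClassification:
    """Groups the three classifications of FIG. 39; CategoryNode is the
    hypothetical class from the earlier sketch."""
    data_classification: List["CategoryNode"]             # 3901
    analytics_classification: List["CategoryNode"]        # 3902
    view_component_classification: List["CategoryNode"]   # 3903
```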
FIG. 40 illustrates a method for navigating and using analytics. The analytics portion 220 is accessed (act 4001) along with a corresponding domain-specific analytics classification (act 4003). If there are multiple domain-specific analytics classifications, then the domain is first identified (act 4002) before the domain-specific analytics classification can be accessed (act 4003).
The analytics classification may then be navigated (act 4004) by traversing the associated categories. This navigation may be performed by a person with assistance from a computing system, or even by the computing system alone without concurrent human assistance. From the associated attribute association descriptors, a computer or a person may derive, for each category, the permission policy that an analytic must satisfy in order to be entered into that category. Information may also be derived from the relationships between categories. Navigation may be used to solve analytical problems, to solve for output model parameters, or perhaps to merge analytics from multiple models. Alternatively, navigation may be used to author an analytical model in the first place.
For example, suppose the analytics classification classifies analytics by the type of problem to be solved. An author may start by looking at all of the analytics in the relevant problem-type category. That category may have associated categories that address certain portions of the problem to be solved. The author can quickly navigate to those associated categories and find the relevant analytics in the domain, as in the sketch below.
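One hedged way to realize such navigation (act 4004), again reusing the hypothetical CategoryNode from the earlier sketches, is a bounded breadth-first traversal; the hop limit and the caller-supplied matches predicate are illustrative choices, not requirements of the method:

```python
from collections import deque

def find_analytics(start: "CategoryNode", matches, max_hops: int = 3) -> list:
    """Navigate outward from a problem-type category node, collecting
    member analytics that satisfy the caller-supplied `matches` predicate.
    Traversal follows the node's typed relationships breadth-first."""
    seen = {id(start)}
    queue = deque([(start, 0)])
    found = []
    while queue:
        node, hops = queue.popleft()
        found.extend(m for m in node.members if matches(m))
        if hops < max_hops:
            for related in node.relationships.values():
                for neighbor in related:
                    if id(neighbor) not in seen:
                        seen.add(id(neighbor))
                        queue.append((neighbor, hops + 1))
    return found
```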
Searching and exploring
As mentioned above, data-driven analytical models can be used to perform analytics-intensive search and exploration operations. FIG. 41 shows a flow diagram of a method 4100 for conducting a search using a data-driven analytics model. Method 4100 may be performed each time a search request is received or otherwise accessed by the search tool 242 (act 4101).
The principles described herein are not limited with respect to the mechanisms by which a user may enter a search request. Nevertheless, several examples will now be provided to illustrate the wide range of possible search request entry mechanisms. In one example, the search request is text-based and is entered directly into a text search field. In another example, a radio button is selected to enter a search parameter. A slider might also be used to enter a range for a search parameter. The search request may even be generated by interacting with the user. For example, where a user is searching for real estate that experiences a certain range of noise levels, the application might generate noise of increasing loudness and ask the user to press a "too loud" button when the noise becomes louder than the user is willing to tolerate.
Such a search request is not a conventional search request, but may require a solving operation on the data-driven analytical model. Before solving, however, the search tool 242 of FIG. 2 identifies any model parameters that should be solved for in order to respond to the request (act 4102). This can be done using, for example, the various classifications discussed above. For example, where a user is searching for real estate that is not in the shadow of a mountain at any time of the year after 9:15 a.m., there may be a model variable to be solved for called "mountain shadow." Where a user is searching for real estate that experiences a certain noise level given certain coordinates, there may be a model variable to be solved for called "average noise."
Once the relevant output variables are identified, the analytical relationships of the analytics portion 220 are used to solve for the output variables (act 4103). The search tool 242 then uses the solved values of the output variables to formulate a response to the search request (act 4104). Although in some cases a user may interact with method 4100 while it is being executed, method 4100 may also be executed by a computing system without concurrent human assistance. The search request may be issued by a user, or perhaps even by another computing system or software module.
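A minimal sketch of this pipeline (acts 4102 through 4104) follows; the three callables stand in for the classification lookup, the analytics solver, and the response formatter, and none of their names are taken from the disclosure:

```python
def handle_search(request, identify_unknowns, solve, format_response):
    """Sketch of method 4100: identify the model variables the request
    needs, solve for each, and express the response from the solved values."""
    unknowns = identify_unknowns(request)                     # act 4102
    solved = {var: solve(var, request) for var in unknowns}   # act 4103
    return format_response(request, solved)                   # act 4104
```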
Method 4100 may be repeated each time a search request is processed. The model variables being solved for may differ, but need not differ, from one search request to the next. For example, consider three search requests for houses within a certain price range and noise level. The first search request might be for houses in the $400,000 to $600,000 price range with an average noise level below 50 decibels. The parameter to be solved for here is the noise level. A second search request might be for houses in the $200,000 to $500,000 price range with an average noise level below 60 decibels. The parameter to be solved for is again the noise level. Note, however, that by the time of the second search request, some solving has already been performed. For example, in serving the first search request, the system already identified houses in the $400,000 to $500,000 price range whose average noise level is below 50 decibels. Thus, for those houses, the noise level need not be recalculated. Once solved, those values can be retained for future searches. This allows the user to explore by submitting follow-up requests. The user might then submit a third search request for houses in the $400,000 to $500,000 price range with a noise level below 45 decibels. Since the noise levels of those houses have already been solved for, they need not be solved again, and the search results can be returned with far less computation. In essence, the system learns new information by solving problems and can apply that information to solving other problems.
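The retention of solved values described above resembles memoization; a hedged sketch, keyed here by an assumed (item, variable) pair, might look like this:

```python
solved_values: dict = {}  # (item_id, variable) -> value solved earlier

def solve_cached(item_id, variable, solve):
    """Return a previously solved value (e.g., a house's average noise
    level) when one exists; otherwise solve once and retain the result
    for future search requests."""
    key = (item_id, variable)
    if key not in solved_values:
        solved_values[key] = solve(item_id, variable)
    return solved_values[key]
```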
As mentioned above, each search request may involve solving for different output model variables. For example, after performing the search requests just described, the user might submit a search request for houses that are never in the shadow of a mountain. Once the system solves for this, the results can be reused to complete subsequent similar requests, whether submitted by the same user or another user. The user might submit a search request for houses that would withstand a magnitude 8.0 earthquake, causing a simulation to verify whether each house would remain standing or collapse, or perhaps to provide some percentage chance that the house would remain standing. For houses lacking sufficient structural information to perform an accurate simulation, the system can simply declare the result inconclusive. Once the seismic simulation has been performed, its results can be reused whenever someone submits a search request for houses that can withstand a given seismic magnitude, unless a structural change to a house requires re-simulation, or unless an improved simulation solver becomes available. The user might also submit a search request for a price range of houses that would not be severely damaged or destroyed in a category 5 hurricane.
While the embodiments have been described in considerable detail, it is noted that the various operations and structures described herein may be, but need not be, implemented by a computing system. Accordingly, to conclude this description, an example computing system will be described with reference to FIG. 42.
FIG. 42 illustrates a computing system 4200. Computing systems now take an increasingly wide variety of forms. A computing system may, for example, be a handheld device, an appliance, a laptop computer, a desktop computer, a mainframe, a distributed computing system, or even a device not conventionally considered a computing system. In this description and in the claims, the term "computing system" is defined broadly to include any device or system (or combination thereof) that includes at least one processor and a memory capable of holding computer-executable instructions that may be executed by the processor. The memory may take any form and may depend on the nature and form of the computing system. A computing system may be distributed over a network environment and may include multiple constituent computing systems.
As shown in FIG. 42, in its most basic configuration, the computing system 4200 typically includes at least one processing unit 4202 and memory 4204. The memory 4204 may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term "memory" may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computing system is distributed, the processing, memory, and/or storage capability may be distributed as well. As used herein, the term "module" or "component" can refer to software objects or routines that execute on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads).
In the description that follows, embodiments are described with reference to acts that are performed by one or more computing systems. If such actions are implemented in software, the one or more processors of the associated computing system that perform the actions direct the operation of the computing system in response to having executed computer-executable instructions. An example of such an operation involves the manipulation of data. The computer-executable instructions (as well as the data that is manipulated) may be stored in the memory 4204 of the computing system 4200.
Computing system 4200 may also contain communication channels 4208 that allow computing system 4200 to communicate with other message processors over, for example, network 4210. Communication channels 4208 are examples of communication media. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a "modulated data signal" such as a carrier wave or other transport mechanism, and include any information delivery media. By way of example, and not limitation, communication media include wired media, such as wired networks and direct-wired connections, and wireless media, such as acoustic, radio, infrared, and other wireless media. The term "computer-readable media" as used herein includes both storage media and communication media.
Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise physical memory and/or storage media such as RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described herein. Rather, the specific features and acts described herein are disclosed as example forms of implementing the claimed subject matter.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (7)
1. A computer-implemented method for using a plurality of model variables that define properties of one or more view components of view composition, the method comprising:
an act of solving (1401) an unknown second model variable (1503b) of the plurality of model variables using a known first model variable (1503a) of the plurality of model variables, the second model variable (1503b) defining a first property of a first view component of the view composition, the known first model variable defining a first property of a second view component of the view composition, wherein a solver invoked by a property-setter for the first property of the first view component performs the act of solving the unknown second model variable of the plurality of model variables using the known first model variable of the plurality of model variables, the first view component being a child of the second view component;
an act of setting (1403) the first property of the first view component of the view composition to a value of the solved second model variable (1503b), wherein the property-setter for the first property of the first view component performs the act of setting the first property of the first view component of the view composition to the value of the solved second model variable, the property-setter for the first property of the first view component being invoked by a property-setter for the first property of the second view component; and
an act of rendering (1404) the view composition including the first view component.
2. The computer-implemented method in accordance with claim 1, wherein the known first model variable defines a second property of the first view component.
3. The computer-implemented method in accordance with claim 2, wherein a solver invoked by a property-setter for the first property of the first view component performs the act of solving for the unknown second one of the plurality of model variables using the known first one of the plurality of model variables; and wherein the property-setter of the first property of the first view component performs the act of setting the first property of the first view component of the view composition to the value of the solved second model variable.
4. The computer-implemented method in accordance with claim 3, wherein the property-setter for the first property of the first view component is invoked by a property-setter for the second property of the first view component.
5. The computer-implemented method in accordance with claim 1, wherein a solver invoked by a property-setter for the first property of the first view component performs the act of solving for the unknown second one of the plurality of model variables using the known first one of the plurality of model variables; and wherein the property-setter of the first property of the first view component performs the act of setting the first property of the first view component of the view composition to the value of the solved second model variable.
6. A computer-implemented method for using a plurality of model variables that define properties of one or more view components of view composition, comprising:
invoking (1601) a first property-setter (1502a), the first property-setter (1502a) configured to set (1602) at least one property of at least one view component of a view composition, and invoking (1603) a second property-setter (1502b), wherein the first property-setter is configured to set a first property of a second view component of the view composition, the second property-setter (1502b) configured to:
invoking (1604) a solver (1501b) configured to solve a plurality of unknown first model variables of a plurality of model variables defining properties of one or more view components of the view composition; and
setting (1605) a first property of a first view component of the view composition to a value of the solved first model variable, the first view component being a child of the second view component.
7. The method of claim 6, wherein the first property-setter is configured to set a second property of the first view component of the view composition.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/488,201 US8692826B2 (en) | 2009-06-19 | 2009-06-19 | Solver-based visualization framework |
US12/488,201 | 2009-06-19 | ||
PCT/US2010/039268 WO2010148364A2 (en) | 2009-06-19 | 2010-06-18 | Solver-based visualization framework |
Publications (2)
Publication Number | Publication Date |
---|---|
HK1177272A1 (en) | 2013-08-16 |
HK1177272B (en) | 2016-02-26 |
Similar Documents
Publication | Title |
---|---|
CN102804186B (en) | Solver-based visualization framework |
US9342904B2 (en) | Composing shapes and data series in geometries |
US9330503B2 (en) | Presaging and surfacing interactivity within data visualizations |
US8531451B2 (en) | Data-driven visualization transformation |
US8493406B2 (en) | Creating new charts and data visualizations |
US8259134B2 (en) | Data-driven model implemented with spreadsheets |
US8788574B2 (en) | Data-driven visualization of pseudo-infinite scenes |
US8352397B2 (en) | Dependency graph in data-driven model |
US20100325564A1 (en) | Charts in virtual environments |
US8103608B2 (en) | Reference model for data-driven analytics |
US8155931B2 (en) | Use of taxonomized analytics reference model |
US8145615B2 (en) | Search and exploration using analytics reference model |
US8314793B2 (en) | Implied analytical reasoning and computation |
US8255192B2 (en) | Analytical map models |
US8117145B2 (en) | Analytical model solver framework |
US8190406B2 (en) | Hybrid solver for data-driven analytics |
US20090322739A1 (en) | Visual Interactions with Analytics |
US20090326885A1 (en) | Composition Of Analytics Models |
Moreira et al. | The Urban Toolkit: A grammar-based framework for urban visual analytics |
HK1177272B (en) | Solver-based visualization framework |