HK1171100A - Creation, editing and navigation of diagrams
- Publication number: HK1171100A (application HK12111843.2A)
- Authority: HK (Hong Kong)
- Prior art keywords: user, user input, domain, constrained, computer
Description
Technical Field
The present invention relates to data processing techniques, and more particularly, to diagram processing techniques.
Background and Related Art
Computers and computing systems have affected almost every aspect of modern life. Computers are commonly involved in work, recreation, health care, transportation, entertainment, home administration, and the like.
Many computers are intended to be used through direct user interaction with the computer. As such, computers have input hardware and software user interfaces to facilitate user interaction. For example, modern general-purpose computers may include a keyboard, mouse, touchpad, camera, etc. for allowing a user to provide input to the computer. In addition, various software user interfaces are available.
Examples of software user interfaces include graphical user interfaces, text command line based user interfaces, function key or hot key user interfaces, and the like.
Creating system architecture diagrams using existing or current tools is often slow and cumbersome. To create content, a user must constantly switch between the mouse (e.g., to create nodes and links), the keyboard (e.g., to name nodes and links, or to add members to nodes), and back again to the mouse (e.g., to fine-tune the positions of nodes and links relative to other nodes and links in the diagram, or to rearrange multiple nodes and links to accommodate the addition of new nodes and/or links). For developers, this can result in a continuous interruption of the flow of thought as the developer tries to explore a design.
To address this persistent mode switching, a system may include keyboard shortcuts for creating nodes and links. However, conventional systems make this impractical. First, in complex domains such as UML, the large number of types involved makes it difficult to create, let alone remember, meaningful keyboard shortcuts for all of them. Second, layout is critical in architectural diagrams, so fully automated layout is often impractical. Conventional systems address this by giving users very fine-grained control over the positioning of nodes and links. However, in most cases the user wishes to consider the layout in a broad sense, expressing intent using terms such as "above/below" and "left/right". The ability to fine-tune a particular layout is only minimally helpful for this, and is one of the reasons for continued mode switching in current systems. Furthermore, there is evidence that this is seen as an adoption barrier for people who wish to move from a hand-drawn whiteboard to a more sophisticated computer-implemented diagramming solution.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is provided merely to illustrate one exemplary technology area in which some embodiments described herein may be practiced.
Disclosure of Invention
One embodiment illustrated herein includes a method practiced in a computing environment. The method includes acts for organizing data, wherein the data has spatial significance. The method includes displaying a representation of the spatially structured data to a user on a user interface. User input is received at a computer-implemented user interface through one or more hardware user interface devices. User input is domain agnostic, but has spatial connotation. The domain of the user input is determined based on pre-existing structured data displayed on the user interface or a prior user action. Based on the determined domain, the user input is interpreted as a domain-specific response. The domain-specific response is consistent with spatial connotation across multiple domains.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
Drawings
In order to describe the manner in which the above-recited and other advantages and features of the present subject matter can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting of its scope, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
FIG. 1 illustrates a table of various gestures, commands, and diagram domains;
FIG. 2A illustrates a class diagram;
FIG. 2B illustrates adding an inherited class to a class diagram;
FIG. 3A illustrates an activity diagram;
FIG. 3B illustrates adding an action to an activity diagram;
FIG. 4A illustrates an activity diagram;
FIG. 4B illustrates a portion of changing an element type in an activity diagram;
FIG. 4C illustrates a portion of changing an element type in an activity diagram;
FIG. 4D illustrates a portion of changing an element type in an activity diagram;
FIG. 4E illustrates a portion of changing an element type in an activity diagram;
FIG. 5A illustrates a portion of creating a link in an activity diagram;
FIG. 5B illustrates a portion of creating a link in an activity diagram;
FIG. 5C illustrates a portion of creating a link in an activity diagram;
FIG. 5D illustrates a portion of creating a link in an activity diagram;
FIG. 6 illustrates navigation in an activity diagram; and
FIG. 7 illustrates a method of organizing data, where the data has spatial significance.
Detailed Description
Some embodiments may use more intelligent keyboard shortcuts, semi-automatic layout, and automatic domain- and/or context-constrained prompts to allow the user to remain in the flow of creating diagrams without having to interrupt work to manage a specific layout. One embodiment system may implement a combination of commands to enable a user to place a new node, in a consistent manner, in semantically meaningful locations, such as above, below, or inside the current node, without burdening the user with managing a pixel-by-pixel layout. This is combined with targeted context- and/or domain-constrained prompts (such as IntelliSense) to set element types when entering names or labels, resulting in a system that lets users stay in the flow of creating diagrams without having to interrupt their work to handle specific layouts. Alternatively or additionally, embodiments may include the ability to easily navigate a diagram via its links.
Some embodiments implement functionality in which standard directional commands accessible through various gestures may be used to create elements above, below, left, right, or inside a current element in a diagram or other spatially structured data. The directional command remains the same regardless of the domain, as the direction associated with the command has a universal meaning and can be combined with domain knowledge to determine, for example, which type of element to create by default.
Some embodiments may be applied to graphs and/or other spatially structured data. As described, embodiments may be used to create a system architecture blueprint, for example, using UML diagrams. Embodiments may be used with flow chart diagrams. In particular, embodiments may be used to place nodes and connectors in a graph. Embodiments may be used for physical object spatial layout. For example, embodiments may be used to describe physical objects with reference to other existing physical objects. Embodiments may be used with databases or other tables. For example, embodiments may be used to define database object dependencies or relationships.
As described, embodiments may use a variety of gestures, where the gestures are invariant regardless of domain. For example, some gestures may be implemented as keyboard gestures. Example keyboard gestures may include "Enter" for creating a new node or object below the in-focus entity; "Ctrl-L" for creating a new object to the left of the in-focus entity; "Ctrl-R" for creating a new object to the right of the in-focus entity; "Ctrl-U" for creating a new object above the in-focus entity; and "Ctrl-N" for creating a new object inside the in-focus entity. Naturally, keyboard gestures (or other gestures as described below) will produce different results in different domains, but with a similar "directional" meaning. For example, in the case of designing a circuit layout for an automobile, Ctrl-U may add circuitry in the orthogonal layout for the front headlights of the automobile, while in the case of a UML class diagram, Ctrl-U may add a base class. In both cases, the diagram elements will be properly aligned. One commonality is that these embodiments all take direction cues, apply them to specific domains, and shield the user from having to make pixel-by-pixel changes.
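By way of a non-limiting illustration, the following TypeScript sketch (identifiers are hypothetical and not part of the claimed subject matter) shows how such a domain-invariant keyboard gesture table might be represented; the same bindings are reused in every domain, and only the interpretation of the resulting direction is domain-specific:

```typescript
type Direction = "above" | "below" | "left" | "right" | "inside";

interface KeyGesture {
  key: string;   // e.g. "Enter", "L", "U"
  ctrl: boolean; // whether the Ctrl modifier is held
}

// The binding table stays the same regardless of domain; only the
// interpretation of the resulting Direction is domain-specific.
const KEY_BINDINGS: ReadonlyArray<readonly [Direction, KeyGesture]> = [
  ["below",  { key: "Enter", ctrl: false }],
  ["left",   { key: "L", ctrl: true }],
  ["right",  { key: "R", ctrl: true }],
  ["above",  { key: "U", ctrl: true }],
  ["inside", { key: "N", ctrl: true }],
];

function toDirection(gesture: KeyGesture): Direction | undefined {
  const hit = KEY_BINDINGS.find(
    ([, g]) => g.key === gesture.key && g.ctrl === gesture.ctrl
  );
  return hit?.[0];
}

// Ctrl-U always means "create above", whatever the diagram domain.
console.log(toDirection({ key: "U", ctrl: true })); // "above"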
In an alternative embodiment, mouse gestures may be used, where the gestures are the same regardless of domain. Example mouse gestures may include a downward mouse flick to create a new node or object below the in-focus entity; a left mouse flick to create a new object to the left of the in-focus entity; a right mouse flick to create a new object to the right of the in-focus entity; an upward mouse flick to create a new object above the in-focus entity; and an approximately circular motion around the entity to create a new object inside the in-focus entity.
In alternative embodiments, touchpad or touch screen gestures may be used, where the gestures are the same regardless of domain. Example touchpad or touchscreen gestures may include a downward flick to create a new node or object below the in-focus entity; a left flick to create a new object to the left of the in-focus entity; a right flick to create a new object to the right of the in-focus entity; an upward flick to create a new object above the in-focus entity; and an approximately circular motion around the entity to create a new object inside the in-focus entity.
In an alternative embodiment, tracked gestures of the human body may be used, where the gestures are the same regardless of domain. For example, there are various tools for tracking arm movements, such as a handheld controller that can be tracked by a computing system. The gestures may be based on movement of the controller sensed by an accelerometer and/or a controller tracking device, such as an infrared motion tracker.
Alternatively, other computing systems may use cameras to recognize human limbs and extremities and track their motion. Example limb or extremity movement gestures may include a downward flick to create a new node or object below the in-focus entity; a flick to the left to create a new object to the left of the in-focus entity; a flick to the right to create a new object to the right of the in-focus entity; an upward flick to create a new object above the in-focus entity; and an approximately circular limb or extremity movement around the entity to create a new object inside the in-focus entity.
The gesture may be based on motion of the computing device sensed by an accelerometer in the computing device. For example, many tablet computing devices include accelerometers for measuring the speed and direction of motion about two or more axes. A sudden upward tilt of the top of the tablet followed by tilting the tablet back to a horizontal position may be used to create a new node or object below the in-focus entity; a sudden left tilt followed by a flattening may be used to create a new object to the left of the in-focus entity; a sudden right tilt followed by a flattening may be used to create a new object to the right of the in-focus entity; a sudden downward tilt of the top of the tablet followed by a flattening may be used to create an object above the in-focus entity; and shaking of the tablet or some circular motion may be used to create objects inside the in-focus entity.
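The following sketch (the axes, sign conventions, and thresholds are illustrative assumptions, not values from the specification) suggests how such tilt-then-flatten gestures might be classified from accelerometer samples:

```typescript
type CreateDirection = "above" | "below" | "left" | "right" | "none";

interface TiltSample { pitch: number; roll: number } // degrees from horizontal

function classifyTilt(samples: TiltSample[], threshold = 30): CreateDirection {
  if (samples.length === 0) return "none";
  // Find the sample with the largest deviation from horizontal.
  const peak = samples.reduce((a, s) =>
    Math.abs(s.pitch) + Math.abs(s.roll) >
    Math.abs(a.pitch) + Math.abs(a.roll) ? s : a);
  // The gesture only counts if the tablet was returned to horizontal.
  const last = samples[samples.length - 1];
  if (Math.abs(last.pitch) > 5 || Math.abs(last.roll) > 5) return "none";
  if (peak.pitch > threshold) return "below";  // top tilted up: create below
  if (peak.pitch < -threshold) return "above"; // top tilted down: create above
  if (peak.roll > threshold) return "right";   // right tilt: create right
  if (peak.roll < -threshold) return "left";   // left tilt: create left
  return "none"; // shake/circular motion ("inside") is omitted for brevity
}
```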
Still other embodiments may use voice commands for object creation. For example, in one non-limiting example, voice commands may include "down" for creating a new node or object below the in-focus entity; "left" for creating a new object to the left of the in-focus entity; "right" for creating a new object to the right of the in-focus entity; "up" for creating a new object above the in-focus entity; and "add" for creating a new object inside the in-focus entity.
As previously described, the commands may be invoked in a variety of ways. Because the commands are centered on position and direction, they map readily onto gestures in command modalities such as mouse, pen, touch, and multi-touch, and these modalities lend themselves well to creating natural gestures that span domains. For example, a user may use a finger to flick up relative to a shape to create a related shape above it, or may make an actual, less precise gesture to draw a node at a location approximately above the active node.
As previously described, embodiments may use domain-independent commands or gestures and apply them in a domain-specific manner. For example, referring to FIG. 1, a table 100 is shown illustrating different types of charts, entities that may be selected within a chart, domain-independent keyboard gestures, and the results of using a given keyboard gesture given a particular chart and selection.
Embodiments may also include semi-automatic layout functionality that fine-tunes the alignment and spacing of elements as they are created, resolving any conflicts that may arise. The user remains free to manually change the layout, but the system can greatly reduce the need to do so. Referring now to FIGS. 2A and 2B, an example of this functionality is illustrated. FIG. 2A illustrates an "Animal" class 202. An inherited "Cat" class 204 is related to the Animal class 202 through an "IS-A" relationship. The user may wish to create a "Dog" inheritance class under the Animal class 202. To do so, the user may select the Animal class 202, thereby bringing the Animal class 202 into focus. As illustrated in FIG. 2B, the user may then perform a gesture to create a new class 206 below the Animal class 202, such as by pressing the "Enter" key, which may be interpreted as a command to place a new object below the selected object. Thus, a new inherited class 206 can be created under the Animal class 202, because the domain is a class diagram and the context is that the Animal class 202 has been selected. As illustrated in FIG. 2B, the Cat class 204 may be automatically moved to make room for the new inherited class 206.
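A simplified sketch of this semi-automatic layout step, under an assumed rectangle-based data model (all names hypothetical), might place the new class directly below its parent and nudge an overlapping sibling aside rather than re-laying out the whole diagram:

```typescript
interface DiagramNode { id: string; x: number; y: number; w: number; h: number }

const GAP = 40; // default spacing enforced by the layout engine

function overlaps(a: DiagramNode, b: DiagramNode): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// Place `child` directly below `parent`, then nudge any colliding sibling
// sideways, an incremental change rather than a full re-layout.
function placeBelow(parent: DiagramNode, child: DiagramNode,
                    others: DiagramNode[]): void {
  child.x = parent.x;
  child.y = parent.y + parent.h + GAP;
  for (const s of others) {
    if (s !== child && overlaps(child, s)) {
      s.x = child.x + child.w + GAP;
    }
  }
}

// "Cat" is moved aside so the new "Dog" class fits directly under "Animal".
const animal = { id: "Animal", x: 100, y: 0, w: 120, h: 60 };
const cat = { id: "Cat", x: 100, y: 100, w: 120, h: 60 };
const dog = { id: "Dog", x: 0, y: 0, w: 120, h: 60 };
placeBelow(animal, dog, [cat]);
console.log(dog, cat);
```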
Embodiments may perform intelligent routing of edges to ensure that, if certain shapes cannot move, or if the result of the gesture is not a simple nudge but a complex set of shapes that needs to be added, the edges connecting the set of shapes can be routed in various ways and in various patterns. For example, embodiments may implement slot-like routing, orthogonal routing, and the like.
Additionally, some embodiments may be implemented to perform incremental layout. Specifically, changes will occur incrementally, and a minimal set of changes will be made to the chart. This may be done in some embodiments so that the user feels the layout is fairly stable.
FIG. 2B illustrates an example in which the only user input is the use of a context-free command (i.e., pressing the Enter key), without any additional input specifying where the object should be moved, placed, or connected. Embodiments may allow the user to later adjust the automatic placement if so desired.
As illustrated in the above example, newly created elements may be automatically given a type based on context. For example, in the example shown in FIGS. 2A and 2B, since the Animal class 202 is selected, the system knows that the gesture should be interpreted in the domain of a class diagram, in the context of a class. Thus, pressing Enter causes an inherited class 206 to be created under the selected Animal class 202. This illustrates that, based on the domain, the new element is given the type "class". Further, embodiments may infer a relationship between the original element and the new element, and add a link to illustrate that relationship.
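The following sketch (the rule table is illustrative, not the claimed rule set) shows one way a domain-agnostic direction could be resolved into a domain-specific default element and relationship:

```typescript
type Dir = "above" | "below" | "left" | "right" | "inside";

interface Creation { elementType: string; linkType?: string }

// Each domain maps the same direction to its own default element and
// relationship, consistent with the direction's spatial meaning.
const DOMAIN_RULES: Record<string, Partial<Record<Dir, Creation>>> = {
  classDiagram: {
    below:  { elementType: "Class", linkType: "Inheritance" }, // derived class
    above:  { elementType: "Class", linkType: "Inheritance" }, // base class
    inside: { elementType: "Member" },
  },
  activityDiagram: {
    below: { elementType: "Action", linkType: "ControlFlow" },
    right: { elementType: "Action", linkType: "ControlFlow" },
  },
};

function interpret(domain: string, dir: Dir): Creation | undefined {
  return DOMAIN_RULES[domain]?.[dir];
}

// The same "below" gesture yields an inherited class in a class diagram,
// but an action connected by a control flow in an activity diagram.
console.log(interpret("classDiagram", "below"));
console.log(interpret("activityDiagram", "below"));
```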
As will be explained in more detail below, various other features may be implemented in various embodiments. A partial enumeration of such features includes: allowing a user to change the type of an element by adding the type to the name when entering the name from the keyboard; guided and bounded input prompts, such as IntelliSense or mnemonics, to help the user select the type while entering the name; link creation using guided and bounded input prompts such as IntelliSense; and navigating the diagram from the keyboard via its links using a consistent scheme similar to that for creating nodes. Embodiments may include functionality to enable intelligent use of gestures in the domain of code, using a custom database backend specific to the code domain. For example, gestures may be used to indicate inheritance between classes, polymorphism between methods, and so on.
Various examples will now be explained. A first example illustrates the use of domain-agnostic commands with spatial context to create elements, supported by an automatic type system and incremental layout. In this example, the context includes a user-created diagram containing nodes and links. In this particular example, a UML activity diagram is illustrated; however, the example may be generalized to any node-and-link diagram.
Embodiments may include a standard set of commands accessible through a keyboard, mouse, and/or other gestures to create an element at a position that has semantic meaning with respect to the current selection. By way of example, a semantically meaningful location can include, but is not limited to, above, below, to the left of, to the right of, or inside the current element. Embodiments may include functionality for laying out a new element, where invoking a command by gesture relative to a source element (e.g., the selected or in-focus element) and other existing elements results in the creation of the new element. Embodiments may automatically assign a type to the element, driven by the domain and the specific context.
Embodiments may be implemented in which user commands remain the same regardless of domain, because direction has a universal meaning, although not necessarily exactly the same meaning in each domain. In this way, the user has a consistent way to perform similar tasks across a large number of domains. FIG. 1, discussed above, illustrates that a standard set of commands and gestures may be used to perform similar tasks across different domains.
FIG. 3A illustrates a portion of one example scenario. In FIG. 3A, a user may create a node 302 using a conventional toolbox. In the examples illustrated herein, a node may be a single node, a collection of nodes, a container with a collection of children, a composite shape (such as a compartmentalized class shape with a row for each method inside it), a combination of the above, and so forth. The toolbox may include user interface elements presented to the user, where the user may interact with hardware devices, such as a mouse and keyboard, to interact with the user interface. The user may be able to select a diagram type and an element type, for example, for the node 302. The user may now wish to create the next node in the flow immediately below the original node 302. FIG. 3A illustrates one user interface view with which the user may be able to interact. In the illustrated example, the user sees four controls 304-1, 304-2, 304-3, and 304-4 around the node, each pointing in one direction. In the illustrated example, the user hovers a mouse pointer 306 over the downward-pointing control 304-4. A tooltip 308 tells the user that the Enter key can be used instead of clicking on the control. Hovering over control 304-1 may, in this example, cause a tooltip to indicate that a new action may be created by pressing "Ctrl-R". Hovering over control 304-2 may cause a tooltip to indicate that a new action may be created by pressing "Ctrl-U". Hovering over control 304-3 may cause a tooltip to indicate that a new action may be created by pressing "Ctrl-L". The user may not need to hover over these controls, but may already know what gesture to use. Further, where other interfaces are used, such as a touch screen, mouse, or motion sensing system, the tooltip may indicate different gestures depending on the interface being used.
In this example, the user presses Enter, and a new node 310 is created, with or without a connecting relationship, as illustrated in FIG. 3B. What types of elements and relationships to create is derived from the type of diagram, the location from which the gesture originated, the node from which the gesture originated, the direction of the command, and/or user preferences and/or recent activity. In this example, the domain is an activity diagram, so the default element for the new node 310 is an action. Since actions are often connected through control flows, a control flow is automatically created. The control flow points to the new element because the most common flow direction in an activity diagram is downward. In an alternative embodiment, one way to decide the direction in this case may be that the flow always travels from an existing node to a newly created node. Thus, embodiments may be implemented in which the default link direction is also determined by domain, selection, gesture, or the like.
In some embodiments, once a node or link is placed, the user can change the direction or type of the link in a very easy manner. For example, a user may be able to use or change smart tags, and the like.
In various embodiments, the automatic placement of an element, such as node 310, may be determined by various factors. For example, the placement may be determined by the direction of the command. Additional considerations for automatic alignment and spacing may be based on constraints in the layout engine, which may in turn be based on the diagram type and its domain and/or user preferences. The user preferences may be pre-specified in user settings. Alternatively, user preferences may be inferred based on past user interactions. Automatic layout frees the user from having to manually define the layout. However, embodiments may include functionality that allows a user to further modify or "adjust" the arrangement of elements placed using the automatic layout.
Embodiments may have various novel features that distinguish them from previous systems. In particular, embodiments may implement a consistent set of commands and gestures across content and chart types. Embodiments may include an automated system for determining which elements and their types to create based on domain, goal, orientation, user preferences, etc., allowing the system to be consistent and useful across multiple domains. Embodiments may include auto-incremental placement driven by domain-specific constraints, which allows commands to give similar but domain-specific results depending on the type of chart.
Another example illustrating additional functionality will now be described. Embodiments may implement a system that uses prompts constrained by the current domain to quickly set the type of an element. For example, a user may be designing a diagram. While using commands and gestures, the user may in some cases wish to change or further specify the type of an element. Alternatively, as part of the design process, the user may find that the type the user originally created needs to be altered.
Embodiments may include functionality to execute a command to set a type of an element based on a current context and a new set of entry points for the command. Some embodiments may implement a system that allows a user to specify the type of an element using a mnemonics or constrained hinting module such as IntelliSense while typing the name of the element. Embodiments may be implemented in which other gestures may be used to invoke this functionality, such as a mouse wheel or a context menu.
Referring now to FIG. 4A, an example is shown. The example illustrated in FIG. 4A shows that the user has created a new action node 404 below the current node 402 by interacting with input hardware using a create-below command. The user wants this node to be a decision shape. As illustrated in FIG. 4B, the user begins typing while the new action node is selected and/or in focus. A drop-down 406 appears to provide one or more domain-constrained prompts or suggestions. Since the first letter the user types is 'd', the list is filtered to those types available in this domain that start with 'd'. As mentioned, the suggestion list may be constrained based on domain. Thus, in this example, only suggestions appropriate for the particular domain shown in FIGS. 4A and 4B will be provided.
As noted, the user may wish to change the type of the element to Decision, and thus taps the Tab key to automatically complete the word, as illustrated in FIG. 4C. The user then adds the name of the shape (in this example, "Fuu", as shown in FIG. 4D) and taps Return or Enter. At this point, the shape is converted to a decision shape, as illustrated in FIG. 4E.
Alternative gestures may be used for type selection. For example, if the user prefers to use a mouse, the user may invoke the type selection command by clicking on the shape and using a mouse wheel to scroll through the set of types available for this context.
Embodiments with even simpler functionality can also be implemented. For example, it is possible to construct a system using an auto-completion system as in a word processor, where input mnemonics (such as "d" for Decision, "a" for Action, etc.) would allow the user to change the type of an element by typing "d <name>" and then have it automatically interpreted as "Decision <name>", with the type changed accordingly. Some such embodiments may also include functionality to display a preview of what the new element will look like on the diagram. For example, when a user types "d", and before the selection of "d" is persisted, a preview of the decision element may be displayed in the appropriate place in the diagram to help the user determine whether the decision element is the appropriate desired type.
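A minimal sketch of these two completion styles, with an assumed mnemonic table and type list for an activity-diagram domain (both illustrative), might look like this:

```typescript
const DOMAIN_TYPES = ["Action", "Decision", "Merge", "Fork"]; // activity diagram
const MNEMONICS: Record<string, string> = { d: "Decision", a: "Action" };

// Typing 'd' filters the drop-down to types starting with 'd' in this domain.
function filterSuggestions(prefix: string): string[] {
  return DOMAIN_TYPES.filter(t =>
    t.toLowerCase().startsWith(prefix.toLowerCase()));
}

// "d Approve" is interpreted as a Decision named "Approve"; a plain name
// keeps the default type for the current context.
function parseMnemonicInput(input: string, defaultType: string) {
  const [first, ...rest] = input.trim().split(/\s+/);
  const type = MNEMONICS[first.toLowerCase()];
  return type && rest.length > 0
    ? { type, name: rest.join(" ") }
    : { type: defaultType, name: input };
}

console.log(filterSuggestions("d"));                    // ["Decision"]
console.log(parseMnemonicInput("d Approve", "Action")); // Decision "Approve"
```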
Link creation with the mouse is often well supported in current tools. However, many currently implemented systems have drawbacks related to creating links across large distances, or when the user is in the flow of editing diagrams using the keyboard.
Some embodiments allow the user to invoke the link creation tool and then define a new link using only the keyboard. The system uses a constrained suggestion system (such as IntelliSense) to provide the user with a list of domain-specific types, directions, and targets for the context as the user types, allowing the user to quickly specify how the link is defined.
Referring now to FIG. 5A, an example is shown. In FIG. 5A, a user invokes an "add link" command with respect to node 502 using a gesture originating from a keyboard or other input device. A control 504 appears with a domain-constrained drop-down 506 already expanded. As illustrated in FIG. 5B, as the user types (in this case, the letter L), the list is filtered, as usual, with domain-constrained suggestions, and by pressing Tab the user can auto-complete the selected element, as illustrated in FIG. 5C. The user can now type the type of the link, its direction, and the other node to or from which the link connects. When the statement is complete and the user taps Enter, the link is created, as illustrated in FIG. 5D.
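A minimal sketch (the statement grammar here is an assumption for illustration, not the claimed syntax) of turning such a completed, keyboard-entered statement into a link definition:

```typescript
interface LinkSpec { type: string; from: string; to: string }

// Parse a completed statement such as "ControlFlow to Action2", issued while
// node `source` is selected. Each token would be offered by the
// domain-constrained suggestion list as the user types.
function parseLinkStatement(source: string, statement: string): LinkSpec | null {
  const m = statement.trim().match(/^(\w+)\s+(to|from)\s+(\w+)$/i);
  if (m === null) return null;
  const [, type, dir, other] = m;
  return dir.toLowerCase() === "to"
    ? { type, from: source, to: other }
    : { type, from: other, to: source };
}

console.log(parseLinkStatement("Decision1", "ControlFlow to Action2"));
// { type: "ControlFlow", from: "Decision1", to: "Action2" }
```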
Embodiments may also include fast navigation of diagrams using search and link navigation. As systems for reverse engineering diagrams from code, and for performing design and analysis of code, become better and more common, software architecture diagrams tend to become larger and to carry a greater number of elements. Diagrams created using existing systems become difficult to navigate when they are large. Navigation is often based on traditional keyboard schemes for navigating lists or tables. This may be sufficient when navigating to neighboring nodes in a smaller diagram, but users often need either to make a large jump from their current location, or to follow a link from one node to another even when the nodes are not neighbors, or not even close to each other for that matter.
Some embodiments may overcome these challenges by implementing a system for navigating links using the same principles as the node-creation system described above. For example, an embodiment may include a set of commands to navigate up, down, left, or right along a link. The system will then select the first link connected to the node that goes in that direction. By issuing the same command again, the node at the other end of the link is selected.
If the node has multiple links connected in the direction of the command, a drop-down will be displayed allowing the user to select another of the links or its desired target node. Referring now to FIG. 6, an example is shown. In the example illustrated in FIG. 6, the user may wish to navigate from the decision shape 602 to action 3 (604). The user may invoke the command to navigate the link to the right, such as by using a gesture. In this example, the user may indicate navigation to the right using a "Ctrl-→" keyboard gesture. When the user invokes the command to navigate the link to the right, the link to action 1 (606) happens to be the first link selected. But since action 3 (604) also runs out from the same side of the decision shape 602, it is also displayed in a drop-down. The user may then use the keyboard or another gesture to select action 3 (604) and complete the selection.
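The following sketch (hypothetical graph model) illustrates this selection behavior: the first link in the commanded direction is followed, and any remaining candidates would populate the drop-down:

```typescript
interface GraphNode { id: string; x: number; y: number }
interface GraphLink { from: GraphNode; to: GraphNode }

// All links leaving `node` whose far end lies in the commanded direction.
function linksInDirection(node: GraphNode, links: GraphLink[],
                          dir: "left" | "right"): GraphLink[] {
  return links.filter(l =>
    l.from === node && (dir === "right" ? l.to.x > node.x : l.to.x < node.x));
}

// Select the first matching link's target; further candidates would be
// offered to the user in a drop-down.
function navigateLink(node: GraphNode, links: GraphLink[],
                      dir: "left" | "right") {
  const candidates = linksInDirection(node, links, dir);
  if (candidates.length === 0) return null;
  return {
    selected: candidates[0].to,
    alternatives: candidates.slice(1).map(l => l.to),
  };
}

const decision = { id: "Decision", x: 0, y: 0 };
const action1 = { id: "Action 1", x: 100, y: -50 };
const action3 = { id: "Action 3", x: 100, y: 50 };
const links = [
  { from: decision, to: action1 },
  { from: decision, to: action3 },
];
console.log(navigateLink(decision, links, "right"));
// selected: Action 1; alternatives: [Action 3], shown in the drop-down
```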
Embodiments may include functionality to navigate directly to a node by name or other content (such as the name of an operation or attribute within a class shape) using a keyboard or other hardware input device. Embodiments may use incremental search within the diagram as a way to quickly go to a specified element. For example, the user invokes an incremental search command and a search control appears. As the user types, nodes containing the current string are highlighted. As the user types more, the search narrows in on the node the user is looking for; the user then hits Return, and the node is selected. Similar functionality may also be applied to link labels and containers.
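A minimal sketch of the incremental search filter described here (names illustrative):

```typescript
// Each keystroke narrows the highlighted set until the desired node remains.
function matchingNodes(nodeNames: string[], query: string): string[] {
  const q = query.toLowerCase();
  return nodeNames.filter(n => n.toLowerCase().includes(q));
}

console.log(matchingNodes(["Action 1", "Action 3", "Decision"], "dec"));
// ["Decision"]
```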
The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed before the act is performed.
Referring now to FIG. 7, a method 700 is shown. The method 700 may be practiced in a computing environment and includes acts for organizing data where such data has spatial significance. For example, graphs and diagrams are one example of data that has spatial significance. The position of the nodes relative to each other and/or the connectors connecting the nodes help define the meaning of the data represented by the nodes. Other data having spatial significance may include hierarchical data in which spatial positioning indicates a position in a hierarchy. Such hierarchical data may be included in a database, table, or the like. Although not enumerated here, still other types of data may have spatial significance, and the examples herein are not comprehensive.
Method 700 includes displaying a representation of the spatially structured data to a user on a user interface (act 702). For example, FIG. 2A illustrates a user interface display displaying a class diagram with class nodes 202 and 204, where nodes 202 and 204 represent spatially structured data. FIG. 3B illustrates a user interface with an activity diagram having an action node 302 and an action node 310 connected by a flow connector, where the spatial meaning indicates the sequence of actions.
Method 700 further includes receiving user input at the computer-implemented user interface through one or more hardware user interface devices (act 704). User input is domain agnostic, but has spatial connotation. For example, as illustrated above, spatial connotations may be above, below, to the right, to the left, and inside. These have spatial connotations but may differ slightly between particular domains and/or contexts. As illustrated above, the user input may be one or more of a variety of different types of input. The examples illustrated herein are not limiting, but include a mouse, a pen, a touch, a multi-touch, a gesture, a human limb or extremity gesture tracked by a camera or handheld controller, a controller or computing system movement flick gesture tracked by an accelerometer in a controller or computing system, or some other user input.
Method 700 also includes determining a domain of the user input based on pre-existing structured data displayed on the user interface or previous user interactions (act 706). For example, in FIG. 2A, the presence of a class node 202 may indicate that the domain is a class diagram. In FIG. 3A, the presence of the action node 302 may indicate that the domain is an activity diagram. Alternatively, a previously entered user preference or a user's previous diagram-construction action may be used to determine the domain of the user input.
Method 700 further includes interpreting the user input as a domain-specific response based on the determined domain (act 708), where the domain-specific response is nonetheless consistent with the spatial connotation across multiple domains. For example, in FIG. 2B, pressing the Enter key causes a new inherited class 206 to be placed below the Animal class node 202, whereas in FIG. 3B, pressing the Enter key results in the placement of a new action node 310 below the action node 302. Thus, while the spatial connotation of creating below is honored in either domain, what is actually created below is a domain-specific response.
Embodiments of method 700 may be practiced where receiving user input includes receiving user input generated with the aid of a prompt module for providing one or more constrained prompts. For example, FIGS. 5A-5D illustrate examples in which gestures are used and a prompt module provides constrained prompts. Prompts can be constrained based on domain and/or context. For example, the prompts may be constrained based on what type of diagram is being created and what entity in the diagram is in focus.
Embodiments of method 700 may be practiced where receiving user input includes receiving user input generated with the assistance of a mnemonics module for providing one or more constrained cues. FIGS. 4A and 4B illustrate how a mnemonics module may be used to determine the type of a shape.
Method 700 may also include automatically adding an object of a new data type to the spatially structured data as a result of interpreting the user input as a domain-specific response. For example, in FIGS. 3A and 3B, since the diagram is an activity diagram, an action node is automatically added. Some embodiments may include changing the type of the new data type added using at least one of a mouse button or a scroll function. For example, the user may select a newly added data type and scroll through the various available data types. For example, the user may be able to change the automatically added action data type to a decision data type. In some embodiments, changing the type of the new data type added may be performed using user input generated with the aid of a mnemonics module for providing one or more constrained hints. In still other alternative embodiments, changing the type of the new data type added may be accomplished using user input generated with the aid of a hints module (such as IntelliSense) for providing one or more constrained hints.
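A small sketch of the scroll-to-change-type interaction (the type list is assumed for illustration):

```typescript
// Cycle an element's type through the set of types valid in the current
// domain; one wheel notch moves one step forward or backward.
function cycleType(domainTypes: string[], current: string,
                   delta: 1 | -1): string {
  const i = Math.max(domainTypes.indexOf(current), 0);
  return domainTypes[(i + delta + domainTypes.length) % domainTypes.length];
}

// One notch forward changes an automatically added "Action" to a "Decision".
console.log(cycleType(["Action", "Decision", "Merge"], "Action", 1));
```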
Embodiments of method 700 may also include creating links between structured data objects using user input generated with the aid of a hints module for providing one or more constrained hints. For example, FIGS. 5A-5D specifically illustrate an example of creating a link using a hints module. These links may actually be quite complex. For example, in one embodiment a user may pick a class A and connect it to a class B using keyboard gestures. This may result in a series of intermediate classes and links that need to be generated to correctly identify how class A and class B relate. For example, the two classes may need to be connected through intermediate classes C, D, and E. Embodiments may nonetheless provide appropriate prompts and create the appropriate links.
Embodiments of method 700 may be practiced where the spatially structured data includes data having a spatial layout that is visually observable and meaningful. For example, flow charts, activity diagrams, and the like have layouts that can be visually observed by a user.
Embodiments of method 700 may include receiving user input for navigating links between data objects in the representation of the spatially structured data. For example, as shown in the example illustrated in FIG. 6, a user gesture may be used to navigate through a diagram. In some embodiments, such as the embodiment illustrated in FIG. 6, the user may be provided with a constrained list of possible navigation destinations based on the domain.
Further, the method may be implemented by a computer system comprising one or more processors and a computer-readable medium, such as computer memory. In particular, the computer memory may store computer-executable instructions that, when executed by the one or more processors, cause various functions to be performed, such as those acts described in the embodiments.
Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in further detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. The computer-readable medium storing the computer-executable instructions is a physical storage medium. Computer-readable media carrying computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can include at least two disparate types of computer-readable media: physical computer-readable storage media and transmission computer-readable media.
Physical computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage (e.g., CD, DVD, etc.), magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A "network" is defined as one or more data links that allow electronic data to be transferred between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above are also included within the scope of computer-readable media.
Furthermore, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission computer readable media to physical computer readable storage media (or vice versa) upon reaching various computer system components. For example, computer-executable instructions or data structures received over a network or a data link may be cached in RAM within a network interface module (e.g., a "NIC") and then ultimately transferred to computer system RAM and/or to less volatile computer-readable physical storage media at a computer system. Thus, a computer-readable physical storage medium may be included in a computer system component that also (or even primarily) utilizes transmission media.
Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the features and acts described above are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (15)
1. A method of organizing spatially meaningful data in a computing environment, the method comprising:
displaying a representation of the spatially structured data to a user on a user interface (702);
receiving user input on a computer-implemented user interface through one or more hardware user interface devices, wherein the user input is domain agnostic but has spatial connotation (704);
determining a domain of the user input based on at least one of pre-existing structured data displayed on the user interface or previous user interactions (706); and
interpreting the user input as a domain-specific response based on the determined domain, but wherein the domain-specific response is consistent with spatial connotation across multiple domains (708).
2. The method of claim 1, further comprising receiving user input generated with the aid of a prompt module that provides one or more constrained prompts.
3. The method of claim 2, wherein the hint is constrained based on the domain.
4. The method of claim 2, wherein the hint is constrained based on context.
5. The method of claim 1, wherein receiving user input comprises receiving user input generated with the aid of a mnemonics module that provides one or more constrained cues.
6. The method of claim 1, further comprising automatically adding an object of a new data type to the spatially structured data as a result of interpreting the user input as a domain-specific response.
7. The method of claim 6, further comprising changing the type of new data type added using at least one of a mouse button or a scroll function.
8. The method of claim 6, further comprising changing the type of new data type added using user input generated with the aid of a mnemonics module that provides one or more constrained hints.
9. The method of claim 6, further comprising changing the type of new data type added using user input generated with the aid of a hints module that provides one or more constrained hints.
10. The method of claim 1, further comprising creating links between structured data objects using user input generated with the aid of a hints module that provides one or more constrained hints.
11. The method of claim 1, wherein the spatially structured data comprises data having a visually observable spatial layout.
12. The method of claim 1, wherein additional user input is received for navigating links between data objects in the representation of spatially structured data.
13. The method of claim 12, further comprising providing a user with a constrained list of possible navigation destinations based on the domain.
14. A computer-readable medium comprising computer-executable instructions that when executed by one or more processors are configured to cause the one or more processors to perform the following acts for navigating data having spatial significance:
displaying a representation of the spatially structured data to a user on a user interface (702);
receiving user input on a computer-implemented user interface through one or more hardware user interface devices, wherein the user input is domain agnostic but has spatial connotation (704);
determining a domain of the user input based on at least one of pre-existing structured data displayed on the user interface or previous user interactions (706); and
interpreting the user input as a domain-specific response based on the determined domain, but wherein the domain-specific response is consistent with spatial connotation across multiple domains (708).
15. The computer-readable medium of claim 14, wherein receiving user input comprises receiving user input generated with the aid of a prompt module that provides one or more constrained prompts.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/972,060 | 2010-12-17 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1171100A true HK1171100A (en) | 2013-03-15 |
| HK1171100B HK1171100B (en) | 2019-01-04 |