HK1182466B - Alias selection in multiple-aliased animations - Google Patents
Publication number: HK1182466B; Authority: Hong Kong
Description
Background
Computers have become highly integrated into workplaces, homes, mobile devices, and many other settings. Computers can process large amounts of information quickly and efficiently. Software applications designed to run on computer systems allow users to perform a wide variety of functions, including business, education, entertainment, and the like. Software applications are typically designed to perform specific tasks, such as word processing applications for drafting documents, or email programs for sending, receiving, and organizing email.
Software applications have a variety of different types of user interfaces that allow a user to interact with the application. Some applications have graphical user interfaces that employ graphics such as icons, pictures, windows, and other graphical elements with which a user may interact. In some cases, the graphical elements are displayed according to a predefined layout (such as a grid or list). When such a predefined layout is used, movement of existing elements or addition of new elements may disrupt or change the existing layout of the elements. When such a change occurs, an animation can be used to graphically show how the change occurred.
Disclosure of Invention
Embodiments described herein relate to determining an appropriate alias to select after an animation interruption, and to determining which user interface element the user is currently focusing on. In one embodiment, a computer system determines that various User Interface (UI) elements are to be moved to different locations within a UI. The computer system initiates a first animation that creates a first alias and a second alias for each UI element to be moved. The computer system then receives input that interrupts the initiated first animation. The input indicates that at least one of the UI elements that was moved during the first animation is to be moved to a different location. The computer system then determines, based on various visibility factors, which of the first alias and the second alias is most appropriate for use as a starting point for a second animation, and initiates the second animation at the determined most appropriate alias. The second animation uses the determined most appropriate alias as a starting point and a different, third alias as an ending point.
In another embodiment, a computer system receives input from a user at a User Interface (UI). The computer system identifies a UI element that is currently changing position during an animation initiated by the received input. The computer system accesses various visibility factors, each of which at least partially identifies which UI element a user is focusing on. Then, based on one or more of the accessed visibility factors, the computer system determines which of the plurality of UI elements the user is currently focusing attention on.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages will be set forth in the description which follows, and in part will be readily apparent to those skilled in the art from that description or may be learned by practice of the teachings herein. The features and advantages of embodiments of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of embodiments of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
Drawings
To further clarify the above and other advantages and features of embodiments of the present invention, a more particular description of embodiments of the invention will be rendered by reference to the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The embodiments of the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
FIG. 1 illustrates a computer architecture in which embodiments of the present invention may operate, including determining an appropriate alias to select after an animation interrupt.
FIG. 2 illustrates a flow diagram of an example method for determining an appropriate alias to select after an animation interrupt.
FIG. 3 illustrates a flow chart of an example method for determining which user interface element a user is currently focusing on.
FIGS. 4A-4D illustrate embodiments in which different visibility factors are used to determine where a user is looking while UI elements are moving.
Detailed Description
Embodiments described herein relate to determining an appropriate alias to select after an animation interruption, and to determining which user interface element the user is currently focusing on. In one embodiment, a computer system determines that various User Interface (UI) elements are to be moved to different locations within a UI. The computer system initiates a first animation that creates a first alias and a second alias for each UI element to be moved. The computer system then receives input that interrupts the initiated first animation. The input indicates that at least one of the UI elements that was moved during the first animation is to be moved to a different location. The computer system then determines, based on various visibility factors, which of the first alias and the second alias is most appropriate for use as a starting point for a second animation, and initiates the second animation at the determined most appropriate alias. The second animation uses the determined most appropriate alias as a starting point and a different, third alias as an ending point.
In another embodiment, a computer system receives input from a user at a User Interface (UI). The computer system identifies a UI element that is currently changing position during an animation initiated by the received input. The computer system accesses various visibility factors, each of which at least partially identifies which UI element a user is focusing on. Then, based on one or more of the accessed visibility factors, the computer system determines which of the plurality of UI elements the user is currently focusing attention on.
The following discussion now refers to a number of methods and method acts that may be performed. It should be noted that although the method acts may be discussed in a certain order or illustrated in a flowchart as occurring in a particular order, no particular ordering is required unless explicitly stated, or unless an act depends on another act being completed before that act is performed.
Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions and/or data structures are computer storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can include at least two distinct categories of computer-readable media: computer storage media and transmission media.
Computer storage media includes RAM, ROM, EEPROM, CD-ROM, solid state drives (SSDs) that are based on RAM, flash memory, phase-change memory (PCM), or other types of memory, or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code in the form of computer-executable instructions, data, or data structures and which can be accessed by a general purpose or special purpose computer.
A "network" is defined as one or more data links and/or data switches that enable the transfer of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network which can be used to carry data or desired program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
In addition, program code means in the form of computer-executable instructions or data structures may be automatically transferred from transmission media to computer storage media (or vice versa) upon reaching various computer system components. For example, computer-executable instructions or data structures received over a network or a data link may be buffered in RAM within a network interface module (e.g., a network interface card or "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable (or computer-interpretable) instructions include, for example, instructions which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions (such as assembly language), or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems are linked (either by hardwired data links, wireless data links, or a combination of hardwired and wireless data links) through a network, and each performs tasks (e.g., cloud computing, cloud services, etc.). In a distributed system environment, program modules may be located in both local and remote memory storage devices.
FIG. 1 illustrates a computer architecture 100 in which the principles of the present invention may be employed. Computer architecture 100 includes computer system 101. Computer system 101 may be any type of local or distributed computer system, including a cloud computer system. The computer system may have various different modules for performing different functions, and that functionality may be presented on a User Interface (UI) 125. The user interface may be capable of receiving various types of user input 106, including mouse input, keyboard input, touch screen or other touch input, or any other form of user interaction. The UI allows a user of the computer system (e.g., user 105) to interact with different software applications and different functional modules of the computer system.
In some cases, the UI 125 may be configured to display various different visual elements 126 (e.g., blocks A and B). During user interaction with the user interface, the user may indicate that other elements (e.g., blocks Z and K) are to be added to the display. It will be appreciated that although only a few visual elements are shown in FIG. 1, substantially any number of visual elements may be displayed within the UI 125. The elements may be rearranged or repositioned as new elements are added or as existing elements are moved. For example, element Z may be added between elements A and B. An animation may occur when an element is added, moved, or deleted. The animation may show a visual element being moved from one location to another (or being added to or removed from the display).
When a visual element is repositioned on the screen, an animation may be initiated to move the element smoothly from its old position to a new position. An "alias" or "brush" may be used to show the element on the UI screen, although the actual element has moved to its new location from the perspective of the underlying layout system (which controls how the element is laid out on the UI). In some cases, multiple aliases (alias 1 (130A), alias 2 (130B), and/or alias 3 (130C)) may be used to represent a visual element.
While an animation is occurring, the element may be moved again due to user input that interrupts the animation. For example, element B may be moving toward alias 2 and then be directed to move again to alias 3. A new animation is then initiated, starting from the current position of either alias 1 or alias 2. In at least some embodiments, the visual element most likely to be tracked by the user (or, more particularly, by the user's eye) is the element (or alias) most appropriate for starting the new animation. To determine which alias is most suitable for starting a new animation, a number of different factors may be considered: opacity, where the element with the highest opacity is best; cropping, where the least-cropped element (i.e., the one most visible on the screen) is best; distance from the last interaction point, where the element closest to the location where the user last interacted with the screen is best; and speed, where the fastest-moving element is best. Once it is determined which alias to use, a snapshot of its state is created and used to set up the new animation. These concepts will be explained below with respect to methods 200 and 300 of FIGS. 2 and 3, respectively.
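The four factors above can be combined into a single ranking. The following is a minimal illustrative sketch, not the patented implementation: the class, field names, and the relative weighting of the factors are all assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class AliasState:
    # Hypothetical snapshot of one alias of a moving UI element;
    # the field names are illustrative, not taken from the patent.
    name: str
    opacity: float                  # 0.0 (invisible) .. 1.0 (fully opaque)
    clipped_fraction: float         # fraction of the element cropped off-screen
    distance_to_last_input: float   # pixels from the last interaction point
    speed: float                    # current animation speed, pixels per second


def score(alias: AliasState) -> float:
    # Higher opacity, less clipping, proximity to the last interaction
    # point, and faster movement all make an alias a likelier focus of
    # the user's eye. The weighting here is an arbitrary example.
    return (alias.opacity
            + (1.0 - alias.clipped_fraction)
            + 1.0 / (1.0 + alias.distance_to_last_input)
            + alias.speed / 1000.0)


def best_alias(aliases: list[AliasState]) -> AliasState:
    # The highest-scoring alias becomes the starting point of the
    # second animation.
    return max(aliases, key=score)
```

For example, an alias that is mostly opaque, barely clipped, near the pointer, and moving quickly outranks a faded, half-cropped one, so it would be chosen as the snapshot for the new animation.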
In view of the above-described systems and architectures, methodologies that may be implemented in accordance with the disclosed subject matter will be better appreciated with reference to the flow charts of fig. 2 and 3. For purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks. It is to be understood and appreciated, however, that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies described hereinafter.
FIG. 2 illustrates a flow diagram of a method 200 for determining an appropriate alias to select after an animation interrupt. The method 200 will now be described with frequent reference to the components and data of the environment 100.
Method 200 includes an act of determining that one or more User Interface (UI) elements are to be moved to different locations within the UI (act 210). For example, the movement determination module 110 may determine that element Z is to be added between elements A and B in the UI 125. If the underlying layout system is a grid system (though essentially any type of layout can be used, including list form, loop form, etc.), then element B will be moved down to accommodate the inclusion of element Z. Thus, when element B is to be moved, an animation will be created to move element B downward within the UI.
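The reflow described above, where inserting Z displaces B, can be sketched as follows. This is an illustrative helper, not code from the patent; the function name and the list-based layout model are assumptions.

```python
def insert_element(layout: list[str], new_element: str, index: int):
    # Returns the updated element order plus the elements whose
    # positions shift - those are the ones that need a move animation.
    moved = list(layout[index:])    # everything at or after the slot shifts
    return layout[:index] + [new_element] + layout[index:], moved


# Adding Z between A and B: only B changes position, so only B
# gets an animation (and hence aliases).
order, moved = insert_element(["A", "B"], "Z", 1)
```

Here `order` becomes `["A", "Z", "B"]` and `moved` is `["B"]`, matching the example in which element B is animated downward to make room.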
Method 200 includes an act of initiating a first animation that creates a first alias and a second alias for each UI element that is to be moved (act 220). For example, animation module 115 of computer system 101 may initiate an animation (animation 1 (131A)) to move element B downward to make room for element Z. When an input is received to add element Z, alias 1 may be created at the current position of element B (130A). Alias 2 is also created (130B). Thus, element B is animated to appear as if the element is moving from its original position at alias 1 to its new position at alias 2.
Next, method 200 includes an act of receiving input that interrupts the initiated first animation, the input indicating that at least one of the UI elements that was moved during the first animation is to be moved to a different location (act 230). For example, the UI 125 may receive another user input 106 that interrupts the first animation (moving from alias 1 to alias 2). For example, element K may be added between elements Z and B. If this new element K is added during the animation of element B, a new animation will be required to move element B to its updated position. Thus, the animation module 115 may create a new alias (alias 3) and a new animation from alias 2 to alias 3 (animation 2 (131B)).
In another embodiment in the environment of FIG. 1, if multiple entries are added (instead of just Z), element B will move from the first column to the second column. At this point, two aliases will be created for element B. If, during this animation, another element or set of elements is added (instead of just element K), then element B will be moved to the third column. At this point, the system will then select one of the two previously created aliases as the starting point for the second animation.
When the first animation is interrupted, it may be automatically stopped by the computer system. At this point, state information for one or more of the UI elements in the first animation may be persistently stored in local or remote data storage. Because the persistently stored state information may include a number of different details about the current state of the visual element, this information may be used to determine which alias is most suitable as a starting point for the second animation. In some cases, this information may be used to identify a visibility factor that helps identify which element the user is most likely to focus on when the animation begins.
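The persisted state described above might look like the following sketch. The snapshot fields and the in-memory `store` stand in for whatever local or remote data store the system uses; all names here are illustrative assumptions.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ElementSnapshot:
    # Hypothetical per-element state captured when the first animation
    # is interrupted; these are the details from which visibility
    # factors could later be derived.
    element_id: str
    alias: str
    x: float
    y: float
    opacity: float
    speed: float


def persist_state(snapshots: list[ElementSnapshot], store: dict) -> None:
    # Serialize the snapshots so the alias-selection step can read them
    # back ('store' is a stand-in for any local or remote data store).
    store["interrupted_animation"] = json.dumps(
        [asdict(s) for s in snapshots])
```

A later pass would deserialize this record and compute opacity, clipping, distance, and speed factors from the stored values.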
Method 200 also includes an act of determining which of the first alias and the second alias is most suitable for use as a starting point for the second animation based on one or more visibility factors (act 240). Alias selection module 120 may determine whether alias 1 or alias 2 is the most appropriate alias to use as a starting point for animation 2 (131B). In at least some embodiments, the optimal alias is the alias of the display element that is currently most likely to be tracked or viewed by the user. Selecting the currently viewed element makes the second transition more fluid and less jarring to the eye.
As mentioned above, various different visibility factors may be used to indicate which UI element the user's eye is currently tracking. The term "visibility factor" as used herein may refer to any piece of data that indicates where the user's eyes are currently looking. Thus, one visibility factor 121 may be opacity. When opacity is used as the determining factor, the alias of the UI element having the highest opacity is determined to be the optimal starting point for the second animation. As shown in FIG. 4A, element K (406) is added between display elements (426) B and C in UI 425A. Element C is moved from its position at alias a1 (430A) to a new position at alias a2 (430B). The animation in this embodiment involves fading elements out at their old positions and fading them in at their new positions. The element represented by alias a2 is more opaque than the element represented by alias a1, as indicated by the relatively lighter and darker lines. Thus, in this case, the computer system would determine that the user is most likely looking at the element at alias a2 because that element has the higher opacity, and the second animation should therefore begin at alias a2.
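Taken alone, the opacity factor reduces to picking the most opaque alias. A minimal sketch, with an assumed dictionary shape for each alias:

```python
def pick_by_opacity(aliases: list[dict]) -> dict:
    # During a cross-fade the eye follows whichever alias is currently
    # more opaque, so that alias starts the second animation.
    return max(aliases, key=lambda a: a["opacity"])
```

In the FIG. 4A example, the fading-in alias a2 would have the higher opacity and would be selected.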
In another embodiment, at least one of the visibility factors may include clipping, such that the alias of the UI element that is least clipped is determined to be the most appropriate starting point for the second animation. As shown in UI 425B of FIG. 4B, element K (406) is added between elements B and C. The animation in this embodiment includes moving element C down through the bottom of the screen and then down from the top of the screen into a new column of elements. As the element moves during the animation, it may start in the left column fully visible and move slowly downward, so that as more of the element is cropped by the bottom of the screen, the cropped portion is shown in the next column. The element has a different alias in each column. Thus, element C may start at alias a1 (430A) and move downward toward its new position at alias a2 (430B). In some cases, whichever element is cropped the least is determined to be the element most likely being viewed, and is therefore the most appropriate starting point for the second animation.
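The clipping factor can be computed by intersecting each alias with the screen bounds. The following is an illustrative one-dimensional (vertical) sketch; the coordinate model and the default 600-pixel screen height are assumptions.

```python
def visible_fraction(top: float, bottom: float,
                     screen_top: float = 0.0,
                     screen_bottom: float = 600.0) -> float:
    # Fraction of an element's height that remains on screen:
    # 1.0 means fully visible, 0.0 means fully cropped.
    height = bottom - top
    visible = max(0.0, min(bottom, screen_bottom) - max(top, screen_top))
    return visible / height


def pick_least_clipped(aliases: list[dict]) -> dict:
    # The least-cropped alias is taken as the one the user is watching.
    return max(aliases, key=lambda a: visible_fraction(a["top"], a["bottom"]))
```

An alias hanging halfway off the bottom edge scores 0.5 and loses to a fully visible alias in the next column, matching the FIG. 4B scenario.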
In another embodiment, at least one of the visibility factors includes distance from the last user interaction point. Thus, computer system 101 may determine a last interaction point indicating which UI element in UI 425C the user last interacted with. The alias of the UI element closest to the location where the user last interacted with the UI is then determined to be the most appropriate starting point for the second animation. Thus, as shown in FIG. 4C, element K (406) is added between elements B and C. The computer system may determine that the last user interaction was made with the mouse at a point marked as the "last interaction point". The computer system may then determine that the element at alias a2 (430B) is closer to the last interaction point than the element at alias a1 (430A), and is therefore the better choice as a starting point for the second animation.
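The interaction-distance factor is a nearest-point search. A minimal sketch, with assumed position fields on each alias:

```python
import math


def pick_nearest_to_interaction(aliases: list[dict],
                                last_point: tuple[float, float]) -> dict:
    # The alias nearest to where the user last clicked or touched is
    # assumed to hold the user's attention.
    def dist(a: dict) -> float:
        return math.hypot(a["x"] - last_point[0], a["y"] - last_point[1])
    return min(aliases, key=dist)
```

In the FIG. 4C example, the alias sitting closer to the marked "last interaction point" wins.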
In yet another embodiment, at least one of the visibility factors includes the determined current speed of the UI elements. Thus, computer system 101 can determine the current speed at which each UI element that is part of the first animation is moving. Using this current speed, the alias selection module 120 may determine that the alias of the UI element that moves the fastest is the most appropriate starting point for the second animation. Because the user's eyes tend to follow a moving object, the fastest-moving object may be the best indicator of where the user is currently looking. Thus, as shown in FIG. 4D, element K (406) may be added between elements B and C in UI 425D. In the animation in this embodiment, element C may move down from the left column to the column on the right. If, for whatever reason, the element at alias a2 (430B) moves faster than the element at alias a1 (430A), alias a2 will be used as the starting point for the second animation. It should be noted that although FIGS. 4A-4D illustrate the use of a single visibility factor, these visibility factors may be used in combination with each other. In fact, substantially any number of visibility factors may be used in determining which alias is most suitable for use as a starting point for the second (or any subsequent) animation.
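The speed factor, taken alone, is simply a maximum over the measured speeds. An illustrative sketch with an assumed per-alias `speed` field:

```python
def pick_fastest(aliases: list[dict]) -> dict:
    # The eye tends to track the fastest-moving object, so the
    # quickest alias is chosen as the starting point of the
    # second animation.
    return max(aliases, key=lambda a: a["speed"])
```

In practice this would be one term among several, combined with the opacity, clipping, and distance factors as noted above.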
Returning to FIG. 2, method 200 includes an act of initiating a second animation at the determined optimum alias, the second animation using the determined optimum alias as a starting point and a different third alias as an ending point (act 250). Thus, alias 1 (130A) or alias 2 (130B) is selected as the most appropriate choice for starting the second animation that will originate at the selected alias and will end at alias 3 (130C). In this way, when the first animation is interrupted and the second animation is to be initiated, the second animation will be initiated at the location that the user is currently most likely to view.
FIG. 3 illustrates a flow chart of a method 300 for determining which user interface element a user is currently focusing on. The method 300 will now be described with frequent reference to the components and data of the environment 100.
Method 300 includes an act of receiving input 106 from user 105 at User Interface (UI) 125 (act 310). Next, movement determination module 110 identifies UI elements (e.g., 126) that currently change position during the animation (e.g., animation 1 (131A)) initiated by the received input (act 320). Computer system 101 then accesses one or more visibility factors 121, each of which identifies, at least in part, which UI elements the user is currently focusing on (act 330).
As indicated above, the visibility factor may include opacity, where the alias of the UI element with the highest opacity is determined to be the most appropriate starting point for the second animation (as shown in FIG. 4A). The visibility factor may further include clipping, wherein the alias of the UI element that is least clipped is determined to be the most appropriate starting point for the second animation. The distance from the last interaction point may also be used as a visibility factor (i.e. as an indication of where the user is currently looking on the UI). The computer system may determine a last interaction point indicating which UI element the user last interacted with. Then, the alias of the UI element closest to where the user last interacted with the UI is determined to be the most appropriate starting point for the second animation. The current speed at which each UI element that is part of the first animation is moving may also be used as a visibility factor. In this case, the alias of the UI element that moves the fastest is determined to be the most appropriate starting point for the second animation. Thus, using one or more of the visibility factors, the computer system may determine which of many different UI elements displayed in the UI the user is currently focusing on (act 340). After determining the currently viewed element (or the most likely currently viewed element), the alias of that element is used as the starting point for the second animation.
Thus, methods, systems, and computer program products are provided that determine an appropriate alias to select after a first animation has been interrupted and that a second animation is to be initiated. Further, methods, systems, and computer program products are provided that determine which user interface element the user is currently focusing on, and as a result which alias is to be used as a starting point for the second animation.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (15)
1. At a computer system including at least one processor and a memory, in a computer networking environment including a plurality of computing systems, a computer-implemented method for selecting an appropriate position from which to move an element in a new animation when a first animation for moving the element is interrupted, the method comprising:
an act of determining that one or more User Interface (UI) (125) elements (126) are to be moved to different locations within the UI;
an act of initiating a first animation (131A) that creates a first position (130A) and a second position (130B) for the UI element to be moved, wherein:
the first position is a current position of the UI element, such that the first position serves as a starting point for the first animation, and
The second position is an end point of the first animation;
an act of receiving an input (106) from a user that interrupts the initiated first animation, the moved UI element being displayed during the interrupted first animation, the input indicating that the UI element moved during the first animation is to be moved to a different location;
an act of determining which UI location currently has the greatest likelihood of being tracked by the user's eye based on one or more visibility factors (121), wherein the determined UI location is an optimum location for initiating the second animation (131B); and
an act of initiating a second animation at the determined optimum position, the second animation using the determined optimum position as a starting point for the second animation and a third position (130C) as an ending point for the second animation.
2. The method according to claim 1, wherein at least one of the visibility factors comprises opacity, such that the position of the UI element with the highest opacity is determined to be the most appropriate starting point for the second animation.
3. The method according to claim 1, wherein at least one of the visibility factors comprises clipping such that the position of the UI element that is least clipped is determined to be the most appropriate starting point for the second animation.
4. The method of claim 1, further comprising an act of determining a last user interaction point, wherein the last user interaction point indicates which UI element the user last interacted with.
5. The method according to claim 4, wherein at least one of the visibility factors comprises a distance from a last user interaction point, such that a position of the UI element closest to where the user last interacted with the UI is determined to be an optimum starting point for the second animation.
6. The method of claim 1, further comprising determining a current speed at which each UI element that is part of the first animation is moving.
7. The method according to claim 6, wherein at least one of the visibility factors comprises the determined current speed of the UI element, such that the position of the UI element moving the fastest is determined to be the most appropriate starting point for the second animation.
8. The method of claim 1, wherein upon receiving input to interrupt the initiated first animation, state information of one or more of the UI elements is persistently stored in the first animation.
9. A method for determining which user interface element a user is currently focusing on, the method comprising:
an act of receiving an input (106) from a user (105) at a User Interface (UI) (125);
an act of identifying one or more UI elements (126) that currently change position during the animation (131A) initiated by the received input;
an act of accessing one or more visibility factors (121), each of the visibility factors at least partially identifying which UI elements a user is focusing on; and
an act of determining which of the identified UI elements (126) the user is currently focusing on based on one or more of the accessed visibility factors.
10. The method according to claim 9, wherein at least one of the visibility factors comprises opacity, such that the UI element with the highest opacity is determined to be the UI element the user is currently focusing on.
11. The method according to claim 9, wherein at least one of the visibility factors comprises clipping, such that the UI element that is clipped the least is determined to be the UI element the user is currently focusing on.
12. The method of claim 9, further comprising an act of determining a last user interaction point, wherein the last user interaction point indicates which UI element the user last interacted with.
13. The method according to claim 12, wherein at least one of the visibility factors comprises a distance from the last user interaction point, such that the UI element closest to where the user last interacted with the UI is determined to be the UI element the user is currently focusing on.
14. A method for selecting, when a first animation that moves an element is interrupted, an appropriate position from which to move the element in a new animation, the method comprising:
an act of determining that one or more User Interface (UI) (125) elements (126) are to be moved to different locations within the UI;
an act of initiating a first animation (131A) that creates a first position (130A) and a second position (130B) for the UI element to be moved, wherein:
the first position is a current position of the UI element, such that the first position serves as a starting point for the first animation, and
the second position is an end point of the first animation;
an act of receiving an input (106) that interrupts the initiated first animation, wherein the UI element being moved is displayed during the interrupted first animation, the input indicating that the UI element moved during the first animation is to be moved to a different location;
an act of determining a last user interaction point, wherein the last user interaction point indicates which UI element the user last interacted with;
an act of determining a current speed at which each UI element is moving as part of the first animation;
an act of determining which UI position currently has the greatest likelihood of being tracked by the user's eye based on one or more visibility factors (121), wherein the determined UI position is the most appropriate position for initiating a second animation (131B), the visibility factors including one or more of:
opacity, wherein the position of the UI element having the highest opacity is determined to be the most appropriate starting point for the second animation;
clipping, wherein the position of the UI element that is clipped the least is determined to be the most appropriate starting point for the second animation;
distance from the last user interaction point, wherein the position of the UI element closest to where the user last interacted with the UI is determined to be the most appropriate starting point for the second animation; and
the determined current speed of the UI element, wherein the position of the UI element moving the fastest is determined to be the most appropriate starting point for the second animation; and
an act of initiating the second animation (131B) at the determined most appropriate position, the second animation using the determined most appropriate position as its starting point and a third position (130C) as its ending point.
15. The method according to claim 14, wherein the visibility factor indicates which UI element the user's eye is currently tracking, thereby identifying which UI element position is used as a starting point for the second animation.
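The determination recited in claim 14 — scoring each alias of a moving UI element by opacity, clipping, distance from the last user interaction point, and current speed, then selecting the highest-scoring alias as the starting point for the interrupting animation — can be illustrated with a short sketch. This is not the patented implementation; all names (`AliasState`, `visibility_score`, `select_starting_alias`), the equal weighting, and the normalisation choices are hypothetical, since the claims name the factors but prescribe no particular combination.

```python
from dataclasses import dataclass

@dataclass
class AliasState:
    """Snapshot of one alias (copy) of a moving UI element.
    All field names are illustrative assumptions, not claim language."""
    opacity: float                        # 0.0 (invisible) .. 1.0 (fully opaque)
    clipped_fraction: float               # 0.0 (fully visible) .. 1.0 (fully clipped)
    distance_to_last_interaction: float   # pixels from the last user interaction point
    speed: float                          # current animation speed, px/s

def visibility_score(a: AliasState, max_distance: float, max_speed: float) -> float:
    """Combine the four claimed visibility factors into one score.
    Equal weighting and linear normalisation are illustrative choices."""
    return (
        a.opacity
        + (1.0 - a.clipped_fraction)
        + (1.0 - min(a.distance_to_last_interaction / max_distance, 1.0))
        + min(a.speed / max_speed, 1.0)
    )

def select_starting_alias(aliases: list[AliasState]) -> AliasState:
    """Pick the alias most likely being tracked by the user's eye; it
    becomes the starting point of the interrupting (second) animation."""
    # Normalise distance and speed against the largest value present,
    # guarding against division by zero when all values are 0.
    max_d = max(a.distance_to_last_interaction for a in aliases) or 1.0
    max_s = max(a.speed for a in aliases) or 1.0
    return max(aliases, key=lambda a: visibility_score(a, max_d, max_s))
```

Under this weighting, a fully opaque, unclipped alias moving quickly near the last interaction point outscores a faded, mostly clipped alias far from it, matching the intuition in the claims that the user's eye follows the most visible copy.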
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/229586 | 2011-09-09 | | Alias selection in multiple-aliased animations |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1182466A (en) | 2013-11-29 |
| HK1182466B (en) | 2017-08-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10254928B1 (en) | Contextual card generation and delivery | |
| US10775971B2 (en) | Pinch gestures in a tile-based user interface | |
| US10725632B2 (en) | In-place contextual menu for handling actions for a listing of items | |
| US8949739B2 (en) | Creating and maintaining images of browsed documents | |
| JP6116581B2 (en) | Dynamic navigation bar docking and undocking for enhanced communication services | |
| EP2788847B1 (en) | Dynamic navigation bar for expanded communication service | |
| US9015639B2 (en) | Methods and systems for navigating a list with gestures | |
| JP6050347B2 (en) | Launcher for context-based menu | |
| US8935610B2 (en) | Dynamic minimized navigation bar for expanded communication service | |
| US10936568B2 (en) | Moving nodes in a tree structure | |
| US9720557B2 (en) | Method and apparatus for providing always-on-top user interface for mobile application | |
| TW201606631A (en) | Context menu utilizing a context indicator and floating menu bar | |
| CN105531657B (en) | Rendering open windows and tabs | |
| JP2014507026A (en) | User interface interaction behavior based on insertion point | |
| EP2807549B1 (en) | Presenting data driven forms | |
| JP2010250815A (en) | Method, device and computer program for navigating a plurality of instantiated virtual desktops | |
| US20130111382A1 (en) | Data collection interaction using customized layouts | |
| CN102937891B (en) | Alias selection in multiple-aliased animations | |
| US20150293889A1 (en) | Perception of page download time by optimized resource scheduling | |
| HK1182466B (en) | Alias selection in multiple-aliased animations | |
| HK1182466A (en) | Alias selection in multiple-aliased animations | |
| WO2017100011A1 (en) | Spatially organizing communications |