HK1189680A - Optimization schemes for controlling user interfaces through gesture or touch - Google Patents
- Publication number
- HK1189680A (application HK14102753.7A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- input
- browser
- touch
- event
- gesture
- Prior art date
- 2012-05-31
Description
This application claims the benefit of U.S. provisional patent application Ser. No. 61/653,530, filed on May 31, 2012. The disclosure of this provisional patent application is incorporated by reference into this application for all purposes.
Technical Field
The present application relates to optimization schemes for controlling a user interface through gesture or touch.
Background
Editable text displayed on a gesture or touch device is managed through operating system utilities. These utilities facilitate placement of an insertion point or of a selection over content, for example through drag handles. The utilities assume that the user manages content within the constraints of default browser behavior. If a web page takes control of user interaction with content in order to provide a richer web application experience, the utilities may either malfunction or get in the user's way.
In drag and click scenarios, conventional systems may leave event handling to the browser, which behaves inconsistently across implementations. In addition, conventional systems may expose new input types to web applications in complex ways. Conventional systems address these challenges with click and drag handling logic distributed among isolated handlers, or by repeating code across event handlers for similar events.
Users may access web applications from a variety of devices, including desktop, tablet, and laptop computers with gesture- or touch-enabled displays. Most devices support mouse, gesture, touch, or similar input mechanisms. A user interface (UI) that works well for mouse-based input does not necessarily work well for gesture or touch input, which lacks a cursor and relies on imprecise finger contact.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to specifically identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Embodiments are directed to providing a user interface for manipulating a selection by creating a selection handle below and at the end of an insertion point of a ranged selection. The handle may replace the native browser handle. The handle may replicate application or operating system selection behavior in a gesture- or touch-based environment. The handle may provide consistent selection behavior across gesture- or touch-enabled platforms and browsers. The handle may also enforce selection behavior better suited to editing content than to merely consuming it.
Other embodiments are directed to providing a scheme for semantic interpretation of browser gesture or touch events. An abstraction layer of an application may act as an interface between the rest of the application and the browser. Browser events detected on a touch- or gesture-enabled device may be non-deterministic and vary across devices and browsers. A system executing an application according to embodiments may receive browser events and normalize them into a consistent stream of semantic events (clicks, context menus, drags, etc.), providing consistency across devices and browsers.
Still other embodiments are directed to providing a user interface in a browser optimized for gesture or touch. In a system executing an application according to embodiments, a user interface may be invoked in response to a user action. The user interface may be optimized to match the detected type of input, such as touch or mouse. Alternatively, the behavior of a particular portion of the user interface may change based on the type of input. In addition, the default user interface presented during initialization may be changed by the user through a presented selection control.
These and other features and advantages will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory and are not restrictive of aspects as claimed.
Brief Description of Drawings
FIG. 1 illustrates an exemplary network diagram in which embodiments may be implemented;
FIG. 2 illustrates an exemplary default and custom selection behavior flow according to embodiments;
FIG. 3 illustrates an exemplary abstraction layer managing events between a browser and an application, in accordance with embodiments;
FIG. 4 illustrates a flow diagram showing optimization of a user interface, in accordance with various embodiments;
FIG. 5 illustrates an example of an optimized user interface for a table control based on input type, in accordance with embodiments;
FIG. 6 illustrates an example of an optimized user interface for color and font controls based on detected input types, in accordance with various embodiments;
FIG. 7 illustrates an example of an optimized user interface for styles and search controls based on detected input types, and a selected control presented to enable the optimized user interface, in accordance with embodiments;
FIG. 8 is a networked environment, where a system according to embodiments may be implemented;
FIG. 9 is a block diagram of an example computing operating environment, where embodiments may be implemented;
FIG. 10A illustrates a logic flow diagram for a process of providing a selection handle below and at the end of an insertion point of a ranged selection;
FIG. 10B illustrates a logic flow diagram for a process of providing a scheme for semantic interpretation of browser gesture or touch events; and
FIG. 10C illustrates a logic flow diagram for a process of providing a user interface for a browser that is optimized for gestures or touches.
Detailed Description
As briefly described above, a user interface may be provided to manipulate a selection by creating a selection handle below and at the end of the insertion point of a ranged selection. A scheme for semantic interpretation of browser gesture or touch events may also be provided. Additionally, a user interface optimized for gesture or touch may be provided for the browser.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made without departing from the spirit or scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and their equivalents.
While the embodiments will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a computing device, those skilled in the art will recognize that aspects may also be implemented in combination with other program modules.
Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that embodiments may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Embodiments may be implemented as a computer-implemented process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for causing the computer or computing system to perform an example process. The computer readable storage medium is a computer readable memory device. For example, the computer-readable storage medium may be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy or compact disk, and similar media.
Throughout this specification, the term "platform" may be a combination of software and hardware components for providing custom selections for editing content on a gesture or touch screen, schemes for semantic interpretation of browser gestures or touch events, and user interfaces in browsers optimized for gestures or touches. Examples of platforms include, but are not limited to, hosted services executing on multiple servers, applications executing on a single computing device, and similar systems. The term "server" generally refers to a computing device executing one or more software programs, typically in a networked environment. However, a server may also be implemented as a virtual server (software program) executing on one or more computing devices viewed as a server on a network. Further details regarding these techniques and example operations will be provided below.
Referring to fig. 1, a diagram 100 illustrates an exemplary network diagram in which embodiments may be implemented. The components and environments shown in diagram 100 are for illustration purposes. Embodiments may be implemented in various local computing environments, networked computing environments, cloud-based computing environments, and similar computing environments employing various computing devices and systems, hardware, and software.
In the exemplary environment illustrated in diagram 100, custom selection for editing content on a touch screen may be provided by a locally installed application or a web application on a client device having touch- and/or gesture-based input mechanisms, such as a stationary computing device (a desktop or laptop computer) or a mobile computer such as a handheld computer, a tablet computer 106, a smart phone 108, an in-vehicle computer, and the like. The content may be text, tables, images, etc., and may be created or edited using techniques according to embodiments.
In the case of a web application, the server 102 may provide a service (e.g., a spreadsheet service, a word processing service, an email service, or the like) and the service may be accessed through the web application via one or more networks 104. In general, a particular service may be accessed by browsers on client devices 106 and 108, which may display user interfaces for viewing and editing various types of documents, such as spreadsheets, word processing documents, presentations, emails, graphical documents, and so forth.
An application according to embodiments may intercept gesture, touch, and mouse events detected on client devices 106 and 108. Depending on the browser or device, the application may prevent gesture and touch events from activating the native selection handles. Instead, the application may initiate selection handles optimized for the input to provide a consistent content-selection experience across platforms.
FIG. 2 illustrates an exemplary default and custom selection behavior flow according to embodiments. In diagram 200, an application, such as a web application, may intercept events that trigger the selection behavior of browser and operating system (OS) selection handles. The application may cancel browser events, which may prevent the OS from seeing those events and taking action based on them. The application may determine the selection corresponding to the browser and/or OS event. The application may display selection handles corresponding to the selection. The application locates the ends of the selected range and renders the corresponding selection handles based on the end locations.
In the default selection behavior 202, a browser event 204, such as a detected gesture or touch event, can initiate selection of content, such as text in an application (e.g., a browser). The detected browser event may initiate a browser action to process the selection with the operating system 206. The operating system may then initialize its selection utility.
In the custom selection behavior 208, the web application may provide a user interface that intercepts browser events and replaces them with its own selection utility. The application may detect a browser event such as a gesture or touch event 210. The application may intercept the browser event and cancel the event 212. Further, the application may make the selection using a selection utility provided by the application instead of the browser 214.
An application according to embodiments may detect selection of a range of text. The selection may be made by gesture or touch input, mouse input, keyboard input, or the like. Additionally, the application may create a selection handle below the insertion point of the detected selection. The application may then replace the native browser handle with a selection handle optimized for the input.
In addition, the application may create two additional selection handles at the ends of the selection to provide a user interface for managing the selection. The user interface may enable the user making the selection to expand or contract it. The application may also simulate the selection behavior of the native browser handles. The application may simulate the browser's default selection behavior based on the type of input detected and provide appropriate handles to recreate that default behavior. Further, the application may provide alternative selection behaviors and execute rules associated with those alternative behaviors. An example can include showing a notification associated with the selection.
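As an illustration of how such handles could be positioned, the following TypeScript sketch uses standard DOM selection APIs to place two custom handles below the ends of the current selection. The element ids, the "customSelectionHandle" class, and the event wiring are assumptions for illustration, not the patented implementation.

```typescript
// Minimal sketch: place custom handles below the ends of the current selection.
function positionSelectionHandles(): void {
  const selection = document.getSelection();
  if (!selection || selection.rangeCount === 0) {
    return;
  }
  const rects = selection.getRangeAt(0).getClientRects();
  if (rects.length === 0) {
    return;
  }
  const startRect = rects[0];
  const endRect = rects[rects.length - 1];

  const placements: Array<[string, number, number]> = [
    ["startHandle", startRect.left, startRect.bottom],
    ["endHandle", endRect.right, endRect.bottom],
  ];

  for (const [id, x, y] of placements) {
    // Create each handle element on first use, then reposition it on every call.
    let handle = document.getElementById(id);
    if (!handle) {
      handle = document.createElement("div");
      handle.id = id;
      handle.className = "customSelectionHandle";
      handle.style.position = "absolute";
      document.body.appendChild(handle);
    }
    handle.style.left = `${x + window.scrollX}px`;
    handle.style.top = `${y + window.scrollY}px`;
  }
}

// Intercept the touch event that would otherwise surface the native handles,
// cancel it, and render the custom handles instead (simplified for the sketch).
document.addEventListener("touchend", (event) => {
  event.preventDefault();
  positionSelectionHandles();
});
```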
FIG. 3 illustrates an exemplary abstraction layer managing events between a browser and a web application, according to embodiments. In diagram 300, an event handler manager (EHM) 306 may communicate browser events 304 from the browser 302 to an input manager. There may be one or more input managers 308, 310, and 312.
The input manager may process the event and translate it into a semantic event 314 that can be understood by the web application 316. Semantic events 314 may then be passed to the application by the event handler manager 306. In an exemplary scenario, if the input manager receives a gesture or touch event from the browser, the input manager can translate the browser event into a gesture or touch click event for communication to the web application 316.
A scheme according to embodiments may include registered event handlers for events from the browser (touch start, touch end, touch move, etc.). A registered event handler may route an event to an input manager after receiving the browser event. The input manager may have knowledge of semantic events (e.g., "click") that can be understood by the application. The input manager may also maintain information about the series of browser events that constitute a semantic event (e.g., a touch start followed by a touch end constitutes a click event). The input manager may receive browser events and use its internal knowledge about semantic events to decide whether to notify the application that a semantic event occurred.
If more than one input manager handles a browser event, an input manager may also have the ability to communicate with the other input managers. An example may include a touch start event from the browser that may initiate either a click event or a context menu event. As a result, two different input managers can handle the browser event.
The input managers may communicate by placing a cache of information on the Document Object Model (DOM) element of the event. Each input manager can process the DOM element after receiving the event and can query the other input managers to determine further processing of the same event.
The event handler manager may also decide the order in which events are routed to the input managers. The event handler manager may order the routing of events by storing the input managers in a priority queue. In an exemplary scenario, an input manager with priority 1 may receive an event before an input manager with priority 2. Input managers having the same priority may receive events in a random order. Additionally, an input manager may notify the application after determining that a semantic event, such as a click, occurred. The input manager may also notify the other input managers to stop listening for associated semantic events.
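A minimal TypeScript sketch of this event handler manager and input manager pattern follows. The interface shapes, class names, and wiring are assumptions for illustration rather than the patented implementation; the click composition mirrors the touch start/touch end example above.

```typescript
// Sketch of an EHM routing browser events to priority-ordered input managers.
interface SemanticEvent {
  kind: "click" | "contextmenu" | "drag";
  source: Event; // the browser event that completed the semantic event
}

interface InputManager {
  priority: number; // lower numbers are routed first
  handleBrowserEvent(event: Event): SemanticEvent | null;
}

class EventHandlerManager {
  private managers: InputManager[] = [];
  private listeners: Array<(e: SemanticEvent) => void> = [];

  registerInputManager(manager: InputManager): void {
    // Keep managers in priority order (priority 1 is routed before priority 2).
    this.managers.push(manager);
    this.managers.sort((a, b) => a.priority - b.priority);
  }

  addSemanticListener(listener: (e: SemanticEvent) => void): void {
    this.listeners.push(listener);
  }

  // Registered with the browser for touchstart, touchend, touchmove, etc.
  routeBrowserEvent(event: Event): void {
    for (const manager of this.managers) {
      const semantic = manager.handleBrowserEvent(event);
      if (semantic) {
        // Notify application-level listeners of the normalized semantic event.
        this.listeners.forEach((notify) => notify(semantic));
      }
    }
  }
}

// An input manager that knows a touch start followed by a touch end is a click.
class SimpleClickInputManager implements InputManager {
  priority = 2;
  private sawTouchStart = false;

  handleBrowserEvent(event: Event): SemanticEvent | null {
    if (event.type === "touchstart") {
      this.sawTouchStart = true;
      return null;
    }
    if (event.type === "touchend" && this.sawTouchStart) {
      this.sawTouchStart = false;
      return { kind: "click", source: event };
    }
    return null;
  }
}

// Wiring: the EHM listens for raw browser events and routes them.
const ehm = new EventHandlerManager();
ehm.registerInputManager(new SimpleClickInputManager());
ehm.addSemanticListener((e) => console.log(`semantic ${e.kind} from ${e.source.type}`));
["touchstart", "touchend", "touchmove"].forEach((type) =>
  document.addEventListener(type, (event) => ehm.routeBrowserEvent(event))
);
```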
In applications according to embodiments, the event handler manager code may alternatively be implemented as one large component rather than separating a portion of the code that receives browser events from the input managers, another portion that operates on browser events, and yet another portion that translates browser events. Furthermore, the input managers may communicate with each other by maintaining references to each other rather than attaching a cache of information to the DOM elements associated with the events each input manager processes.
According to an example embodiment, an input manager handling drag events may take mouse and gesture or touch events and normalize them into a drag event that is understandable by the application. For example, a drag input manager (DIM) may manage drag events by using the event handler manager already used by the input manager that handles keyboard events and the input manager that handles context menu events. The EHM may track registration of event handler proxies with the browser DOM on behalf of the event handlers and the rest of the application. Additionally, an input manager using the EHM may receive browser events and normalize them into semantic events that can be understood by the application. The input manager may then pass the normalized semantic events back to the EHM, which notifies the appropriate listeners in the application.
The DIM may register, via the EHM, for a plurality of browser events, including mouse down, mouse up, mouse move, click, touch down, touch up, and touch move. The DIM may listen for mouse down or touch down events to determine a drag event. Upon receiving a mouse move or touch move event, the DIM may compare the movement delta to a drag threshold. If the mouse move or touch move event has moved far enough away from the original position, the DIM may trigger the drag by transmitting a drag event to a drag adapter. The drag adapter may be code in a Web Application Companion (WAC) attached to a particular DOM element. The DIM may also pass either mouse event arguments or gesture or touch event arguments to the drag adapter to enable the adapter to distinguish between mouse and touch drags.
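The sketch below illustrates the threshold comparison and the drag adapter callbacks described above. The DragAdapter interface shape and the 5-pixel threshold are assumptions for illustration; only standard DOM event properties are used.

```typescript
// Sketch of a drag input manager that turns mouse/touch moves into drag callbacks.
interface DragAdapter {
  onDragStart(x: number, y: number, sourceEvent: Event): void;
  onDragMove(x: number, y: number, sourceEvent: Event): void;
  onDragEnd(sourceEvent: Event): void;
}

class DragInputManager {
  private startX = 0;
  private startY = 0;
  private tracking = false;
  private dragging = false;
  private readonly dragThreshold = 5; // pixels; illustrative value

  constructor(private adapter: DragAdapter) {}

  // Called (via the event handler manager) for mouse and touch browser events.
  handleBrowserEvent(event: MouseEvent | TouchEvent): void {
    const point = this.toPoint(event);
    switch (event.type) {
      case "mousedown":
      case "touchstart":
        this.tracking = true;
        this.startX = point.x;
        this.startY = point.y;
        break;
      case "mousemove":
      case "touchmove": {
        if (!this.tracking) break;
        const dx = point.x - this.startX;
        const dy = point.y - this.startY;
        // Trigger a drag only once the movement delta exceeds the threshold.
        if (!this.dragging && Math.hypot(dx, dy) > this.dragThreshold) {
          this.dragging = true;
          this.adapter.onDragStart(this.startX, this.startY, event);
        }
        if (this.dragging) {
          this.adapter.onDragMove(point.x, point.y, event);
        }
        break;
      }
      case "mouseup":
      case "touchend":
        if (this.dragging) this.adapter.onDragEnd(event);
        this.tracking = false;
        this.dragging = false;
        break;
    }
  }

  private toPoint(event: MouseEvent | TouchEvent): { x: number; y: number } {
    // changedTouches covers touchstart, touchmove, and touchend uniformly.
    if ("changedTouches" in event && event.changedTouches.length > 0) {
      const touch = event.changedTouches[0];
      return { x: touch.clientX, y: touch.clientY };
    }
    const mouse = event as MouseEvent;
    return { x: mouse.clientX, y: mouse.clientY };
  }
}
```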
An input manager registered with the EHM for a particular event may be placed in a priority queue. After a browser event is captured by the EHM, the input managers may be notified in order of priority. As a result, the DIM may receive a mouse, gesture, touch, or similar event first to determine whether a drag is occurring, while passing events that the DIM does not interpret as drag events on to a Mouse Input Manager (MIM) or a Touch Input Manager (TIM) for additional processing.
The DIM may be registered with the EHM at a higher priority than the MIM and the TIM. When the DIM starts processing an event it interprets as a drag event, it may cancel the event to prevent the MIM or the TIM from processing it. In an exemplary scenario, this prevention by the DIM may stop a context menu from appearing during a touch drag.
In some examples, drag-specific source code may be removed from the MIM and placed in the DIM. The MIM may be ported to use the EHM, and MIM source code that duplicates EHM functionality may be removed. The MIM may then be source code optimized to receive and normalize mouse events that are not drag related.
The TIM may receive gesture or touch events that are not related to dragging and normalize the events for application use. The TIM may also use the EHM for browser proxy handler and application handler management. The TIM may register with the EHM for browser events such as touch down, touch up, touch move, and click. The TIM may receive tap events and touch-and-hold events and normalize them into events understandable by the WAC. The TIM may normalize a tap event into a click event. The TIM may receive events such as touch down, touch up, and click when the user taps in a mobile browser.
Upon detecting a click event, the TIM may transmit the click event to the application after attaching a gesture or touch event argument object to the event. The TIM may attach the gesture or touch event to notify the application that the click event originated from a gesture or touch event rather than a mouse event. A mobile browser may generally generate a click event after touch down and touch up events are detected by the browser and interpreted as a click. In an exemplary scenario, a click event may not be generated if the browser determines that the user attempted to flick or drag rather than tap. The DIM may filter out any click event that immediately follows a drag event. The TIM may normalize two taps in rapid succession into a double tap event. A user making two taps in the browser in quick succession may also generate events such as touch down, touch up, and click for each tap.
The TIM may also communicate the first click event to the application and begin listening for a second tap. If the TIM receives the second click event within a predetermined amount of time (which a user or the system may define), the TIM may transmit the second click event to the application, followed by a double click event. Both the clicks and the double click may be handled by the application. Processing the clicks and the double click initiated by a double tap together may force the application to behave consistently for double taps and double clicks.
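A small sketch of this tap-to-click and double-tap normalization is shown below. The notify callback stands in for the event handler manager, and the 300 ms window is an illustrative value rather than one taken from the text.

```typescript
// Sketch: surface every tap as a click, and a quick second tap as a double click too.
class TapNormalizer {
  private lastClickTime = 0;
  private readonly doubleTapWindowMs = 300; // user- or system-configurable

  constructor(private notify: (semanticEvent: string, source: Event) => void) {}

  // Called with the click event the mobile browser synthesizes after a tap.
  handleClick(event: MouseEvent): void {
    const now = Date.now();

    // Every tap is surfaced to the application as a click first.
    this.notify("click", event);

    // A second tap inside the window is additionally surfaced as a double click,
    // so double tap and double click behave consistently in the application.
    if (now - this.lastClickTime <= this.doubleTapWindowMs) {
      this.notify("doubleclick", event);
      this.lastClickTime = 0;
    } else {
      this.lastClickTime = now;
    }
  }
}

// Usage: route synthesized click events through the normalizer.
const tapNormalizer = new TapNormalizer((kind, source) =>
  console.log(`semantic event: ${kind} (from ${source.type})`)
);
document.addEventListener("click", (e) => tapNormalizer.handleClick(e));
```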
The TIM may normalize a touch-and-hold event into a context menu event. The TIM may receive a touch down event when the user touches and holds on the screen of the browser. Upon receiving the touch down event, the TIM may start a timer that may trigger display of a context menu after a predetermined time. After the predetermined time expires, the TIM may communicate a context menu event to the application. If a touch up event is detected before the predetermined time expires, the timer may be cancelled and the context menu may not be displayed. Once the TIM communicates the context menu event, if the user does not end the touch and hold, the TIM may simulate a touch up event to the browser to prevent handling of a drag event after the context menu is displayed.
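The timer-based portion of this behavior could look like the sketch below. The 500 ms delay and the notify callback are assumptions; the simulated touch up step is omitted for brevity.

```typescript
// Sketch: normalize touch-and-hold into a context menu event with a timer.
class TouchHoldNormalizer {
  private holdTimer: number | null = null;
  private readonly holdDelayMs = 500; // illustrative "predetermined time"

  constructor(private notify: (semanticEvent: string, source: Event) => void) {}

  onTouchStart(event: TouchEvent): void {
    // Start a timer when the finger goes down; if it expires before the finger
    // lifts, treat the gesture as a context menu request.
    this.holdTimer = window.setTimeout(() => {
      this.holdTimer = null;
      this.notify("contextmenu", event);
    }, this.holdDelayMs);
  }

  onTouchEnd(): void {
    // A touch up before the timer expires cancels the context menu.
    if (this.holdTimer !== null) {
      window.clearTimeout(this.holdTimer);
      this.holdTimer = null;
    }
  }
}

const holdNormalizer = new TouchHoldNormalizer((kind) =>
  console.log(`semantic event: ${kind}`)
);
document.addEventListener("touchstart", (e) => holdNormalizer.onTouchStart(e));
document.addEventListener("touchend", () => holdNormalizer.onTouchEnd());
```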
Application developers may be enabled to provide custom components that register for and respond to gesture or touch events. Since the TIM may operate by normalizing gesture or touch events into mouse events or context menu events, custom components may register with the MIM, or with the input manager that handles context menus, through a registration handler.
When the TIM normalizes a click event and requests the EHM to process it, the EHM may communicate the event to the application handler. A handler method in the application may receive an event handler argument object that can be used to communicate information for handling the event.
Scrollable content may generally be present in an inline frame. Scrolling such content may otherwise involve events generated by two fingers. According to embodiments, the event handler manager may provide basic single-finger scrolling for tablet computers and similar devices. Such an embodiment may perform the actions of (1) creating a new scroll drag adapter that is attached to the DOM element and (2) registering the adapter with the DIM of the DOM element at a "last" priority. The DIM can support a queue of drag adapters attached to the element. The DIM may process scroll events in order and pass an event on to the next drag adapter when the current drag adapter cannot process the drag event. The "last" priority may mean that every other drag adapter has an opportunity to handle the drag event before the scroll drag adapter handles it.
The scroll drag adapter may be an internal class that implements the drag adapter interface. The interface can handle scrolling events on the associated DOM element. The drag start method may remember the current location. The drag move method may calculate the difference between the new location and the previous location and set the scroll top and scroll left of the DOM element to match the difference. A list of points and timestamps may be recorded for each processed drag move event. In the drag end method, a parabolic regression may be calculated to determine the acceleration of the finger and perform a page turn animation accordingly.
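A simplified sketch of such an adapter follows. It implements the DragAdapter shape from the earlier sketch; the parabolic regression mentioned above is replaced here with a plain average-velocity estimate, and no animation is performed.

```typescript
// Sketch of a single-finger scroll drag adapter attached to a DOM element.
class ScrollDragAdapter {
  private lastX = 0;
  private lastY = 0;
  private samples: Array<{ x: number; y: number; time: number }> = [];

  constructor(private element: HTMLElement) {}

  onDragStart(x: number, y: number): void {
    // Remember the current location so later moves can be turned into deltas.
    this.lastX = x;
    this.lastY = y;
    this.samples = [{ x, y, time: performance.now() }];
  }

  onDragMove(x: number, y: number): void {
    // Scroll the element by the difference between the new and previous points.
    this.element.scrollLeft += this.lastX - x;
    this.element.scrollTop += this.lastY - y;
    this.lastX = x;
    this.lastY = y;
    // Record each point and timestamp for the release-velocity estimate below.
    this.samples.push({ x, y, time: performance.now() });
  }

  onDragEnd(): void {
    // Estimate release velocity from the recorded samples; a fuller
    // implementation could fit a curve here and animate the remaining scroll.
    if (this.samples.length < 2) return;
    const first = this.samples[0];
    const last = this.samples[this.samples.length - 1];
    const dt = Math.max(last.time - first.time, 1);
    const vx = (last.x - first.x) / dt; // pixels per millisecond
    const vy = (last.y - first.y) / dt;
    console.log(`release velocity: vx=${vx.toFixed(2)}, vy=${vy.toFixed(2)}`);
  }
}
```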
FIG. 4 illustrates a flow diagram showing optimization of a user interface, according to embodiments. In diagram 400, an application may detect a click event 402 on an anchor displayed by a user interface (UI). The application may make a determination 404 of gesture or touch input with respect to the click event. If the application determines touch input, the application may display a gesture- or touch-optimized UI 406. If the application cannot determine the type of input to be gesture or touch, the application may make a determination 408 regarding mouse input. If the application determines the type of input to be mouse based, the application may display a UI 410 optimized for mouse input. If not, the application may display the UI 412 in a previous or static state.
The application may initiate instantiation of an anchor element of the dynamic UI. In an exemplary scenario, the button control may initiate a new pane. In another exemplary scenario, the editable surface control may initiate a caret.
If the application detects gesture or touch input, the UI may be optimized for gesture or touch based controls. In an exemplary scenario, a large UI control may be used by the UI for detected gestures or touch input. If the application detects mouse input, the UI can be optimized for mouse-based controls. In an exemplary scenario, the UI may enable a mouse control-centric feature, such as hovering within the UI.
A contextual menu control may be similarly optimized according to the type of input to the UI. If the application detects a context menu initiated by a mouse event, a context menu optimized for mouse controls may be initiated within the UI. Alternatively, a contextual menu optimized for gesture or touch controls may be initiated within the UI in response to a gesture or touch event detected to initiate the contextual menu.
The application may determine the type of input used on the anchor by registering for a click (or context menu) event, detecting the click event, and evaluating the click event for the type of input. If the browser cannot provide the type of input, the type of input may be obtained using an input manager application programming interface (API). A Click Input Manager (CIM) component of the input manager API may be notified of the click event and communicate the type of input of the click event. The CIM may listen for multiple events to determine the type of input. The CIM may listen for click events as well as other browser-specific gesture or touch events. If a touch start event occurs immediately before the click (e.g., within 300 ms), the application may conclude that the click event is the result of gesture or touch input. Alternatively, whenever the input manager receives a touch start event and a subsequent touch end event without a touch move event, the input manager may immediately initiate a click event and determine that the input is a gesture or touch input. Similarly, if a pointer event with a gesture or touch input type occurs immediately prior to the click event, the application may determine that the click event is from gesture or touch input. The pointer event need not be a gesture or touch event: a custom pointer event implemented by the browser may be initiated for each type of input supported by the browser, including, but not limited to, gesture or touch, pen, and mouse inputs. The browser may communicate the type of input for the event through the custom pointer event. Alternatively, the application may listen for gesture events and use information about the gesture event provided by the browser to determine the type of input.
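The heuristic described above could be sketched as follows. The 300 ms window comes from the example in the text; the class name and the pointerType check (reflecting the pointer-event alternative) are assumptions for illustration.

```typescript
// Sketch: classify a click as touch- or mouse-originated.
type InputKind = "touch" | "mouse";

class ClickInputTypeDetector {
  private lastTouchStartTime = 0;
  private readonly touchClickWindowMs = 300; // window from the example above

  constructor() {
    document.addEventListener(
      "touchstart",
      () => {
        this.lastTouchStartTime = Date.now();
      },
      true // capture phase, so the timestamp exists before click handlers run
    );
  }

  classify(event: MouseEvent): InputKind {
    // Some browsers expose the source directly on pointer-style events.
    const pointerType = (event as PointerEvent).pointerType;
    if (pointerType === "touch") {
      return "touch";
    }
    // Otherwise, a touch start immediately before the click implies touch input.
    if (Date.now() - this.lastTouchStartTime <= this.touchClickWindowMs) {
      return "touch";
    }
    return "mouse";
  }
}

// Usage: pick the UI variant when an anchor element is clicked.
const detector = new ClickInputTypeDetector();
document.addEventListener("click", (event) => {
  const kind = detector.classify(event);
  console.log(kind === "touch" ? "show touch-optimized UI" : "show mouse-optimized UI");
});
```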
The UI may be initially optimized for mouse input. Thereafter, the application may implement gesture or touch customization by applying Cascading Style Sheets (CSS) that implement size and spacing parameters for gesture or touch input. A CSS class may be added at the top CSS level, which activates gesture- or touch-specific styling in that portion of the browser's DOM or in the UI element. Alternatively, when the anchor is initiated through gesture or touch input, different DOM elements can be generated for the dynamic UI. The application may also initiate a UI optimized for keyboard-based input. Additionally, the application may initiate a dynamic UI optimized for pen-based input.
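A minimal sketch of the CSS-class approach is shown below; the "touchOptimized" class name and the assumed stylesheet rule are hypothetical.

```typescript
// Sketch: switch a UI subtree to touch sizing by toggling a top-level CSS class.
// The stylesheet is assumed to scope larger sizes and spacing under the class,
// e.g. ".touchOptimized .menuItem { padding: 12px; }".
function applyInputStyling(root: HTMLElement, inputKind: "touch" | "mouse"): void {
  root.classList.toggle("touchOptimized", inputKind === "touch");
}

// Usage: start mouse-optimized, then switch when a touch-originated click arrives.
applyInputStyling(document.body, "mouse");
document.body.addEventListener("click", (event) => {
  if ((event as PointerEvent).pointerType === "touch") {
    applyInputStyling(document.body, "touch");
  }
});
```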
The behavior of the UI may change based on the type of input detected. The application may change the behavior of the UI according to a detected click event on a component of the UI. Alternatively, the application may change the behavior of the UI according to a detected gesture or touch initiated by a finger tap. In an exemplary scenario, the UI may display a split button control together with a drop-down menu that allows the user to change the type of the split button control. The application may perform an action associated with the button control when the user clicks the button control with a mouse. Alternatively, the UI may display the menu when the user taps the button control. The input manager may determine the type of input from the user action on the split button control.
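A sketch of that split-button branching follows; the element ids and the simple detectInputKind helper are hypothetical stand-ins for the input manager's determination.

```typescript
// Sketch: a split button that acts on mouse click but opens its menu on tap.
function detectInputKind(event: MouseEvent): "touch" | "mouse" {
  return (event as PointerEvent).pointerType === "touch" ? "touch" : "mouse";
}

function wireSplitButton(button: HTMLElement, menu: HTMLElement, action: () => void): void {
  button.addEventListener("click", (event) => {
    if (detectInputKind(event) === "mouse") {
      action();                     // a mouse click runs the button's default action
    } else {
      menu.hidden = !menu.hidden;   // a finger tap opens the drop-down menu instead
    }
  });
}

// Usage with hypothetical elements.
const boldButton = document.getElementById("boldSplitButton");
const boldMenu = document.getElementById("boldSplitMenu");
if (boldButton && boldMenu) {
  wireSplitButton(boldButton, boldMenu, () => console.log("apply default action"));
}
```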
The application may use the user agent string and browser-specific APIs to optimize the static or boot-time UI. The application may use information about the browser to determine the browser's mouse, gesture or touch, or combined input capabilities. The static UI may be optimized for mouse input when a browser with mouse-only input capabilities is detected. The optimization of the static UI may also be customized for gesture or touch input and for combined-input scenarios. Additionally, a button control may be provided to switch the static UI between gesture or touch and mouse input when both are detected as capabilities of the browser. The button control can persist its state between user sessions in a browser cookie.
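The following sketch illustrates boot-time selection with cookie persistence. Using navigator.maxTouchPoints as the capability check is an assumption standing in for the user-agent inspection described above, and the "uiInputMode" cookie name and toggle id are hypothetical.

```typescript
// Sketch: choose a boot-time UI mode from browser capability, persisted in a cookie.
type UiMode = "mouse" | "touch";

function detectDefaultMode(): UiMode {
  // Treat any touch-capable browser as a candidate for the touch-optimized UI.
  return navigator.maxTouchPoints > 0 ? "touch" : "mouse";
}

function loadPersistedMode(): UiMode | null {
  const match = document.cookie.match(/(?:^|;\s*)uiInputMode=(mouse|touch)/);
  return match ? (match[1] as UiMode) : null;
}

function persistMode(mode: UiMode): void {
  // Persist the toggle state across user sessions, as described above.
  document.cookie = `uiInputMode=${mode}; max-age=31536000; path=/`;
}

// At startup, prefer the persisted choice and fall back to capability detection.
let currentMode: UiMode = loadPersistedMode() ?? detectDefaultMode();
document.body.classList.toggle("touchOptimized", currentMode === "touch");

// A toggle button (hypothetical id) lets the user switch when both inputs exist.
document.getElementById("inputModeToggle")?.addEventListener("click", () => {
  currentMode = currentMode === "touch" ? "mouse" : "touch";
  persistMode(currentMode);
  document.body.classList.toggle("touchOptimized", currentMode === "touch");
});
```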
FIG. 5 illustrates an example of an optimized user interface for a table control based on input type, according to embodiments. Diagram 500 shows an exemplary table control optimized according to mouse input 502 and gesture or touch input 512.
An application managing the UI may detect the input type of a user action on the UI. The detected user action may be a gesture or touch input or a mouse-based input. The UI may initially display the table control 502 optimized for mouse input. The UI may display a standard-size table control 504 and a standard-size table sizing control 506. The standard size may be a system setting for each component of the UI determined during development of the application. The UI may display standard-size controls because the user is able to move the mouse pointer with greater precision than other input types.
Alternatively, the UI may display the table control 512 optimized for the detected gesture or touch input. The UI may display a large-size table control 514 and a large-size table sizing control 516. The large size may be a system setting for each UI component determined according to the display characteristics. The UI may display large-size controls because the user may be less able to provide fine control over the UI through gesture or touch input.
FIG. 6 illustrates an example of an optimized user interface for color and font controls based on detected input types, according to embodiments. Diagram 600 shows exemplary color and font controls optimized according to the type of input detected.
An application managing the UI may detect the input type of a user action on the UI as a mouse-based or a gesture- or touch-based input type. If the application detects a mouse input type, the UI may display a color control 602 or a font control 610, associated with the corresponding user action, optimized for the mouse input type. The UI may display the color control 602 with a standard-size color selection control 606 or the font control 610 with a standard-size font selection control 614. The UI may display standard-size controls because the user is able to move the mouse pointer with greater precision than other input types.
Alternatively, if the application detects a gesture or touch input type, the UI may display a color control 604 or a font control 612 associated with the respective user action that is optimized for the gesture or touch input. The UI may display a color control 604 with a large-size color selection control 608 or a font control 612 with a large-size font selection control 616. The UI may display large-size controls because the user may be less able to provide fine control over the UI through gesture or touch input.
FIG. 7 illustrates an example of a user interface optimized for style and search controls based on detected input types, and a selection control presented to enable the optimized user interface, in accordance with various embodiments. Diagram 700 shows exemplary text style and search controls optimized according to a detected input type, and a pull-down control for selecting the input type.
The application may detect the input type of the user action on the UI as either a mouse input, or a gesture or touch input. If the application detects mouse input, the UI may display a text style control 702 or a search control 710 optimized for mouse input associated with the respective user action. The UI may display a text style control 702 with a standard text style selection control 706 or a search control 710 with a standard size search box control 714. The UI may display standard size controls because the user may be able to move the mouse pointer with greater precision than other input types.
Alternatively, if the application detects a gesture or touch input, the UI may display a text style control 704 or a search control 712, associated with the respective user action, optimized for the gesture or touch input. The UI may display the text style control 704 with a large-size style selection control 708 or the search control 712 with a large-size search box control 716. The UI may display large-size controls because the user may be less able to provide fine control over the UI through gesture or touch input.
The application may also enable the user to select the input type via a drop-down menu control 720. The application may notify the user to tap anywhere on the UI with a finger to activate the drop-down menu 726. The UI may also display a notification 722 informing the user to click on the control surface of the drop-down menu for the bullet action. The bullet actions may provide additional controls associated with the input type, such as managing the behavior of the input type. Additionally, the UI may display a notification informing the user to click on the arrow control 724 of the drop-down menu to activate the selection control for selecting the input type.
According to some embodiments, touch or gesture enabled input devices and display screens may be used to view/edit documents and receive input from a user through a user interface. Gesture-enabled input devices and display screens may utilize any technology that allows a user's touch input or optically captured gestures to be recognized. For example, some techniques may include, but are not limited to, heat, finger pressure, high capture rate cameras, infrared light, optical capture, tuned electromagnetic induction, ultrasonic receivers, sensing microphones, laser rangefinders, shadow capture, and the like. The user interface of the touch-enabled or gesture-enabled device may display content and documents associated with word processing applications, presentation applications, spreadsheet applications, and web page content, as well as menus of actions for interacting with the displayed content. A user may interact with the user interface using gestures to access, create, view, and edit content such as documents, tables, spreadsheets, charts, lists, and any content (e.g., audio, video, etc.). Gesture-enabled input devices may utilize features specific to touch-or gesture-enabled computing devices, but may also be used with conventional mice and keyboards. Gestures or touch input actions such as tap or swipe actions as used herein may be provided by a user through a finger, pen, mouse or similar device, as well as through predefined keyboard input combinations, eye tracking and/or voice commands.
The exemplary scenarios and schemes in FIG. 2 through FIG. 7 are illustrated with specific components, data types, and configurations. Embodiments are not limited to systems configured according to these examples. Providing custom selection for editing content on a gesture or touch screen, schemes for semantic interpretation of browser gesture or touch events, and user interfaces in browsers optimized for gesture or touch may be implemented in applications and user interfaces employing fewer or additional components. Furthermore, the example schemes and components shown in FIG. 2 through FIG. 7 and their subcomponents may be implemented in a similar manner with other components using the principles described herein.
FIG. 8 is a networked environment, where a system according to embodiments may be implemented. The local and remote resources may be provided by one or more servers 814, such as a hosted service, or a single server (e.g., a web server) 816. The application may communicate with client interfaces on various computing devices such as a laptop 811, a tablet 812, or a smart phone 813 ("client device") over a network 810.
As described above, providing custom selection for editing content on a gesture or touch screen, schemes for semantic interpretation of browser gesture or touch events, and user interfaces in browsers optimized for gesture or touch may be provided through a web application interacting with a browser. As previously discussed, client devices 811-813 may allow access to applications executing on a remote server (e.g., one of servers 814). The servers may retrieve related data from data store 819 or store related data to data store 819, either directly or through database server 818.
Network 810 may include any topology of servers, clients, Internet service providers, and communication media. Systems according to embodiments may have a static or dynamic topology. Network 810 may include a secure network such as an enterprise network, an unsecure network such as a wireless open network, or the Internet. Network 810 may also coordinate communication over other networks, such as the Public Switched Telephone Network (PSTN) or cellular networks. Further, network 810 may include short-range wireless networks such as Bluetooth or similar networks. Network 810 provides communication between the nodes described herein. By way of example, and not limitation, network 810 may include wireless media such as acoustic, RF, infrared, and other wireless media.
Many other configurations of computing devices, applications, data sources, and data distribution systems may be employed to provide custom selections for editing content on a gesture or touch screen, schemes for semantic interpretation of browser gestures or touch events, and gesture or touch optimized user interfaces in browsers. Moreover, the networked environments discussed in FIG. 8 are for illustration purposes only. Embodiments are not limited to the example applications, modules, or processes.
FIG. 9 and the associated discussion are intended to provide a brief, general description of a suitable computing environment in which embodiments may be implemented. With reference to FIG. 9, a block diagram of an example computing operating environment for an application according to embodiments, such as computing device 900, is illustrated. In a basic configuration, computing device 900 may include at least one processing unit 902 and system memory 904. Computing device 900 may also include multiple processing units that cooperate with one another in executing programs. Depending on the exact configuration and type of computing device, the system memory 904 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. System memory 904 typically includes an operating system 905 suitable for controlling the operation of the platform, such as an operating system from Microsoft Corporation of Redmond, Washington. The system memory 904 may also include one or more software application programs, such as program modules 906, application 922, and user interface module 924.
The application 922 may provide a user interface for custom selection for editing content on a gesture or touch screen, a scheme for semantic interpretation of browser gesture or touch events, and/or a gesture- or touch-optimized user interface in a browser according to embodiments. User interface module 924 may help application 922 provide the services described above in connection with touch- and/or gesture-enabled devices. This basic configuration is illustrated in FIG. 9 by those components within dashed line 908.
Computing device 900 may have additional features or functionality. For example, computing device 900 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 9 by removable storage 909 and non-removable storage 910. Computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. The computer-readable storage medium is a computer-readable memory device. System memory 904, removable storage 909, and non-removable storage 910 are all examples of computer-readable storage media. Computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 900. Any such computer-readable storage media may be part of computing device 900. Computing device 900 may also have input device(s) 912 such as a keyboard, mouse, pen, voice input device, gesture or touch input device, and the like. Output device(s) 914 such as a display, speakers, printer, and other types of output devices may also be included. These devices are well known in the art and need not be discussed at length here.
Computing device 900 may also contain communication connections 916 that allow the device to communicate with other devices 918, such as over a wireless network in a distributed computing environment, a satellite link, a cellular link, and similar mechanisms. Other devices 918 may include computer devices executing communication applications, storage servers, and the like. One or more communication connections 916 are one example of communication media. Communication media may include computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
Example embodiments also include various methods. These methods may be implemented in any number of ways, including the structures described herein. One such way is by machine operation of an apparatus of the type described herein.
Another optional way is to perform one or more of the individual operations of the method in conjunction with one or more human operators performing some of the individual operations of the method. These human operators need not be co-located with each other, but rather each may be co-located with only a machine that executes a portion of the program.
FIGS. 10A through 10C illustrate logic flow diagrams for processes of providing a selection handle below and at the end of an insertion point of a ranged selection, providing a scheme for semantic interpretation of browser gesture or touch events, and providing a user interface for a browser optimized for the detected input. In some examples, processes 1000, 1002, and 1004 may be implemented by an application, such as a web application.
Process 1000 may begin at operation 1010, where the application may detect selection of a range of text. At operation 1020, the application may create a selection handle under the insertion point of the selection. The application may intercept and cancel events associated with browser-generated selection. Next, at operation 1030, the application can replace the native browser handle with the selection handle.
The process 1002 may begin at operation 1040, where the application may receive a series of browser events from a touch and gesture enabled device at an abstraction layer between the application and the browser at operation 1040. At operation 1050, the application may normalize the received browser event into a consistent semantic event stream compatible with multiple devices and browsers.
The process 1004 may begin at operation 1060, where the application may detect an input at operation 1060. At operation 1070, the application may initiate a user interface that is optimized based on the type of input detected. Next, at operation 1080, the application may modify a behavior of a portion of the user interface based on the type of input detected.
Certain embodiments may be implemented in a computing device that includes a communications module, a memory, and a processor, where the processor performs a method as described above or a similar method in conjunction with instructions stored in the memory. Other embodiments may be implemented as a computer-readable storage medium having stored thereon instructions for performing the method described above or a similar method.
The operations included in processes 1000, 1002, and 1004 are for illustration purposes only. Providing custom selections for editing content on a gesture or touch screen, schemes for semantic interpretation of browser gestures or touch events, and user interfaces in browsers optimized for gestures or touches may be implemented by similar processes with fewer or additional steps, and different orders of operations using the principles described herein.
The above specification, examples and data provide a complete description of the manufacture and use of the composition of the embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims and embodiments.
Claims (10)
1. A method executed on a computing device for providing a custom selection for editing content on a touch screen, the method comprising:
detecting selection of a content range;
determining an end position of the selection;
creating a selection handle to replace a native browser handle; and
presenting the selection handle at the end position.
2. The method of claim 1, further comprising:
creating two additional selection handles at end positions of the selection to provide a user interface for managing the selection.
3. The method of claim 1, further comprising:
detecting a type of the input; and
providing a selection handle to recreate a selection behavior based on the type of the input.
4. A method executed on a computing device for providing a scheme for semantic interpretation of browser touch events, the method comprising:
receiving a series of browser events from a touch- and gesture-enabled device at an abstraction layer between a web application and a browser; and
transforming the series of browser events into a stream of semantic events that models the behavior of the series of browser events.
5. The method of claim 4, further comprising:
receiving a browser event from the series at a registered event handler within an event handler manager; and
communicating, by the registered event handler, the browser event to at least one associated input manager.
6. The method of claim 5, further comprising:
processing the browser event at the corresponding input manager; and
communicating with other input managers associated with the browser event to further process the browser event.
7. The method of claim 6, further comprising:
processing a document object model element of the browser event; and
querying other input managers to further process the browser event.
8. A method executed on a computing device for providing a touch-optimized user interface in a browser, the method comprising:
detecting a type of the input as one of a gesture input, a touch input, and a mouse input; and
initiating a user interface optimized for the type of the detected input.
9. The method of claim 8, further comprising:
in response to failing to detect the type of input as one of a gesture input, a touch input, and a mouse input, providing an option to select between the gesture, touch, and mouse input for the user interface.
10. The method of claim 8, further comprising:
utilizing an original user interface if the type of the detected input is the mouse input; and
utilizing an augmented user interface control if the type of the detected input is one of the touch input and the gesture input.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US61/653,530 | 2012-05-31 | | |
| US13/671,832 | 2012-11-08 | | |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| HK1189680A (en) | 2014-06-13 |
| HK1189680B (en) | 2018-04-20 |