WO2024182322A1 - Feature discovery layer - Google Patents
Feature discovery layer
- Publication number
- WO2024182322A1 (PCT/US2024/017366)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- entity
- image
- cursor
- text
- active window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/413—Classification of content, e.g. text, photographs or tables
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44521—Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
- G06F9/44526—Plug-ins; Add-ons
Definitions
- OS operating system
- many applications do not take advantage of the full suite of features made available by the OS. For example, an application may have been developed before an OS-provided feature was available. In other circumstances, the application developer may not have had the time, resources, or interest to leverage a particular OS-provided feature or third-party provided feature. OS and/or third-party provided features may also go unused because users do not know how to invoke them, or because the user is unaware of the feature. Increasingly, these OS and/or third-party provided features leverage machine learning and/or other artificial intelligence techniques. Among other disadvantages, failing to take advantage of OS and/or third-party provided features limits user productivity and causes computing resources to be used inefficiently.
- an operating system (OS) discovery mode that identifies and provides access to OS and/or third-party provided features within an applicable region of a desktop.
- OS operating system
- the content displayed by the applicable region is analyzed to identify content usable by OS and/or third-party provided features.
- Visual cues are rendered in the applicable region near the identified content, highlighting the availability of the OS and/or third-party provided features. Users may interact with the visual cues to manipulate the underlying content or to invoke the OS and/or third-party provided features.
- OS and/or third-party provided features may modify content displayed by an application, launch an inline micro-experience, crop or export images, etc.
- the discovery mode visual cues are highlighted as a discovery mouse cursor moves around the desktop.
- the discovery mode is triggered automatically, causing an OS service to automatically identify and display a set of visual cues across the applicable region.
- FIG. 1 A illustrates an active window of an application running on a desktop.
- FIG. 1B illustrates activating a discovery mode.
- FIG. 1C illustrates visual cues overlaid on entities in the active window.
- FIG. 1D illustrates highlighting an image entity visual cue in response to a discovery cursor hover-over.
- FIG. 1E illustrates highlighting a text visual cue nested within an image entity.
- FIG. 1F illustrates an entity action list displayed in response to selecting the nested text visual cue.
- FIG. 1G illustrates de-activation of the discovery mode.
- FIG. 2A illustrates highlighting a paragraph entity within a text entity.
- FIG. 2B illustrates an entity action list displayed in response to selecting the highlighted paragraph entity.
- FIG. 2C illustrates selecting an action from the entity action list.
- FIG. 2D illustrates the selected paragraph after the selected action has modified it.
- FIG. 3A illustrates a subject entity highlighted in response to the cursor hovering over the subject entity.
- FIG. 3B illustrates a drag operation that extracts the subject entity from the image.
- FIG. 4A illustrates an image entity that is highlighted in response to hovering over the image.
- FIG. 4B illustrates an entity action list displayed in response to receiving a selection of the image entity (outside of the subject entity).
- FIG. 4C illustrates the image after the selected action has removed the background.
- FIG. 5A illustrates an entity action list displayed in response to receiving a selection of an address entity.
- FIG. 5B illustrates an inline micro-experience displayed in response to receiving a selection of the address entity action.
- FIGS. 6A-6C illustrate highlighting particular visual cues or particular portions of visual cues that are close to the discovery cursor.
- FIG. 7 illustrates automatically triggering the discovery mode in which an address entity visual cue is overlaid on top of the active window without explicit user activation of a discovery mode.
- FIG. 8 is a flow diagram of an example method for a feature discovery layer.
- FIG. 9 is a computer architecture diagram illustrating an illustrative computer hardware and software architecture for a computing system capable of implementing aspects of the techniques and technologies presented herein.
- FIG. 10 is a diagram illustrating a distributed computing environment capable of implementing aspects of the techniques and technologies presented herein.
- FIG. 1A illustrates active window 110 of application 116.
- Application 116 is running within desktop 100, which is a user interface of operating system 106.
- Active window 110 partially occludes inactive window 112, which is also running within desktop 100.
- Inactive window 112 may be part of application 116 or a different application.
- Desktop 100 includes menu 101 which contains, among other items, icons of active and/or pinned applications, a search bar, and activation button 102. In the MICROSOFT WINDOWS operating system, menu 101 may also be a taskbar, a title bar, or a ribbon menu.
- Activation button 102 is a graphical button found on menu 101. In some operating systems, such as MICROSOFT WINDOWS, selecting activation button 102 displays a menu that provides access to various applications, files, and settings on a computer. Often, keyboards have a dedicated key or combination of keys that, when pressed, triggers activation button 102. Activation button 102 may also be selected by moving cursor 120 over activation button 102 and selecting it, using voice activation, or by any other technique commonly used to select a button.
- Cursor 120 represents the current location of a mouse pointer on desktop 100. As illustrated, cursor 120 may be represented by a default operating system pointer icon. Cursor 120 is used to select, drag, and drop objects on the screen. The shape and appearance of cursor 120 can change depending on the type of operation being performed. For example, moving the cursor over a link in a web browser may cause the cursor to change to a hand icon. The shape and appearance of the cursor can also change depending on whether OS 106 has entered a discovery mode 118, as discussed below, such as in conjunction with FIG. 1B.
- an active window is an application window with a graphical user interface (GUI) that is currently displayed in the foreground and has focus, meaning it is the window that receives keyboard, mouse, microphone, and/or other types of input.
- GUI graphical user interface
- mouse input is supplied to active window 110 when cursor 120 hovers over or clicks on active window 110.
- the active window is often indicated by a highlighted and/or visually distinct title bar.
- Inactive window 112 is a window that is currently not in the foreground, but that is still open and accessible to the user. Inactive window 112 does not have the OS focus, meaning it does not receive keyboard or mouse input.
- Application 116 is a program designed to perform a specific task or set of tasks for the user. Applications are designed to run on various operating systems, such as WINDOWS, MACOS, and LINUX. Applications can be as simple as a calculator app, or as complex as a video game. Some common examples of computer applications include word processors, web browsers, email clients, and media players.
- active window 110 displays a block of text surrounded by two images.
- This content is illustrative and not limiting, and any other type of content is similarly contemplated, including spreadsheets, video games, web pages, maps, computer aided design figures, videos, and the like.
- Application 116 may draw content in a variety of ways using a variety of technologies such as 2D graphics, 3D graphics, vector graphics, raster graphics, animation, composition, ray tracing, immediate mode, retained mode, or the like.
- Operating system 106 includes content analysis engine 107. While described in more detail below in conjunction with FIG. 1B, content analysis engine 107 analyzes the graphics output of and/or underlying data associated with active window 110 to identify content usable by OS and/or third-party provided features. In this way, OS and/or third-party provided features can be made available to a user even if application 116 was not designed to utilize them, and without participation by application 116.
- an applicable region 115 of the desktop 100 refers to one or more regions analyzed by content analysis engine 107.
- Applicable region 115 may include a portion of a window, such as an application-drawn portion of active window 110 or a portion of inactive window 112 that is visible.
- Applicable region 115 may also include portions of a window that are not visible.
- applicable region 115 may include a single active window such as active window 110, two or more windows associated with the same application 116, or a set of windows associated with two or more applications.
- applicable region 115 may include windows that were active within a defined period of time, such as windows that had been active in the past five minutes. In some configurations, applicable region 115 may include windows from applications that were most recently used, e.g., windows from the three most recently active applications. In some configurations, applicable region 115 may include windows from applications selected based on frequency of use. For example, if a user frequently uses an email application, applicable region 115 may be defined to include windows associated with the email application, as OS and/or third-party provided features may have a greater opportunity to provide benefit to the user. In other embodiments, infrequently used applications may be selected for inclusion in applicable region 115 so as to highlight features the user may not be aware of.
- Applicable region 115 may also be selected to include applications based on how frequently OS and/or third-party provided features have been invoked from different applications. Applicable region 115 may also include a portion of the desktop. Applicable region 115 may be determined based on a combination of these criteria, among other criteria.
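The selection criteria above (recency, most-recently-used applications, frequency of use) can be sketched as a filter over window metadata. A minimal Python illustration; `WindowInfo` and its fields are hypothetical stand-ins for real OS window records, and the thresholds mirror the examples in the text:

```python
from dataclasses import dataclass
import time

@dataclass
class WindowInfo:
    app: str
    last_active: float            # epoch seconds of last activity
    use_count: int = 0            # how often the application is used
    feature_invocations: int = 0  # how often discovery features ran here

def applicable_region(windows, now=None, recent_window_s=300, max_apps=3):
    """Pick windows for analysis: active within the last `recent_window_s`
    seconds, then keep those belonging to the `max_apps` most recently
    active applications."""
    now = now or time.time()
    recent = [w for w in windows if now - w.last_active <= recent_window_s]
    # Rank applications by their most recent activity; keep the top few.
    apps = sorted({w.app for w in recent},
                  key=lambda a: max(w.last_active for w in recent if w.app == a),
                  reverse=True)[:max_apps]
    return [w for w in recent if w.app in apps]
```

Other criteria from the text (frequency of use, prior feature invocations) would slot in as additional scoring terms on the same records.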
- FIG. IB illustrates activating discovery mode 118.
- discovery mode 118 is activated in response to a user holding down an activation key 113 on keyboard 111. Additionally, or alternatively, discovery mode 118 may be activated by mouse click, mouse press and hold, touch gesture such as a swipe, voice activation, or the like. Activation key 113 may perform other functions when pressed and released in a conventional manner (i.e., not an extended hold), such as displaying a menu for launching applications. In some configurations, when discovery mode 118 is triggered by holding activation key 113, a mouse press and hold, or a touch press, discovery mode 118 remains active until the user releases the trigger.
- Discovery mode 118 may also be entered by selecting an “enter discovery mode” action from a menu. For example, a user may move cursor 120 over activation button 102, click the right mouse button to activate a context menu, and select “enter discovery mode” from the context menu. A context menu may similarly be accessed by performing a right-click or equivalent action on a window or over the desktop itself. When entered via a context menu, discovery mode 118 may continue without the user having to hold down a key. The user may exit discovery mode 118 by selecting a “leave discovery mode” action from the context menu or otherwise dismissing discovery mode 118. In some configurations, once discovery mode 118 has been entered, activation indication 104 appears proximate to activation button 102.
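The two activation styles described above, hold-to-activate and sticky menu entry, can be captured by a small state machine. The class and method names below are illustrative, not an actual OS API:

```python
class DiscoveryModeState:
    """Tracks whether discovery mode 118 is active and whether it is
    'sticky' (entered via a menu, survives trigger release) or held
    (active only while the trigger is pressed)."""

    def __init__(self):
        self.active = False
        self._sticky = False

    def trigger_hold(self):
        # Holding activation key 113 (or a mouse/touch press-and-hold).
        self.active, self._sticky = True, False

    def trigger_release(self):
        # Releasing the hold exits the mode, unless it was made sticky.
        if not self._sticky:
            self.active = False

    def menu_enter(self):
        # "enter discovery mode" from a context menu: stays active.
        self.active, self._sticky = True, True

    def menu_exit(self):
        # "leave discovery mode" or any other explicit dismissal.
        self.active = self._sticky = False
```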
- activation indication 104 highlights activation button 102.
- Activation indication 104 may call attention to activation button 102 by shading, bolding, increasing prominence, or the like.
- Activation indication 104 is one indication that discovery mode 118 has been entered. Other indications include a different mouse cursor icon, analysis indicator 101, tint 114, and the like.
- applicable region 115 coincides with active window 110 of application 116, but as discussed above applicable region 115 could include only a portion of active window 110, additional windows of application 116, and/or windows from other applications, and the like.
- Active window 110 is the applicable region 115 for many of the examples discussed herein, but one of ordinary skill will understand that the same techniques are equally applicable to the other types of applicable regions discussed above.
- content analysis engine 107 analyzes active window 110 to identify content usable by OS and/or third-party provided features.
- Content analysis engine 107 may analyze a snapshot of the graphics displayed in active window 110, or content analysis engine 107 may observe the graphics displayed in active window 110 over time.
- OS and/or third-party provided features may be provided by operating system 106.
- OS and/or third-party provided features may also be provided by other vendors via a plugin mechanism.
- the plugin mechanism allows configuration or customization of how applicable region 115 is determined.
- the plugin mechanism may also allow configuration or customization of which high-level entities are identified, how high-level entities are identified, the identification of entities, which entities are highlighted with visual cues, which visual cues are used for which entities, which OS and/or third-party provided features are made available via the visual cues, and/or the implementation of a particular OS and/or third-party provided feature.
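A plugin mechanism of this kind might expose hooks for entity identification and per-entity actions. The sketch below is illustrative only; the hook names (`identify_entities`, `actions_for`) and the `AddressPlugin` example are assumptions, not an actual API:

```python
class DiscoveryPlugin:
    """Base class a third-party plugin could subclass to customize
    which entities are identified and which actions they expose."""

    def identify_entities(self, text):
        """Return (start, end, kind) spans this plugin recognizes."""
        return []

    def actions_for(self, entity_kind):
        """Return action names this plugin offers for an entity kind."""
        return []

class AddressPlugin(DiscoveryPlugin):
    # Toy recognizer: matches one literal street address.
    def identify_entities(self, text):
        needle = "1234 Main St."
        i = text.find(needle)
        return [(i, i + len(needle), "address")] if i >= 0 else []

    def actions_for(self, entity_kind):
        return ["Show on map"] if entity_kind == "address" else []

def query_plugins(plugins, text):
    """Aggregate entity spans from every registered plugin."""
    entities = []
    for p in plugins:
        entities.extend(p.identify_entities(text))
    return entities
```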
- Content analysis engine 107 may read the graphics displayed by active window 110 from a display buffer 108.
- Display buffer 108 is a portion of computer or GPU memory dedicated to storing image data that will be displayed on a computer screen or monitor.
- the graphics displayed by active window 110 are also referred to herein as content 109 stored in display buffer 108.
- Content 109 may be stored in the form of pixels and is periodically updated by application 116 to produce the graphics displayed by active window 110. Additionally, or alternatively, content 109 may be stored in a vector graphics format that mathematically defines the content. Content 109 may also be stored in the form of instructions for drawing content, such as drawing primitives, a document object model (DOM), or the like.
- DOM document object model
- Reading content 109 directly from display buffer 108 enables portions of content 109 to be identified regardless of the technology used to generate that content. For example, an image displayed by active window 110 may be analyzed as an array of pixels, independent of whatever compression techniques were used to store the image on disk.
- text displayed in active window 110 may be analyzed independent of which library was used to generate the text.
- Raw text may be identified from pixel-based content with optical character recognition (OCR). If content 109 contains a series of drawing commands, then text may be inferred by analyzing the drawing commands that output text to display buffer 108.
- OCR optical character recognition
- Accessing raw display buffer data in this way allows a greater amount of content to be analyzed, increasing the likelihood that content usable by OS and/or third-party provided features will be identified. For example, accessing content 109 from display buffer 108 enables text contained in an image to be recognized and made selectable to the user. Without the ability to analyze content 109 from display buffer 108 this text would remain unselectable.
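Because content 109 may be stored as pixels or as drawing instructions, text extraction can dispatch on the buffer's form, falling back to OCR for raw pixels. A sketch with the OCR step stubbed (a real system would run an OCR engine such as Tesseract over the pixel array); the dictionary layout is a hypothetical stand-in for display buffer 108:

```python
def ocr(pixels):
    # Stand-in for optical character recognition over raw pixels;
    # deliberately unimplemented in this sketch.
    raise NotImplementedError("plug in a real OCR engine here")

def extract_text(content):
    """content: dict with a 'form' key of 'pixels' or 'commands'.
    For 'commands', 'data' is a list of drawing primitives such as
    ('draw_text', str) or ('fill_rect', ...)."""
    if content["form"] == "pixels":
        return ocr(content["data"])
    if content["form"] == "commands":
        # Infer text by scanning the commands that wrote text to the
        # buffer, independent of which library issued them.
        return " ".join(arg for op, arg in content["data"] if op == "draw_text")
    raise ValueError("unknown content form")
```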
- analysis indicator 101 visually indicates that content analysis engine 107 of OS 106 is analyzing active window 110 for content that may be used by OS and/or third-party provided features. As illustrated, analysis indicator 101 is a band that encircles active window 110. In some configurations, analysis indicator 101 may appear as a tint or other overlay over some or all of the applicable region 115. Analysis indicator 101 may alert a user that active window 110 is being analyzed by changing color or shape. For example, analysis indicator 101 may shimmer, color cycle, or otherwise animate and/or indicate that content analysis engine 107 is in the process of identifying content within active window 110.
- tint 114 is an example of shading applied to a portion of desktop 100 that is not currently being analyzed by content analysis engine 107. As illustrated, tint 114 covers inactive window 112 as well as portions of desktop 100 that do not contain any windows. In some configurations, tint 114 may also cover menu 101. Tint 114 reinforces that active window 110 is being analyzed, not inactive window 112 or other portions of desktop 100.
- OS 106 may change one or more visualizations upon entering discovery mode 118.
- discovery cursor 122 may replace the default OS cursor icon 120.
- the default OS cursor 120 may change color.
- discovery cursor 122 shares colors, animations, and other properties with analysis indicator 101.
- a layer refers to a collection of user interface elements that appear between active window 110 and the user. Consistent with appearing over active window 110, user interface elements in the layer may intercept user interface commands that would otherwise go directly to active window 110.
- FIG. 1C illustrates visual cues overlaid near entities that content analysis engine 107 discovered in active window 110.
- an entity refers to a portion of content displayed by active window 110 that can be consumed by an OS and/or third-party provided feature.
- an OS and/or third-party provided feature may show a location on a map.
- This OS and/or third-party provided feature may use an address identified by content analysis engine 107 to show a location of the address on a map.
- content analysis engine 107 first segments the content displayed by active window 110 into images, text blocks, UI elements (such as buttons, scroll bars, and dialog boxes), and other high-level entities. As illustrated, the content displayed by active window 110 has been divided into three high-level entities: image 140, text 150, and image 160. In some configurations, machine learning and/or artificial intelligence techniques are used to identify high level entities.
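As a deliberately simple stand-in for this segmentation step, the sketch below splits a binary pixel grid into horizontal bands separated by blank rows; as the text notes, production systems would use machine learning based segmentation rather than anything this naive:

```python
def segment_rows(grid):
    """grid: list of rows of 0/1 pixels. Return (start, end) row ranges
    of non-blank bands (end exclusive), one per high-level region."""
    bands, start = [], None
    for i, row in enumerate(grid):
        if any(row):
            # Inside content: open a band if one isn't open already.
            start = i if start is None else start
        elif start is not None:
            # Blank row closes the current band.
            bands.append((start, i))
            start = None
    if start is not None:
        bands.append((start, len(grid)))
    return bands
```

Each band would then be classified (image, text block, UI element) before entity analysis proceeds.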
- image 140 is highlighted by image entity visual cue 148.
- Image entity visual cue 148 is depicted as a border around image 140, but other means for highlighting image 140 and other entities discussed herein are similarly contemplated.
- image entity visual cue 148 may cause a shadow to appear under image 140.
- image entity visual cue 148 may be animated.
- Image entity visual cue 148 may also highlight image 140 by causing image 140 to move around the screen, e.g., by shifting back and forth.
- Image entity visual cue 148 may also highlight image 140 by causing surrounding content to be dimmed, i.e., displayed with reduced brightness. While discussed in more detail below in conjunction with FIGS. 6A-6C, image entity visual cue 148 may appear or disappear, change color, or animate in response to increased or decreased distance from discovery cursor 122.
- entities are nested within one another, creating a hierarchy of entities. For example, if image 140 contains text, content analysis engine 107 may identify a text entity 149 within image 140, such as depicted below in conjunction with FIG. 1D.
- visual cues for all entities are displayed upon entering discovery mode 118.
- visual cues for a portion of the entities are displayed upon entering discovery mode 118.
- a visual cue for the top-level entity(ies) may be displayed upon entering discovery mode 118, while visual cues for nested entities are not displayed until a parent entity, or an action associated with the parent entity, is selected. Delaying or otherwise staggering the display of entity visual cues in this way reduces screen clutter, reduces the number of choices that the user is presented with at any given time, decreases and/or defers the consumption of computing resources, and improves human-computer interaction, among other benefits.
- content analysis engine 107 may defer analyzing the content of a parent entity until it is selected. Child entities may then be identified during the analysis of the parent entity. Delaying analysis in this way reduces the time and computing resources it takes to present an initial set of entity visual cues when entering discovery mode. Deferring computation also may increase efficiency, including if processing that otherwise would have occurred is skipped entirely.
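The deferred analysis described above amounts to lazy evaluation with caching: child entities are computed only when a parent is first selected, and never recomputed. A Python sketch; `Entity` and `find_text_in_image` are illustrative names, not the patent's actual structures:

```python
class Entity:
    def __init__(self, kind, region, analyze_children=None):
        self.kind = kind
        self.region = region            # e.g., a bounding box
        self._analyze = analyze_children
        self._children = None           # None means "not analyzed yet"

    @property
    def children(self):
        # Run the (potentially expensive) analysis pass only on first
        # access, then cache the result for later accesses.
        if self._children is None:
            self._children = self._analyze(self) if self._analyze else []
        return self._children

analysis_runs = []

def find_text_in_image(parent):
    analysis_runs.append(parent.kind)   # record that analysis actually ran
    return [Entity("text", (10, 20, 80, 16))]

image = Entity("image", (0, 0, 200, 120), find_text_in_image)
# At this point no analysis has happened; it is deferred until selection.
```

If the parent is never selected, the child-analysis pass is skipped entirely, which is the efficiency gain the text describes.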
- Text 150 has been processed by content analysis engine 107 to identify a number of entities.
- Paragraph entity 152 is one of a number of entities that represent individual paragraphs in a block of text.
- a date entity such as “Tuesday May 3rd at noon” may be identified by analyzing the text of text 150 itself.
- machine learning and/or artificial intelligence techniques are applied to distinguish text that is usable by OS and/or third-party provided features, such as dates, times, locations, and the like.
- Other illustrated examples of text-based entities are the location entity “Willard Park” and the address entity 157 “1234 Main St., Lincoln, WI”.
- Each of the entities mentioned above has been overlaid with corresponding entity visual cues that can be used to identify and/or invoke OS and/or third-party provided features relevant to the associated entity.
- date entity visual cue 154 appears as an underline under the text of the date entity “Tuesday May 3rd at noon”.
- location entity visual cue 156 appears as a line under the “Willard Park” location entity and address entity visual cue 158 appears as a line under the “1234 Main St., Lincoln, WI” entity 157.
- Images 140 and 160 may also be processed by content analysis engine 107 using one or more image recognition and/or processing techniques. For example, within image 160, content analysis engine 107 has identified subjects of a photograph or image. Subjects of a photograph or image may be people, animals, or other prominently displayed objects. What constitutes a subject may be customized by plugins that provide the OS and/or third-party feature.
- FIG. ID illustrates highlighting image entity visual cue 148 in response to a hover-over by discovery cursor 122.
- image entity visual cue 148 has been shaded.
- image entity visual cue 148 may be highlighted by changing color or thickness of the border appearing around image 140.
- content analysis engine 107 may analyze image 140 for nested entities in response to discovery cursor 122 hovering over image entity visual cue 148. Nesting entities in this way provides a number of benefits over identifying and/or displaying all entity visual cues at once. First, identifying a top-level entity such as image 140 without having to further identify child entities decreases latency and reduces computing costs. Furthermore, displaying numerous entity visual cues (e.g., from all levels in an entity hierarchy), such as text contained within an image, may clutter the screen, degrading the user experience. As illustrated, content analysis engine 107 has identified the text “Park 1.5km” entity within a sign identified in image 140. Accordingly, content analysis engine 107 has overlaid text-in-image entity visual cue 142 (hereinafter “visual cue 142”) over the identified text. Visual cue 142 appears as a line under the text “Park 1.5km”.
- visual cue 142 text-in-image entity visual cue 142
- FIG. IE illustrates highlighting text nested within image 140.
- Discovery cursor 122 has been moved over visual cue 142.
- discovery cursor 122 may become “I” cursor 124, indicating that the user may select and/or otherwise interact with the highlighted text.
- Moving discovery cursor 122 over visual cue 142 also caused visual cue 142 to highlight the underlying text 146: “Park 1.5km.”
- FIG. IF illustrates an entity action list 180 displayed in response to receiving a selection of the text highlighted by text-in-image visual cue 142.
- Visual cue 142 may be selected by clicking a mouse or touchpad button while “I” cursor 124 is over visual cue 142.
- Other input methods e.g., a touch press or swipe, voice command, eye gaze, etc., are similarly contemplated.
- a shading style associated with the text-in-image entity visual cue 142 may be changed from highlighted text 146 to selected text 148. This change to the shading style conveys to the user that the selection of text 148 was successful and that entity action list 180 is associated with the selected text 148.
- an entity action list displays a list of actions that may be invoked for an underlying entity.
- a visual cue indicates which visual cue the entity action list is associated with, such as changing the shading of the corresponding entity.
- Entity action list 180 contains two entity actions selected by content analysis engine 107: search web 182 and convert to miles 184. Certain entity actions may be included in entity action list 180 because they are generally applicable to any displayed content and/or a particular type of content. For example, content analysis engine 107 may select the search web 182 entity action because this action is made available any time text is selected. Other entity actions may be included in entity action list 180 because they are relevant to the particular content selected and/or attributes of the user and/or the computing device. For example, content analysis engine 107 may select the convert to miles 184 action based on having identified a measure of distance (“1.5km”) in the text and/or a geolocation of the computing device.
- Content analysis engine 107 may use artificial intelligence techniques such as machine learning and natural language processing to identify distances such as “1.5km.” Additionally, or alternatively, content analysis engine 107 may use regular expressions, lexers, parsers, finite state machines, or other known techniques to identify text within a text entity that can be utilized by an OS and/or third-party provided feature. Similar techniques may be used to identify dates, times, email addresses, web addresses, and the like. The same techniques may also be used to identify named entities such as locations, people, businesses, countries, and the like.
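As a concrete illustration of the regular-expression approach, the sketch below recognizes distances, dates, and email addresses in text, then builds an entity action list combining the generic "search web" action with a content-specific unit conversion. The patterns are simplified examples, not the patent's actual rules:

```python
import re

# Illustrative patterns for text usable by OS/third-party features; a
# production system would combine these with ML/NLP techniques.
PATTERNS = {
    "distance": re.compile(r"\b\d+(?:\.\d+)?\s*km\b"),
    "date": re.compile(
        r"\b(?:Mon|Tues|Wednes|Thurs|Fri|Satur|Sun)day"
        r"\s+\w+\s+\d{1,2}(?:st|nd|rd|th)?\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def identify_entities(text):
    """Return (kind, matched_text) pairs for every recognized entity."""
    found = []
    for kind, pat in PATTERNS.items():
        found += [(kind, m.group()) for m in pat.finditer(text)]
    return found

def actions_for(kind, matched):
    """Build an entity action list: 'search web' applies to any text;
    kind-specific actions (here, km-to-miles) are added on top."""
    actions = ["search web"]
    if kind == "distance":
        km = float(re.search(r"\d+(?:\.\d+)?", matched).group())
        actions.append(f"convert to miles ({km * 0.621371:.2f} mi)")
    return actions
```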
- Content analysis engine 107 may provide a default set of entities to identify, visual cues to display, and entity actions to include in entity action list 180. Content analysis engine 107 may also invoke plugins to perform some or all of these operations. For example, a plugin may define the entity actions to be displayed in entity action list 180. Content analysis engine 107 may query multiple plugins for entity actions to add to entity action list 180.
- FIG. 1G illustrates de-activation of discovery mode 118.
- the manner of exiting discovery mode 118 may depend on how discovery mode 118 was entered. For example, if discovery mode was entered by holding down activation key 113, then releasing activation key 113 may cause deactivation of discovery mode. Discovery mode 118 may also be exited by pressing a designated key, such as the “escape” key, by selecting “exit discovery mode” from a context menu, or in any other suitable manner.
- Deactivation indication 105 visually indicates that discovery mode 118 has exited. For example, deactivation indication 105 may revert activation button 102 to a non-invoked state.
- FIG. 2A illustrates highlighting a paragraph entity 152 within text 150.
- Content analysis engine 107 may use machine-learning based or traditional image segmentation techniques to distinguish text, images, UI controls, and other sub-sections of an application-generated image obtained from display buffer 108.
- Content analysis engine 107 may also distinguish and identify content using screen description frameworks, such as accessibility frameworks, in which applications provide metadata about the content 109 of buffer 108.
- screen description frameworks such as accessibility frameworks, in which applications provide metadata about the content 109 of buffer 108.
- content analysis engine 107 may use optical character recognition to extract text.
- text entity When the text entity is obtained from a screen description framework, text may be obtained directly from the framework. Either way, OS 106 is able to identify and utilize content generated by application 116 that would otherwise be opaque and inaccessible.
- Discovery cursor 122 has been moved over paragraph entity 152, becoming “I” cursor 224. Moving cursor 122 over paragraph entity 152 also causes hovered paragraph highlighting 252 to be displayed over paragraph entity 152. Hovered paragraph highlighting 252 indicates to a user that content generated by active window 110 has been identified as text, and that the identified text may be operated on by one or more OS and/or third-party provided features.
- Paragraph entity 152 is an example of an entity that is not highlighted by a visual cue until discovery cursor 122 has been moved over text 150 and/or paragraph entity 152. However, in other embodiments, a visual cue may be displayed proximate to paragraph entity 152 when entering discovery mode 118. For example, a vertical bar visual cue may be displayed in the margin next to paragraph entity 152.
- Changing discovery cursor 122 into “I” cursor 224 indicates that the user may insert a caret into paragraph entity 152, select text within paragraph entity 152, and perform other operations that would typically require knowledge of what text had been drawn to window 110.
- a caret refers to a vertical bar inserted between characters of text that indicates where additions, edits, or selections of text will occur. These text-manipulation operations are enabled by content analysis engine 107 identifying what text has been drawn where in active window 110.
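Placing a caret between recognized characters requires mapping the cursor's pixel position to an index between characters. A minimal hit-testing sketch, assuming per-character advance widths are available from content analysis (all names here are hypothetical):

```python
def caret_index(char_widths, line_x, cursor_x):
    """Return the caret position (an index between characters) closest
    to cursor_x, given each character's pixel width and the x origin of
    the line of recognized text."""
    x = line_x
    for i, w in enumerate(char_widths):
        if cursor_x < x + w / 2:      # nearer the left half of this char
            return i
        x += w
    return len(char_widths)           # past the last character
```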
- the shape and style of discovery cursor 122 and “I” cursors 124 and 224 are not limiting - other cursor icons and styles are similarly contemplated.
- FIG. 2B illustrates paragraph entity action list 280 displayed in response to selecting the highlighted paragraph entity 152.
- a user may have selected highlighted paragraph entity 152 by clicking a right mouse button while “I” cursor 224 was over paragraph entity 152.
- the highlighting of paragraph entity 152 changes to selected paragraph 254, giving a visual indication of the entity that paragraph entity actions list 280 is associated with.
- Paragraph entity actions list 280 includes two entity actions: summarize 282 and translate 284. Content analysis engine 107 may have selected these entity actions based on an analysis of the text of paragraph entity 152. These entity actions may also be based on the location of “I” cursor 224 within paragraph entity 152 when the selection was made.
- the selected paragraph 254 may be determined to include two entities, as described above: (1) paragraph entity 152, and (2) date entity 154.
- Different entity actions may be surfaced based on, for example, which entity the user is predicted to be interested in. For example, based on the cursor 224 position (e.g., over the paragraph entity 152, in particular Kimball’s name, but not over the date entity 154) and/or other factors, text-based paragraph entity actions 280 may be selected, such as those illustrated in Fig. 2B. If the cursor 224 were alternatively positioned over the date entity 154 or the user were otherwise predicted to be more interested in this entity, other paragraph entity actions may be surfaced, such as a create calendar event action.
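Surfacing actions for whichever entity lies under the cursor can be sketched as follows. The action registry and the rectangle representation are illustrative assumptions; a nested entity (e.g., a date inside a paragraph) wins because it is the smaller hit region:

```python
ACTIONS = {  # hypothetical registry of per-entity-type actions
    "paragraph": ["summarize", "translate"],
    "date": ["create calendar event"],
}

def actions_for_cursor(entities, cursor):
    """Surface actions for the smallest (most specific) entity whose
    bounding box contains the cursor; return [] if nothing is hit."""
    cx, cy = cursor
    hits = [e for e in entities
            if e["x"] <= cx < e["x"] + e["w"]
            and e["y"] <= cy < e["y"] + e["h"]]
    if not hits:
        return []
    best = min(hits, key=lambda e: e["w"] * e["h"])  # innermost entity
    return ACTIONS.get(best["type"], [])
```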
- FIG. 2C illustrates receiving a selection of an entity action 284 from the paragraph entity actions list 280.
- the “translate” entity action 284 is selected by discovery cursor 222.
- discovery cursor 222 replaces “I” cursor 224 when the cursor moves from paragraph entity 152 to paragraph entity actions list 280.
- FIG. 2D illustrates the selected paragraph 152 after the selected action 284 has modified the paragraph.
- the text of paragraph entity 152 has been translated into a different language, pursuant to selection of the “translate” entity action.
- the translated text 262 has replaced the original text of the selected paragraph 152 in active window 110.
- the translated text replaces the original text by writing the translated text to the display buffer 108.
- content analysis engine 107 places the translated text into a surface that occludes paragraph entity 152, effectively replacing it from the perspective of a user.
- translated text 262 is highlighted as action completed paragraph 256, giving a visual indication that the selected entity action was performed.
- the resulting effect is semi-permanent, and will outlast discovery mode 118.
- an effect such as translating text will be reverted when discovery mode 118 exits.
- FIG. 3A illustrates a subject entity highlighted in response to discovery cursor hovering over the subject entity.
- content analysis engine 107 has identified subject entities within image 160.
- Subject entities are portions of a photo or image that appear in the foreground, in focus, and are typically people, animals, objects for sale, or other significant features of a photo or image.
- Machine learning and/or artificial intelligence techniques may be used to identify subjects within a photo or image.
- Machine learning and/or artificial intelligence techniques may further be used to determine an outline of the identified subjects.
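One simple way to derive an outline from a segmentation result is to collect the subject pixels that border the background. A toy sketch over a binary mask; a real content analysis engine would use an ML segmentation model and smoothed contours, so this is an assumption for illustration:

```python
def mask_outline(mask):
    """Given a binary subject mask (rows of 0/1), return the set of
    (row, col) cells on the subject's boundary: subject cells with at
    least one 4-connected background (or out-of-image) neighbour."""
    h, w = len(mask), len(mask[0])
    outline = set()
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue  # background cell, not part of the subject
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or not mask[nr][nc]:
                    outline.add((r, c))
                    break
    return outline
```

The resulting boundary cells could drive a contour-following visual cue of the kind described below.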
- content analysis engine 107 may display an extract subject entity visual cue 162 within or proximate to the subject entity, as illustrated in FIG. 1C above. As illustrated in FIG. 1C, this visual cue may be relatively small compared to the subject entity itself.
- a visual cue may be a geometric shape, an image, or the like located within the boundaries of the subject entity or proximate to the subject entity, e.g., overlapping the subject entity in part. Other visual cues may follow a contour of the subject entity.
- extract subject entity 362 is highlighted in response to discovery cursor 322 moving over or hovering over extract subject entity visual cue 162.
- extract subject entity 362 may be highlighted in response to discovery cursor 322 moving over any portion of the subject entity.
- extract subject entity visual cue 162 may be highlighted to indicate that it may be selected to perform an OS and/or third-party provided feature.
- FIG. 3B illustrates receiving a drag operation that extracts the subject entity from the image.
- an OS and/or third-party provided feature - extracting the subject entity - is activated without a context menu.
- a copy of subject entity 364 may be dragged to another application where releasing the drag may initiate a copy operation.
- OS and/or third-party provided features may be initiated in a similar manner.
- copy and cut keyboard shortcuts may be used to copy subject entity 162 to the OS clipboard or copy it to the clipboard while removing it from image 160, respectively.
- an “I” cursor 224, such as that depicted in FIG. 2A, may be used to select some or all of the text, and keyboard shortcuts may similarly be used to extract, remove, or replace the selected text.
- FIG. 4A illustrates an image 160 that is highlighted in response to the discovery cursor 422 hovering over a background of the image.
- Content analysis engine 107 may distinguish a background of a photo or image from a subject of the photo or image as discussed above in conjunction with FIG. 3A.
- Image 160 highlights the movement of discovery cursor 422 over image 160 by overlaying a shadow over image 160.
- this shadow may be removed if discovery cursor 422 moves over extract subject entity visual cue 162 or subject 362, in which case subject 362 may be highlighted instead.
- the shadow may also be removed if discovery cursor 422 moves off of image 160.
- FIG. 4B illustrates an entity action list 480 displayed in response to receiving a selection of the image 160 (outside of the subject entity 362).
- Entity action list 480 may be selected in ways similar to the entity action lists discussed above in conjunction with FIGS. 1F and 2B.
- image entity actions list 480 has a single entity action - remove background entity action 462.
- FIG. 4C illustrates the image 160 after the remove background action 462 has removed the background.
- discovery mode 118 has exited as a result of completing the remove background action 462.
- discovery mode 118 may continue so long as activation key 113 remains depressed and the user has not otherwise exited discovery mode 118.
- One result of applying the remove background action 462 is that discovery cursor 422 has been replaced with the OS or application default cursor 120. Furthermore, the background of image 160 has been removed. Similar to the translate entity action 284 introduced in FIG. 2C and applied in FIG. 2D, the background of image 160 may be removed by writing to the display buffer 108 directly, or by overlaying a copy of image 160 that has had the background removed. Other techniques for applying these changes are similarly contemplated.
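The overlay approach to background removal can be sketched as a per-pixel composite that makes background pixels fully transparent before the result is drawn over the original image. The pixel and mask representations here are illustrative assumptions:

```python
def remove_background(rgba_pixels, subject_mask):
    """Return a copy of the image with background pixels made fully
    transparent, suitable for drawing over the original image in the
    display buffer. rgba_pixels: rows of (r, g, b, a) tuples;
    subject_mask: rows of 0/1 flags, 1 where the subject was detected."""
    return [
        [(r, g, b, a if keep else 0)   # zero alpha outside the subject
         for (r, g, b, a), keep in zip(row, mask_row)]
        for row, mask_row in zip(rgba_pixels, subject_mask)
    ]
```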
- FIG. 5 A illustrates an entity action list 580 displayed in response to selecting an address entity visual cue 558.
- Content analysis engine 107 has identified that the text “1234 Main St, Lincoln WI” is an address, and generated address entity visual cue 558 accordingly.
- address entity visual cue 558 has been selected in a manner similar to the selection of text-in-image entity visual cue 142 of FIG. 1D. Specifically, “I” cursor 524 appears when the cursor moves over address entity visual cue 558. Once selected, e.g., with a mouse button click, highlighting of the address entity visual cue 558 changes, indicating that a selection was made.
- Entity action list 580 is also displayed in response to the selection of address entity visual cue 558. Entity action list 580 contains a single entity action: directions 582.
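Recognizing that a run of extracted text such as “1234 Main St, Lincoln WI” is an address is an entity-recognition problem. A toy heuristic using a single regular expression is sketched below; a production content analysis engine would likely use a trained recognizer rather than this pattern, which is an assumption for illustration:

```python
import re

# Toy US-style street-address heuristic: house number, capitalized
# street words, a street suffix, and an optional ", City ST" tail.
ADDRESS_RE = re.compile(
    r"\b\d{1,5}\s+(?:[A-Z][a-z]+\s+)+(?:St|Ave|Rd|Blvd|Ln|Dr)\b"
    r"(?:,\s*[A-Z][a-z]+(?:\s+[A-Z]{2})?)?")

def find_address(text):
    """Return the first address-like span in text, or None."""
    m = ADDRESS_RE.search(text)
    return m.group(0) if m else None
```

A matched span would then receive an address entity visual cue and an action such as directions.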
- FIG. 5B illustrates an inline micro-experience 502 displayed in response to selecting the address entity action 582.
- a micro-experience surfaces an application, widget, or other component that enables functionality to be provided that overlays, is proximate to, or otherwise associated with the active window.
- a micro-experience enables a feature to be provided as part of the current user experience while maintaining focus on the current application.
- Map micro-experience 502 may be a pop-up, dialog, or other UI control that presents OS and/or third-party functionality based on the content of address entity visual cue 558.
- map micro-experience 502 presents a street map of the address associated with address entity visual cue 558.
- Map micro-experience 502 is one example of a micro-experience.
- An email address entity may support a micro-experience that brings up a list of recent communications with the underlying email address.
- An online encyclopedia micro-experience may display a definition of a selected word.
- micro-experiences may be interacted with similar to a web page. At the same time, micro-experiences are not limited to displaying HTML, and may in fact utilize features of OS 106 or other applications.
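The idea that OS components and third parties can each supply micro-experiences keyed by entity type can be sketched as a simple registry; every name in this sketch is hypothetical:

```python
# Hypothetical registry mapping entity types to micro-experience
# factories; OS components or third-party plugins would register here.
MICRO_EXPERIENCES = {}

def register(entity_type):
    def wrap(factory):
        MICRO_EXPERIENCES[entity_type] = factory
        return factory
    return wrap

@register("address")
def map_experience(entity_text):
    # Stand-in for surfacing a street map of the address.
    return f"map: {entity_text}"

@register("email")
def recent_mail_experience(entity_text):
    # Stand-in for listing recent communications with the address.
    return f"recent messages with {entity_text}"

def launch(entity_type, entity_text):
    """Launch the registered micro-experience, if any, for an entity."""
    factory = MICRO_EXPERIENCES.get(entity_type)
    return factory(entity_text) if factory else None
```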
- FIGS. 6A-6C illustrate highlighting particular visual cues that are close to the discovery cursor.
- particular entity visual cues may be selectively displayed based on proximity to discovery cursor 622. This technique may be employed to reduce the number of entity visual cues that are overlaid on top of active window 110, similar to the reasons for displaying nested entity visual cues. Specifically, reducing the number of visual cues reduces screen clutter and processing time.
- discovery cursor 622 is closest to entity visual cue 660 associated with image 160 and date entity visual cue 154, and therefore these entity visual cues are highlighted for display. Discovery cursor 622 is also proximate to paragraph entity 152, but in this configuration paragraph entity 152 is not highlighted with a visual cue until discovery cursor 622 moves over it.
- an entity visual cue associated with a particular entity is not displayed in full.
- entity visual cue 660 could encircle image 160 similar to how image entity visual cue 148 encircles image 140 in FIG. 1C.
- when entity visual cue 660 is selected for display in response to the location of discovery cursor 622, only the portion closest to discovery cursor 622 is displayed. The portion selected for display may be updated as the location of discovery cursor 622 changes.
- How close an entity visual cue has to be to the discovery cursor 622 in order to be highlighted is configurable, such as by an administrator or end user.
- one or more filters may be applied to determine which entity visual cues to highlight, and at what distance from discovery cursor 622. These filters may be built into content analysis engine 107 or provided by a plugin. Such filters may also be configurable.
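Proximity-based cue highlighting reduces to a distance test between the cursor and the nearest point of each cue's bounding box, with a configurable threshold. A minimal sketch; the function name and rectangle representation are assumptions:

```python
def cues_to_highlight(cues, cursor, max_dist):
    """Return the ids of entity visual cues whose bounding box lies
    within max_dist pixels of the discovery cursor."""
    cx, cy = cursor
    keep = []
    for cue in cues:
        # Clamp the cursor into the cue's box to find its nearest point.
        nx = min(max(cx, cue["x"]), cue["x"] + cue["w"])
        ny = min(max(cy, cue["y"]), cue["y"] + cue["h"])
        if (cx - nx) ** 2 + (cy - ny) ** 2 <= max_dist ** 2:
            keep.append(cue["id"])
    return keep
```

The `max_dist` parameter corresponds to the configurable distance described above, and a plugin filter could wrap this function to apply further criteria.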
- FIG. 6B illustrates the discovery cursor 622 having moved, and the portion of entity visual cue 660 that is highlighted moving with it. At the same time, date entity visual cue 154 is no longer highlighted, while address entity visual cue 158 is.
- FIG. 6C illustrates discovery cursor having moved again, this time in proximity to image entity visual cue 148.
- the highlight of address entity visual cue 158 remains as it is still proximate to discovery cursor 622.
- FIG. 7 illustrates an ambient mode in which discovery mode 118 is triggered automatically. Specifically, ambient address entity visual cue 758 is overlaid on top of the active window 110 without explicit user activation of discovery mode 118. Triggering discovery mode 118 automatically enables the availability of OS and/or third-party provided features to be surfaced without a user having to manually enter discovery mode 118. In some configurations, discovery mode 118 is automatically triggered when the applicable region 115 changes, when a window scrolls, when a new application is launched, or when previously unanalyzed content comes into view. A user may still manually enter discovery mode 118 while in the automatically-triggered discovery mode, at which point additional visual cues may be surfaced.
- the type, location, and/or number of entity visual cues may be limited. For example, address entities may be allowed, but not subject entities. As another example, entity visual cues may only be allowed within a prescribed distance of the discovery cursor 622 when discovery mode 118 has been entered automatically.
- the type, location, and number of entity visual cues allowed may be customized based on individual usage history. For example, suppose a user had previously activated the address entity visual cue 558 depicted in FIG. 5A. From this usage history, the same user may be more likely to be presented with address entity visual cues when discovery mode 118 has been automatically triggered. This history is reflected in FIG. 7, which only displays ambient address entity visual cue 758 in contrast with the more numerous visual cues displayed in FIG. 1C.
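Usage-history filtering of ambient cues can be sketched as counting past activations per cue type; the data shapes here are illustrative assumptions:

```python
from collections import Counter

def ambient_cues(candidate_cues, activation_history, min_activations=1):
    """In automatically triggered (ambient) discovery mode, show only
    cue types the user has previously activated at least
    min_activations times."""
    counts = Counter(activation_history)  # e.g. ["address", "address"]
    return [c for c in candidate_cues
            if counts[c["type"]] >= min_activations]
```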
- routine 800 begins at operation 802, where an activation 104 of a discovery mode 118 is received.
- the activation 104 may be in the form of pressing and holding an activation key 113 associated with an activation button 102 displayed by OS 106.
- display content extracted from a display buffer 108 of an active window 110 is segmented into high level entities. For example, blocks of text, images, UI controls, and other recognizable types of content may be identified.
- an entity is identified within one of the high level entities. For example, a mailing address entity may be identified from text within text 150.
- an entity visual cue is overlaid on top of the entity identified by operation 806.
- an address entity visual cue 558 is displayed as an underline proximate to the text of the associated entity.
- an indication of a selection of the address entity visual cue 558 is received.
- This indication may be in the form of a mouse click.
- an OS and/or third-party provided feature is activated using the entity associated with the selected entity visual cue.
- an OS and/or third-party provided feature that displays a map micro-experience 502 may be launched with the address of the selected address entity visual cue 558.
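The flow of routine 800 (operations 802 through 812) can be sketched as a pipeline; every callable in this sketch is a hypothetical stand-in for the OS components described above, not the disclosed implementation:

```python
def routine_800(display_buffer, segment, identify_entities, show_cue,
                await_selection, invoke_feature):
    """High-level sketch of routine 800: segment buffer content,
    identify entities, overlay cues, then invoke the feature for the
    cue the user selects."""
    cues = {}
    for block in segment(display_buffer):          # operation 804
        for entity in identify_entities(block):    # operation 806
            cues[show_cue(entity)] = entity        # operation 808
    selected = await_selection(cues)               # operation 810
    if selected in cues:
        return invoke_feature(cues[selected])      # operation 812
    return None
```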
- the logical operations described herein are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system.
- the implementation is a matter of choice dependent on the performance and other requirements of the computing system.
- the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof.
- routine 800 is described herein as being implemented, at least in part, by modules running the features disclosed herein. A module can be a dynamically linked library (DLL), a statically linked library, functionality produced by an application programming interface (API), a compiled program, an interpreted program, a script, or any other executable set of instructions.
- Data can be stored in a data structure in one or more memory components. Data can be retrieved from the data structure by addressing links or references to the data structure.
- routine 800 may be also implemented in many other ways.
- routine 800 may be implemented, at least in part, by a processor of another remote computer or a local circuit.
- one or more of the operations of the routine 800 may alternatively or additionally be implemented, at least in part, by a chipset working alone or in conjunction with other software modules.
- one or more modules of a computing system can receive and/or process the data disclosed herein. Any service, circuit or application suitable for providing the techniques disclosed herein can be used in operations described herein.
- FIG. 9 shows additional details of an example computer architecture 900 for a device, such as a computer or a server configured as part of the systems described herein, capable of executing computer instructions (e.g., a module or a program component described herein).
- the computer architecture 900 illustrated in FIG. 9 includes processing unit(s) 902, a system memory 904, including a random-access memory 906 (“RAM”) and a read-only memory (“ROM”) 908, and a system bus 910 that couples the memory 904 to the processing unit(s) 902.
- Processing unit(s) 902 can represent, for example, a CPU-type processing unit, a GPU-type processing unit, a neural processing unit, a field-programmable gate array (FPGA), another class of digital signal processor (DSP), or other hardware logic components that may, in some instances, be driven by a CPU.
- illustrative types of hardware logic components that can be used include Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip Systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
- the computer architecture 900 further includes a mass storage device 912 for storing an operating system 914, application(s) 916, modules 918, and other data described herein.
- the mass storage device 912 is connected to processing unit(s) 902 through a mass storage controller connected to the bus 910.
- the mass storage device 912 and its associated computer-readable media provide non-volatile storage for the computer architecture 900.
- computer-readable media can be any available computer-readable storage media or communication media that can be accessed by the computer architecture 900.
- Computer-readable media can include computer-readable storage media and/or communication media.
- Computer-readable storage media can include one or more of volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and non-removable computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
- communication media can embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism.
- computer storage media does not include communication media. That is, computer-readable storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se.
- the computer architecture 900 may operate in a networked environment using logical connections to remote computers through the network 920.
- the computer architecture 900 may connect to the network 920 through a network interface unit 922 connected to the bus 910.
- the computer architecture 900 also may include an input/output controller 924 for receiving and processing input from a number of other devices, including a keyboard, mouse, touch, or electronic stylus or pen. Similarly, the input/output controller 924 may provide output to a display screen, a printer, or other type of output device.
- the software components described herein may, when loaded into the processing unit(s) 902 and executed, transform the processing unit(s) 902 and the overall computer architecture 900 from a general-purpose computing system into a specialpurpose computing system customized to facilitate the functionality presented herein.
- the processing unit(s) 902 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the processing unit(s) 902 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the processing unit(s) 902 by specifying how the processing unit(s) 902 transition between states, thereby transforming the transistors or other discrete hardware elements constituting the processing unit(s) 902.
- FIG. 10 depicts an illustrative distributed computing environment 1000 capable of executing the software components described herein.
- the distributed computing environment 1000 illustrated in FIG. 10 can be utilized to execute any aspects of the software components presented herein.
- the distributed computing environment 1000 can be utilized to execute aspects of the software components described herein.
- the distributed computing environment 1000 can include a computing environment 1002 operating on, in communication with, or as part of the network 1004.
- the network 1004 can include various access networks.
- One or more client devices 1006A-1006N (hereinafter referred to collectively and/or generically as “clients 1006” and also referred to herein as computing devices 1006) can communicate with the computing environment 1002 via the network 1004.
- the clients 1006 include a computing device 1006A such as a laptop computer, a desktop computer, or other computing device; a slate or tablet computing device (“tablet computing device”) 1006B; a mobile computing device 1006C such as a mobile telephone, a smart phone, or other mobile computing device; a server computer 1006D; and/or other devices 1006N. It should be understood that any number of clients 1006 can communicate with the computing environment 1002.
- the computing environment 1002 includes servers 1008, data storage 1010, and one or more network interfaces 1012.
- the servers 1008 can host various services, virtual machines, portals, and/or other resources.
- the servers 1008 host virtual machines 1014, Web portals 1016, mailbox services 1018, storage services 1020, and/or social networking services 1022.
- the servers 1008 also can host other services, applications, portals, and/or other resources (“other resources”) 1024.
- the computing environment 1002 can include the data storage 1010.
- the functionality of the data storage 1010 is provided by one or more databases operating on, or in communication with, the network 1004.
- the functionality of the data storage 1010 also can be provided by one or more servers configured to host data for the computing environment 1002.
- the data storage 1010 can include, host, or provide one or more real or virtual datastores 1026A-1026N (hereinafter referred to collectively and/or generically as “datastores 1026”).
- the datastores 1026 are configured to host data used or created by the servers 1008 and/or other data.
- the datastores 1026 also can host or store web page documents, word documents, presentation documents, data structures, algorithms for execution by a recommendation engine, and/or other data utilized by any application program. Aspects of the datastores 1026 may be associated with a service for storing files.
- the computing environment 1002 can communicate with, or be accessed by, the network interfaces 1012.
- the network interfaces 1012 can include various types of network hardware and software for supporting communications between two or more computing devices including, but not limited to, the computing devices and the servers. It should be appreciated that the network interfaces 1012 also may be utilized to connect to other types of networks and/or computer systems.
- the distributed computing environment 1000 described herein can provide any aspects of the software elements described herein with any number of virtual computing resources and/or other distributed computing functionality that can be configured to execute any aspects of the software components disclosed herein.
- the distributed computing environment 1000 provides the software functionality described herein as a service to the computing devices.
- the computing devices can include real or virtual machines including, but not limited to, server computers, web servers, personal computers, mobile computing devices, smart phones, and/or other devices.
- various configurations of the concepts and technologies disclosed herein enable any device configured to access the distributed computing environment 1000 to utilize the functionality described herein for providing the techniques disclosed herein, among other aspects.
- Example 1 A method comprising: receiving an indication that a discovery mode has been activated; retrieving content displayed by an active window; identifying an image within the retrieved content; identifying a text entity within the identified image; determining that the text entity is applicable to an OS or third-party provided feature; overlaying an entity visual cue onto the active window proximate to the text entity; receiving an indication of a selection of the entity visual cue; and invoking the OS or third-party provided feature.
- Example 2 The method of Example 1, wherein invoking the OS or third-party feature comprises providing the OS or third-party provided feature with the text entity.
- Example 3 The method of Example 1, wherein the OS or third-party feature modifies the content displayed by the active window, launches an inline micro-experience, crops an image, or exports an image.
- Example 4 The method of Example 1, wherein the discovery mode comprises an ambient mode that analyzes content displayed by the active window when previously unanalyzed content comes into view.
- Example 6 The method of Example 5, wherein the number, location, and type of visual cues displayed while in ambient mode is customized based on an individual usage history.
- Example 7 The method of Example 1, wherein the content displayed by the active window is retrieved from a display buffer after the active window has drawn the content to the display' buffer.
- Example 8 The method of Example 1, wherein the content displayed by the active window is retrieved from a visible portion of the active window and a non-visible portion of the active window.
- Example 9 A system comprising: a processing unit; and a computer-readable storage medium having computer-executable instructions stored thereupon, which, when executed by the processing unit, cause the processing unit to: receive an indication that a discovery mode has been activated; retrieve content displayed by an active window; identify an image within the retrieved content; identify an image subject entity within the identified image; determine that the image subject entity is applicable to an OS or third-party provided feature; overlay an entity visual cue onto the active window proximate to the image subject entity; receive an indication of a selection of the entity visual cue; and invoke the OS or third-party provided feature.
- Example 10 The system of Example 9, wherein the entity visual cue comprises a shape displayed within or proximate to the image subject entity.
- Example 11 The system of Example 10, wherein the computer-executable instructions further cause the processing unit to: highlight the entity visual cue in response to an indication of a system cursor moving over or proximate to the image subject entity.
- Example 12 The system of Example 10, wherein the computer-executable instructions further cause the processing unit to: highlight the image subject entity in response to an indication of a system cursor moving over or proximate to the entity visual cue.
- Example 13 The system of Example 9, wherein the OS or third-party feature extracts the image subject entity from the identified image.
- Example 14 The system of Example 9, wherein the computer-executable instructions further cause the processing unit to: identify a background portion of the image; and highlight the image in response to a determination that a system cursor moved over the background portion of the image.
- Example 15 The system of Example 9, wherein the computer-executable instructions further cause the processing unit to: display an entity action list in response to the selection of the entity visual cue; and receive a selection from the entity action list that selects the OS or third- party provided feature.
- Example 16 A computer-readable storage medium having encoded thereon computer-readable instructions that when executed by a processing unit causes a system to: receive an indication that a discovery mode has been activated; retrieve content displayed by an active window; identify a paragraph entity within the retrieved content; determine that the paragraph entity is applicable to an OS or third-party provided feature; overlay an entity visual cue onto the active window proximate to the paragraph entity in response to a discovery cursor moving over or proximate to the paragraph entity; receive an indication of a selection of the entity visual cue; and invoke the OS or third-party provided feature.
- Example 17 The computer-readable storage medium of Example 16, wherein the instructions further cause the processing unit to: extract text depicted in the paragraph entity from the retrieved content; and provide the extracted text to the OS or third-party provided feature.
- Example 18 The computer-readable storage medium of Example 16, wherein a system cursor changes to the discovery cursor in response to entering the discovery mode, and wherein the discovery cursor changes to an “I” cursor in response to moving over or proximate to the paragraph entity.
- Example 19 The computer-readable storage medium of Example 18, wherein the instructions further cause the processing unit to: display a caret proximate to the “I” cursor within the paragraph entity; receive an indication of an edit command; apply the edit command at a location of the caret; and modify the content displayed by the active window to reflect a result of the edit command.
- Example 20 The computer-readable storage medium of Example 18, wherein the instructions further cause the processing unit to: select text from the paragraph entity at a location of the “I” cursor in response to a text selection command.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP24712716.0A EP4673821A1 (en) | 2023-03-01 | 2024-02-27 | Feature discovery layer |
| CN202480006368.0A CN120476381A (en) | 2023-03-01 | 2024-02-27 | Feature Discovery Layer |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363487764P | 2023-03-01 | 2023-03-01 | |
| US63/487,764 | 2023-03-01 | ||
| US18/216,512 US20240295942A1 (en) | 2023-03-01 | 2023-06-29 | Feature discovery layer |
| US18/216,512 | 2023-06-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024182322A1 (en) | 2024-09-06 |
Family
ID=90366538
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/017366 Ceased WO2024182322A1 (en) | 2023-03-01 | 2024-02-27 | Feature discovery layer |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4673821A1 (en) |
| CN (1) | CN120476381A (en) |
| WO (1) | WO2024182322A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080086700A1 (en) * | 2006-10-06 | 2008-04-10 | Rodriguez Robert A | Systems and Methods for Isolating On-Screen Textual Data |
| WO2019118149A1 (en) * | 2017-12-13 | 2019-06-20 | Microsoft Technology Licensing, Llc | Automatically launched software add-ins for proactively analyzing content of documents and soliciting user input |
| US20210142763A1 (en) * | 2019-11-07 | 2021-05-13 | Microstrategy Incorporated | Systems and methods for context-based optical character recognition |
| US11573810B1 (en) * | 2014-07-11 | 2023-02-07 | Google Llc | Sharing screen content in a mobile environment |
2024
- 2024-02-27 WO PCT/US2024/017366 patent/WO2024182322A1/en not_active Ceased
- 2024-02-27 EP EP24712716.0A patent/EP4673821A1/en active Pending
- 2024-02-27 CN CN202480006368.0A patent/CN120476381A/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN120476381A (en) | 2025-08-12 |
| EP4673821A1 (en) | 2026-01-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11561807B2 (en) | Desktop launcher | |
| US11740914B2 (en) | Positioning user interface components based on application layout and user workflows | |
| US10775971B2 (en) | Pinch gestures in a tile-based user interface | |
| US10248305B2 (en) | Manipulating documents in touch screen file management applications | |
| US9152529B2 (en) | Systems and methods for dynamically altering a user interface based on user interface actions | |
| US8527896B2 (en) | User interface menu with hovering icons | |
| US20120306747A1 (en) | Device, Method, and Graphical User Interface for Entering Alternate Characters with a Physical Keyboard | |
| KR20140051228A (en) | Submenus for context based menu system | |
| CN114067797A (en) | Voice control method, device, equipment and computer storage medium | |
| US9804767B2 (en) | Light dismiss manager | |
| CN111401323A (en) | Character translation method, device, storage medium and electronic equipment | |
| CN111708959A (en) | Page display method, device, storage medium and electronic device | |
| CN107766108A (en) | A kind of method of interception image, terminal and computer-readable medium | |
| US10970476B2 (en) | Augmenting digital ink strokes | |
| US20240295942A1 (en) | Feature discovery layer | |
| US10514841B2 (en) | Multi-layered ink object | |
| WO2024182322A1 (en) | Feature discovery layer | |
| WO2016101768A1 (en) | Terminal and touch operation-based search method and device | |
| US20150058774A1 (en) | Gesture-based visualization of financial data | |
| CN113253904A (en) | Display method, display device and electronic equipment | |
| US20250370598A1 (en) | Systems and methods for generating content | |
| US20250335217A1 (en) | Trigger mechanism for generating graphical captures enabling downstream user experiences | |
| CN114217709B (en) | Page display method, reader and computer storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24712716 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202480006368.0 Country of ref document: CN |
|
| WWP | Wipo information: published in national office |
Ref document number: 202480006368.0 Country of ref document: CN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2024712716 Country of ref document: EP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2024712716 Country of ref document: EP Effective date: 20251001 |
|
| WWP | Wipo information: published in national office |
Ref document number: 2024712716 Country of ref document: EP |