US20170024086A1 - System and methods for detection and handling of focus elements - Google Patents
- Publication number
- US20170024086A1 (application Ser. No. 15/053,501)
- Authority
- US
- United States
- Prior art keywords
- focus
- focus element
- input
- data
- graphical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F16/316—Indexing structures
- G06F16/35—Clustering; Classification
- G06F17/30619
- G06F17/30705
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06F3/0485—Scrolling or panning
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06T11/60—Editing figures and text; Combining figures or text
- H04B1/3833—Hand-held transceivers
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
- H04L67/142—Managing session states for stateless protocols; Signalling session states; State transitions; Keeping-state mechanisms
- H04M1/72469—User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
- H04N23/62—Control of parameters via user interfaces
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
- H04M2250/22—Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
- the present disclosure relates to operation of computing devices with displays, and in particular, detection and handling of focus elements associated with an application.
- Information associated with electronic devices, and in particular personal electronic devices, is both voluminous and varied. Many electronic devices are used to view or access hundreds, if not thousands, of applications and websites every day. Information is varied in that the information received and processed by personal electronic devices can come from any number of sources, such as text, communications, location data, photographs, web browsing, etc. Beyond type, information can also vary in its degree of importance or priority; some pieces of information are more important than others. There exists a need for device configurations that allow access to and storage of information based on priority or importance.
- Tracking information and inputs across a device can be overwhelming with conventional devices and methods.
- Conventional devices typically store information based on its type. For example, contacts may be stored in a particular application of a device. As such, with conventional devices, users actively determine particular types of information for storage.
- conventional applications are configured to receive only a particular type of input. For example, a photo application for a device is not capable of saving, processing, and interacting with text information, address information, contact information, etc.
- Each of these particular types of data inputs requires its own additional application, focused on those particular types of data inputs. Storage of data inputs, and accessibility of stored data inputs, becomes more and more difficult as the number of different types of data inputs grows.
- a method for detection and handling of focus elements associated with an application includes presenting, by a device, at least one graphical entry space for entry of focus elements on a display of the device.
- the method also includes detecting, by the device, an input to the at least one graphical entry space.
- the method also includes categorizing, by the device, the input. Categorizing includes determining a focus element type for the input and assigning the focus element type to the input.
- the method also includes creating, by the device, a focus element based on the input and the focus element type.
- the method also includes displaying, by the device, a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type.
- the graphical representation of the focus element is presented in a list of one or more focus elements.
- the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.
- the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.
- the graphical entry space includes a text entry area on the display of the device.
- the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.
- categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.
- categorizing further includes updating the graphical representation of the focus element.
- creating includes storing, by the device, the focus element, the input, and the focus element type, in an input list.
- displaying includes displaying the graphical representation of the focus element in addition to the plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.
- the method also includes detecting a selection of the graphical representation of the focus element and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.
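The claimed steps (present an entry space, detect an input, categorize it, create a focus element, display it in a list) can be sketched as a minimal processing loop. All function and type names below are illustrative assumptions; the patent does not specify an implementation, and the simple categorization heuristic is a stand-in for the pattern matching described in the claims.

```python
from dataclasses import dataclass

# Focus element types named in the disclosure.
FOCUS_TYPES = ("note", "event", "contact", "website", "audio recording",
               "location", "photo", "video", "task", "message", "barcode")

@dataclass
class FocusElement:
    data_input: str     # the raw input detected in the graphical entry space
    element_type: str   # one of FOCUS_TYPES, assigned during categorization

def categorize(data_input: str) -> str:
    """Assign a focus element type via a toy heuristic (hypothetical)."""
    text = data_input.lower()
    if "http" in text or "www." in text:
        return "website"
    if any(ch.isdigit() for ch in text) and "-" in text:
        return "contact"
    return "note"

def handle_input(data_input: str, element_list: list) -> FocusElement:
    """Detect -> categorize -> create -> append to the displayed list."""
    element = FocusElement(data_input, categorize(data_input))
    element_list.append(element)  # the list backing the graphical display
    return element

elements: list = []
handle_input("frank's phone 416-358-8543", elements)
handle_input("www.example.com", elements)
```

Under this sketch, the phone-number input lands in the contact category and the URL in the website category, mirroring the examples given later in the description.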
- Another embodiment is directed to a device including an input, a display configured for presentation of a user interface, and a controller configured to communicate with the input and the display.
- the controller is further configured to control presentation of at least one graphical entry space for entry of focus elements on the display.
- the controller is further configured to detect the input to the at least one graphical entry space.
- the controller is further configured to categorize the input, wherein categorizing includes determining a focus element type for the input and assigning the focus element type to the input.
- the controller is further configured to control creation of a focus element based on the input and the focus element type.
- the controller is further configured to control display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, wherein the graphical representation of the focus element is presented in a list of one or more focus elements.
- the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.
- the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.
- the graphical entry space includes a text entry area on the display of the device.
- the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.
- categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.
- categorizing further includes updating the graphical representation of the focus element.
- controlling creation includes storing the focus element, the input, and the focus element type, in an input list.
- controlling display includes displaying the graphical representation of the focus element in addition to the plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.
- controlling also includes detecting a selection of the graphical representation of the focus element and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.
- FIGS. 1A-1E depict graphical representations of a device with focus element entry according to one or more embodiments.
- FIG. 2 depicts a graphical representation of a device with a focus element and a list of focus elements according to one or more embodiments.
- FIG. 3 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments.
- FIG. 4 depicts a simplified diagram of a device according to one or more embodiments.
- FIG. 5 depicts a graphical representation of the focus application according to one or more embodiments.
- FIG. 6 depicts a graphical representation of the focus application according to one or more embodiments.
- FIG. 7 depicts a graphical representation of focus element types according to one or more embodiments.
- FIG. 8 depicts a process of detection and handling of focus elements according to one or more embodiments.
- One aspect of this disclosure relates to detection and handling of inputs, including data and inputs which can vary both in type and in priority.
- Personal electronic devices such as phones, tablets, laptops, personal computers, televisions, gaming systems and other electronic display devices, can receive a massive amount of information or data input every day. Inputs to a device may be detected and one or more focus elements may be generated based on the inputs.
- inputs relate to any particular information to be stored on a device.
- inputs can include typed data, copy and paste data, audio data, image data, video data, and location data.
- Data can be user generated, received from other users, or obtained from other sources (e.g., from the Internet).
- inputs relate to entries and/or data supplied to a particular application of a device, such as a Focus application.
- inputs to the device in general, and/or data associated with Focus application types, may be stored as focus elements.
- various inputs are identified and categorized into particular focus element types.
- focus element types can include note, event, contact, website, audio recording, location, photo, video, task, message, and barcode. In one embodiment, from this categorization, focus elements are generated.
- the Focus application detects and handles focus elements that are processed on a device. Implementation may be system-wide, across both the device and all related applications on the device. Input detection is built into the system, such that the device can dynamically identify, categorize, and generate focus elements.
- the Focus application is running underneath the typical user interface of the device. This allows the Focus application to operate while the device is running other applications on the user interface. In another embodiment, the Focus application runs as a full application on the device. Likewise, the user has the ability to transition between these different embodiments.
- a device including a display configured for presentation of the user interface, and a controller configured to communicate with the display.
- the device detects and handles focus elements through implementation of the Focus application.
- the Focus application operates across devices. For example, the user can save pertinent information, derived from inputs on a mobile device, to a network list on the Focus application.
- a device can identify, for the user, different types of information. Generation and use of focus elements, including graphical symbols, allows the user to quickly assess information in an efficient manner. For example, the user is no longer required to self-identify whether text is merely text, or whether it includes a web address.
- the device will identify the important aspects of a given piece of information. As important information is identified, the device provides for recordation in a central location. The user is no longer burdened by having to save information to its respective application location (e.g., saving a picture to the photo application); likewise, the user is no longer burdened by having to transition between a multitude of different applications. The user can quickly and efficiently record important information to a centralized location.
- Providing a centralized location allows for the user to recover previously saved information, without having to search for where it is located. Categorizing of saved information, into different focus element types, allows for quick and efficient navigation. Additionally, saving all pertinent information to a centralized location acts as a timeline of relevant content, as dictated by the user. In this way, the Focus application acts as an aggregator for important user-specific information.
- a focus element is derived from information that is on a device. More particularly, focus elements include a data input, which is a portion of relevant data associated with a particular focus element type. The data input can be user generated or can be received from other sources, both within the device and from sources beyond the device. Graphically, a focus element will include the data input and a graphical symbol. The graphical symbol, like the focus element itself, is associated with a particular focus element type.
- a focus element type is a category of focus element. Focus elements are grouped into specific categories, or types. As an example, focus element types can include note, event, contact, website, audio recording, location, photo, video, task, message, and barcode. Focus element types are used with subsequent interaction of focus elements.
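The focus element and focus element type described above map naturally onto an enumeration plus a small record type. The sketch below is one possible modeling; the symbol strings are hypothetical stand-ins for the graphical symbols the disclosure describes (e.g., a calendar symbol for an event).

```python
from dataclasses import dataclass
from enum import Enum

class FocusElementType(Enum):
    """The focus element categories named in the disclosure."""
    NOTE = "note"
    EVENT = "event"
    CONTACT = "contact"
    WEBSITE = "website"
    AUDIO_RECORDING = "audio recording"
    LOCATION = "location"
    PHOTO = "photo"
    VIDEO = "video"
    TASK = "task"
    MESSAGE = "message"
    BARCODE = "barcode"

# Hypothetical mapping from type to the graphical symbol shown beside the input.
SYMBOLS = {
    FocusElementType.EVENT: "calendar",
    FocusElementType.CONTACT: "person",
    FocusElementType.WEBSITE: "globe",
}

@dataclass
class FocusElement:
    data_input: str                  # the relevant portion of the input data
    element_type: FocusElementType   # the category assigned at creation

    @property
    def graphical_symbol(self) -> str:
        # Fall back to a generic note icon for types without a dedicated symbol.
        return SYMBOLS.get(self.element_type, "note icon")

meeting = FocusElement("office meeting at 11:30 a.m.", FocusElementType.EVENT)
```

Grouping elements by `element_type` is then a plain dictionary or filter operation, which is what makes "subsequent interaction" by type cheap.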
- the Focus application is the source of identification, categorization, and generation for all focus elements.
- the Focus application is the centralized location where the list of focus elements is stored.
- the Focus application may be run as a discrete application, accessed like any other typical application on a device.
- the Focus application may be constantly running underneath the typical user interface of the device.
- Updates to the Focus application can implement changes to the identification, categorization, and generation for focus elements. For example, application updates can add additional focus element types to the Focus application, add additional pattern matching parameters to improve categorization accuracy, etc.
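One way such updates could work is a pattern registry that the matching logic consults, so an application update only appends new (pattern, type) entries rather than changing the categorization code. This is a sketch under that assumption, not the patent's actual mechanism; the regular expressions are illustrative.

```python
import re

# Registry of (compiled pattern, focus element type) pairs. An update to the
# Focus application could extend this table to add types or improve accuracy.
PATTERN_REGISTRY = []

def register_pattern(regex: str, element_type: str) -> None:
    """Add a new data pattern, e.g. shipped via an application update."""
    PATTERN_REGISTRY.append((re.compile(regex, re.IGNORECASE), element_type))

def categorize(data_input: str) -> str:
    """Return the type of the first registered pattern that matches."""
    for pattern, element_type in PATTERN_REGISTRY:
        if pattern.search(data_input):
            return element_type
    return "note"  # default when no pattern matches

# Initial patterns (hypothetical):
register_pattern(r"https?://|www\.", "website")
register_pattern(r"\d{3}[-.]\d{3}[-.]\d{4}", "contact")

# A later "update" adds a new pattern for times of day, categorized as events:
register_pattern(r"\b\d{1,2}:\d{2}\s*(a\.m\.|p\.m\.)", "event")
```

With the updated registry, an input like "office meeting at 11:30 a.m." is categorized as an event without any change to `categorize` itself.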
- the terms “a” or “an” shall mean one or more than one.
- the term “plurality” shall mean two or more than two.
- the term “another” is defined as a second or more.
- the terms “including” and/or “having” are open ended (e.g., comprising).
- the term “or” as used herein is to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
- FIGS. 1A-1E depict graphical representations of a device with focus element entry according to one or more embodiments.
- device 100 may be configured for presentation of focus elements on a display 101.
- the device 100 may be any one of a phone, tablet, laptop, personal computer, television, gaming system, or other electronic display device.
- Device 100 includes a controller (not shown).
- Display 101 may additionally include a plurality of inputs 115.
- Device 100 may further include a data entry field 110 configured to allow the user to enter information into the device 100.
- device 100 is configured to detect and characterize inputs to the device, including text information (e.g., txt format), websites (e.g., html format), pictures (e.g., jpeg file), etc.
- device 100 may be configured to collect different types of input in a single application. In addition to collection, device 100 may be configured to detect and characterize the inputs.
- Device 100 is configured to process data inputs across a variety of different types. Device 100 can categorize different types of data inputs and provide a central location from which other applications can be conveniently accessed.
- the display 101 may additionally include a data input interface 102 (e.g., keyboard, free-form pad, etc.).
- the data input interface 102 can be a part of the display 101 (e.g., touch-screen keyboard). Alternatively, the data input interface 102 can be separate from the display 101 (e.g., a physical keyboard).
- Information can be entered into the device 100, at the data entry field 110, via the data input interface 102 (e.g., typed data). For example, the text "office meeting" is added into the data entry field 110.
- information can be entered into the device 100 via the plurality of inputs 115 (e.g., audio data, image data, video data, location data, etc.). Likewise, information can be entered into the device 100 via copy and paste data. This information entered into the data entry field 110 is processed, by the device 100, into a focus element 120.
- the focus element 120 1 includes a data input 121 1 and a graphical symbol 122 1 , which is associated with a specific type of focus element. For example, the text “office meeting at 11:30 a.m.” is the data input 121 1 for the focus element 120 1 .
- the symbol of a calendar is the graphical symbol 122 1 for focus element 120 1 .
- data input 121 1 is one of typed data, copy and paste data, audio data, image data, video data, and location data.
- Data input 121 1 can be user generated (e.g., via the keyboard), entered into the device 100 (e.g., via the plurality of inputs 115 ) or can be received from other sources (e.g., via a received SMS text message).
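The pairing of a data input with a type-specific graphical symbol can be sketched as a simple record. This is a minimal sketch, not the patent's implementation; the symbol assignments follow the examples in the text (a calendar for events, a face for contacts, a moon for locations, a cloud for websites), and the class and field names are assumptions.

```python
from dataclasses import dataclass

# Symbols per the examples in the text; any type not listed here falls back
# to the default "note" symbol.
SYMBOLS = {
    "events": "calendar",
    "contacts": "face",
    "location": "moon",
    "websites": "cloud",
}

@dataclass
class FocusElement:
    data_input: str   # e.g., typed, pasted, audio, image, video, or location data
    focus_type: str   # one of the specific focus element types

    @property
    def graphical_symbol(self) -> str:
        # Default to the "note" symbol when no type-specific symbol exists.
        return SYMBOLS.get(self.focus_type, "note")
```

For example, the focus element for the text "office meeting at 11:30 a.m." would carry the focus type of events and therefore render with the calendar symbol.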
- the device 100 will detect the data input 121 2 , which is at least a portion of information in the data entry field 110 .
- This data input 121 2 is subsequently used by the display device 100 to generate the focus element 120 2 .
- the information “frank's phone 416-358-8543” is entered into the data entry field 110 .
- this information is subsequently converted to the data input 121 2 of the focus element 120 2 .
- To generate the focus element 120 2 , the device 100 must categorize the data input 121 2 . Categorizing includes determining a specific type of focus element for the data input 121 2 . There are a number of specific types of focus elements with which the data input 121 2 may be associated. In an embodiment, focus element types include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode. A camera on the device 100 can dynamically determine whether something being viewed is a standard photo or, alternatively, a barcode. Barcode information is translated into ASCII or other textual information. To determine a specific type of focus element for the data input 121 2 , the device 100 may match at least a portion of the input to one or more data patterns.
- These data patterns can be associated with a plurality of predefined focus element types. For example, names and phone numbers are associated with contacts; city/states are associated with locations, etc.
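The pattern-matching categorization described above can be sketched as a short lookup over data patterns, with a fallback to the default "note" type when nothing matches. The regular expressions, type names, and ordering below are assumptions for the sketch, not the device's actual patterns.

```python
import re

# Illustrative data patterns associated with predefined focus element types.
FOCUS_PATTERNS = [
    ("contacts", re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")),            # phone number
    ("websites", re.compile(r"\bhttps?://\S+|\bwww\.\S+", re.IGNORECASE)),
    ("events",   re.compile(r"\b\d{1,2}:\d{2}\s*(a\.m\.|p\.m\.|am|pm)", re.IGNORECASE)),
    ("location", re.compile(r"\b\d+\s+\w+(?:\s\w+)*\s(?:Ave|St|Blvd|Rd)\b")),
]

def categorize(data_input: str) -> str:
    """Match at least a portion of the input against the data patterns;
    fall back to the default "note" type when nothing matches."""
    for focus_type, pattern in FOCUS_PATTERNS:
        if pattern.search(data_input):
            return focus_type
    return "notes"
```

Under these stand-in patterns, "frank's phone 416-358-8543" categorizes as a contact and "office meeting at 11:30 a.m." categorizes as an event, mirroring the examples in the text.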
- the device 100 can determine a specific type of focus element for the data input 121 2 . Once a specific type of focus element has been determined for the data input 121 2 , the device assigns the focus element type to the data input 121 2 . Using this newly determined information: the data input 121 2 and the focus element type, the device generates a focus element 120 2 .
- the focus element 120 2 is depicted graphically on the device 100 .
- the graphical representation of the focus element 120 2 includes the data input 121 2 and the graphical symbol 122 2 .
- the graphical symbol 122 2 is used to identify the focus element type. More particularly, the graphical symbol 122 2 helps the user quickly identify the type of focus element through visual cues. For example, the user can quickly determine that focus element 120 1 is the focus element type of events, by seeing the graphical symbol 122 1 of a calendar. Likewise, the user can quickly determine that focus element 120 2 is the focus element type of contacts, by seeing the graphical symbol 122 2 of a face.
- the focus element 120 may be updated. For example, as information is first added to the data entry field 110 , the device 100 will dynamically identify, categorize, and generate focus elements (as described above and in greater detail below). Imagine the user enters the information “Becky . . . ” into the data entry field 110 . The device 100 may determine that the data input 121 is a name: Becky. Through pattern matching, the device 100 may assign the focus element type of contact, and subsequently assign the graphical symbol 122 of a face, associated with the focus element type of contacts. Thus, a focus element 120 has been dynamically created, based on information the user has added to the data entry field 110 .
- the device 100 may determine that the data input 121 , which was previously a name, is now actually an address. Through pattern matching, the device 100 may assign a new focus element type of location, and subsequently assign the graphical symbol 122 of a moon, associated with the focus element type of location. Thus, the focus element 120 that was dynamically created, as a contact, has now been dynamically re-categorized. The graphical representation of the focus element 120 is updated to reflect this dynamic re-categorization.
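The dynamic re-categorization described above can be sketched by re-running categorization each time the contents of the data entry field change. The two patterns below are minimal stand-ins chosen for illustration, not the device's actual rules.

```python
import re

# Minimal stand-in rules: a street address categorizes as a location,
# a capitalized name as a contact, anything else as a note.
def categorize(text: str) -> str:
    if re.search(r"\d+\s+\w+\s+(?:Ave|St|Blvd|Rd)\b", text):
        return "location"
    if re.match(r"[A-Z][a-z]+", text):
        return "contacts"
    return "notes"

# Each change to the data entry field re-runs categorization, so the focus
# element's type (and its graphical symbol) can change as the entry grows.
```

Here "Becky" would first categorize as a contact, and extending the entry to include a street address would re-categorize the same focus element as a location.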
- new information need not be entered into the data entry field 110 by the user in order to be identified, categorized, and generated into a focus element 120 .
- User entry is only one way that data can be processed by device 100 . Beyond typed data, by the user, information can be entered into the device 100 through the plurality of inputs 115 , including audio data, image data, video data, and location data. Likewise, information can be entered into the device through copy and paste data.
- device 100 can detect online article browsing, using a web browser application on the device 100 , of content associated with “The White House” in Washington, D.C.
- the online article may note in part that “The White House is located at 1600 Pennsylvania Ave.”
- the device 100 may determine that the data input is an address: 1600 Pennsylvania Ave.
- the device 100 may assign the focus element type of location, and subsequently assign the graphical symbol of a moon, associated with the focus element type of location.
- a focus element has been dynamically created, based on information input by the user via a copy and paste operation on the device 100 .
- the device can detect clipboard information to identify, categorize, and generate additional focus elements.
- focus elements can be identified, categorized, and generated through audio data, image data, video data, and location data. This includes data received from other sources, such as the Internet, or from devices of other users.
- the device 100 can take a number of different actions.
- the focus element 120 is automatically stored, by the device 100 , once it is created.
- the focus element 120 is not stored, by the device 100 , until the user performs some additional action. For example, the user swipes the focus element 120 to the right to store the focus element 120 to the device 100 .
- Alternative commands to store a focus element 120 can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
- focus element identification, categorization, and generation can continually operate underneath the typical user interface of the device.
- the interface could be a text message conversation; typically, a text message conversation will take place on a text message application.
- while the device 100 is running a text message application, the device 100 may still identify pertinent information and generate a focus element 120 .
- Continual, uninterrupted analysis of information for the identification, categorization, and generation of focus elements is beneficial in many ways.
- the device 100 may automatically detect content that is categorized by one of the specific types of focus elements (e.g., addresses, contact information, websites, etc.). Categorization utilizes pattern-matching for all content automatically processed.
- the user can instantly see whether information on the device is relevant (i.e., information is a specific type of focus element) or irrelevant (i.e., information is not a specific type of focus element).
- dynamic categorization enables the user to push content from specific applications (e.g., a text message application) into a centralized list.
- the user has the ability to modify the categorization for any focus element.
- the list may be stored on a specific Focus application on the device 100 .
- the Focus application is described in greater detail below with respect to FIGS. 5-7 .
- FIG. 2 depicts a graphical representation of a display device with a focus element and a list of focus elements according to one or more embodiments.
- Device 200 may be configured for presentation of focus elements on a display 201 that includes a data input interface 202 .
- Display 201 can additionally include a plurality of inputs 215 .
- Device 200 may store a focus element 220 , including the data input 221 and a graphical symbol 222 for the focus element type, in an input list.
- focus element 220 can include a graphical symbol 222 of a face, which is associated with the focus element type of contacts.
- this input list is displayed on a specific application: the Focus application.
- Device 200 may display the focus element 220 in the Focus application.
- the device may additionally display a plurality of previously created focus elements 230 in the Focus application.
- Each of the plurality of previously created focus elements 230 may, likewise, have a data input and a graphical symbol.
- the plurality of previously created focus elements 230 each have a graphical symbol (e.g., cloud, note, moon), which is associated with a respective focus element type (e.g., websites, notes, location).
- graphical symbols used herein are merely examples. A number of other illustrative graphics and symbols could be used.
- By providing the focus element 220 and the plurality of previously created focus elements 230 in a central location (e.g., the list on the Focus application), the device enables the user to view all information that the user has saved as important. Because focus elements are displayed to include graphical symbols, the user can quickly navigate among all saved information to find specific information. Likewise, and as discussed below, the user is able to interact with all information that the user has saved as important, from one centrally organized location. In this sense, the Focus application acts as a gateway to a number of related applications.
- the Focus application itself can be invoked, by the user, by clicking an icon on the device 200 that is associated with the Focus application (e.g., an app icon).
- the Focus application can be invoked via swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
- FIG. 3 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments.
- Although the process 300 is described with reference to the flowchart illustrated in FIG. 3 , it will be appreciated that many other processes of performing the acts associated with the process 300 may be used.
- the process 300 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both.
- Process 300 may be performed by a device such as device 100 of FIG. 1A .
- At block 305 at least one graphical entry space for entry of focus elements is presented on a display of the device.
- this graphical entry space is an area where the user can type or paste information.
- input to the graphical entry space can be generated or received copy and paste data, audio data, image data, video data, or location data.
- an input to the at least one graphical entry space is detected. The input may be any type of data in the graphical entry space, as described above.
- Categorization may involve both a determination and an assignment, by the device, at block 315 .
- Categorization may include determining a focus element type for the input.
- Pre-defined focus element types may include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode.
- the particular focus element type assigned to an input is determined, by the device, through pattern matching. For example, the device will match at least a portion of the input to one of the pre-defined focus element types.
- the device may also assign the focus element type to the input. While assignment is made by the device, it should be noted that the user can modify the assignment, for any input, to a different focus element type.
- the device will create a focus element, based on the input and the focus element type.
- the device now recognizes a new element: a focus element, which is based on the categorization of the input, as discussed above.
- the device may store the input, the focus element type, and the graphical symbol in an input list. In an example embodiment, this input list is accessed, modified, and interacted with via a Focus application that is running on the device.
- the device will display a graphical representation of the focus element. This could include the input and at least one graphical symbol identifying the focus element type.
- this graphical representation of the focus element is presented dynamically, in another application on the device. For example, the focus element could be shown by the device while the user is accessing a text message application.
- this graphical representation of the focus element is presented alone.
- this graphical representation of the focus element is presented in a list. The list may include a plurality of previously created focus elements. The list may be accessed through the Focus application on the device.
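The blocks of process 300 can be summarized in a short sketch: detect the input, determine and assign a focus element type, create the focus element, store it in the input list, and return it for display. The categorization rule and names here are trivial stand-ins for illustration only.

```python
def handle_entry(entry_text: str, input_list: list) -> dict:
    """Sketch of process 300. Stand-in categorization rule: any digits in
    the entry imply a contact, otherwise the default note type applies."""
    focus_type = "contacts" if any(c.isdigit() for c in entry_text) else "notes"
    symbol = "face" if focus_type == "contacts" else "note"
    element = {"input": entry_text, "type": focus_type, "symbol": symbol}
    input_list.append(element)   # store in the Focus application's input list
    return element               # caller renders the graphical representation
```

A caller would invoke this on each detected input, then display the returned element alone, within another application, or in the list of previously created focus elements.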
- FIG. 4 depicts a simplified diagram of a device according to one or more embodiments.
- Device 400 may relate to one or more devices for providing an application, such as a Focus application.
- device 400 relates to a device including a display, such as a phone, tablet, laptop, personal computer, television, gaming system, and other electronic display device.
- device 400 includes controller 405 , user interface 410 , communications unit 415 , and memory 420 .
- Controller 405 may be configured to execute code stored in memory 420 for operation of device 400 including presentation of a graphical user interface.
- Controller 405 may include a processor and/or one or more processing elements.
- controller 405 may include one or more of hardware, software, firmware, and/or processing components in general.
- controller 405 may be configured to perform one or more processes described herein.
- Controller 405 may be configured to run a Focus application, the Focus application including one or more focus elements and a Focus application user interface configuration.
- User interface 410 may be configured to receive one or more commands via an input/output (I/O) interface 425 , which may include one or more inputs or terminals to receive user commands.
- I/O interface 425 may receive one or more remote control commands.
- graphical user interface 410 may be configured to receive one or more commands from a display 430 .
- commands from the display 430 are sent to the controller 405 via user interaction with a touch screen.
- Communications unit 415 may be configured for wired and/or wireless communication with one or more network elements, such as servers.
- Memory 420 may include non-transitory RAM and/or ROM memory for storing executable instructions, operating instructions and content for display.
- FIG. 5 depicts a graphical representation of the focus application according to one or more embodiments.
- Focus element identification, categorization, and generation can be triggered, on the device 500 , by the Focus application.
- the Focus application provides the user with the ability to view all focus elements, in a list. Because focus elements typically represent key pieces of information, the user may find it useful to view and interact with all key information from one central location.
- the Focus application can include a focus menu 510 , which acts as a precursor to viewing all focus elements in a list.
- the focus menu 510 provides the user with the opportunity to quickly access the plurality of inputs 515 1 to 515 5 , and subsequently create a focus element by entering data into the device 500 .
- clicking the location input button 515 1 would enter data related to the user's current location into the device 500 as a focus element.
- clicking the photo input button 515 2 would enter picture data into the device 500 as a focus element.
- clicking the video input button 515 3 would enter video data into the device 500 as a focus element.
- clicking the microphone input button 515 4 would input audio data into the device 500 as a focus element.
- clicking the note input button 515 5 would input text data into the device 500 as a note. It should be appreciated that each of the plurality of inputs 515 1 to 515 5 discussed herein are merely examples. A number of other illustrative graphics and symbols could be used.
- FIG. 6 depicts a graphical representation of the Focus application according to one or more embodiments.
- Focus element identification, categorization, and generation can be triggered, on the device 600 , by the Focus application.
- the Focus application as shown graphically on the display 601 , provides the user with the ability to view all focus elements, in a list.
- the Focus application displays, to the user, a focus element 620 , including the data input 621 and a graphical symbol 622 for the focus element type, in an input list.
- focus element 620 can include a graphical symbol 622 of a calendar, which is associated with the focus element type of events. Because focus elements typically represent key pieces of information, the user may find it useful to view and interact with all key information from one central location.
- the Focus application interface allows the user to add additional focus elements directly into the Focus application (e.g., into the list of focus elements).
- the user may enter information into the Focus application via the data entry field 610 , such that the device 600 will dynamically identify, categorize, and generate a focus element.
- the user may enter information into the Focus application via the plurality of inputs 615 , such that the device 600 will dynamically identify, categorize, and generate a focus element.
- the user is still able to add focus elements from other applications (e.g., a text message application) when the device is constantly analyzing information for identification, categorization, and generation of focus elements as previously described above.
- the user can add information in a number of different ways (e.g., typed data, copy and paste data, audio data, image data, video data, and location data).
- device 600 includes a plurality of previously created focus elements 630 in a list.
- Device 600 also includes the data entry field 610 .
- the data entry field 610 is a text entry area on the device 600 .
- the user may enter text into the data entry field 610 (e.g., “Buy Stamps”).
- the device 600 will identify information in the data entry field 610 via pattern matching (as discussed above) in order to categorize a focus element type.
- the user has the ability to set or change the focus element type for the data entry field 610 through use of the plurality of inputs 615 .
- Data in the data entry field 610 can be categorized by the user, using the plurality of inputs 615 to represent each of the focus element types. Likewise, the user can select one of the plurality of previously created focus elements 630 and re-categorize the element, using the plurality of inputs 615 . In this way, the user has the capability to override the pattern matching typically done by the device 600 .
- the Focus application interface gives the user the ability to categorize focus elements, and re-categorize previous focus elements. While pattern matching is an ideal way to dynamically categorize information, the user may prefer certain information or focus elements to be categorized in a different way.
- the Focus application interface gives the user the ability to customize focus elements to the user's individual preferences.
- the Focus application interface allows the user to access, through the user selection command, third party applications that are linked to focus elements and associated with specific focus element types. By providing focus elements in a central location, and allowing the user to take action with respect to individual focus elements, the Focus application effectively links important information to the third party applications.
- the device 600 will use the “note” as the default focus element type, if pattern matching does not identify another focus element type.
- the device may re-categorize a default “note” based on additional information that is added by the user (e.g., the note changes to maps, once the user types an address).
- the user may have the capability to override the pattern matching through a plurality of selectable elements via the Focus application.
- FIG. 7 depicts a graphical representation of focus element types according to one or more embodiments.
- focus element 701 is categorized as the focus element type of a note.
- focus element 702 is categorized as the focus element type of an event.
- focus element 703 is categorized as the focus element type of a contact.
- focus element 704 is categorized as the focus element type of a website.
- focus element 705 is categorized as the focus element type of an audio recording.
- focus element 706 is categorized as the focus element type of a location.
- focus element 707 is categorized as the focus element type of a photo.
- focus element 708 is categorized as the focus element type of a task.
- focus element 709 is categorized as the focus element type of a message.
- focus element 710 is categorized as the focus element type of a barcode.
- FIG. 8 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments.
- Although the process 800 is described with reference to the flowchart illustrated in FIG. 8 , it will be appreciated that many other processes of performing the acts associated with the process 800 may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, and some of the blocks described are optional.
- the process 800 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both.
- Process 800 may be performed by a device such as device 100 of FIG. 1A .
- the device 100 will detect an input at block 810 .
- the input may be any type of data.
- the input is received in at least one graphical entry space for entry of focus elements on a display of the device.
- this graphical entry space is an area where the user can type or paste information.
- input to the graphical entry space can be generated or received copy and paste data, audio data, image data, video data, or location data.
- the device will match at least a portion of the input with a data pattern at block 820 .
- This matching process requires a determination of whether the input can be assigned one of the focus element types.
- pre-defined focus element types may include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode.
- the particular focus element type assigned to an input is determined, by the device 100 , through pattern matching. For example, the device 100 will match at least a portion of the input to one of the pre-defined focus element types.
- the device 100 determines at decision block 825 whether, in fact, the input is assignable. Responsive to determining that the input is assignable, the device 100 assigns, to the input, a focus element type and a graphical symbol at block 830 . The graphical symbol is a symbol associated with a specific type of focus element. Responsive to determining that the input is not assignable, the device 100 skips block 830 . In an embodiment, if the input is not assignable, the device 100 assigns, to the input, a focus element type of a “note”; in this embodiment, the device 100 will use the “note” as the default focus element type, if pattern matching does not identify another focus element type. In an embodiment, while assignment is made by the device 100 , it should be noted that the user can modify the assignment, for any input, to a different focus element type.
- the device 100 displays the input with the graphical symbol.
- display of the input with the graphical symbol is characterized as display of the focus element.
- the device 100 now recognizes a new element: a focus element, which is based on the categorization of the input, as discussed above.
- the device will identify a user storage command at block 850 .
- the user swipes the focus element on the device 100 to the right to store the focus element to the device 100 .
- Alternative commands to store a focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
- the device 100 will store the input, the focus element type, and the graphical symbol in an input list. In an example embodiment, this input list is accessed, modified, and interacted with via a Focus application that is running on the device 100 .
- the device will display the input list.
- display of the input list includes display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, and display of additional focus elements.
- this graphical representation of the focus element is presented dynamically, in another application on the device 100 .
- the focus element could be shown by the device while the user is accessing a text message application.
- this graphical representation of the focus element is presented alone.
- this graphical representation of the focus element is presented in a list.
- the list may include a plurality of previously created focus elements. The list may be accessed through the Focus application on the device 100 .
- the device 100 identifies a user selection command at block 880 .
- the user swipes the focus element to the right on the device 100 , for user selection of the focus element.
- Alternative commands for selection of a focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
- the device 100 will transfer the focus element to a third party application at block 890 .
- following the user selection command, the device 100 transfers the data input for the selected focus element to a third party application.
- the third party application is associated with the focus element type for the selected focus element. For example, if the selected focus element is a website focus element type, the third party application associated with the selected focus element would be a web-browser application on the device 100 .
- the device 100 displays the application associated with the selected focus element on the display of the device 100 .
- the third party application is located on the device 100 (e.g., an app on the device). In a different embodiment, the third party application is located on an external network (e.g., the Internet).
- the third party application may be one of a notepad application, calendar application, contacts application, web browser application, microphone application, camera application, map application, navigation application, task list application, email application, text message application, telephone application, and bar code reader application.
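The association between focus element types and third party applications can be sketched as a simple mapping, with the transfer at block 890 handing the selected focus element's data input to the mapped application. The mapping below pairs the types and applications listed in the text; the pairings themselves are illustrative assumptions.

```python
# Illustrative mapping from focus element types to the third party
# applications listed above; the specific pairings are assumptions.
TYPE_TO_APP = {
    "notes": "notepad application",
    "events": "calendar application",
    "contacts": "contacts application",
    "websites": "web browser application",
    "audio recordings": "microphone application",
    "location": "map application",
    "photo": "camera application",
    "task": "task list application",
    "message": "text message application",
    "barcode": "bar code reader application",
}

def transfer(focus_element: dict) -> str:
    """Sketch of block 890: hand the selected focus element's data input to
    the application associated with its focus element type."""
    app = TYPE_TO_APP[focus_element["type"]]
    return f"open {app} with {focus_element['input']!r}"
```

For a website focus element, the transfer would route the data input to the web browser application, consistent with the example above.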
- FIGS. 5-8 illustrate selection of a focus element within the Focus application interface.
- the user can select a focus element at any other point in time, from any other interface (e.g., from a text message application) so long as the focus element has been identified, categorized, and generated.
- the user receives the information “Becky: 555-5555” from another person, via a received SMS message.
- the device will identify the data input, categorize the input based on a focus element type (e.g., contacts), and assign a graphical symbol (e.g., a face).
- a focus element has been generated even though the user is still in the text message application. Responsive to a focus element being generated, the user can take a number of additional actions.
- One action that the user can take is to store the focus element in a list, within the Focus application.
- Another action that the user can take is to immediately act on the focus element, through user selection. For example, if the user takes immediate action with the “Becky: 555-5555” focus element, which is a “contact” focus element, the device will transition directly to the contacts application for the device. The user does not have to go through the process of selecting the text message information, copying the text message information, leaving the text message application, opening the contacts application, and then saving the text message information. Rather, by taking action on the focus element, the user immediately transitions the information to the appropriate application (as dictated by the focus element type).
- Gestures for user selection of the focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
- different gestures can trigger different actions for focus elements. Actions may include saving the focus element to the focus element list, transitioning to the Focus application, saving the focus element and transitioning to the Focus application, taking immediate action with the focus element, saving the focus element and taking immediate action with the focus element, etc.
- different gestures can trigger different applications that might be related.
- swiping a first direction may send the focus element to the contacts application
- swiping a second direction may send the focus element to the telephone application
- swiping a third direction may send the focus element to the Focus application
- swipes could trigger different functionalities and applications. It should be appreciated that various different actions for focus elements can be combined as well.
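The gesture handling above can be sketched as a dispatch table that routes each gesture to an action or application. The particular directions and actions chosen below are assumptions for illustration.

```python
# Sketch of gesture dispatch: different gestures route a focus element to
# different actions or applications.
GESTURE_ACTIONS = {
    "swipe_right": "save to focus element list",
    "swipe_left": "send to contacts application",
    "swipe_up": "send to telephone application",
    "swipe_down": "send to Focus application",
}

def on_gesture(gesture: str) -> str:
    """Return the action triggered by a gesture; unmapped gestures do nothing."""
    return GESTURE_ACTIONS.get(gesture, "no action")
```

A dispatch table like this also makes it straightforward to combine actions (e.g., save the focus element and take immediate action) by mapping a gesture to a list of actions instead of a single one.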
Description
- This application claims priority to U.S. Provisional Application No. 62/183,613 titled SYSTEM AND METHODS FOR A USER INTERFACE AND DEVICE OPERATION filed on Jun. 23, 2015, and U.S. Provisional Application No. 62/184,476 titled SYSTEM AND METHODS FOR A USER INTERFACE AND DEVICE OPERATION filed on Jun. 25, 2015, the contents of which are expressly incorporated by reference in their entirety.
- The present disclosure relates to operation of computing devices with displays, and in particular, detection and handling of focus elements associated with an application.
- Information associated with electronic devices, and in particular personal electronic devices is both numerous and varied. Many electronic devices can be used to view or access hundreds, if not thousands (or more), of instances of applications and websites every day. Information is varied in that the type of information received and processed by personal electronic devices can be from any number of sources such as text, communication, location data, photographs, web browsing, etc. Beyond various types of information, information can vary in its degree of importance or priority; some pieces of information are more important than others. There exists a need for device configuration to allow for access to and storage of information based on priority or importance.
- Tracking information and inputs across a device can be overwhelming with conventional devices and methods. Conventional devices typically store information based on its type. For example, contacts may be stored in a particular application of a device. As such, with conventional devices, users actively determine particular types of information for storage. In addition, conventional applications are configured to receive only a particular type of input. For example, a photo application for a device is not capable of saving, processing, and interacting with text information, address information, contact information, etc. Each of these particular types of data inputs requires its own additional application, focused on those particular types of data inputs. Storage of data inputs, and accessibility of stored data inputs, becomes more and more difficult as the number of different types of data inputs grows. Furthermore, as the number of different types of data input grows, so too does the number of applications with which the user must interact. For these reasons, there exists a need for devices to detect information and allow for characterization. There also exists a need to address storage of, and access to, information within a device in a manner that addresses the user interface deficiencies of existing devices. While conventional computing devices allow for file folders and conventional mobile devices provide user interface layouts, these configurations are limited in the presentation and access of inputs. There is a desire for devices and methods that detect and characterize input to a device.
- Disclosed and claimed herein are methods and devices for detection and handling of focus elements associated with an application. In one embodiment, a method for detection and handling of focus elements associated with an application includes presenting, by a device, at least one graphical entry space for entry of focus elements on a display of the device. The method also includes detecting, by the device, an input to the at least one graphical entry space. The method also includes categorizing, by the device, the input. Categorizing includes determining a focus element type for the input and assigning the focus element type to the input. The method also includes creating, by the device, a focus element based on the input and the focus element type. The method also includes displaying, by the device, a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type. The graphical representation of the focus element is presented in a list of one or more focus elements.
- In one embodiment, the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.
- In one embodiment, the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.
- In one embodiment, the graphical entry space includes a text entry area on the display of the device.
- In one embodiment, the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.
- In one embodiment, categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.
- In one embodiment, categorizing further includes updating the graphical representation of the focus element.
- In one embodiment, creating includes storing, by the device, the focus element, the input, and the focus element type, in an input list.
- In one embodiment, displaying includes displaying the graphical representation of the focus element in addition to the plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.
- In one embodiment, the method also includes detecting a selection of the graphical representation of the focus element and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.
- Another embodiment is directed to a device including an input, a display configured for presentation of a user interface, and a controller configured to communicate with the input and the display. The controller is further configured to control presentation of at least one graphical entry space for entry of focus elements on the display. The controller is further configured to detect the input to the at least one graphical entry space. The controller is further configured to categorize the input, wherein categorizing includes determining a focus element type for the input and assigning the focus element type to the input. The controller is further configured to control creation of a focus element based on the input and the focus element type. The controller is further configured to control display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, wherein the graphical representation of the focus element is presented in a list of one or more focus elements.
- In one embodiment, the input is one of typed data, copy and paste data, audio data, image data, video data, and location data.
- In one embodiment, the focus element type is one of a note, event, contact, website, audio recording, location, photo, video, task, message, and barcode.
- In one embodiment, the graphical entry space includes a text entry area on the display of the device.
- In one embodiment, the graphical entry space includes a plurality of selectable elements, wherein each selectable element is associated with one of a plurality of predefined focus element types.
- In one embodiment, categorizing further includes matching at least a portion of the input to one or more data patterns associated with a plurality of predefined focus element types.
- In one embodiment, categorizing further includes updating the graphical representation of the focus element.
- In one embodiment, controlling creation includes storing the focus element, the input, and the focus element type, in an input list.
- In one embodiment, controlling display includes displaying the graphical representation of the focus element in addition to the plurality of previously created focus elements, wherein each of the plurality of previously created focus elements includes the input and the at least one graphical symbol identifying the focus element type.
- In one embodiment, controlling also includes detecting a selection of the graphical representation of the focus element and transferring an input for a selected focus element to an application, wherein the application is associated with the focus element type for the selected focus element.
- Other aspects, features, and techniques will be apparent to one skilled in the relevant art in view of the following detailed description of the embodiments.
- The features, objects, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout and wherein:
-
FIGS. 1A-1E depict graphical representations of a device with focus element entry according to one or more embodiments; -
FIG. 2 depicts a graphical representation of a device with a focus element and a list of focus elements according to one or more embodiments; -
FIG. 3 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments; -
FIG. 4 depicts a simplified diagram of a device according to one or more embodiments; -
FIG. 5 depicts a graphical representation of the focus application according to one or more embodiments; -
FIG. 6 depicts a graphical representation of the focus application according to one or more embodiments; -
FIG. 7 depicts a graphical representation of focus element types according to one or more embodiments; and -
FIG. 8 depicts a process of detection and handling of focus elements according to one or more embodiments. - Disclosed herein are methods and devices for the detection and handling of focus elements. One aspect of this disclosure relates to detection and handling of inputs, including data and inputs which can vary both in type and in priority. Personal electronic devices, such as phones, tablets, laptops, personal computers, televisions, gaming systems and other electronic display devices, can receive a massive amount of information or data input every day. Inputs to a device may be detected and one or more focus elements may be generated based on the inputs.
- In one embodiment, inputs relate to any particular information to be stored on a device. For example, inputs can include typed data, copy and paste data, audio data, image data, video data, and location data. Data can be user generated, received from other users, or obtained from other sources (e.g., from the Internet). In certain embodiments, inputs relate to entries and/or data supplied to a particular application of a device, such as a Focus application. In other embodiments, inputs in general to the device and/or data that is associated with Focus application types may be stored as focus elements. In one embodiment, various inputs are identified and categorized into particular focus element types. For example, focus element types can include note, event, contact, website, audio recording, location, photo, video, task, message, and barcode. In one embodiment, from this categorization, focus elements are generated.
- Another aspect of this disclosure relates to an application for detection and handling of focus elements. In one embodiment, the Focus application detects and handles focus elements that are processed on a device. Implementation may be system-wide, across both the device and all related applications on the device. Input detection is built into the system, such that the device can dynamically identify, categorize, and generate focus elements. In an embodiment, the Focus application is running underneath the typical user interface of the device. This allows the Focus application to operate while the device is running other applications on the user interface. In another embodiment, the Focus application runs as a full application on the device. Likewise, the user has the ability to transition between these different embodiments.
- Another aspect of this disclosure relates to a device including a display configured for presentation of the user interface, and a controller configured to communicate with the display. In an embodiment, the device detects and handles focus elements through implementation of the Focus application. In a different embodiment, the Focus application operates across multiple devices. For example, the user can save pertinent information, derived from inputs on a mobile device, to a networked list on the Focus application.
- Through implementation of the Focus application, a device can identify, for the user, different types of information. Generation and use of focus elements, including graphical symbols, allows the user to quickly assess information in an efficient manner. For example, the user is no longer required to self-identify whether text is merely text, or whether it includes a web address. Through the Focus application, the device will identify the important aspects of a given piece of information. As important information is identified, the device provides for recordation in a central location. The user is no longer burdened by having to save information to its respective application location (e.g., saving a picture to the photo application); likewise, the user is no longer burdened by having to transition between a multitude of different applications. The user can quickly and efficiently record important information to a centralized location. Providing a centralized location allows the user to recover previously saved information, without having to search for where it is located. Categorizing saved information into different focus element types allows for quick and efficient navigation. Additionally, saving all pertinent information to a centralized location creates a timeline of relevant content, as dictated by the user. In this way, the Focus application acts as an aggregator for important user-specific information.
- As used herein, a focus element is derived from information that is on a device. More particularly, focus elements include a data input, which is a portion of relevant data associated with a particular focus element type. The data input can be user generated or can be received from other sources, both within the device and from sources beyond the device. Graphically, a focus element will include the data input and a graphical symbol. The graphical symbol, like the focus element itself, is associated with a particular focus element type.
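The structure described above can be sketched as a simple record type. This is a hypothetical illustration, not the specification's implementation; the class and field names are assumptions, and the type-to-symbol pairings (calendar for events, face for contacts, moon for location, cloud for websites) follow the examples given for the figures.

```python
from dataclasses import dataclass

# Hypothetical mapping from focus element type to graphical symbol;
# the calendar/face/moon/cloud pairings follow the figure examples.
TYPE_SYMBOLS = {
    "event": "calendar",
    "contact": "face",
    "location": "moon",
    "website": "cloud",
    "note": "note",
}

@dataclass
class FocusElement:
    data_input: str    # the relevant portion of the detected input
    element_type: str  # one of the predefined focus element types

    @property
    def graphical_symbol(self) -> str:
        # The symbol is derived from the focus element type, as in FIG. 1C.
        return TYPE_SYMBOLS.get(self.element_type, "note")
```

In this sketch the symbol is computed from the type rather than stored, reflecting the description that the graphical symbol, like the focus element itself, is associated with a particular focus element type.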
- As used herein, a focus element type is a category of focus element. Focus elements are grouped into specific categories, or types. As an example, focus element types can include note, event, contact, website, audio recording, location, photo, video, task, message, and barcode. Focus element types are used with subsequent interaction of focus elements.
- As used herein, the Focus application is the source of identification, categorization, and generation for all focus elements. Likewise, the Focus application is the centralized location where the list of focus elements is stored. The Focus application may be run as a discrete application, accessed like any other typical application on a device. Alternatively, the Focus application may be constantly running underneath the typical user interface of the device. Updates to the Focus application can implement changes to the identification, categorization, and generation of focus elements. For example, application updates can add additional focus element types to the Focus application, add additional pattern matching parameters to improve categorization accuracy, etc.
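The update mechanism described above (adding focus element types and pattern-matching parameters over time) might be sketched as a small rule registry that updates append to. The names and rules below are hypothetical assumptions for illustration only.

```python
import re

# Hypothetical extensible registry: each rule pairs a focus element
# type with a data pattern; an application update simply registers more.
registry = []

def register(element_type: str, pattern: str) -> None:
    registry.append((element_type, re.compile(pattern, re.IGNORECASE)))

def classify(data_input: str) -> str:
    # First matching rule wins; unmatched input falls back to a plain note.
    for element_type, pattern in registry:
        if pattern.search(data_input):
            return element_type
    return "note"

# Initial rules shipped with the application (illustrative).
register("website", r"https?://|\bwww\.")

# A later update adds a new type and pattern without other code changes.
register("barcode", r"\b\d{12,13}\b")  # e.g., UPC-A / EAN-13 digit strings
```

Because categorization only consults the registry, adding a type or refining a pattern is a data change rather than a structural one, which matches the idea that updates can improve categorization accuracy.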
- As used herein, the terms “a” or “an” shall mean one or more than one. The term “plurality” shall mean two or more than two. The term “another” is defined as a second or more. The terms “including” and/or “having” are open ended (e.g., comprising). The term “or” as used herein is to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
- Reference throughout this document to “one embodiment,” “certain embodiments,” “an embodiment,” or similar term means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner on one or more embodiments without limitation.
- Referring now to the figures,
FIGS. 1A-1E depict graphical representations of a device with focus element entry according to one or more embodiments. As depicted in FIG. 1A, device 100 may be configured for presentation of focus elements on a display 101. The device 100 may be any one of a phone, tablet, laptop, personal computer, television, gaming system, and other electronic display device. Device 100 includes a controller (not shown). Display 101 may additionally include a plurality of inputs 115. Device 100 may further include a data entry field 110 configured to allow for the user to enter information into the device 100. - According to one embodiment,
device 100 is configured to detect and characterize inputs to the device, including text information (e.g., txt format), websites (e.g., html format), pictures (e.g., jpeg file), etc. Unlike typical applications for personal electronic devices, which are designed solely for particular types of data inputs (such as a photo application designed to save, process, and interact with pictures and accepting only a single type of input, e.g., images), device 100 may be configured to collect different types of input in a single application. In addition to collection, device 100 may be configured to detect and characterize the inputs. Device 100 is configured to process data inputs across a variety of different types. Device 100 can categorize different types of data inputs and provide a central location from which other applications can be conveniently accessed. - As depicted in
FIG. 1B, the display 101 may additionally include a data input interface 102 (e.g., keyboard, free-form pad, etc.). The data input interface 102 can be a part of the display 101 (e.g., touch-screen keyboard). Alternatively, the data input interface 102 can be separate from the display 101 (e.g., a physical keyboard). Information can be entered into the device 100, at the data entry field 110, via the data input interface 102 (e.g., typed data). For example, the text “office meeting” is added into the data entry field 110. Alternatively, information can be entered into the device 100 via the plurality of inputs 115 (e.g., audio data, image data, video data, location data, etc.). Likewise, information can be entered into the device 100 via copy and paste data. This information entered into the data entry field 110 is processed, by the device 100, into a focus element 120. - As depicted in
FIG. 1C, information entered into data entry field 110 was processed, by the device 100, into focus element 120 1. The focus element 120 1 includes a data input 121 1 and a graphical symbol 122 1, which is associated with a specific type of focus element. For example, the text “office meeting at 11:30 a.m.” is the data input 121 1 for the focus element 120 1. The symbol of a calendar is the graphical symbol 122 1 for focus element 120 1. In an embodiment, data input 121 1 is one of typed data, copy and paste data, audio data, image data, video data, and location data. Data input 121 1 can be user generated (e.g., via the keyboard), entered into the device 100 (e.g., via the plurality of inputs 115), or can be received from other sources (e.g., via a received SMS text message). - As depicted in
FIGS. 1D-1E, the device 100 will detect the data input 121 2, which is at least a portion of information in the data entry field 110. This data input 121 2 is subsequently used by the display device 100 to generate the focus element 120 2. For example, in FIG. 1D the information “frank's phone 416-358-8543” is entered into the data entry field 110. In FIG. 1E, this information is subsequently converted to the data input 121 2 of the focus element 120 2. - To generate the
focus element 120 2, the device 100 must categorize the data input 121 2. Categorizing includes determining a specific type of focus element for the data input 121 2. There are a number of specific types of focus elements with which the data input 121 2 may be associated. In an embodiment, focus element types include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode. A camera on the device 100 can dynamically determine whether something being viewed is a standard photo or, alternatively, a barcode. Barcode information is translated into ASCII or other textual information. To determine a specific type of focus element for the data input 121 2, the device 100 may match at least a portion of the input to one or more data patterns. These data patterns can be associated with a plurality of predefined focus element types. For example, names and phone numbers are associated with contacts; city/states are associated with locations, etc. In this way, through pattern matching, the device 100 can determine a specific type of focus element for the data input 121 2. Once a specific type of focus element has been determined for the data input 121 2, the device assigns the focus element type to the data input 121 2. Using this newly determined information, the data input 121 2 and the focus element type, the device generates a focus element 120 2. - Once generated, the
focus element 120 2 is depicted graphically on the device 100. The graphical representation of the focus element 120 2 includes the data input 121 2 and the graphical symbol 122 2. The graphical symbol 122 2 is used to identify the focus element type. More particularly, the graphical symbol 122 2 helps the user quickly identify the type of focus element through visual cues. For example, the user can quickly determine that focus element 120 1 is the focus element type of events, by seeing the graphical symbol 122 1 of a calendar. Likewise, the user can quickly determine that focus element 120 2 is the focus element type of contacts, by seeing the graphical symbol 122 2 of a face. - In an embodiment, as new information is added to the
data entry field 110, the focus element 120 may be updated. For example, as information is first added to the data entry field 110, the device 100 will dynamically identify, categorize, and generate focus elements (as described above and in greater detail below). Imagine the user enters the information “Becky . . . ” into the data entry field 110. The device 100 may determine that the data input 121 is a name: Becky. Through pattern matching, the device 100 may assign the focus element type of contact, and subsequently assign the graphical symbol 122 of a face, associated with the focus element type of contacts. Thus, a focus element 120 has been dynamically created, based on information the user has added to the data entry field 110. However, the user continues to enter more information into the same data entry field 110. Imagine that “Becky . . . ” is now changed, by the user, to “Becky, 1600 Pennsylvania Ave., Washington D.C.” The device 100 may determine that the data input 121, which was previously a name, is now actually an address. Through pattern matching, the device 100 may assign a new focus element type of location, and subsequently assign the graphical symbol 122 of a moon, associated with the focus element type of location. Thus, the focus element 120 that was dynamically created as a contact has now been dynamically re-categorized. The graphical representation of the focus element 120 is updated to reflect this dynamic re-categorization. - In certain embodiments, new information is not necessarily required to be entered into
data entry field 110 by the user, in order to be identified, categorized, and generated into focus element 120. User entry is only one way that data can be processed by device 100. Beyond data typed by the user, information can be entered into the device 100 through the plurality of inputs 115, including audio data, image data, video data, and location data. Likewise, information can be entered into the device through copy and paste data. In an exemplary scenario, device 100 can detect online article browsing using a web browser application for the device 100 including content associated with “The White House” in Washington, D.C. The online article may note in part that “The White House is located at 1600 Pennsylvania Ave.” In response to user action including copy and paste of the address “1600 Pennsylvania Ave.” to the device 100, the device 100 may determine that the data input is an address: 1600 Pennsylvania Ave. Through pattern matching, the device 100 may assign the focus element type of location, and subsequently assign the graphical symbol of a moon, associated with the focus element type of location. Thus, a focus element has been dynamically created, based on information input by the user via copy and paste with the device 100. In this way, the device can detect clipboard information to identify, categorize, and generate additional focus elements. Likewise, focus elements can be identified, categorized, and generated through audio data, image data, video data, and location data. This includes data received from other sources, such as the Internet, or from devices of other users. - With the
focus element 120 being presented graphically, the device 100 can take a number of different actions. In one embodiment, the focus element 120 is automatically stored, by the device 100, once it is created. In a different embodiment, the focus element 120 is not stored, by the device 100, until the user performs some additional action. For example, the user swipes the focus element 120 to the right to store the focus element 120 to the device 100. Alternative commands to store a focus element 120 can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. - In an embodiment, focus element identification, categorization, and generation can continually operate underneath the typical user interface of the device. For example, the interface could be a text message conversation; typically, a text message conversation will take place on a text message application. Though the
device 100 is running a text message application, the device 100 may still identify pertinent information and generate a focus element 120. Continual uninterrupted analysis of information, for identification, categorization, and generation of focus elements, is beneficial in many ways. Through continual analysis, the device 100 may automatically detect content that is categorized by one of the specific types of focus elements (e.g., addresses, contact information, websites, etc.). Categorization utilizes pattern-matching for all content automatically processed. By graphically categorizing content, including adding graphical symbols for each category, the user can instantly see whether information on the device is relevant (i.e., information is a specific type of focus element) or irrelevant (i.e., information is not a specific type of focus element). Furthermore, dynamic categorization enables the user to push content from specific applications (e.g., a text message application) into a centralized list. Likewise, as discussed below, the user has the ability to modify the categorization for any focus element. The list may be stored on a specific Focus application on the device 100. The Focus application is described in greater detail below with respect to FIGS. 5-7. -
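The pattern matching described above (names and phone numbers associated with contacts, addresses with locations, and dynamic re-categorization as an entry changes) can be sketched as follows. The regular expressions are illustrative assumptions; the specification does not enumerate its actual data patterns, and a real device would use far richer rules.

```python
import re

# Hypothetical data patterns for a few predefined focus element types;
# earlier entries take precedence, so an address outranks a bare name.
PATTERNS = [
    ("contact",  re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")),                    # phone number
    ("website",  re.compile(r"https?://|\bwww\.", re.IGNORECASE)),                   # URL
    ("location", re.compile(r"\b\d+\s+\w+(?:\s\w+)*\s+(?:Ave|St|Blvd|Rd)\b\.?")),    # street address
    ("event",    re.compile(r"\b\d{1,2}:\d{2}\s*(?:a\.?m\.?|p\.?m\.?)", re.IGNORECASE)),  # time of day
    ("contact",  re.compile(r"^[A-Z][a-z]+\b")),                                     # bare capitalized name
]

def categorize(data_input: str) -> str:
    """Match at least a portion of the input against the predefined
    patterns; anything unmatched falls back to a plain note."""
    for element_type, pattern in PATTERNS:
        if pattern.search(data_input):
            return element_type
    return "note"
```

Re-running `categorize` each time the data entry field changes reproduces the dynamic re-categorization example: "Becky" alone matches only the name rule (contact), while "Becky, 1600 Pennsylvania Ave., Washington D.C." matches the higher-priority address rule (location).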
FIG. 2 depicts a graphical representation of a display device with a focus element and a list of focus elements according to one or more embodiments. Device 200 may be configured for presentation of focus elements on a display 201 that includes a data input interface 202. Display 201 can additionally include a plurality of inputs 215. Device 200 may store a focus element 220, including the data input 221 and a graphical symbol 222 for the focus element type, in an input list. For example, focus element 220 can include a graphical symbol 222 of a face, which is associated with the focus element type of contacts. Often, this input list is displayed on a specific application: the Focus application. Device 200 may display the focus element 220 in the Focus application. Additionally, by displaying the graphical representation of the focus element 220, the device may additionally display a plurality of previously created focus elements 230 in the Focus application. Each of the plurality of previously created focus elements 230 may, likewise, have a data input and a graphical symbol. For example, the plurality of previously created focus elements 230 each have a graphical symbol (e.g., cloud, note, moon), which is associated with a respective focus element type (e.g., websites, notes, location). It should be appreciated that the graphical symbols used herein are merely examples. A number of other illustrative graphics and symbols could be used. - By providing the
focus element 220 and the plurality of previously created focus elements 230 in a central location (e.g., the list on the Focus application), the user is able to view all information that the user has saved as important. Because focus elements are displayed to include graphical symbols, the user can quickly navigate among all saved information to find specific information. Likewise, and as discussed below, the user is able to interact with all information that the user has saved as important, from one centrally organized location. In this sense, the Focus application acts as a gateway to a number of related applications. The Focus application itself can be invoked, by the user, by clicking an icon on the device 200 that is associated with the Focus application (e.g., an app icon). Alternatively, the Focus application can be invoked via swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. -
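A minimal sketch of such a centralized list view follows; the data, helper name, and symbol labels are hypothetical, with each stored element rendered alongside its graphical symbol so the user can scan the list by type, as in FIG. 2.

```python
# Hypothetical input list as the Focus application might hold it; each
# entry pairs a data input with the symbol for its focus element type.
input_list = [
    {"symbol": "face",  "input": "frank's phone 416-358-8543"},
    {"symbol": "cloud", "input": "www.example.com"},
    {"symbol": "moon",  "input": "1600 Pennsylvania Ave."},
]

def render_list(elements) -> str:
    """Render each focus element as its graphical symbol plus data input,
    one per line, newest first, as a text stand-in for the graphical list."""
    return "\n".join(f"[{e['symbol']}] {e['input']}" for e in reversed(elements))
```

Rendering newest-first reflects the idea that the centralized list acts as a timeline of relevant content.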
FIG. 3 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments. Although process 300 is described with reference to the flowchart illustrated in FIG. 3, it will be appreciated that many other processes of performing the acts associated with the process 300 may be used. The process 300 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. Process 300 may be performed by a device such as device 100 of FIG. 1A. - At
block 305, at least one graphical entry space for entry of focus elements is presented on a display of the device. In some embodiments, this graphical entry space is an area where the user can type or paste information. In different embodiments, the input to the graphical entry space can be either generated or received copy and paste data, audio data, image data, video data, or location data. At block 310, an input to the at least one graphical entry space is detected. The input may be any type of data in the graphical entry space, as described above.
block 315. Categorization may include determining a focus element type for the input. Pre-defined focus element types may include notes, events, contacts, websites, audio recordings, location, photo, video, task, message, and barcode. The particular focus element type assigned to an input is determined, by the device, through pattern matching. For example, the device will match at least a portion of the input to one of the pre-defined focus element types. The device may also assign the focus element type to the input. While assignment is made by the device, it should be noted that the user can modify the assignment, for any input, to a different focus element type. - At
block 320, the device will create a focus element, based on the input and the focus element type. At this stage, the device now recognizes a new element: a focus element, which is based on the categorization of the input, as discussed above. The device may store the input, the focus element type, and the graphical symbol in an input list. In an example embodiment, this input list is accessed, modified, and interacted with via a Focus application that is running on the device. - At
block 325, the device will display a graphical representation of the focus element. This could include the input and at least one graphical symbol identifying the focus element type. In an example embodiment, this graphical representation of the focus element is presented dynamically, in another application on the device. For example, the focus element could be shown by the device while the user is accessing a text message application. In a different example embodiment, this graphical representation of the focus element is presented alone. In a different example embodiment, this graphical representation of the focus element is presented in a list. The list may include a plurality of previously created focus elements. The list may be accessed through the Focus application on the device. -
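Blocks 315 through 325 can be summarized in a short, self-contained pipeline sketch. All names here are hypothetical, and the single phone-number check stands in for whatever full set of data patterns the device applies.

```python
import re

# Hypothetical sketch of blocks 315-325: categorize the input, create a
# focus element, store it in the input list, and display it.
input_list = []  # centralized list held by the Focus application

def categorize(data_input: str) -> str:
    # Block 315 (simplified): one illustrative pattern; a real device
    # would match against many predefined focus element types.
    if re.search(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", data_input):
        return "contact"
    return "note"

def create_focus_element(data_input: str) -> dict:
    # Block 320: the element stores the input and its assigned type.
    element = {"input": data_input, "type": categorize(data_input)}
    input_list.append(element)
    return element

def display(element: dict) -> str:
    # Block 325: graphical representation = symbol for the type + input.
    symbol = {"contact": "face"}.get(element["type"], "note")
    return f"[{symbol}] {element['input']}"
```

The three helpers mirror the three blocks one-to-one: categorization produces the type, creation binds input and type together in the stored list, and display derives the graphical symbol from the type.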
FIG. 4 depicts a simplified diagram of a device according to one or more embodiments. Device 400 may relate to one or more devices for providing an application, such as a Focus application. In one embodiment, device 400 relates to a device including a display, such as a phone, tablet, laptop, personal computer, television, gaming system, or other electronic display device. As shown in FIG. 4, device 400 includes controller 405, user interface 410, communications unit 415, and memory 420. -
Controller 405 may be configured to execute code stored in memory 420 for operation of device 400, including presentation of a graphical user interface. Controller 405 may include a processor and/or one or more processing elements. In one embodiment, controller 405 may include one or more of hardware, software, firmware, and/or processing components in general. According to one embodiment, controller 405 may be configured to perform one or more processes described herein. Controller 405 may be configured to run a Focus application, the Focus application including one or more focus elements and a Focus application user interface configuration. -
User interface 410 may be configured to receive one or more commands via an input/output (I/O) interface 425, which may include one or more inputs or terminals to receive user commands. When device 400 relates to a display device, I/O interface 425 may receive one or more remote control commands. Likewise, user interface 410 may be configured to receive one or more commands from a display 430. In one embodiment, commands from the display 430 are sent to the controller 405 via user interaction with a touch screen. -
Communications unit 415 may be configured for wired and/or wireless communication with one or more network elements, such as servers. Memory 420 may include non-transitory RAM and/or ROM memory for storing executable instructions, operating instructions, and content for display. -
FIG. 5 depicts a graphical representation of the Focus application according to one or more embodiments. Focus element identification, categorization, and generation can be triggered, on the device 500, by the Focus application. The Focus application provides the user with the ability to view all focus elements, in a list. Because focus elements typically represent key pieces of information, the user may find it useful to view and interact with all key information from one central location. The Focus application can include a focus menu 510, which acts as a precursor to viewing all focus elements in a list. - The
focus menu 510 provides the user with the opportunity to quickly access the plurality of inputs 515 1 to 515 5, and subsequently create a focus element by entering data into the device 500. In an embodiment, clicking the location input button 515 1 would enter data related to the user's current location into the device 500 as a focus element. In another embodiment, clicking the photo input button 515 2 would enter picture data into the device 500 as a focus element. In another embodiment, clicking the video input button 515 3 would enter video data into the device 500 as a focus element. In another embodiment, clicking the microphone input button 515 4 would input audio data into the device 500 as a focus element. In another embodiment, clicking the note input button 515 5 would input text data into the device 500 as a note. It should be appreciated that each of the plurality of inputs 515 1 to 515 5 discussed herein is merely an example. A number of other illustrative graphics and symbols could be used. -
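The correspondence between focus menu input buttons and the kind of data each one enters could be sketched as a lookup table; the keys, data labels, and function names below are illustrative only, not part of the disclosure:

```python
# Illustrative mapping of focus menu input buttons to the kind of
# data each one enters into the device as a focus element.
INPUT_BUTTONS = {
    "location":   "current-location-data",
    "photo":      "picture-data",
    "video":      "video-data",
    "microphone": "audio-data",
    "note":       "text-data",
}

def on_button_click(button, create_focus_element):
    # 'create_focus_element' stands in for the device routine that
    # turns the entered data into a focus element.
    return create_focus_element(INPUT_BUTTONS[button])
```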
FIG. 6 depicts a graphical representation of the Focus application according to one or more embodiments. Focus element identification, categorization, and generation can be triggered, on the device 600, by the Focus application. The Focus application, as shown graphically on the display 601, provides the user with the ability to view all focus elements, in a list. The Focus application displays, to the user, a focus element 620, including the data input 621 and a graphical symbol 622 for the focus element type, in an input list. For example, focus element 620 can include a graphical symbol 622 of a calendar, which is associated with the focus element type of events. Because focus elements typically represent key pieces of information, the user may find it useful to view and interact with all key information from one central location. - The Focus application interface allows the user to add additional focus elements directly into the Focus application (e.g., into the list of focus elements). The user may enter information into the Focus application via the
data entry field 610, such that the device 600 will dynamically identify, categorize, and generate a focus element. Likewise, the user may enter information into the Focus application via the plurality of inputs 615, such that the device 600 will dynamically identify, categorize, and generate a focus element. In addition, the user is still able to add focus elements from other applications (e.g., a text message application), because the device is constantly analyzing information for identification, categorization, and generation of focus elements, as previously described above. The user can add information in a number of different ways (e.g., typed data, copy and paste data, audio data, image data, video data, and location data). - As shown in
FIG. 6, device 600 includes a plurality of previously created focus elements 630 in a list. Device 600 also includes the data entry field 610. The data entry field 610 is a text entry area on the device 600. The user may enter text into the data entry field 610 (e.g., "Buy Stamps"). In an example embodiment, the device 600 will identify information in the data entry field 610 via pattern matching (as discussed above) in order to categorize a focus element type. In a different example embodiment, the user has the ability to set or change the focus element type for the data entry field 610 through use of the plurality of inputs 615. Data in the data entry field 610 can be categorized by the user, using the plurality of inputs 615 to represent each of the focus element types. Likewise, the user can select one of the plurality of previously created focus elements 630 and re-categorize the element, using the plurality of inputs 615. In this way, the user has the capability to override the pattern matching typically done by the device 600. - More specifically, beyond allowing the user to add additional focus elements directly into a list, the Focus application interface gives the user the ability to categorize focus elements, and re-categorize previous focus elements. While pattern matching is an ideal way to dynamically categorize information, the user may prefer certain information or focus elements to be categorized in a different way. The Focus application interface gives the user the ability to customize focus elements to the user's individual preferences. Finally, the Focus application interface allows the user to access, through the user selection command, third party applications that are linked to focus elements and associated with specific focus element types.
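The disclosure does not specify how pattern matching is performed. One plausible sketch uses regular expressions, with a user override that simply replaces the device's assignment; all patterns and names here are hypothetical:

```python
import re

# Hypothetical data patterns for a few pre-defined focus element types.
FOCUS_PATTERNS = [
    ("website", re.compile(r"https?://\S+|www\.\S+")),
    ("contact", re.compile(r"\b\w+:\s*\d{3}-\d{4}\b")),        # e.g. "Becky: 555-5555"
    ("event",   re.compile(r"\b\d{1,2}/\d{1,2}(/\d{2,4})?\b")),  # a date-like token
]

def categorize(text):
    # Match at least a portion of the input against each pattern;
    # return the first focus element type that matches, else None.
    for element_type, pattern in FOCUS_PATTERNS:
        if pattern.search(text):
            return element_type
    return None

def user_override(assigned_type, chosen_type):
    # The user's chosen type always wins over the device's assignment.
    return chosen_type if chosen_type is not None else assigned_type
```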
By providing focus elements in a central location, and allowing the user to take action with respect to individual focus elements, the Focus application effectively links important information to the third party applications.
- In an example embodiment, the
device 600 will use the “note” as the default focus element type, if pattern matching does not identify another focus element type. Likewise, with dynamic identification, categorization, and generation, the device may re-categorize a default “note” based on additional information that is added by the user (e.g., the note changes to maps, once the user types an address). As previously mentioned, the user may have the capability to override the pattern matching through a plurality of selectable elements via the Focus application. -
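The default-to-note behavior and dynamic re-categorization described above could be sketched as follows; the `matcher` callable is a stand-in for the device's pattern matching, and all names are illustrative:

```python
def assign_type(text, matcher):
    # "note" is the default focus element type when pattern matching
    # identifies nothing more specific.
    return matcher(text) or "note"

def recategorize(current_type, updated_text, matcher):
    # As the user adds information, matching is re-run; a default
    # note may become a more specific type (e.g. a location once an
    # address appears), while an explicit non-note type is kept.
    new_type = matcher(updated_text)
    if current_type == "note" and new_type is not None:
        return new_type
    return current_type
```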
FIG. 7 depicts a graphical representation of focus element types according to one or more embodiments. In an embodiment, focus element 701 is categorized as the focus element type of a note. In an embodiment, focus element 702 is categorized as the focus element type of an event. In an embodiment, focus element 703 is categorized as the focus element type of a contact. In an embodiment, focus element 704 is categorized as the focus element type of a website. In an embodiment, focus element 705 is categorized as the focus element type of an audio recording. In an embodiment, focus element 706 is categorized as the focus element type of a location. In an embodiment, focus element 707 is categorized as the focus element type of a photo. In an embodiment, focus element 708 is categorized as the focus element type of a task. In an embodiment, focus element 709 is categorized as the focus element type of a message. In an embodiment, focus element 710 is categorized as the focus element type of a barcode. -
FIG. 8 depicts a graphical representation of a process of detection and handling of focus elements according to one or more embodiments. Although process 800 is described with reference to the flowchart illustrated in FIG. 8, it will be appreciated that many other processes of performing the acts associated with the process 800 may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, and some of the blocks described are optional. The process 800 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. Process 800 may be performed by a device such as device 100 of FIG. 1A. - The
device 100 will detect an input at block 810. The input may be any type of data. In an embodiment, the input is received in at least one graphical entry space for entry of focus elements on a display of the device. In some embodiments, this graphical entry space is an area where the user can type or paste information. In different embodiments, the graphical entry space can accept generated or received data, such as copy-and-paste data, audio data, image data, video data, or location data. - The device will match at least a portion of the input with a data pattern at
block 820. This matching process requires a determination of whether the input can be assigned one of the focus element types. In an embodiment, pre-defined focus element types may include notes, events, contacts, websites, audio recordings, locations, photos, videos, tasks, messages, and barcodes. The particular focus element type assigned to an input is determined, by the device 100, through pattern matching. For example, the device 100 will match at least a portion of the input to one of the pre-defined focus element types. - The
device 100 determines at decision block 825 whether the input is assignable. Responsive to determining that the input is assignable, the device 100 assigns, to the input, a focus element type and a graphical symbol at block 830. The graphical symbol is a symbol associated with a specific type of focus element. Responsive to determining that the input is not assignable, the device 100 skips block 830. In an embodiment, if the input is not assignable, the device 100 assigns, to the input, a focus element type of a "note"; in this embodiment, the device 100 will use the "note" as the default focus element type if pattern matching does not identify another focus element type. In an embodiment, while assignment is made by the device 100, it should be noted that the user can modify the assignment, for any input, to a different focus element type. - At block 840, the
device 100 displays the input with the graphical symbol. In an embodiment, display of the input with the graphical symbol is characterized as display of the focus element. At this stage, the device 100 now recognizes a new element: a focus element, which is based on the categorization of the input, as discussed above. - The device will identify a user storage command at
block 850. In an embodiment, the user swipes the focus element on the device 100 to the right to store the focus element to the device 100. Alternative commands to store a focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. At block 860, the device 100 will store the input, the focus element type, and the graphical symbol in an input list. In an example embodiment, this input list is accessed, modified, and interacted with via a Focus application that is running on the device 100. - At
block 870, the device will display the input list. In an embodiment, display of the input list includes display of a graphical representation of the focus element, including the input and at least one graphical symbol identifying the focus element type, and display of additional focus elements. In an example embodiment, this graphical representation of the focus element is presented dynamically, in another application on the device 100. For example, the focus element could be shown by the device while the user is accessing a text message application. In a different example embodiment, this graphical representation of the focus element is presented alone. In a different example embodiment, this graphical representation of the focus element is presented in a list. The list may include a plurality of previously created focus elements. The list may be accessed through the Focus application on the device 100. - The
device 100 identifies a user selection command at block 880. In an embodiment, the user swipes the focus element to the right on the device 100, for user selection of the focus element. Alternative commands for selection of a focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons. It should be noted that user selection (e.g., block 880), as described herein, is different from the user storage command (e.g., block 850). - Through a user selection command, the
device 100 will transfer the focus element to a third party application at block 890. The device 100, post-user selection command, transfers the data input for the selected focus element to a third party application. The third party application is associated with the focus element type for the selected focus element. For example, if the selected focus element is a website focus element type, the third party application associated with the selected focus element would be a web-browser application on the device 100. The device 100 displays the application associated with the selected focus element on the display of the device 100. In an embodiment, the third party application is located on the device 100 (e.g., an app on the device). In a different embodiment, the third party application is located on an external network (e.g., the Internet). - As examples, the third party application may be one of a notepad application, calendar application, contacts application, web browser application, microphone application, camera application, map application, navigation application, task list application, email application, text message application, telephone application, and bar code reader application.
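The association between focus element types and third party applications could be modeled as a lookup table; the mapping below follows the example pairings given in the text, with hypothetical application identifiers and a `launch` stand-in for the platform's app-launching mechanism:

```python
# Hypothetical pairings of focus element types with the third party
# applications that receive their data input on user selection.
TYPE_TO_APP = {
    "note":    "notepad",
    "event":   "calendar",
    "contact": "contacts",
    "website": "web-browser",
    "task":    "task-list",
}

def transfer(element_type, input_data, launch):
    # On a user selection command, hand the focus element's data
    # input to the application associated with its type; the
    # application may reside on the device or on an external network.
    app = TYPE_TO_APP[element_type]
    launch(app, input_data)
```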
- While
FIGS. 5-8 illustrate selection of a focus element within the Focus application interface, it should be appreciated that the user can select a focus element at any other point in time, from any other interface (e.g., from a text message application) so long as the focus element has been identified, categorized, and generated. In an example embodiment, suppose that the user is in a text message application and the user types "Becky: 555-5555." In other embodiments, the user receives the information "Becky: 555-5555" from another person, via a received SMS message. Regardless of the information source, the device will identify the data input, categorize the input based on a focus element type (e.g., contacts), and assign a graphical symbol (e.g., a face). At this point, a focus element has been generated even though the user is still in the text message application. Responsive to a focus element being generated, the user can take a number of additional actions. One action that the user can take is to store the focus element in a list, within the Focus application. Another action that the user can take is to immediately act on the focus element, through user selection. For example, if the user takes immediate action with the "Becky: 555-5555" focus element, which is a "contact" focus element, the device will transition directly to the contacts application for the device. The user does not have to go through the process of selecting the text message information, copying the text message information, leaving the text message application, opening the contacts application, and then saving the text message information. Rather, by taking action on the focus element, the user immediately transitions the information to the appropriate application (as dictated by the focus element type). - Gestures for user selection of the focus element can include swipes, flicks, taps, on-display gestures, off-to-on display gestures, off-display gestures, and off-display buttons.
In other related embodiments, different gestures can trigger different actions for focus elements. Actions may include saving the focus element to the focus element list, transitioning to the Focus application, saving the focus element and transitioning to the Focus application, taking immediate action with the focus element, saving the focus element and taking immediate action with the focus element, etc. Likewise, different gestures can trigger different applications that might be related. For example, with a “contact” focus element, swiping a first direction may send the focus element to the contacts application, swiping a second direction may send the focus element to the telephone application, swiping a third direction may send the focus element to the Focus application, etc. With other types of focus elements, swipes could trigger different functionalities and applications. It should be appreciated that various different actions for focus elements can be combined as well.
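The gesture-to-action routing in the example above (different swipe directions for a "contact" focus element) amounts to a per-type dispatch table; the gesture names and application identifiers below are illustrative only:

```python
# One possible dispatch table for a "contact" focus element: each
# gesture routes the element to a different application, following
# the first/second/third swipe-direction example in the text.
CONTACT_GESTURES = {
    "swipe-first-direction":  "contacts",
    "swipe-second-direction": "telephone",
    "swipe-third-direction":  "focus-application",
}

def handle_gesture(gesture, dispatch_table):
    # Return the application the gesture routes to, or None when the
    # gesture is not bound for this focus element type.
    return dispatch_table.get(gesture)
```

Other focus element types would carry their own dispatch tables, so the same gesture can trigger different functionality per type.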
- It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components. These components may be provided as a series of computer instructions on any conventional computer-readable medium, including RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be configured to be executed by a processor, which, when executing the series of computer instructions, performs or facilitates the performance of all or part of the disclosed methods and procedures.
- It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.
- While this disclosure has been particularly shown and described with references to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the claimed embodiments.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/053,501 US20170024086A1 (en) | 2015-06-23 | 2016-02-25 | System and methods for detection and handling of focus elements |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201562183613P | 2015-06-23 | 2015-06-23 | |
| US201562184476P | 2015-06-25 | 2015-06-25 | |
| US15/053,501 US20170024086A1 (en) | 2015-06-23 | 2016-02-25 | System and methods for detection and handling of focus elements |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20170024086A1 true US20170024086A1 (en) | 2017-01-26 |
Family
ID=57600988
Family Applications (8)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/053,501 Abandoned US20170024086A1 (en) | 2015-06-23 | 2016-02-25 | System and methods for detection and handling of focus elements |
| US15/133,870 Active 2036-12-06 US10241649B2 (en) | 2015-06-23 | 2016-04-20 | System and methods for application discovery and trial |
| US15/133,846 Abandoned US20160381287A1 (en) | 2015-06-23 | 2016-04-20 | System and methods for controlling device operation and image capture |
| US15/133,859 Active 2036-12-30 US10310706B2 (en) | 2015-06-23 | 2016-04-20 | System and methods for touch target presentation |
| US15/169,634 Active 2037-01-10 US10331300B2 (en) | 2015-06-23 | 2016-05-31 | Device and methods for control including presentation of a list of selectable display elements |
| US15/169,642 Abandoned US20160378279A1 (en) | 2015-06-23 | 2016-05-31 | System and methods for device control |
| US15/190,145 Abandoned US20160378281A1 (en) | 2015-06-23 | 2016-06-22 | System and methods for navigation bar presentation and device control |
| US15/190,144 Active 2037-05-03 US10222947B2 (en) | 2015-06-23 | 2016-06-22 | Methods and devices for presenting dynamic information graphics |
Family Applications After (7)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/133,870 Active 2036-12-06 US10241649B2 (en) | 2015-06-23 | 2016-04-20 | System and methods for application discovery and trial |
| US15/133,846 Abandoned US20160381287A1 (en) | 2015-06-23 | 2016-04-20 | System and methods for controlling device operation and image capture |
| US15/133,859 Active 2036-12-30 US10310706B2 (en) | 2015-06-23 | 2016-04-20 | System and methods for touch target presentation |
| US15/169,634 Active 2037-01-10 US10331300B2 (en) | 2015-06-23 | 2016-05-31 | Device and methods for control including presentation of a list of selectable display elements |
| US15/169,642 Abandoned US20160378279A1 (en) | 2015-06-23 | 2016-05-31 | System and methods for device control |
| US15/190,145 Abandoned US20160378281A1 (en) | 2015-06-23 | 2016-06-22 | System and methods for navigation bar presentation and device control |
| US15/190,144 Active 2037-05-03 US10222947B2 (en) | 2015-06-23 | 2016-06-22 | Methods and devices for presenting dynamic information graphics |
Country Status (1)
| Country | Link |
|---|---|
| US (8) | US20170024086A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| USD930698S1 (en) * | 2014-06-01 | 2021-09-14 | Apple Inc. | Display screen or portion thereof with graphical user interface |
Families Citing this family (35)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10155168B2 (en) | 2012-05-08 | 2018-12-18 | Snap Inc. | System and method for adaptable avatars |
| USD738889S1 (en) * | 2013-06-09 | 2015-09-15 | Apple Inc. | Display screen or portion thereof with animated graphical user interface |
| US10339365B2 (en) | 2016-03-31 | 2019-07-02 | Snap Inc. | Automated avatar generation |
| US10261666B2 (en) * | 2016-05-31 | 2019-04-16 | Microsoft Technology Licensing, Llc | Context-independent navigation of electronic content |
| US10360708B2 (en) | 2016-06-30 | 2019-07-23 | Snap Inc. | Avatar based ideogram generation |
| US10432559B2 (en) | 2016-10-24 | 2019-10-01 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
| CN106354418B (en) * | 2016-11-16 | 2019-07-09 | 腾讯科技(深圳)有限公司 | A kind of control method and device based on touch screen |
| US10454857B1 (en) | 2017-01-23 | 2019-10-22 | Snap Inc. | Customized digital avatar accessories |
| US10484675B2 (en) * | 2017-04-16 | 2019-11-19 | Facebook, Inc. | Systems and methods for presenting content |
| US11893647B2 (en) | 2017-04-27 | 2024-02-06 | Snap Inc. | Location-based virtual avatars |
| CN111010882B (en) | 2017-04-27 | 2023-11-03 | 斯纳普公司 | Location privacy relevance on map-based social media platforms |
| US10212541B1 (en) | 2017-04-27 | 2019-02-19 | Snap Inc. | Selective location-based identity communication |
| US11340925B2 (en) | 2017-05-18 | 2022-05-24 | Peloton Interactive Inc. | Action recipes for a crowdsourced digital assistant system |
| US11043206B2 (en) | 2017-05-18 | 2021-06-22 | Aiqudo, Inc. | Systems and methods for crowdsourced actions and commands |
| US11397558B2 (en) * | 2017-05-18 | 2022-07-26 | Peloton Interactive, Inc. | Optimizing display engagement in action automation |
| US11056105B2 (en) | 2017-05-18 | 2021-07-06 | Aiqudo, Inc | Talk back from actions in applications |
| WO2018213788A1 (en) | 2017-05-18 | 2018-11-22 | Aiqudo, Inc. | Systems and methods for crowdsourced actions and commands |
| US10572107B1 (en) * | 2017-06-23 | 2020-02-25 | Amazon Technologies, Inc. | Voice communication targeting user interface |
| US11379550B2 (en) * | 2017-08-29 | 2022-07-05 | Paypal, Inc. | Seamless service on third-party sites |
| WO2019047189A1 (en) * | 2017-09-08 | 2019-03-14 | 广东欧珀移动通信有限公司 | Message display method and device and terminal |
| WO2019047184A1 (en) * | 2017-09-08 | 2019-03-14 | 广东欧珀移动通信有限公司 | Information display method, apparatus, and terminal |
| CN107547750B (en) * | 2017-09-11 | 2019-01-25 | Oppo广东移动通信有限公司 | Terminal control method, device and storage medium |
| US11307760B2 (en) * | 2017-09-25 | 2022-04-19 | Huawei Technologies Co., Ltd. | Terminal interface display method and terminal |
| CN110494835A (en) * | 2017-12-20 | 2019-11-22 | 华为技术有限公司 | A control method and device |
| CN110442407B (en) * | 2018-05-03 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Application program processing method and device |
| US10936163B2 (en) * | 2018-07-17 | 2021-03-02 | Methodical Mind, Llc. | Graphical user interface system |
| CN109254719A (en) * | 2018-08-24 | 2019-01-22 | 维沃移动通信有限公司 | A kind of processing method and mobile terminal of display interface |
| US11423073B2 (en) * | 2018-11-16 | 2022-08-23 | Microsoft Technology Licensing, Llc | System and management of semantic indicators during document presentations |
| KR102657519B1 (en) * | 2019-02-08 | 2024-04-15 | 삼성전자주식회사 | Electronic device for providing graphic data based on voice and operating method thereof |
| CN110213729B (en) * | 2019-05-30 | 2022-06-24 | 维沃移动通信有限公司 | Message sending method and terminal |
| IL294364B1 (en) * | 2019-12-27 | 2025-12-01 | Methodical Mind Llc | Graphical user interface system |
| CA3168639A1 (en) | 2020-01-22 | 2021-07-29 | Methodical Mind, Llc. | Graphical user interface system |
| CN112433661B (en) * | 2020-11-18 | 2022-02-11 | 上海幻电信息科技有限公司 | Interactive object selection method and device |
| USD1003935S1 (en) * | 2021-03-09 | 2023-11-07 | Huawei Technologies Co., Ltd. | Display screen or portion thereof with animated graphical user interface |
| USD1003911S1 (en) * | 2021-06-04 | 2023-11-07 | Apple Inc. | Display or portion thereof with graphical user interface |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6839669B1 (en) * | 1998-11-05 | 2005-01-04 | Scansoft, Inc. | Performing actions identified in recognized speech |
| US7146381B1 (en) * | 1997-02-10 | 2006-12-05 | Actioneer, Inc. | Information organization and collaboration tool for processing notes and action requests in computer systems |
| US20110202864A1 (en) * | 2010-02-15 | 2011-08-18 | Hirsch Michael B | Apparatus and methods of receiving and acting on user-entered information |
| US20120158472A1 (en) * | 2010-12-21 | 2012-06-21 | Research In Motion Limited | Contextual customization of content display on a communication device |
| US20140062862A1 (en) * | 2012-08-31 | 2014-03-06 | Omron Corporation | Gesture recognition apparatus, control method thereof, display instrument, and computer readable medium |
| US20150019997A1 (en) * | 2013-07-10 | 2015-01-15 | Samsung Electronics Co., Ltd. | Apparatus and method for processing contents in portable terminal |
Family Cites Families (91)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5708709A (en) * | 1995-12-08 | 1998-01-13 | Sun Microsystems, Inc. | System and method for managing try-and-buy usage of application programs |
| US5886698A (en) * | 1997-04-21 | 1999-03-23 | Sony Corporation | Method for filtering search results with a graphical squeegee |
| US7434177B1 (en) * | 1999-12-20 | 2008-10-07 | Apple Inc. | User interface for providing consolidation and access |
| US7730401B2 (en) * | 2001-05-16 | 2010-06-01 | Synaptics Incorporated | Touch screen with user interface enhancement |
| AU2002359001A1 (en) * | 2001-12-28 | 2003-07-24 | Access Co., Ltd. | Usage period management system for applications |
| US7787908B2 (en) * | 2002-03-19 | 2010-08-31 | Qualcomm Incorporated | Multi-call display management for wireless communication devices |
| WO2004042515A2 (en) * | 2002-11-01 | 2004-05-21 | Pocketpurchase, Inc. | Method and system for online software purchases |
| JP4215549B2 (en) * | 2003-04-02 | 2009-01-28 | 富士通株式会社 | Information processing device that operates in touch panel mode and pointing device mode |
| JP5437562B2 (en) * | 2003-08-06 | 2014-03-12 | コーニンクレッカ フィリップス エヌ ヴェ | How to present multiple items |
| US20050055309A1 (en) * | 2003-09-04 | 2005-03-10 | Dwango North America | Method and apparatus for a one click upgrade for mobile applications |
| US8271495B1 (en) * | 2003-12-17 | 2012-09-18 | Topix Llc | System and method for automating categorization and aggregation of content from network sites |
| US20060063590A1 (en) * | 2004-09-21 | 2006-03-23 | Paul Abassi | Mechanism to control game usage on user devices |
| US8102973B2 (en) * | 2005-02-22 | 2012-01-24 | Raytheon Bbn Technologies Corp. | Systems and methods for presenting end to end calls and associated information |
| US9727082B2 (en) * | 2005-04-26 | 2017-08-08 | Apple Inc. | Back-side interface for hand-held devices |
| US8818331B2 (en) * | 2005-04-29 | 2014-08-26 | Jasper Technologies, Inc. | Method for enabling a wireless device for geographically preferential services |
| US7605804B2 (en) * | 2005-04-29 | 2009-10-20 | Microsoft Corporation | System and method for fine cursor positioning using a low resolution imaging touch screen |
| GB0522079D0 (en) * | 2005-10-29 | 2005-12-07 | Griffin Ian | Mobile game or program distribution |
| EP1796000A1 (en) * | 2005-12-06 | 2007-06-13 | International Business Machines Corporation | Method, system and computer program for distributing software products in trial mode |
| US7958456B2 (en) * | 2005-12-23 | 2011-06-07 | Apple Inc. | Scrolling list with floating adjacent index symbols |
| US20070233782A1 (en) * | 2006-03-28 | 2007-10-04 | Silentclick, Inc. | Method & system for acquiring, storing, & managing software applications via a communications network |
| US9395905B2 (en) * | 2006-04-05 | 2016-07-19 | Synaptics Incorporated | Graphical scroll wheel |
| WO2007138429A2 (en) * | 2006-05-25 | 2007-12-06 | Shuki Binyamin | Method and system for efficient remote application provision |
| US8611521B2 (en) * | 2006-07-07 | 2013-12-17 | Verizon Services Corp. | Systems and methods for multi-media control of audio conferencing |
| JP2010507861A (en) * | 2006-10-23 | 2010-03-11 | ウィジン オ | Input device |
| US7961860B1 (en) * | 2006-11-22 | 2011-06-14 | Securus Technologies, Inc. | Systems and methods for graphically displaying and analyzing call treatment operations |
| US20090037287A1 (en) * | 2007-07-31 | 2009-02-05 | Ahmad Baitalmal | Software Marketplace and Distribution System |
| US11126321B2 (en) * | 2007-09-04 | 2021-09-21 | Apple Inc. | Application menu user interface |
| KR20100133945A (en) * | 2007-11-05 | 2010-12-22 | 비스토 코포레이션 | Service management system for providing service-related message prioritization in mobile clients |
| WO2009075602A1 (en) * | 2007-12-13 | 2009-06-18 | Motorola, Inc. | Scenarios creation system for a mobile device |
| JPWO2010032354A1 (en) * | 2008-09-22 | 2012-02-02 | NEC Corporation | Image object control system, image object control method and program |
| US8650290B2 (en) * | 2008-12-19 | 2014-02-11 | Openpeak Inc. | Portable computing device and method of operation of same |
| US8370762B2 (en) * | 2009-04-10 | 2013-02-05 | Cellco Partnership | Mobile functional icon use in operational area in touch panel devices |
| US20100280892A1 (en) * | 2009-04-30 | 2010-11-04 | Alcatel-Lucent Usa Inc. | Method and system for targeted offers to mobile users |
| US20100277422A1 (en) * | 2009-04-30 | 2010-11-04 | Microsoft Corporation | Touchpad display |
| US8346847B2 (en) * | 2009-06-03 | 2013-01-01 | Apple Inc. | Installing applications based on a seed application from a separate device |
| US8448136B2 (en) * | 2009-06-25 | 2013-05-21 | Intuit Inc. | Creating a composite program module in a computing ecosystem |
| US20110087975A1 (en) * | 2009-10-13 | 2011-04-14 | Sony Ericsson Mobile Communications Ab | Method and arrangement in a data |
| US8370142B2 (en) * | 2009-10-30 | 2013-02-05 | Zipdx, Llc | Real-time transcription of conference calls |
| US9912721B2 (en) * | 2010-05-14 | 2018-03-06 | Highlight Broadcast Network, Llc | Systems and methods for providing event-related video sharing services |
| US20110295708A1 (en) * | 2010-05-25 | 2011-12-01 | beonSoft Inc. | Systems and methods for providing software rental services to devices connected to a network |
| US8650558B2 (en) * | 2010-05-27 | 2014-02-11 | Rightware, Inc. | Online marketplace for pre-installed software and online services |
| US20110307354A1 (en) * | 2010-06-09 | 2011-12-15 | Bilgehan Erman | Method and apparatus for recommending applications to mobile users |
| US9864501B2 (en) * | 2010-07-30 | 2018-01-09 | Apaar Tuli | Displaying information |
| US9936333B2 (en) * | 2010-08-10 | 2018-04-03 | Microsoft Technology Licensing, Llc | Location and contextual-based mobile application promotion and delivery |
| US8615772B2 (en) * | 2010-09-28 | 2013-12-24 | Qualcomm Incorporated | Apparatus and methods of extending application services |
| WO2012075629A1 (en) * | 2010-12-08 | 2012-06-14 | Nokia Corporation | User interface |
| US8612874B2 (en) * | 2010-12-23 | 2013-12-17 | Microsoft Corporation | Presenting an application change through a tile |
| US20120209586A1 (en) * | 2011-02-16 | 2012-08-16 | Salesforce.Com, Inc. | Contextual Demonstration of Applications Hosted on Multi-Tenant Database Systems |
| US20120246588A1 (en) * | 2011-03-21 | 2012-09-27 | Viacom International, Inc. | Cross marketing tool |
| JP2012212230A (en) * | 2011-03-30 | 2012-11-01 | Toshiba Corp | Electronic apparatus |
| US8826190B2 (en) * | 2011-05-27 | 2014-09-02 | Google Inc. | Moving a graphical selector |
| US8656315B2 (en) * | 2011-05-27 | 2014-02-18 | Google Inc. | Moving a graphical selector |
| GB201109339D0 (en) * | 2011-06-03 | 2011-07-20 | Firestorm Lab Ltd | Computing device interface |
| US9053750B2 (en) * | 2011-06-17 | 2015-06-09 | At&T Intellectual Property I, L.P. | Speaker association with a visual representation of spoken content |
| US8577737B1 (en) * | 2011-06-20 | 2013-11-05 | A9.Com, Inc. | Method, medium, and system for application lending |
| US20130016129A1 (en) * | 2011-07-14 | 2013-01-17 | Google Inc. | Region-Specific User Input |
| JP5295328B2 (en) * | 2011-07-29 | 2013-09-18 | KDDI Corporation | User interface device capable of input by screen pad, input processing method and program |
| DE102011118367B4 (en) * | 2011-08-24 | 2017-02-09 | Deutsche Telekom Ag | Method for authenticating a telecommunication terminal comprising an identity module at a server device of a telecommunication network, use of an identity module, identity module and computer program |
| JP2013073330A (en) * | 2011-09-27 | 2013-04-22 | Nec Casio Mobile Communications Ltd | Portable electronic apparatus, touch area setting method and program |
| US8713560B2 (en) * | 2011-12-22 | 2014-04-29 | Sap Ag | Compatibility check |
| TWI470475B (en) * | 2012-04-17 | 2015-01-21 | Pixart Imaging Inc | Electronic system |
| CN102707882A (en) * | 2012-04-27 | 2012-10-03 | Shenzhen Ruigao Information Technology Co., Ltd. | Method for converting control modes of application program of touch screen with virtual icons and touch screen terminal |
| US20130326499A1 (en) * | 2012-05-31 | 2013-12-05 | Microsoft Corporation | Automatically installing and removing recommended applications |
| JP6071107B2 (en) * | 2012-06-14 | 2017-02-01 | Hiroyuki Ikeda | Mobile device |
| KR20140016454A (en) * | 2012-07-30 | 2014-02-10 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling drag for moving object of mobile terminal comprising touch screen |
| US9280789B2 (en) * | 2012-08-17 | 2016-03-08 | Google Inc. | Recommending native applications |
| KR20140033839A (en) * | 2012-09-11 | 2014-03-19 | Samsung Electronics Co., Ltd. | Method for user's interface using one hand in terminal having touchscreen and device thereof |
| JP5703422B2 (en) * | 2012-09-13 | 2015-04-22 | Panasonic Intellectual Property Corporation of America | Portable electronic devices |
| US20140109016A1 (en) * | 2012-10-16 | 2014-04-17 | Yu Ouyang | Gesture-based cursor control |
| US20140184503A1 (en) * | 2013-01-02 | 2014-07-03 | Samsung Display Co., Ltd. | Terminal and method for operating the same |
| US20140278860A1 (en) * | 2013-03-15 | 2014-09-18 | Samsung Electronics Co., Ltd. | Content delivery system with content sharing mechanism and method of operation thereof |
| US9477404B2 (en) * | 2013-03-15 | 2016-10-25 | Apple Inc. | Device, method, and graphical user interface for managing concurrently open software applications |
| US20140330647A1 (en) * | 2013-05-03 | 2014-11-06 | International Business Machines Corporation | Application and service selection for optimized promotion |
| US20140344041A1 (en) * | 2013-05-20 | 2014-11-20 | Cellco Partnership D/B/A Verizon Wireless | Triggered mobile checkout application |
| US8786569B1 (en) * | 2013-06-04 | 2014-07-22 | Morton Silverberg | Intermediate cursor touchscreen protocols |
| US9098366B1 (en) * | 2013-07-11 | 2015-08-04 | Sprint Communications Company L.P. | Virtual pre-installation of applications |
| EP3036923A4 (en) * | 2013-08-22 | 2017-05-10 | Inc. Sensoriant | Method and system for addressing the problem of discovering relevant services and applications that are available over the internet or other communications network |
| KR102009279B1 (en) * | 2013-09-13 | 2019-08-09 | LG Electronics Inc. | Mobile terminal |
| WO2015100746A1 (en) * | 2014-01-06 | 2015-07-09 | Huawei Device Co., Ltd. | Application program display method and terminal |
| CN104793774A (en) * | 2014-01-20 | 2015-07-22 | MediaTek Singapore Pte. Ltd. | Electronic device control method |
| KR102105961B1 (en) * | 2014-05-13 | 2020-05-28 | LG Electronics Inc. | Mobile terminal and method for controlling the same |
| US20150331589A1 (en) * | 2014-05-15 | 2015-11-19 | Todd KAWAKITA | Circular interface for navigating applications and an authentication mechanism on a wearable device |
| JP6328797B2 (en) * | 2014-05-30 | 2018-05-23 | Apple Inc. | Transition from using one device to using another device |
| US9749205B2 (en) * | 2014-06-27 | 2017-08-29 | Agora Lab, Inc. | Systems and methods for visualizing a call over network |
| KR20160026141A (en) * | 2014-08-29 | 2016-03-09 | Samsung Electronics Co., Ltd. | Controlling Method based on a communication status and Electronic device supporting the same |
| US20170220782A1 (en) * | 2014-09-08 | 2017-08-03 | Ali ALSANOUSI | Mobile interface platform systems and methods |
| US10176306B2 (en) * | 2014-12-16 | 2019-01-08 | JVC Kenwood Corporation | Information processing apparatus, evaluation method, and storage medium for evaluating application program |
| US10169474B2 (en) * | 2015-06-11 | 2019-01-01 | International Business Machines Corporation | Mobile application discovery using an electronic map |
| US10628559B2 (en) * | 2015-06-23 | 2020-04-21 | Microsoft Technology Licensing, Llc | Application management |
| KR20170029329A (en) * | 2015-09-07 | 2017-03-15 | LG Electronics Inc. | Mobile terminal and method for controlling the same |
| US20170337214A1 (en) * | 2016-05-18 | 2017-11-23 | Linkedin Corporation | Synchronizing nearline metrics with sources of truth |
2016
- 2016-02-25 US US15/053,501 patent/US20170024086A1/en not_active Abandoned
- 2016-04-20 US US15/133,870 patent/US10241649B2/en active Active
- 2016-04-20 US US15/133,846 patent/US20160381287A1/en not_active Abandoned
- 2016-04-20 US US15/133,859 patent/US10310706B2/en active Active
- 2016-05-31 US US15/169,634 patent/US10331300B2/en active Active
- 2016-05-31 US US15/169,642 patent/US20160378279A1/en not_active Abandoned
- 2016-06-22 US US15/190,145 patent/US20160378281A1/en not_active Abandoned
- 2016-06-22 US US15/190,144 patent/US10222947B2/en active Active
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7146381B1 (en) * | 1997-02-10 | 2006-12-05 | Actioneer, Inc. | Information organization and collaboration tool for processing notes and action requests in computer systems |
| US6839669B1 (en) * | 1998-11-05 | 2005-01-04 | Scansoft, Inc. | Performing actions identified in recognized speech |
| US20110202864A1 (en) * | 2010-02-15 | 2011-08-18 | Hirsch Michael B | Apparatus and methods of receiving and acting on user-entered information |
| US20120158472A1 (en) * | 2010-12-21 | 2012-06-21 | Research In Motion Limited | Contextual customization of content display on a communication device |
| US20140062862A1 (en) * | 2012-08-31 | 2014-03-06 | Omron Corporation | Gesture recognition apparatus, control method thereof, display instrument, and computer readable medium |
| US20150019997A1 (en) * | 2013-07-10 | 2015-01-15 | Samsung Electronics Co., Ltd. | Apparatus and method for processing contents in portable terminal |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| USD930698S1 (en) * | 2014-06-01 | 2021-09-14 | Apple Inc. | Display screen or portion thereof with graphical user interface |
Also Published As
| Publication number | Publication date |
|---|---|
| US20160378278A1 (en) | 2016-12-29 |
| US20160378293A1 (en) | 2016-12-29 |
| US10331300B2 (en) | 2019-06-25 |
| US20160379395A1 (en) | 2016-12-29 |
| US10222947B2 (en) | 2019-03-05 |
| US10241649B2 (en) | 2019-03-26 |
| US20160381287A1 (en) | 2016-12-29 |
| US10310706B2 (en) | 2019-06-04 |
| US20160378281A1 (en) | 2016-12-29 |
| US20160378279A1 (en) | 2016-12-29 |
| US20160378321A1 (en) | 2016-12-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20170024086A1 (en) | System and methods for detection and handling of focus elements | |
| AU2014408285B2 (en) | Device, method, and graphical user interface for presenting and installing applications | |
| CN106095449B (en) | Method and apparatus for providing user interface of portable device | |
| US11416319B1 (en) | User interface for searching and generating graphical objects linked to third-party content | |
| US9622016B2 (en) | Invisiblemask: a tangible mechanism to enhance mobile device smartness | |
| US20140109012A1 (en) | Thumbnail and document map based navigation in a document | |
| JP2025010532A (en) | Method and system for displaying chat threads | |
| CN108139862A (en) | multi-window keyboard | |
| CN110268377B (en) | Apparatus and method for providing user assistance in a computing system | |
| CN103348313A (en) | User interface incorporating sliding panels for listing records and presenting record content | |
| US20250181821A1 (en) | User interface with command-line link creation for generating graphical objects linked to third-party content | |
| CN112083866A (en) | Expression image generation method and device | |
| US11243679B2 (en) | Remote data input framework | |
| US20130113741A1 (en) | System and method for searching keywords | |
| CN108885535A (en) | Multi-window virtual keyboard | |
| CN114846493B (en) | Associating content items with captured images of meeting content | |
| CN111459351A (en) | Quick search method and device for terminal and computer storage medium | |
| US10162492B2 (en) | Tap-to-open link selection areas | |
| US20250190097A1 (en) | Method and application for fast sharing of images between mobile electronic devices using an innovative platform and artificial intelligence | |
| CN120390002A (en) | Information sharing method, device, electronic device and storage medium | |
| KR20170073538A (en) | Method and apparatus for saving web content | |
| CN121326215A (en) | Information prediction methods, devices and electronic equipment | |
| EP3007063A1 (en) | Device, method, and graphical user interface for presenting and installing applications | |
| HK40009709A (en) | Layered content selection | |
| WO2018157361A1 (en) | Interaction control method and terminal |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: JAMDEO CANADA LTD., CANADA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIRPAL, SANJIV;DE PAZ, ALEXANDER;SOTO, SALVADOR;REEL/FRAME:038361/0167
Effective date: 20160420
Owner name: HISENSE ELECTRIC CO., LTD, CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIRPAL, SANJIV;DE PAZ, ALEXANDER;SOTO, SALVADOR;REEL/FRAME:038361/0167
Effective date: 20160420
Owner name: HISENSE INTERNATIONAL CO., LTD, CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIRPAL, SANJIV;DE PAZ, ALEXANDER;SOTO, SALVADOR;REEL/FRAME:038361/0167
Effective date: 20160420
Owner name: HISENSE USA CORP., GEORGIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIRPAL, SANJIV;DE PAZ, ALEXANDER;SOTO, SALVADOR;REEL/FRAME:038361/0167
Effective date: 20160420
| AS | Assignment |
Owner name: QINGDAO HISENSE ELECTRONICS CO., LTD., CHINA
Free format text: CHANGE OF NAME;ASSIGNOR:HISENSE ELECTRIC CO., LTD.;REEL/FRAME:045546/0277
Effective date: 20170822
| AS | Assignment |
Owner name: QINGDAO HISENSE ELECTRONICS CO., LTD., CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAMDEO CANADA LTD.;HISENSE USA CORP.;HISENSE INTERNATIONAL CO., LTD.;SIGNING DATES FROM 20181114 TO 20181220;REEL/FRAME:047923/0254
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |