US20150194165A1 - Limiting notification interruptions - Google Patents
Limiting notification interruptions
- Publication number
- US20150194165A1 (U.S. application Ser. No. 14/619,775)
- Authority
- US
- United States
- Prior art keywords
- notification
- computing device
- time period
- audio
- pattern
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3655—Timing of guidance instructions
Definitions
- the disclosure is directed to a computing device comprising a microphone, an output device, and one or more processors.
- the one or more processors are operable to monitor audio detected by the microphone, wherein the audio is detected during at least a first time period and a second time period.
- the one or more processors are further operable to determine that a notification is scheduled for output by the computing device during a first time period and determine that a pattern of audio detected during the first time period is indicative of human speech.
- the one or more processors are further operable to delay output of the notification during the first time period.
- the one or more processors are further operable to determine that a pattern of audio detected during a second time period is not indicative of human speech and to output at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period.
- the disclosure is directed to a computer-readable storage medium encoded with instructions that, when executed by one or more processors of a computing device, cause the one or more processors to determine that a notification is scheduled for output by the computing device during a first time period.
- the instructions further cause the one or more processors to determine that a pattern of audio detected during the first time period is indicative of human speech.
- the instructions further cause the one or more processors to delay output of the notification during the first time period.
- the instructions further cause the one or more processors to determine that a pattern of audio detected during a second time period is not indicative of human speech and to output at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period.
- FIG. 1 is a conceptual diagram illustrating an example computing device that is configured to delay output of a notification, in accordance with one or more aspects of the present disclosure.
- FIG. 2 is a block diagram illustrating an example computing device configured to delay output of a notification, in accordance with one or more aspects of the present disclosure.
- FIG. 3 is a block diagram illustrating an example computing device that outputs graphical content for display at a remote device, in accordance with one or more techniques of the present disclosure.
- FIG. 4 is a flowchart illustrating an example operation of a computing device configured to delay output of a notification based at least partially on determination of a pattern of detected audio that indicates human speech, in accordance with one or more aspects of the present disclosure.
- a notification may be any output (e.g., visual, auditory, tactile, etc.) that a computing device provides to convey information.
- a computing device may delay outputting a notification at a first time because the computing device determines that detected ambient noise has a pattern indicative of human speech. For example, the computing device does not output an audio notification during a time period in which the computing device determines that speech is taking place in proximity to the computing device.
- the computing device delays outputting the audio notification until the computing device determines that audio data detected during a second time period do not have a pattern indicative of human speech. When the computing device determines there is no detected audio pattern indicative of human speech, the computing device outputs at least part of the notification.
- FIG. 1 is a conceptual diagram illustrating an example computing device 2 that is configured to delay output of a notification, in accordance with one or more aspects of the present disclosure.
- computing device 2 is illustrated as a mobile computing device.
- computing device 2 may be a desktop computer, a mainframe computer, a tablet computer, a personal digital assistant (PDA), a laptop computer, a portable gaming device, a portable media player, an e-book reader, a watch, a television platform, a global positioning system (GPS) receiver, or another type of computing device.
- Computing device 2 includes one or more user interface devices (UIDs) 4 .
- UID 4 of computing device 2 may function as an input device and as an output device for computing device 2 .
- UID 4 may be implemented using various technologies. For instance, UID 4 may function as an input device using a presence-sensitive display, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive display technology.
- UID 4 may function as an output device using any one or more of a liquid crystal display (LCD), plasma display, dot matrix display, light emitting diode (LED) display, organic light-emitting diode (OLED) display, electronic ink, or similar monochrome or color display capable of outputting visible information, such as to a user of computing device 2 .
- UID 4 of computing device 2 may include a presence-sensitive display that may receive both tactile and motion-based input from, for example, a user of computing device 2 .
- UID 4 may receive indications of the tactile user input by detecting one or more tap and/or non-tap gestures from a user of computing device 2 (e.g., the user touching or pointing to one or more locations of UID 4 with a finger or a stylus pen or the user holding computing device 2 by touching UID 4 ).
- the presence-sensitive display of UID 4 may present output to a user.
- UID 4 may present the output as a user interface which may be related to functionality configured into computing device 2 .
- UID 4 may present various user interfaces of applications (e.g., an electronic message application, an Internet browser application, etc.) executing at computing device 2 .
- a user of computing device 2 may interact with one or more of these applications to perform a function with computing device 2 through the respective user interface of each application.
- UID 4 may present various notifications, which may provide information related to one or more applications executing at computing device 2 .
- Computing device 2 may also include one or more microphones 7 and one or more speakers 8.
- Microphone 7 detects audio incident upon microphone 7, such as, for example, audio in an environment of computing device 2.
- Microphone 7 may detect an ambient noise level of the audio. Further, microphone 7 may detect audio over one or more time periods. In some examples, microphone 7 may continuously monitor for audio. In some examples, microphone 7 monitors audio only when an ambient noise level is above a noise level threshold.
- Computing device 2 may include user interface device (UID) module 6, notification module 10, and application modules 12A-12N (collectively referred to herein as "application modules 12"). Modules 6, 10, and 12 may perform the operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and executing on computing device 2. Computing device 2 may execute modules 6, 10, and 12 with multiple processors. Computing device 2 may execute modules 6, 10, and 12 as a virtual machine executing on underlying hardware.
- user interface 14 includes element 20 that is an image of a map having a route imposed thereon.
- Element 22 indicates an upcoming direction that, if taken, follows the route indicated in element 20 .
- element 22 depicts an icon of an arrow pointed to the right, a direction “turn right” and a distance of 0.7 kilometers (“km”) to the turn.
- Computing device 2 may generate an audio notification that includes information indicative of the direction, such as a computer-generated voice that says “turn right.”
- the example elements shown in FIG. 1 are merely included for illustrative purposes; computing device 2 may display other, different elements.
- User interface 14 also includes an icon 24 that is, in the example of FIG. 1 , an image of a microphone. The presence of icon 24 in user interface 14 may indicate that microphone 7 is powered on and currently capable of detecting audio.
- Graphical element 26 functions as a visual signal strength indicator for microphone 7 .
- graphical element 26 indicates, through the color of the bars shown in graphical element 26, the strength or volume of the audio that microphone 7 receives.
- graphical element 26 is showing four out of five bars with a dark color, indicating that microphone 7 received an ambient noise level that is relatively high. This may be because, for example, a user of computing device 2 may be participating in a conversation near microphone 7 .
- User interface 14 may also include graphical element 28 that provides an indication that an audio notification is delayed.
- graphical element 28 may be a pop-up window that provides information such as “audio output delayed due to detected noise” that is displayed during a time period when computing device 2 is delaying output of the audio notification.
- computing device 2 has determined that the time period (or the time instant, as the case may be) is not suitable for outputting an audio notification.
- computing device 2 may have determined that an ambient noise level is too high at the time, such that the audio notification likely could not be heard if computing device 2 outputted it at that time via one or more speakers 8 .
- computing device 2 temporarily delays output of the audio notification until computing device 2 determines that the ambient noise level is low enough, that the detected audio is not indicative of human speech, or that a time period for maximum delay of the notification has expired.
- an option may be provided to turn off the delay notification functionality of computing device 2 .
- User interface 14 includes graphical element 30 that provides an option to override the delay of the notification.
- graphical element 30 is an interactive graphical element, such as, for example, a touch-target that may be toggled by touching user interface 14 at approximately the location of graphical element 30 .
- computing device 2 may output the audio notification despite the ambient noise level or a detected pattern indicative of human speech.
- the delay notification functionality of computing device 2 may be turned on or off, and/or settings may be adjusted for how computing device 2 performs the delay notification functionality. For example, a maximum time period for which a notification may be delayed may be set by a user.
- user interface 14 may not include any of one or more of graphical elements 20 - 30 . In other examples, user interface 14 may include other graphical elements.
- UID module 6 may act as an intermediary between various components of computing device 2 to make determinations based on input detected by UID 4 and to generate output presented by UID 4. For instance, UID module 6 may receive an indication of user input received at user interface 14. UID module 6 may receive, as an input, a sequence of touch events generated from user input detected at UID 4. UID module 6 may determine, based on the location components in the sequence of touch events, which of one or more location components approximate a selection of one or more graphical elements (e.g., UID module 6 may determine the location of one or more of the touch events corresponds to an area of UID 4 that presents graphical element 30 used to override the delayed output of the audio notification).
- UID module 6 may provide, as input to notification module 10 , the sequence of touch events received at user interface 14 , including the locations where UID 4 presents each of the graphical elements 20 - 30 .
- UID module 6 may receive, as an output from notification module 10 , instructions for updating user interface 14 based on the indication of user input received at user interface 14 .
- UID module 6 may update user interface 14 to reflect the status of audio notifications.
- UID module 6 may cause UID 4 to present an updated user interface 14 .
- Computing device 2 may further include one or more application modules 12 .
- Application modules 12 may include any other application that computing device 2 may execute in addition to the other modules specifically described in this disclosure.
- application modules 12 may include a web browser, a media player, a file system, a navigation program, a communication program, or any other number of applications or features that computing device 2 may execute.
- Application modules 12 may determine that notifications should be provided related to a particular application.
- an application run via an application module 12 determines a notification should be provided.
- application modules 12 may detect one or more events that trigger output of a notification.
- An event requiring a notification as detected by application module 12 may include, for example, receiving an email message, receiving a text message, receiving a phone call, a clock alarm, a calendar reminder, etc.
- the corresponding notifications may be audio, visual, haptic feedback, or any other form of output.
- notification module 10 may interact with application module 12 of computing device 2 to delay output of the notifications in certain circumstances, as described herein.
- Application module 12 may provide one or more signals to notification module 10 responsive to application module 12 determining a notification to be outputted.
- the one or more signals may include information of what the notification should include and what time to output the notification.
- Notification module 10 delays output of the notification in response to determining that the time period in which the notification is meant to be output overlaps with a time period in which detected conditions indicate an inopportune time to output the notification.
- computing device 2 outputs user interface 14 for display at a presence-sensitive display.
- FIG. 1 illustrates an example user interface 14 that provides graphical elements 20 - 30 .
- UID module 6 may generate user interface 14 and include graphical elements 20 - 30 in user interface 14 .
- UID module 6 may send information to UID 4 that includes instructions for displaying user interface 14 at a presence-sensitive device of UID 4 .
- UID 4 may receive the information and cause the presence-sensitive device of UID 4 to present user interface 14 including one or more of graphical elements 20 - 30 .
- notification module 10 of computing device 2 may receive information from microphone 7 about detected audio signals, such as an ambient noise level or a pattern of audio. Based on information about the detected audio, notification module 10 may determine whether to delay a notification. Notification module 10 may further determine, based on additional detected audio, whether to output the delayed notification.
- the techniques of the disclosure may enable a computing device to delay outputting notifications to avoid outputting the notifications at inopportune times.
- computing device 2 configured with techniques of the disclosure may delay outputting an audio notification during a time period in which computing device 2 determines that circumstances may prevent the audio from being heard.
- the techniques described herein provide an increased likelihood that computing device 2 would output the audio at a time when a user may hear and understand the notification.
- the techniques described herein may prevent an action performed by a user with computing device 2 , such as a scroll function, from being interrupted with a notification.
- computing device 2 using techniques described herein may provide an improved user experience.
- FIG. 2 is a block diagram illustrating an example computing device 2 configured to delay output of a notification, in accordance with one or more aspects of the present disclosure.
- Computing device 2 of FIG. 2 is described below within the context of FIG. 1 .
- FIG. 2 illustrates only one particular example of computing device 2 , and many other examples of computing device 2 may be used in other instances.
- Other examples of computing device 2 may include a subset of the components included in example computing device 2 or may include additional components not shown in FIG. 2 .
- computing device 2 includes UID 4 , one or more processors 40 , one or more input devices 42 , one or more communication units 44 , one or more output devices 46 , one or more sensors 48 , one or more power sources 52 , and one or more storage devices 60 .
- Storage devices 60 of computing device 2 also include UID module 6 , notification module 10 , application modules 12 A- 12 N, speech patterns database 62 , and one or more operating systems 64 .
- One or more communication channels 50 may interconnect each of the components 4 , 40 , 42 , 44 , 46 , 48 , 52 , and 60 for inter-component communications (physically, communicatively, and/or operatively).
- communication channels 50 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
- One or more input devices 42 of computing device 2 may receive input. Examples of input are tactile, motion, audio, and video input.
- Input devices 42 of computing device 2 include a presence-sensitive display, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone, or any other type of device for detecting input from, for example, a human or machine.
- an input device 42 is a microphone, such as microphone 7 of FIG. 1 .
- One or more output devices 46 of computing device 2 may generate output. Examples of output are tactile, audio, and video output.
- Output devices 46 of computing device 2 include a presence-sensitive display, speaker, cathode ray tube (CRT) monitor, liquid crystal display (LCD), motor, actuator, electromagnet, piezoelectric sensor, or any other type of device for generating output to a human or machine.
- Output devices 46 may utilize one or more of a sound card or video graphics adapter card to produce auditory or visual output, respectively.
- an output device 46 is a speaker, such as speaker 8 of FIG. 1 .
- One or more communication units 44 of computing device 2 may communicate with external devices via one or more networks by transmitting and/or receiving network signals on the one or more networks.
- the one or more networks may be, for example, the Internet.
- Computing device 2 may use communication unit 44 to transmit and/or receive radio signals on a radio network such as a cellular radio network.
- communication units 44 may transmit and/or receive satellite signals on a Global Navigation Satellite System (GNSS) network such as the Global Positioning System (GPS).
- Examples of communication unit 44 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send or receive information.
- Other examples of communication units 44 may include Bluetooth®, GPS, 3G, 4G, and Wi-Fi® radios found in mobile devices as well as Universal Serial Bus (USB) controllers.
- UID 4 may include functionality of one or more input devices 42 and/or output devices 46 .
- UID 4 may be or may include a presence-sensitive display 54 .
- presence-sensitive display 54 may detect an object at and/or near presence-sensitive display 54 .
- Presence-sensitive display 54 may detect an object, such as a finger or stylus that is within a specified range of presence-sensitive display 54 .
- Presence-sensitive display 54 may determine a location (e.g., an (x,y) coordinate) of presence-sensitive display 54 at which the object was detected.
- a detectable object may be, for example, graphical element 30 of FIG. 1 .
- Presence-sensitive display 54 may determine the location of presence-sensitive display 54 selected by a user's finger using capacitive, inductive, and/or optical recognition techniques. In some examples, presence-sensitive display 54 provides output to a user using tactile, audio, or video stimuli as described with respect to output device 46 . In the example of FIG. 2 , UID 4 presents a user interface (such as user interface 14 of FIG. 1 ) at presence-sensitive display 54 of UID 4 .
- While illustrated as an internal component of computing device 2, UID 4 also represents an external component that shares a data path with computing device 2 for transmitting and/or receiving input and output. For instance, in one example, UID 4 represents a built-in component of computing device 2 located within and physically connected to the external packaging of computing device 2 (e.g., a screen on a mobile phone). In another example, UID 4 represents an external component of computing device 2 located outside and physically separated from the packaging of computing device 2 (e.g., a monitor, a projector, etc. that shares a wired and/or wireless data path with a tablet computer).
- One or more sensor devices 48 of computing device 2 may detect input, which may be user input.
- Example sensor devices 48 include an accelerometer, a gyroscope, an ambient light sensor, a proximity sensor, a barometer, magnetometer, or other sensor devices.
- Computing device 2 may include one or more of each sensor device.
- User input detected by sensor devices 48 may include data related to acceleration, orientation, light intensity, proximity of an object to computing device 2 , an ambient pressure, magnetic field strength and polarity, or other sensor reading.
- a sensor device 48 may be an input device 42.
- One or more sensor devices 48 may detect user input.
- a gyroscope may detect changes in orientation when computing device 2 is handled by a user interacting with computing device 2 .
- Computing device 2 may include one or more power devices 52 , which may provide power to computing device 2 .
- power device 52 includes one or more batteries included in computing device 2 .
- the one or more batteries may be rechargeable and provide power to computing device 2 .
- the one or more batteries may, in some examples, be made from nickel-cadmium, lithium-ion, or other suitable material.
- power device 52 may be a power source capable of providing stored power or voltage from another power source, which may be external to computing device 2 .
- One or more storage devices 60 within computing device 2 may store information for processing during operation of computing device 2 (e.g., speech pattern database 62 of computing device 2 may store data related to characteristics of audio indicative of human speech, as well as corresponding threshold information, accessed by notification module 10 during execution at computing device 2).
- storage device 60 functions as a temporary memory, meaning that storage device 60 is not used for long-term storage.
- Storage devices 60 on computing device 2 may be configured for short-term storage of information as volatile memory and therefore do not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
- Storage devices 60 also include one or more computer-readable storage media. Storage devices 60 may be configured to store larger amounts of information than volatile memory. Storage devices 60 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage devices 60 may store program instructions and/or data associated with UID module 6 , notification module 10 , and application modules 12 .
- processors 40 may implement functionality and/or execute instructions within computing device 2 .
- processors 40 on computing device 2 may receive and execute instructions stored by storage devices 60 that execute the functionality of UID module 6 , notification module 10 , and application modules 12 . These instructions executed by processors 40 may cause computing device 2 to store information within storage devices 60 during program execution.
- Processors 40 may execute instructions in UID module 6 and notification module 10 to cause one or more of application modules 12 to delay output of notifications at inopportune times (such as when a user is speaking or interacting with computing device 2 ).
- computing device 2 of FIG. 2 may output for display at presence-sensitive display 54 of UID 4 , a graphical user interface that indicates information related to an application run at computing device 2 , such as GUI 14 of FIG. 1 .
- notification module 10 of computing device 2 may determine whether the notification should be outputted at the particular time. Whether notification module 10 determines to delay or output the notification, UID module 6 may transmit a display command and data over communication channels 50 to cause UID 4 to present a user interface at presence-sensitive display 54 of UID 4.
- UID module 6 may send information to UID 4 that includes instructions for displaying user interface 14 at presence-sensitive display 54 .
- UID 4 may receive the display command and data from UID module 6 and cause presence-sensitive display 54 of UID 4 to present a user interface, such as user interface 14 of FIG. 1 .
- Computing device 2 may receive an indication of user input detected at presence-sensitive display 54 of UID 4 .
- Receiving the indication of user input may comprise receiving an indication of one or more gestures, taps, or the like detected at presence-sensitive display 54 .
- receiving the indication of a user input detected at presence-sensitive display 54 of UID 4 may comprise receiving an indication of one or more non-tap gestures detected at presence-sensitive display 54 .
- a user may provide tap and/or non-tap gestures as input to computing device 2 , and computing device 2 may receive either type of input as an indication of user input.
- UID module 6 may receive the indication of user input, analyze and interpret the user input, and provide data related to the received indication of user input to other modules of computing device 2 , such as notification module 10 .
- notification module 10 provides instructions for UID module 6 to output a notification.
- Notification module 10 may provide instructions for a different output device 46 , such as speaker 8 of FIG. 1 , to output the notification.
- Notification module 10 may provide instructions to output the notification based on notification module 10 detecting the occurrence of one or more conditions. These conditions may include, for example, completion of an interaction with computing device 2 (such as a typed word, a gesture, or a scrolling function being completed), an ambient noise level falling below a threshold level, microphone 7 no longer detecting audio indicative of human speech, or a maximum delay time period being reached.
- one or more storage devices 60 of computing device 2 may include speech pattern database 62 .
- speech pattern database 62 may be stored externally to computing device 2 .
- computing device 2 may access speech pattern database 62 remotely.
- Speech pattern database 62 may contain data related to characteristics of audio that are indicative of human speech. The characteristics may include, for example, tone, sound quality, type of sound, and the like.
- speech pattern database 62 contains information related to an algorithm that may be used to determine if detected audio is indicative of human speech.
- speech pattern database 62 contains data representative of samples of human speech that notification module 10 may use to compare to detected audio to determine if the detected audio is indicative of human speech.
- Speech pattern database 62 may also include selected threshold levels for notification module 10 to match the detected audio to any particular speech pattern. Notification module 10 may use one or more of these selected threshold levels to determine whether the detected audio is indicative of human speech.
- a threshold level may be any value determined by or set for computing device 2 , and the threshold may be such that if the threshold is exceeded, it is likely that the detected audio is indicative of human speech.
- a value exceeding a threshold may mean the value is less than, less than or equal to, greater than or equal to, or greater than the threshold.
- Notification module 10 may determine when, and for how long, to delay a notification. As such, notification module 10 can enable computing device 2 to provide notifications when they are more likely to be noticed and when they are less likely to interrupt an action being performed at computing device 2. Notification module 10 may be configured to prevent computing device 2 from outputting audio notifications when the environment of computing device 2 is noisy. The techniques may further enable computing device 2 to delay outputting pop-up notifications until a time when notification module 10 determines that no user is physically interacting with computing device 2 (such as interacting with presence-sensitive display 54, scrolling, typing a message, or the like).
- FIG. 3 is a block diagram illustrating an example computing device 100 that outputs graphical content for display at a remote device, in accordance with one or more techniques of the present disclosure.
- Graphical content generally, may include any visual information that may be output for display, such as text, images, a group of moving images, etc.
- the example shown in FIG. 3 includes a computing device 100 , presence-sensitive display 101 , communication unit 110 , projector 120 , projector screen 122 , mobile device 126 , and visual display device 130 .
- a computing device such as computing device 100 may, generally, be any component or system that includes a processor or other suitable computing environment for executing software instructions and, for example, need not include a presence-sensitive display.
- computing device 100 may be a processor that includes functionality as described with respect to one or more processors 40 in FIG. 2 .
- computing device 100 may be operatively coupled to presence-sensitive display 101 by a communication channel 102 A, which may be a system bus or other suitable connection.
- Computing device 100 may also be operatively coupled to communication unit 110 , further described below, by a communication channel 102 B, which may also be a system bus or other suitable connection.
- computing device 100 may be operatively coupled to presence-sensitive display 101 and communication unit 110 by any number of one or more communication channels.
- a computing device may refer to a portable or mobile device such as mobile phones (including smart phones), laptop computers, etc.
- a computing device may be a desktop computer, tablet computer, GPS device, smart television platform, camera, personal digital assistant (PDA), server, mainframe, etc.
- Presence-sensitive display 101 may include display device 103 and presence-sensitive input device 105 .
- Display device 103 may, for example, receive data from computing device 100 and display graphical content associated with the data.
- presence-sensitive input device 105 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at presence-sensitive display 101 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input to computing device 100 using communication channel 102 A.
- presence-sensitive input device 105 may be physically positioned on top of display device 103 such that, when a user positions an input unit over a graphical element displayed by display device 103, the location at which presence-sensitive input device 105 detects the input unit corresponds to the location of display device 103 at which the graphical element is displayed. In other examples, presence-sensitive input device 105 may be positioned physically apart from display device 103, and locations of presence-sensitive input device 105 may correspond to locations of display device 103, such that input can be made at presence-sensitive input device 105 for interacting with graphical elements displayed at corresponding locations of display device 103.
- computing device 100 may also include and/or be operatively coupled with communication unit 110 .
- Communication unit 110 may include functionality of one or more communication units 44 as described in FIG. 2 .
- Examples of communication unit 110 may include a network interface card, an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information.
- Other examples of such communication units may include Bluetooth, 3G, and WiFi radios, Universal Serial Bus (USB) interfaces, etc.
- Computing device 100 may also include and/or be operatively coupled with one or more other devices, e.g., input devices, output devices, memory, storage devices, and the like, such as those shown in FIGS. 1 and 2 .
- FIG. 3 also illustrates a projector 120 and projector screen 122 .
- projection devices may include electronic whiteboards, holographic display devices, and any other suitable devices for displaying graphical content.
- Projector 120 and projector screen 122 may include one or more communication units that enable the respective devices to communicate with computing device 100 . In some examples, one or more communication units may enable communication between projector 120 and projector screen 122 .
- Projector 120 may receive data from computing device 100 that includes graphical content. Projector 120 , in response to receiving the data, may project the graphical content onto projector screen 122 .
- projector 120 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at projector screen using optical recognition or other suitable techniques and send indications of such user input using one or more communication units to computing device 100 .
- projector screen 122 may be unnecessary, and projector 120 may project graphical content on any suitable medium and detect one or more user inputs using optical recognition or other such suitable techniques.
- Projector screen 122 may include a presence-sensitive display 124 .
- Presence-sensitive display 124 may include a subset of functionality or all of the functionality of UI device 4 as described in this disclosure. In some examples, presence-sensitive display 124 may include additional or different functionality.
- Projector screen 122 (e.g., an electronic whiteboard) may receive data from computing device 100 and display the graphical content.
- presence-sensitive display 124 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at projector screen 122 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input using one or more communication units to computing device 100 .
- FIG. 3 also illustrates mobile device 126 and visual display device 130 .
- Mobile device 126 and visual display device 130 may each include computing and connectivity capabilities. Examples of mobile device 126 may include e-reader devices, convertible notebook devices, hybrid slate devices, etc. Examples of visual display device 130 may include other semi-stationary devices such as televisions, computer monitors, etc.
- mobile device 126 may include a presence-sensitive display 128 .
- Visual display device 130 may include a presence-sensitive display 132 . Presence-sensitive displays 128 , 132 may include a subset of functionality or all of the functionality of presence-sensitive display 54 as described in this disclosure. In some examples, presence-sensitive displays 128 , 132 may include additional functionality.
- presence-sensitive display 132 may receive data from computing device 100 and display the graphical content.
- presence-sensitive display 132 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at presence-sensitive display 132 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input using one or more communication units to computing device 100.
- computing device 100 may output graphical content for display at presence-sensitive display 101 that is coupled to computing device 100 by a system bus or other suitable communication channel.
- Computing device 100 may also output graphical content for display at one or more remote devices, such as projector 120 , projector screen 122 , mobile device 126 , and visual display device 130 .
- computing device 100 may execute one or more instructions to generate and/or modify graphical content in accordance with techniques of the present disclosure.
- Computing device 100 may output data that includes the graphical content to a communication unit of computing device 100 , such as communication unit 110 .
- Communication unit 110 may send the data to one or more of the remote devices, such as projector 120 , projector screen 122 , mobile device 126 , and/or visual display device 130 .
- computing device 100 may output the graphical content for display at one or more of the remote devices.
- one or more of the remote devices may output the graphical content at a presence-sensitive display that is included in and/or operatively coupled to the respective remote devices.
- computing device 100 may not output graphical content at presence-sensitive display 101 that is operatively coupled to computing device 100 .
- computing device 100 may output graphical content for display at both a presence-sensitive display 101 that is coupled to computing device 100 by communication channel 102 A, and at one or more remote devices.
- the graphical content may be displayed substantially contemporaneously at each respective device. For instance, some delay may be introduced by the communication latency to send the data that includes the graphical content to the remote device.
- graphical content generated by computing device 100 and output for display at presence-sensitive display 101 may be different than graphical content output for display at one or more remote devices.
- Computing device 100 may send and receive data using any suitable communication techniques.
- computing device 100 may be operatively coupled to external network 114 using network link 112 A.
- Each of the remote devices illustrated in FIG. 3 may be operatively coupled to external network 114 by one of respective network links 112B, 112C, and 112D.
- External network 114 may include network hubs, network switches, network routers, etc., that are operatively inter-coupled thereby providing for the exchange of information between computing device 100 and the remote devices illustrated in FIG. 3 .
- network links 112 A- 112 D may be Ethernet, ATM or other network connections. Such connections may be wireless and/or wired connections.
- computing device 100 may be operatively coupled to one or more of the remote devices included in FIG. 3 using direct device communication 118 .
- Direct device communication 118 may include communications through which computing device 100 sends and receives data directly with a remote device, using wired or wireless communication. That is, in some examples of direct device communication 118 , data sent by computing device 100 may not be forwarded by one or more additional devices before being received at the remote device, and vice-versa. Examples of direct device communication 118 may include Bluetooth, Near-Field Communication, Universal Serial Bus, Wi-Fi, infrared, etc.
- One or more of the remote devices illustrated in FIG. 3 may be operatively coupled with computing device 100 by communication links 116 A- 116 D. In some examples, communication links 116 A- 116 D may be connections using Bluetooth, Near-Field Communication, Universal Serial Bus, infrared, etc. Such connections may be wireless and/or wired connections.
- computing device 100 may be operatively coupled to visual display device 130 using external network 114 .
- computing device 100 may output a notification for display at presence-sensitive display 132 when computing device 100 determines the notification may be outputted.
- computing device 100 may send data that includes a representation of a notification to communication unit 110 .
- Communication unit 110 may send the data that includes the representation of the notification to visual display device 130 using external network 114 .
- Visual display device 130 in response to receiving the data using external network 114 , may cause presence-sensitive display 132 to output the notification.
- FIG. 4 is a flowchart illustrating an example operation of a computing device configured to delay output of a notification based at least partially on determination of a pattern of detected audio that indicates human speech, in accordance with one or more aspects of the present disclosure.
- the computing device may be computing device 2 of FIGS. 1 and 2 , or computing device 100 as described herein.
- the example operations include determining, by a computing device, that a notification is scheduled for output by the computing device during a first time period ( 202 ).
- For example, an application run via an application module 12 determines a notification should be provided.
- the notification may be, for example, a pop-up graphical element, an audio-based notification, tactile feedback, or the like.
- an application module 12 of computing device 2 determines that an audio notification is to be outputted.
- a notification may be scheduled for output whenever an application module 12 determines that a notification should be outputted.
- application module 12 may schedule a notification in advance (e.g., a routine maintenance message, a calendar reminder, etc.) or may schedule a notification in an ad hoc manner (e.g., when computing device 2 receives a message, when a battery level reaches a charge threshold, etc.).
- the example operations further include determining, by the computing device, that a pattern of audio detected during the first time period is indicative of human speech ( 204 ).
- notification module 10 of computing device 2 determines that audio detected via microphone 7 is indicative of human speech.
- notification module 10 queries a database, such as speech pattern database 62 , for one or more stored human speech patterns.
- Notification module 10 compares the pattern of audio detected during the first time period with the one or more stored human speech patterns.
- Notification module 10 determines that the pattern of audio detected during the first time period is indicative of human speech when the pattern of audio detected during the first time period matches one of the one or more stored human speech patterns within a threshold matching level.
- the example operations also include delaying, by the computing device, output of the notification during the first time period ( 206 ). That is, for example, notification module 10 delays output of the notification during the first time period because notification module 10 determined that there was an audio pattern indicative of human speech detected proximate to computing device 2 during the first time period.
- the operations further include determining, by the computing device, that a pattern of audio detected during the second time period is not indicative of human speech ( 208 ).
- notification module 10 determines that the pattern of audio detected during the second time period is not indicative of human speech when the pattern of audio detected during the second time period does not match one of the one or more stored human speech patterns of speech pattern database 62 within the threshold matching level.
- the example operations may further include outputting, by the computing device, at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period ( 210 ).
- notification module 10 instructs one or more output devices 46 , such as speaker 8 , to output at least a part of the notification.
- the operation may further include determining, by the computing device, which portion of the notification to output based at least in part on the pattern of audio detected during the second time period. For example, if notification module 10 determined there was a lull in a detected conversation during the second time period, then notification module 10 may determine to only output a portion of the notification that would fit within the lull.
- the operations may further include monitoring, by the computing device, audio detected by a microphone, wherein the audio is detected during at least the first time period and the second time period.
- computing device 2 receives audio data in the environment of computing device 2 via microphone 7 .
- the audio may be detected during at least a first time period and a second time period.
- microphone 7 may detect the audio continuously, discontinuously, or sporadically during the first and second time periods.
- computing device 2 may include a separate module for detecting speech in the audio data.
- the third time period is a selected maximum delay period.
- Notification module 10 may determine a type of the notification, and the duration of the third time period is based on the determined type of the notification.
- when the notification is an audio notification of a turn from a navigation application, the third time period may be based on a time it will take for a vehicle transporting computing device 2 to arrive at the turn.
- notification module 10 may determine that the pattern of audio detected during the second time period indicates the second time period is suitable for outputting the entire notification, wherein outputting at least the portion of the notification comprises outputting the entire notification. In some examples, outputting at least the portion of the notification further comprises outputting at least the portion of the notification when the ambient noise level is below a threshold noise level.
- speaker 8 may output at least a portion of the audio notification.
- computing device 2 executes a navigation application using an application module 12 .
- the navigation application may generate, at a beginning of the first time period via application module 12 , the audio notification that includes directional information.
- Notification module 10 may determine the third time period based on the beginning of the first time period and a time when at least part of the directional information would become incorrect. For example, a GPS system in a car detects if the passengers are speaking, and alerts the driver of upcoming directions when the passengers in the car are silent so that it does not interrupt anyone (if the GPS system still has enough time to deliver the message).
- notification module 10 may differentiate between voices of different speakers that microphone 7 detects, and may use this information to determine when to output the notification.
- the notification may be a notification for an incoming request for a communication, such as a received phone call, in some examples.
- Computing device 2 may receive an incoming request for communication.
- Computing device 2 may generate, at a beginning of the first time period, the audio notification that indicates the request for communication has been received.
- Notification module 10 may determine the third time period based on the beginning of the first time period and a time when the request for communication would expire.
- computing device 2 does not record the audio detected via microphone 7 .
- computing device 2 does not store a copy of the audio in any storage device, such as storage device 60. In this sense, computing device 2 is not "listening" to any conversations or determining the content of the conversations. Instead, even though microphone 7 may be receiving audio data, computing device 2 does not make any record of, or determine the contents of, the audio data.
- notification module 10 may output portions of the notification to alert the user of the information to reduce a likelihood of interrupting the user. For example, notification module 10 may output part of a notification during a quiet moment of a conversation when computing device 2 is functioning as a cell phone.
- notification module 10 delays outputting the notification until the user stops typing.
- an additional pause threshold such as thirty seconds or one minute, may be reached before notification module 10 instructs an output device to output the notification.
- notification module 10 defers interruption if a user is in the midst of performing an operation, such as a scroll operation, and has not yet completed the operation.
- a computing device 2 configured to perform techniques described herein may wait to give audio notifications, such as driving directions, until determining that people have stopped talking, to reduce the likelihood that someone speaks over the instructions.
- Computing device 2 may interrupt user actions, such as scrolling or speaking, upon certain conditions, such as before a vehicle transporting computing device 2 passes the road onto which the instructions indicate it should turn.
- computing device 2 may perform an intelligent interruption through receiving audio data via microphone 7 and analyzing the audio data to determine if someone is speaking proximate to computing device 2 .
- Notification module 10 may not deliver a notification immediately upon its generation and instead may take into account an environment of computing device 2 or how a user is interacting with computing device 2 . Notifications may not interrupt the user as often, which causes UID 4 to behave in a more predictable manner and leads to improved user experience. In contrast, for example, if a user is trying to tap on a particular graphical element displayed on GUI 14 and, while the user is moving a finger to tap on the graphical element, notification module 10 causes a notification to be outputted over the graphical element, the user may tap on the notification instead of the graphical element. Furthermore, computing device 2 does not merely schedule notifications to appear at a certain time, because there is no guarantee that a user will not be attempting to interact with computing device 2 at the scheduled time.
- a computing device configured according to techniques described herein may output notifications in response to determining that there is a pause in the user's interactions with other users or with the computing device.
- the computing device may output notifications that require immediate output right away while delaying the output of other, less urgent, notifications for up to some period of time (e.g., 0.1 seconds, 1 second, etc.).
- Each different notification type may have a different delay threshold (e.g., 100 milliseconds for a text message, 2 seconds for an email, 5 seconds for navigation directions, etc.). If the condition causing the delay of the notification is still present at the expiration of the delay window, the computing device outputs the notification.
- techniques described herein provide a way for computing device 2 to minimize the distractions caused by notifications (e.g., by not interrupting a current user task) while also increasing the likelihood that the user will actually receive the notification.
- Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
- By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
- Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
- The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
- This application is a Continuation of U.S. application Ser. No. 14/150,391, filed on Jan. 08, 2014, the entire content of which is hereby incorporated by reference.
- Computing devices (e.g., mobile phones, tablet computers, global positioning system (GPS) devices, etc.) can generally perform various functions, such as executing applications stored thereon and outputting information. For example, an application may output documents, e-mails, pictures, etc. for display on a screen and/or audio for output by a speaker. Certain computing devices output notifications that provide information related to, for example, a condition of the computing device, one or more applications executed by the computing device, a time, or a location of the computing device. In some examples, these notifications are outputted at an inopportune time and may interrupt a function of the computing device or a non-computing task currently being engaged in by a user of the computing device.
- In one example, the disclosure is directed to a method that includes determining, by a computing device, that a notification is scheduled for output by the computing device during a first time period. The method also includes determining, by the computing device, that a pattern of audio detected during the first time period is indicative of human speech and delaying, by the computing device and in response to determining that the pattern of audio detected during the first time period is indicative of human speech, output of the notification during the first time period. The method additionally includes determining, by the computing device, that a pattern of audio detected during a second time period is not indicative of human speech. The method further includes outputting, by the computing device, at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period.
- In another example, the disclosure is directed to a computing device comprising a microphone, an output device, and one or more processors. The one or more processors are operable to monitor audio detected by the microphone, wherein the audio is detected during at least a first time period and a second time period. The one or more processors are further operable to determine that a notification is scheduled for output by the computing device during the first time period and determine that a pattern of audio detected during the first time period is indicative of human speech. In response to determining that the pattern of audio detected during the first time period is indicative of human speech, the one or more processors are further operable to delay output of the notification during the first time period. The one or more processors are further operable to determine that a pattern of audio detected during the second time period is not indicative of human speech and output at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period.
- In another example, the disclosure is directed to a computer-readable storage medium encoded with instructions that, when executed by one or more processors of a computing device, cause the one or more processors to determine that a notification is scheduled for output by the computing device during a first time period. The instructions further cause the one or more processors to determine that a pattern of audio detected during the first time period is indicative of human speech. In response to determining that the pattern of audio detected during the first time period is indicative of human speech, the instructions further cause the one or more processors to delay output of the notification during the first time period. The instructions further cause the one or more processors to determine that a pattern of audio detected during a second time period is not indicative of human speech and to output at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period.
- The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
-
FIG. 1 is a conceptual diagram illustrating an example computing device that is configured to delay output of a notification, in accordance with one or more aspects of the present disclosure. -
FIG. 2 is a block diagram illustrating an example computing device configured to delay output of a notification, in accordance with one or more aspects of the present disclosure. -
FIG. 3 is a block diagram illustrating an example computing device that outputs a notification at an opportune time, in accordance with one or more techniques of the present disclosure. -
FIG. 4 is a flowchart illustrating an example operation of a computing device configured to delay output of a notification based at least partially on a determination that a pattern of detected audio indicates human speech, in accordance with one or more aspects of the present disclosure. - Techniques of this disclosure are directed to a computing device configured to delay output of a notification. A notification may be any output (e.g., visual, auditory, tactile, etc.) that a computing device provides to convey information. In some implementations, a computing device may delay outputting a notification at a first time because the computing device determines that detected ambient noise has a pattern indicative of human speech. For example, the computing device does not output an audio notification during a time period in which the computing device determines that speech is taking place in proximity to the computing device. In some implementations, the computing device delays outputting the audio notification until the computing device determines that audio data detected during a second time period does not have a pattern indicative of human speech. When the computing device determines there is no detected audio pattern indicative of human speech, the computing device outputs at least part of the notification.
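The determination that detected audio “has a pattern indicative of human speech” is left abstract in the disclosure. As a rough, non-authoritative illustration, the Python sketch below classifies short audio frames with two classic voice-activity heuristics, short-time energy and zero-crossing rate; every name and threshold here is invented for the example, and a production implementation would more likely use a trained voice-activity detector.

```python
# Illustrative sketch: a crude test for "pattern indicative of human speech"
# over a window of short PCM frames (lists of samples in [-1.0, 1.0]).

def short_time_energy(frame):
    """Mean squared amplitude of one frame."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def is_speech_like(frames, energy_floor=1e-4, zcr_low=0.02, zcr_high=0.35,
                   min_voiced_ratio=0.3):
    """Treat the window as speech if enough frames are both energetic and
    in the zero-crossing range typical of voiced speech."""
    voiced = sum(1 for f in frames
                 if short_time_energy(f) > energy_floor
                 and zcr_low < zero_crossing_rate(f) < zcr_high)
    return voiced / len(frames) >= min_voiced_ratio
```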
-
FIG. 1 is a conceptual diagram illustrating an example computing device 2 that is configured to delay output of a notification, in accordance with one or more aspects of the present disclosure. In the example of FIG. 1, computing device 2 is illustrated as a mobile computing device. However, in other examples, computing device 2 may be a desktop computer, a mainframe computer, a tablet computer, a personal digital assistant (PDA), a laptop computer, a portable gaming device, a portable media player, an e-book reader, a watch, a television platform, a global positioning system (GPS) receiver, or another type of computing device. -
Computing device 2 includes one or more user interface devices (UIDs) 4. For clarity, singular terms may be used herein for features where, in some examples, there may be two or more of those features. UID 4 ofcomputing device 2 may function as an input device and as an output device forcomputing device 2. UID 4 may be implemented using various technologies. For instance, UID 4 may function as an input device using a presence-sensitive display, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive display technology. UID 4 may function as an output device using any one or more of a liquid crystal display (LCD), plasma display, dot matrix display, light emitting diode (LED) display, organic light-emitting diode (OLED) display, electronic ink, or similar monochrome or color display capable of outputting visible information, such as to a user ofcomputing device 2. - UID 4 of
computing device 2 may include a presence-sensitive display that may receive both tactile and motion-based input from, for example, a user ofcomputing device 2.UID 4 may receive indications of the tactile user input by detecting one or more tap and/or non-tap gestures from a user of computing device 2 (e.g., the user touching or pointing to one or more locations ofUID 4 with a finger or a stylus pen or the user holdingcomputing device 2 by touching UID 4). The presence-sensitive display ofUID 4 may present output to a user. UID 4 may present the output as a user interface which may be related to functionality configured intocomputing device 2. For example, UID 4 may present various user interfaces of applications (e.g., an electronic message application, an Internet browser application, etc.) executing atcomputing device 2. A user ofcomputing device 2 may interact with one or more of these applications to perform a function withcomputing device 2 through the respective user interface of each application. Furthermore, UID 4 may present various notifications, which may provide information related to one or more applications executing atcomputing device 2. -
Computing device 2 may also include one or more microphones 7 and one or more speakers 8. Microphone 7 detects audio incident upon microphone 7, such as, for example, audio in an environment of computing device 2. Microphone 7 may detect an ambient noise level of the audio. Further, microphone 7 may detect audio over one or more time periods. In some examples, microphone 7 may continuously monitor for audio. In some examples, microphone 7 monitors audio only when an ambient noise level is above a noise level threshold. -
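As a sketch of that last gating behavior (illustrative only; the threshold value and both function names are invented, and an actual device would likely perform this gating in the audio driver or a low-power DSP):

```python
import math

NOISE_LEVEL_THRESHOLD = 0.01  # invented RMS threshold for this example

def ambient_rms(buffer):
    """Root-mean-square amplitude of one audio buffer."""
    return math.sqrt(sum(s * s for s in buffer) / len(buffer))

def monitor(buffers, on_audio):
    """Pass buffers to speech analysis only while the ambient noise level
    is above the threshold; quieter buffers are ignored outright."""
    for buf in buffers:
        if ambient_rms(buf) > NOISE_LEVEL_THRESHOLD:
            on_audio(buf)
```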
Computing device 2 may include user interface device module 6, notification module 10, and application modules 12A-12N (collectively referred to herein as “application modules 12”). Modules 6, 10, and 12 may perform operations described herein using software, hardware, firmware, or a mixture thereof residing in and/or executing at computing device 2. Computing device 2 may execute modules 6, 10, and 12 with one or more processors. In some examples, computing device 2 may execute modules 6, 10, and 12 as one or more virtual machines executing on underlying hardware. -
UID module 6 may cause UID 4 to present graphical user interface 14 (referred to herein as “user interface 14”). User interface 14 includes graphical elements 20-30. FIG. 1 illustrates an example user interface 14 providing an indication that computing device 2 has delayed a notification for a navigation application that computing device 2 is executing. - In the example of
FIG. 1, user interface 14 includes element 20, which is an image of a map having a route imposed thereon. Element 22 indicates an upcoming direction that, if taken, follows the route indicated in element 20. For example, element 22 depicts an icon of an arrow pointing to the right, the direction “turn right,” and a distance of 0.7 kilometers (km) to the turn. Computing device 2 may generate an audio notification that includes information indicative of the direction, such as a computer-generated voice that says “turn right.” The example elements shown in FIG. 1 are merely included for illustrative purposes; computing device 2 may display other, different elements. -
User interface 14 also includes an icon 24 that is, in the example of FIG. 1, an image of a microphone. The presence of icon 24 in user interface 14 may indicate that microphone 7 is powered on and currently capable of detecting audio. Graphical element 26 functions as a visual signal strength indicator for microphone 7. In the example of FIG. 1, graphical element 26 indicates the strength or volume of the audio that microphone 7 receives through the color of the bars shown in graphical element 26. For example, graphical element 26 is showing four out of five bars with a dark color, indicating that microphone 7 is receiving an ambient noise level that is relatively high. This may be because, for example, a user of computing device 2 may be participating in a conversation near microphone 7. -
User interface 14 may also include graphical element 28, which provides an indication that an audio notification is delayed. For example, graphical element 28 may be a pop-up window that provides information such as “audio output delayed due to detected noise” and that is displayed during a time period when computing device 2 is delaying output of the audio notification. In that example, computing device 2 has determined that the time period, or time instance as it may be, is not suitable for outputting an audio notification. For example, computing device 2 may have determined that an ambient noise level is too high at the time, such that the audio notification likely could not be heard if computing device 2 outputted it at that time via one or more speakers 8. Thus, computing device 2 temporarily delays output of the audio notification until computing device 2 determines that the ambient noise level is low enough, that the detected audio is not indicative of human speech, or that a time period for maximum delay of the notification has expired. - In some examples, an option may be provided to turn off the delay notification functionality of
computing device 2.User interface 14 includesgraphical element 30 that provides an option to override the delay of the notification. In some examples,graphical element 30 is an interactive graphical element, such as, for example, a touch-target that may be toggled by touchinguser interface 14 at approximately the location ofgraphical element 30. When computingdevice 2 determinesgraphical element 30 has been interacted with,computing device 2 may output the audio notification despite the ambient noise level or a detected pattern indicative of human speech. - In some examples, the delay notification functionality of
computing device 2 may be turned on or off, and/or settings may be adjusted for howcomputing device 2 performs the delay notification functionality. For example, a maximum time period for which a notification may be delayed may be set by a user. Additionally, in some examples,user interface 14 may not include any of one or more of graphical elements 20-30. In other examples,user interface 14 may include other graphical elements. -
UID module 6 may act as an intermediary between various components of computing device 2 to make determinations based on input detected by UID 4 and to generate output presented by UID 4. For instance, UID module 6 may receive an indication of user input received at user interface 14. UID module 6 may receive, as an input from input module 10, a sequence of touch events generated from user input detected at UID 4. UID module 6 may determine, based on the location components in the sequence of touch events, which of one or more location components approximate a selection of one or more graphical elements (e.g., UID module 6 may determine the location of one or more of the touch events corresponds to an area of UID 4 that presents graphical element 30, used to override the delayed output of the audio notification). UID module 6 may provide, as input to notification module 10, the sequence of touch events received at user interface 14, including the locations where UID 4 presents each of the graphical elements 20-30. In response, UID module 6 may receive, as an output from notification module 10, instructions for updating user interface 14 based on the indication of user input received at user interface 14. UID module 6 may update user interface 14 to reflect the status of audio notifications. UID module 6 may cause UID 4 to present an updated user interface 14. -
Computing device 2 may further include one or more application modules 12. Application modules 12 may include any other application that computing device 2 may execute in addition to the other modules specifically described in this disclosure. For example, application modules 12 may include a web browser, a media player, a file system, a navigation program, a communication program, or any other number of applications or features that computing device 2 may execute. Application modules 12 may determine that notifications should be provided related to a particular application. In some examples, an application run via an application module 12 determines that a notification should be provided. In some examples, application modules 12 may detect one or more events that trigger output of a notification. An event requiring a notification, as detected by an application module 12, may include, for example, receiving an email message, receiving a text message, receiving a phone call, a clock alarm, a calendar reminder, etc. The corresponding notifications may be audio, visual, haptic feedback, or any other form of output. In accordance with the techniques of this disclosure, notification module 10 may interact with application modules 12 of computing device 2 to delay output of the notifications in certain circumstances, as described herein. An application module 12 may provide one or more signals to notification module 10 responsive to the application module 12 determining that a notification is to be outputted. The one or more signals may include information about what the notification should include and when to output it. Notification module 10 delays output of the notification in response to determining that the time period in which the notification is meant to be output overlaps with a time period in which the detected conditions indicate an inopportune time to output the notification, as sketched below.
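A minimal sketch of that handshake between an application module and the notification module follows; the class, method, and callback names are all invented for the example, and the disclosure does not prescribe any particular queue or API.

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Notification:
    due: float                                  # requested output time
    content: str = field(compare=False)
    kind: str = field(compare=False, default="generic")

class NotificationModule:
    def __init__(self, is_inopportune):
        # is_inopportune is a callable, e.g. a speech-detection check.
        self._queue = []
        self._is_inopportune = is_inopportune

    def submit(self, notification):
        """Signal from an application module that a notification should be
        provided, carrying its content and requested output time."""
        heapq.heappush(self._queue, notification)

    def pump(self, output):
        """Output due notifications, deferring while conditions indicate an
        inopportune time; deferred items are retried on the next pump."""
        while self._queue and self._queue[0].due <= time.monotonic():
            if self._is_inopportune():
                break
            output(heapq.heappop(self._queue))
```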
- In the example of FIG. 1, computing device 2 outputs user interface 14 for display at a presence-sensitive display. As described above, FIG. 1 illustrates an example user interface 14 that provides graphical elements 20-30. UID module 6 may generate user interface 14 and include graphical elements 20-30 in user interface 14. UID module 6 may send information to UID 4 that includes instructions for displaying user interface 14 at a presence-sensitive device of UID 4. UID 4 may receive the information and cause the presence-sensitive device of UID 4 to present user interface 14 including one or more of graphical elements 20-30. - While
computing device 2 presentsuser interface 14,notification module 10 ofcomputing device 2 may receive information frommicrophone 7 about detected audio signals, such as an ambient noise level or a pattern of audio. Based on information about the detected audio,notification module 10 may determine whether to delay a notification.Notification module 10 may further determine, based on additional detected audio, whether to output the delayed notification. - In this way, the techniques of the disclosure may enable a computing device to delay outputting notifications to avoid outputting the notifications at inopportune times. For example,
computing device 2 configured with techniques of the disclosure may delay outputting an audio notification during a time period in whichcomputing device 2 determines that circumstances may prevent the audio from being heard. The techniques described herein provide an increased likelihood thatcomputing device 2 would output the audio at a time when a user may hear and understand the notification. Additionally, in some examples, the techniques described herein may prevent an action performed by a user withcomputing device 2, such as a scroll function, from being interrupted with a notification. Thus,computing device 2 using techniques described herein may provide an improved user experience. -
FIG. 2 is a block diagram illustrating anexample computing device 2 configured to delay output of a notification, in accordance with one or more aspects of the present disclosure.Computing device 2 ofFIG. 2 is described below within the context ofFIG. 1 .FIG. 2 illustrates only one particular example ofcomputing device 2, and many other examples ofcomputing device 2 may be used in other instances. Other examples ofcomputing device 2 may include a subset of the components included inexample computing device 2 or may include additional components not shown inFIG. 2 . - As shown in the example of
FIG. 2, computing device 2 includes UID 4, one or more processors 40, one or more input devices 42, one or more communication units 44, one or more output devices 46, one or more sensors 48, one or more power sources 52, and one or more storage devices 60. Storage devices 60 of computing device 2 also include UID module 6, notification module 10, application modules 12A-12N, speech patterns database 62, and one or more operating systems 64. One or more communication channels 50 may interconnect each of the components 4, 40, 42, 44, 46, 48, 52, and 60 for inter-component communications. In some examples, communication channels 50 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. - One or
more input devices 42 ofcomputing device 2 may receive input. Examples of input are tactile, motion, audio, and video input.Input devices 42 ofcomputing device 2, in one example, includes a presence-sensitive display, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone or any other type of device for detecting input from, for example, a human or machine. In some examples, aninput device 42 is a microphone, such asmicrophone 7 ofFIG. 1 . - One or
more output devices 46 ofcomputing device 2 may generate output. Examples of output are tactile, audio, and video output.Output devices 46 ofcomputing device 2, in one example, includes a presence-sensitive display, speaker, cathode ray tube (CRT) monitor, liquid crystal display (LCD), motor, actuator, electromagnet, piezoelectric sensor, or any other type of device for generating output to a human or machine.Output devices 46 may utilize one or more of a sound card or video graphics adapter card to produce auditory or visual output, respectively. In some examples, anoutput device 46 is a speaker, such asspeaker 8 ofFIG. 1 . - One or
more communication units 44 of computing device 2 may communicate with external devices via one or more networks by transmitting and/or receiving network signals on the one or more networks. The one or more networks may be, for example, the Internet. Computing device 2 may use communication unit 44 to transmit and/or receive radio signals on a radio network such as a cellular radio network. Likewise, communication units 44 may transmit and/or receive satellite signals on a Global Navigation Satellite System (GNSS) network such as the Global Positioning System (GPS). Examples of communication unit 44 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send or receive information. Other examples of communication units 44 may include Bluetooth®, GPS, 3G, 4G, and Wi-Fi® radios found in mobile devices as well as Universal Serial Bus (USB) controllers. -
Computing device 2 also includesUID 4, which may include functionality of one ormore input devices 42 and/oroutput devices 46. In the example ofFIG. 2 ,UID 4 may be or may include a presence-sensitive display 54. In some examples, presence-sensitive display 54 may detect an object at and/or near presence-sensitive display 54. As one example range, presence-sensitive display 54 may detect an object, such as a finger or stylus that is within a specified range of presence-sensitive display 54. Presence-sensitive display 54 may determine a location (e.g., an (x,y) coordinate) of presence-sensitive display 54 at which the object was detected. A detectable object may be, for example,graphical element 30 ofFIG. 1 . Presence-sensitive display 54 may determine the location of presence-sensitive display 54 selected by a user's finger using capacitive, inductive, and/or optical recognition techniques. In some examples, presence-sensitive display 54 provides output to a user using tactile, audio, or video stimuli as described with respect tooutput device 46. In the example ofFIG. 2 ,UID 4 presents a user interface (such asuser interface 14 ofFIG. 1 ) at presence-sensitive display 54 ofUID 4. - While illustrated as an internal component of
computing device 2,UID 4 also represents an external component that shares a data path withcomputing device 2 for transmitting and/or receiving input and output. For instance, in one example,UID 4 represents a built-in component ofcomputing device 2 located within and physically connected to the external packaging of computing device 2 (e.g., a screen on a mobile phone). In another example,UID 4 represents an external component ofcomputing device 2 located outside and physically separated from the packaging of computing device 2 (e.g., a monitor, a projector, etc. that shares a wired and/or wireless data path with a tablet computer). - One or
more sensor devices 48 ofcomputing device 2 may detect input, which may be user input.Example sensor devices 48 include an accelerometer, a gyroscope, an ambient light sensor, a proximity sensor, a barometer, magnetometer, or other sensor devices.Computing device 2 may include one or more of each sensor device. User input detected bysensor devices 48 may include data related to acceleration, orientation, light intensity, proximity of an object tocomputing device 2, an ambient pressure, magnetic field strength and polarity, or other sensor reading. In some examples,sensor devices 48 may be aninput device 42. One ormore sensor devices 48 may detect user input. For example, a gyroscope may detect changes in orientation when computingdevice 2 is handled by a user interacting withcomputing device 2. -
Computing device 2 may include one ormore power devices 52, which may provide power tocomputing device 2. In one example,power device 52 includes one or more batteries included incomputing device 2. The one or more batteries may be rechargeable and provide power tocomputing device 2. The one or more batteries may, in some examples, be made from nickel-cadmium, lithium-ion, or other suitable material. In other examples,power device 52 may be a power source capable of providing stored power or voltage from another power source, which may be external tocomputing device 2. - One or
more storage devices 60 within computing device 2 may store information for processing during operation of computing device 2 (e.g., speech patterns database 62 of computing device 2 may store data related to characteristics of audio indicative of human speech and corresponding threshold information, accessed by notification module 10 during execution at computing device 2). In some examples, storage device 60 functions as a temporary memory, meaning that storage device 60 is not used for long-term storage. Storage devices 60 on computing device 2 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. -
Storage devices 60, in some examples, also include one or more computer-readable storage media.Storage devices 60 may be configured to store larger amounts of information than volatile memory.Storage devices 60 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.Storage devices 60 may store program instructions and/or data associated withUID module 6,notification module 10, and application modules 12. - One or
more processors 40 may implement functionality and/or execute instructions withincomputing device 2. For example,processors 40 oncomputing device 2 may receive and execute instructions stored bystorage devices 60 that execute the functionality ofUID module 6,notification module 10, and application modules 12. These instructions executed byprocessors 40 may causecomputing device 2 to store information withinstorage devices 60 during program execution.Processors 40 may execute instructions inUID module 6 andnotification module 10 to cause one or more of application modules 12 to delay output of notifications at inopportune times (such as when a user is speaking or interacting with computing device 2). - In accordance with aspects of this disclosure,
computing device 2 of FIG. 2 may output for display, at presence-sensitive display 54 of UID 4, a graphical user interface that indicates information related to an application run at computing device 2, such as GUI 14 of FIG. 1. For example, when a notification is generated, notification module 10 of computing device 2 may determine whether the notification should be outputted at the particular time. Whether notification module 10 determines to delay or to output the notification, UID module 6 may transmit a display command and data over communication channels 50 to cause UID 4 to present a user interface at presence-sensitive display 54 of UID 4. UID module 6 may send information to UID 4 that includes instructions for displaying user interface 14 at presence-sensitive display 54. UID 4 may receive the display command and data from UID module 6 and cause presence-sensitive display 54 of UID 4 to present a user interface, such as user interface 14 of FIG. 1. -
Computing device 2 may receive an indication of user input detected at presence-sensitive display 54 ofUID 4. Receiving the indication of user input may comprise receiving an indication of one or more gestures, taps, or the like detected at presence-sensitive display 54. Alternatively, receiving the indication of a user input detected at presence-sensitive display 54 ofUID 4 may comprise receiving an indication of one or more non-tap gestures detected at presence-sensitive display 54. In other words, a user may provide tap and/or non-tap gestures as input tocomputing device 2, andcomputing device 2 may receive either type of input as an indication of user input. In some examples,UID module 6 may receive the indication of user input, analyze and interpret the user input, and provide data related to the received indication of user input to other modules ofcomputing device 2, such asnotification module 10. - In some examples,
notification module 10 provides instructions for UID module 6 to output a notification. Notification module 10 may provide instructions for a different output device 46, such as speaker 8 of FIG. 1, to output the notification. Notification module 10 may provide instructions to output the notification based on notification module 10 detecting the occurrence of one or more conditions. These conditions may be, for example, that an interaction with computing device 2 is completed (such as a typed word being completed, a gesture being completed, or a scrolling function being completed), that an ambient noise level is below a threshold level, that microphone 7 does not detect audio indicative of human speech, or that a maximum delay time period is reached. - As shown in the example of
FIG. 2, one or more storage devices 60 of computing device 2 may include speech pattern database 62. In some examples, speech pattern database 62 may be stored externally to computing device 2. In such an example, computing device 2 may access speech pattern database 62 remotely. Speech pattern database 62 may contain data related to characteristics of audio that are indicative of human speech. The characteristics may include, for example, tone, sound quality, type of sound, and the like. In some examples, speech pattern database 62 contains information related to an algorithm that may be used to determine if detected audio is indicative of human speech. In another example, speech pattern database 62 contains data representative of samples of human speech that notification module 10 may use to compare to detected audio to determine if the detected audio is indicative of human speech. Speech pattern database 62 may also include selected threshold levels for notification module 10 to match the detected audio to any particular speech pattern. Notification module 10 may use one or more of these selected threshold levels to determine whether the detected audio is indicative of human speech. A threshold level may be any value determined by or set for computing device 2, and the threshold may be such that, if the threshold is exceeded, it is likely that the detected audio is indicative of human speech. A value exceeding a threshold may mean the value is less than, less than or equal to, greater than or equal to, or greater than the threshold. -
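As a concrete (and purely illustrative) reading of that comparison, the sketch below stands in feature vectors for “patterns” and a Euclidean distance for the matching test; the feature representation, distance, and threshold are all invented for the example.

```python
import math

def feature_distance(a, b):
    """Euclidean distance between two equal-length feature vectors; a
    smaller distance means the detected audio is closer to the pattern."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def indicative_of_speech(detected, stored_patterns, threshold):
    """Detected audio matches when it is within the threshold matching
    level of any stored human speech pattern."""
    return any(feature_distance(detected, p) <= threshold
               for p in stored_patterns)

# Hypothetical usage with made-up feature vectors:
stored = [[0.4, 0.7, 0.2], [0.5, 0.6, 0.3]]
print(indicative_of_speech([0.45, 0.68, 0.22], stored, threshold=0.1))  # True
```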
Notification module 10 may determine when to delay a notification, and for how long. As such, notification module 10 can enable computing device 2 to provide notifications when they are more likely to be noticed and when they are less likely to interrupt an action being performed at computing device 2. Notification module 10 may be configured to prevent computing device 2 from outputting audio notifications when the environment of computing device 2 is noisy. The techniques may further enable computing device 2 to delay outputting pop-up notifications until a time when notification module 10 determines that no user is physically interacting with computing device 2 (such as interacting with presence-sensitive display 54, scrolling, typing a message, or the like). -
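Collecting the conditions listed above into one predicate might look like the following sketch; all field names are invented, since the disclosure lists the conditions but not how they are combined.

```python
def may_output_now(state):
    """True when notification module 10 could output a delayed notification:
    either every delaying condition has cleared, or the maximum delay time
    period has been reached."""
    conditions_clear = (state["interaction_complete"]      # typing/scrolling done
                        and state["ambient_noise"] < state["noise_threshold"]
                        and not state["speech_detected"])
    return conditions_clear or state["elapsed_delay"] >= state["max_delay"]
```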
FIG. 3 is a block diagram illustrating anexample computing device 100 that outputs graphical content for display at a remote device, in accordance with one or more techniques of the present disclosure. Graphical content, generally, may include any visual information that may be output for display, such as text, images, a group of moving images, etc. The example shown inFIG. 3 includes acomputing device 100, presence-sensitive display 101,communication unit 110,projector 120,projector screen 122,mobile device 126, andvisual display device 130. Although shown for purposes of example inFIGS. 1 and 2 as a stand-alone computing device 2, a computing device such ascomputing device 100 may, generally, be any component or system that includes a processor or other suitable computing environment for executing software instructions and, for example, need not include a presence-sensitive display. - As shown in the example of
FIG. 3 ,computing device 100 may be a processor that includes functionality as described with respect to one ormore processors 40 inFIG. 2 . In such examples,computing device 100 may be operatively coupled to presence-sensitive display 101 by acommunication channel 102A, which may be a system bus or other suitable connection.Computing device 100 may also be operatively coupled tocommunication unit 110, further described below, by acommunication channel 102B, which may also be a system bus or other suitable connection. Although shown separately as an example inFIG. 3 ,computing device 100 may be operatively coupled to presence-sensitive display 101 andcommunication unit 110 by any number of one or more communication channels. - In other examples, such as illustrated previously by computing
device 2 in FIGS. 1-2, a computing device may refer to a portable or mobile device such as a mobile phone (including a smart phone), a laptop computer, etc. In some examples, a computing device may be a desktop computer, tablet computer, GPS device, smart television platform, camera, personal digital assistant (PDA), server, mainframe, etc. - Presence-
sensitive display 101, such as an example ofuser interface device 4 as shown inFIG. 1 , may includedisplay device 103 and presence-sensitive input device 105.Display device 103 may, for example, receive data fromcomputing device 100 and display graphical content associated with the data. In some examples, presence-sensitive input device 105 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at presence-sensitive display 101 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input tocomputing device 100 usingcommunication channel 102A. In some examples, presence-sensitive input device 105 may be physically positioned on top ofdisplay device 103 such that, when a user positions an input unit over a graphical element displayed bydisplay device 103, the location at which presence-sensitive input device 105 corresponds to the location ofdisplay device 103 at which the graphical element is displayed. In other examples, presence-sensitive input device 105 may be positioned physically apart fromdisplay device 103, and locations of presence-sensitive input device 105 may correspond to locations ofdisplay device 103, such that input can be made at presence-sensitive input device 105 for interacting with graphical elements displayed at corresponding locations ofdisplay device 103. - As shown in
FIG. 3 ,computing device 100 may also include and/or be operatively coupled withcommunication unit 110.Communication unit 110 may include functionality of one ormore communication units 44 as described inFIG. 2 . Examples ofcommunication unit 110 may include a network interface card, an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information. Other examples of such communication units may include Bluetooth, 3G, and WiFi radios, Universal Serial Bus (USB) interfaces, etc.Computing device 100 may also include and/or be operatively coupled with one or more other devices, e.g., input devices, output devices, memory, storage devices, and the like, such as those shown inFIGS. 1 and 2 . -
FIG. 3 also illustrates aprojector 120 andprojector screen 122. Other such examples of projection devices may include electronic whiteboards, holographic display devices, and any other suitable devices for displaying graphical content.Projector 120 andprojector screen 122 may include one or more communication units that enable the respective devices to communicate withcomputing device 100. In some examples, one or more communication units may enable communication betweenprojector 120 andprojector screen 122.Projector 120 may receive data fromcomputing device 100 that includes graphical content.Projector 120, in response to receiving the data, may project the graphical content ontoprojector screen 122. In some examples,projector 120 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at projector screen using optical recognition or other suitable techniques and send indications of such user input using one or more communication units tocomputing device 100. In such examples,projector screen 122 may be unnecessary, andprojector 120 may project graphical content on any suitable medium and detect one or more user inputs using optical recognition or other such suitable techniques. -
Projector screen 122, in some examples, may include a presence-sensitive display 124. Presence-sensitive display 124 may include a subset of functionality or all of the functionality ofUI device 4 as described in this disclosure. In some examples, presence-sensitive display 124 may include additional or different functionality. Projector screen 122 (e.g., an electronic whiteboard), may receive data fromcomputing device 100 and display the graphical content. In some examples, presence-sensitive display 124 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) atprojector screen 122 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input using one or more communication units tocomputing device 100. -
FIG. 3 also illustrates mobile device 126 and visual display device 130. Mobile device 126 and visual display device 130 may each include computing and connectivity capabilities. Examples of mobile device 126 may include e-reader devices, convertible notebook devices, hybrid slate devices, etc. Examples of visual display device 130 may include other semi-stationary devices such as televisions, computer monitors, etc. As shown in FIG. 3, mobile device 126 may include a presence-sensitive display 128. Visual display device 130 may include a presence-sensitive display 132. Presence-sensitive displays 128 and 132 may include a subset of the functionality or all of the functionality of presence-sensitive display 54 as described in this disclosure. In some examples, presence-sensitive displays 128 and 132 may include additional functionality. Presence-sensitive display 132, for example, may receive data from computing device 100 and display the graphical content. In some examples, presence-sensitive display 132 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at presence-sensitive display 132 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input using one or more communication units to computing device 100. - As described above, in some examples,
computing device 100 may output graphical content for display at presence-sensitive display 101 that is coupled tocomputing device 100 by a system bus or other suitable communication channel.Computing device 100 may also output graphical content for display at one or more remote devices, such asprojector 120,projector screen 122,mobile device 126, andvisual display device 130. For instance,computing device 100 may execute one or more instructions to generate and/or modify graphical content in accordance with techniques of the present disclosure.Computing device 100 may output data that includes the graphical content to a communication unit ofcomputing device 100, such ascommunication unit 110.Communication unit 110 may send the data to one or more of the remote devices, such asprojector 120,projector screen 122,mobile device 126, and/orvisual display device 130. In this way,computing device 100 may output the graphical content for display at one or more of the remote devices. In some examples, one or more of the remote devices may output the graphical content at a presence-sensitive display that is included in and/or operatively coupled to the respective remote devices. - In some examples,
computing device 100 may not output graphical content at presence-sensitive display 101 that is operatively coupled tocomputing device 100. In other examples,computing device 100 may output graphical content for display at both a presence-sensitive display 101 that is coupled tocomputing device 100 bycommunication channel 102A, and at one or more remote devices. In such examples, the graphical content may be displayed substantially contemporaneously at each respective device. For instance, some delay may be introduced by the communication latency to send the data that includes the graphical content to the remote device. In some examples, graphical content generated by computingdevice 100 and output for display at presence-sensitive display 101 may be different than graphical content display output for display at one or more remote devices. -
Computing device 100 may send and receive data using any suitable communication techniques. For example, computing device 100 may be operatively coupled to external network 114 using network link 112A. Each of the remote devices illustrated in FIG. 3 may be operatively coupled to external network 114 by one of respective network links 112B, 112C, or 112D. External network 114 may include network hubs, network switches, network routers, etc., that are operatively inter-coupled, thereby providing for the exchange of information between computing device 100 and the remote devices illustrated in FIG. 3. In some examples, network links 112A-112D may be Ethernet, ATM, or other network connections. Such connections may be wireless and/or wired connections. - In some examples,
computing device 100 may be operatively coupled to one or more of the remote devices included inFIG. 3 usingdirect device communication 118.Direct device communication 118 may include communications through whichcomputing device 100 sends and receives data directly with a remote device, using wired or wireless communication. That is, in some examples ofdirect device communication 118, data sent by computingdevice 100 may not be forwarded by one or more additional devices before being received at the remote device, and vice-versa. Examples ofdirect device communication 118 may include Bluetooth, Near-Field Communication, Universal Serial Bus, Wi-Fi, infrared, etc. One or more of the remote devices illustrated inFIG. 3 may be operatively coupled withcomputing device 100 bycommunication links 116A-116D. In some examples,communication links 116A-116D may be connections using Bluetooth, Near-Field Communication, Universal Serial Bus, infrared, etc. Such connections may be wireless and/or wired connections. - In accordance with techniques of the disclosure,
computing device 100 may be operatively coupled tovisual display device 130 usingexternal network 114. For example,computing device 100 may output a notification for display at presence-sensitive display 132 when computingdevice 100 determines the notification may be outputted. For instance,computing device 100 may send data that includes a representation of a notification tocommunication unit 110.Communication unit 110 may send the data that includes the representation of the notification tovisual display device 130 usingexternal network 114.Visual display device 130, in response to receiving the data usingexternal network 114, may cause presence-sensitive display 132 to output the notification. -
- FIG. 4 is a flowchart illustrating an example operation of a computing device configured to delay output of a notification based at least partially on a determination that a pattern of detected audio indicates human speech, in accordance with one or more aspects of the present disclosure. The computing device may be computing device 2 of FIGS. 1 and 2, or computing device 100 as described herein. - The example operations include determining, by a computing device, that a notification is scheduled for output by the computing device during a first time period (202). For example, an application run via an application module 12 determines that a notification should be provided. The notification may be, for example, a pop-up graphical element, an audio-based notification, tactile feedback, or the like. For example, an application module 12 of
computing device 2 determines that an audio notification is to be outputted. A notification may be scheduled for output whenever an application module 12 determines that a notification should be outputted. In some examples, application module 12 may schedule a notification in advance (e.g., a routine maintenance message, a calendar reminder, etc.) or may schedule a notification in an ad hoc manner (e.g., when computing device 2 receives a message, when a battery level reaches a charge threshold, etc.). - The example operations further include determining, by the computing device, that a pattern of audio detected during the first time period is indicative of human speech (204). For example,
notification module 10 ofcomputing device 2 determines that audio detected viamicrophone 7 is indicative of human speech. For example,notification module 10 queries a database, such asspeech pattern database 62, for one or more stored human speech patterns.Notification module 10 compares the pattern of audio detected during the first time period with the one or more stored human speech patterns.Notification module 10 determines that the pattern of audio detected during the first time period is indicative of human speech when the pattern of audio detected during the first time period matches one of the one or more stored human speech patterns within a threshold matching level. - The example operations also include delaying, by the computing device, output of the notification during the first time period (206). That is, for example,
notification module 10 delays output of the notification during the first time period becausenotification module 10 determined that there was an audio pattern indicative of human speech detected proximate tocomputing device 2 during the first time period. - The operations further include determining, by the computing device, that a pattern of audio detected during the second time period is not indicative of human speech (208). In some examples,
notification module 10 determines that the pattern of audio detected during the second time period is not indicative of human speech when the pattern of audio detected during the second time period does not match one of the one or more stored human speech patterns of speech pattern database 62 within the threshold matching level. - The example operations may further include outputting, by the computing device, at least a portion of the notification at the earlier of an end of the second time period or an expiration of a third time period (210). For example,
notification module 10 instructs one or more output devices 46, such as speaker 8, to output at least a part of the notification. The operation may further include determining, by the computing device, which portion of the notification to output based at least in part on the pattern of audio detected during the second time period. For example, if notification module 10 determined there was a lull in a detected conversation during the second time period, then notification module 10 may determine to output only a portion of the notification that would fit within the lull.
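Put together, operations 202-210 reduce to a small delay loop. The sketch below is one hypothetical rendering: `speech_detected` stands in for the pattern comparison of operations 204 and 208, and the polling interval is invented.

```python
import time

def deliver(notification, speech_detected, output,
            max_delay_s=5.0, poll_s=0.25):
    """Delay a scheduled notification while the detected audio pattern is
    indicative of human speech (204/206), then output it at the earlier of
    'speech no longer detected' (208) or expiration of the third time
    period (210)."""
    deadline = time.monotonic() + max_delay_s      # third time period
    while speech_detected() and time.monotonic() < deadline:
        time.sleep(poll_s)                         # keep delaying (206)
    output(notification)                           # 210
```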
- The operations may further include monitoring, by the computing device, audio detected by a microphone, wherein the audio is detected during at least the first time period and the second time period. For example, computing device 2 receives audio data in the environment of computing device 2 via microphone 7. The audio may be detected during at least a first time period and a second time period. For example, microphone 7 may detect the audio continuously, discontinuously, or sporadically during the first and second time periods. In some examples, computing device 2 may include a separate module for detecting speech in the audio data. - In some examples, the third time period is a selected maximum delay period.
Notification module 10 may determine a type of the notification, and the duration of the third time period is based on the determined type of the notification. Where the notification is an audio notification of a turn from a navigation application, for example, the third time period may be based on a time it will take for a vehicle transporting computing device 2 to arrive at the turn. - Additionally,
notification module 10 may determine that the pattern of audio detected during the second time period indicates the second time period is suitable for outputting the entire notification, wherein outputting at least the portion of the notification comprises outputting the entire notification. In some examples, outputting at least the portion of the notification further comprises outputting at least the portion of the notification when the ambient noise level is below a threshold noise level. - In examples where the notification is an audio notification,
speaker 8 may output at least a portion of the audio notification. In some examples, computing device 2 executes a navigation application using an application module 12. The navigation application may generate, at a beginning of the first time period via application module 12, the audio notification that includes directional information. Notification module 10 may determine the third time period based on the beginning of the first time period and a time when at least part of the directional information would become incorrect. For example, a GPS system in a car detects whether the passengers are speaking, and alerts the driver of upcoming directions when the passengers in the car are silent so that it does not interrupt anyone (provided the GPS system still has enough time to deliver the message). In some examples, notification module 10 may differentiate between voices of different speakers that microphone 7 detects, and may use this information to determine when to output the notification.
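For the navigation case, the third time period can be bounded by simple arithmetic on the remaining distance: the prompt must finish playing before the turn is reached. A hedged sketch, with all parameter names invented:

```python
def navigation_max_delay(distance_to_turn_m, speed_m_s,
                         spoken_duration_s, margin_s=2.0):
    """Longest a 'turn right' prompt can wait before part of the
    directional information becomes incorrect: the vehicle must still
    have time to hear the prompt, plus a safety margin, before the turn."""
    time_to_turn_s = distance_to_turn_m / speed_m_s
    return max(0.0, time_to_turn_s - spoken_duration_s - margin_s)

# E.g., 700 m from the turn at 20 m/s leaves 35 s; a 3 s prompt and a 2 s
# margin let the notification wait up to 30 s for a pause in conversation.
print(navigation_max_delay(700, 20, 3))  # 30.0
```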
- The notification may be a notification for an incoming request for a communication, such as a received phone call, in some examples. Computing device 2 may receive an incoming request for communication. Computing device 2 may generate, at a beginning of the first time period, the audio notification that indicates the request for communication has been received. Notification module 10 may determine the third time period based on the beginning of the first time period and a time when the request for communication would expire. - In some examples,
computing device 2 does not record the audio detected viamicrophone 7. For example,computing device 2 does not store a copy of the audio in any storage device, such asstorage device 60. In this sense,computing device 2 is not “listening” to any conversations or determining the content of the conversations. Instead, even thoughmicrophone 7 may be receiving audio data,computing device 2 does not make any records or determines the contents of the audio data. - In another example, if a user is speaking near
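One way to honor that constraint is to reduce each audio frame to a single transient boolean and discard the samples immediately, so nothing identifying the conversation's content is ever kept. A minimal sketch under assumed frame format and threshold:

```python
def monitor_for_speech(frames, threshold=0.02):
    """Yield True/False per frame without retaining any audio.

    Each frame is reduced to a short-term energy value and then
    discarded; no samples are written to storage, so the device keeps
    no record of what was said -- only whether someone was speaking.
    """
    for frame in frames:
        energy = sum(s * s for s in frame) / max(len(frame), 1)
        yield energy > threshold   # the frame itself goes out of scope here
```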
- In another example, if a user is speaking near computing device 2, notification module 10 may output portions of the notification to alert the user to the information while reducing the likelihood of interrupting the user. For example, notification module 10 may output part of a notification during a quiet moment of a conversation when computing device 2 is functioning as a cell phone. In another example, if an email application receives an email into an inbox while a user is composing an email, notification module 10 delays outputting the notification until the user stops typing. In some examples, an additional pause threshold, such as thirty seconds or one minute, may be reached before notification module 10 instructs an output device to output the notification. In another example, notification module 10 defers interruption if a user is in the midst of performing an operation, such as a scroll operation, and has not yet completed the operation.
- Thus, a computing device 2 configured to perform techniques described herein may wait to give audio notifications, such as driving directions, until determining that people have stopped talking, to reduce the likelihood that someone speaks over the instructions. Computing device 2 may interrupt user actions, such as scrolling or speaking, upon certain conditions, such as before a vehicle transporting computing device 2 passes the road the instructions indicate it should turn onto. Thus, computing device 2 may perform an intelligent interruption by receiving audio data via microphone 7 and analyzing the audio data to determine whether someone is speaking proximate to computing device 2.
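The same deferral logic applies to on-device interaction such as typing or scrolling: output waits for the user to pause, but never longer than a hard cap. A hypothetical sketch; the event names, the pause gap, and the thirty-second cap are assumptions drawn from the examples above rather than specified mechanisms.

```python
import time

PAUSE_S = 1.5        # assumed quiet gap in typing/scrolling that counts as a pause
MAX_HOLD_S = 30.0    # assumed hard cap ("thirty seconds or one minute" above)

class InteractionGate:
    """Defer a pending notification while the user is actively interacting."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_input = None     # time of the most recent keystroke/scroll
        self.held_since = None     # time the notification started waiting

    def on_user_input(self):
        # Call on every keystroke or scroll event.
        self.last_input = self.now()

    def should_output(self) -> bool:
        t = self.now()
        if self.held_since is None:
            self.held_since = t
        if t - self.held_since >= MAX_HOLD_S:
            return True            # cap reached: interrupt anyway
        if self.last_input is None:
            return True            # no interaction observed; output now
        return t - self.last_input >= PAUSE_S   # user paused long enough
```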
- Thus, computing devices implementing techniques described herein can suppress notifications until a suitable output time is determined. Notification module 10 may not deliver a notification immediately upon its generation and instead may take into account an environment of computing device 2 or how a user is interacting with computing device 2. Notifications may not interrupt the user as often, which causes UID 4 to behave in a more predictable manner and leads to an improved user experience. In contrast, for example, if a user is trying to tap on a particular graphical element displayed on GUI 14 and, while the user is moving a finger to tap on the graphical element, notification module 10 causes a notification to be outputted over the graphical element, the user may tap on the notification instead of the graphical element. Furthermore, computing device 2 does not merely schedule notifications to appear at a certain time, because there is no guarantee that a user will not be attempting to interact with computing device 2 at the scheduled time.
- A computing device configured according to techniques described herein may output notifications in response to determining that there is a pause in the user's interactions with other users or with the computing device. The computing device may output notifications that require immediate output right away while delaying the output of other, less urgent, notifications for up to some period of time (e.g., 0.1 seconds, 1 second, etc.). Each different notification type may have a different delay threshold (e.g., 100 milliseconds for a text message, 2 seconds for an email, 5 seconds for navigation directions, etc.; see the sketch below). If the condition causing the delay of the notification is still present at the expiration of the delay window, the computing device outputs the notification. Thus, techniques described herein provide a way for computing device 2 to minimize the distractions caused by notifications (e.g., by not interrupting a current user task) while also increasing the likelihood that the user will actually receive the notification.
- In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
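Picking up the per-type delay thresholds described above, the policy can be captured as a small table consulted when a notification is generated. The types and window values below come from the examples in this disclosure; the function shape, poll interval, and names are assumptions for illustration only.

```python
import time

# Delay windows per notification type, taken from the examples above.
DELAY_WINDOW_S = {
    "text_message": 0.1,    # 100 milliseconds
    "email": 2.0,
    "navigation": 5.0,
}

def resolve_output_time(kind: str, generated_at: float,
                        condition_present) -> float:
    """Return the moment at which to output a notification of `kind`.

    `condition_present` is a callable reporting whether the delaying
    condition (speech, typing, scrolling) still holds. Output happens at
    the first moment the condition clears, or at the window's expiration,
    whichever comes first.
    """
    deadline = generated_at + DELAY_WINDOW_S.get(kind, 0.0)
    while time.monotonic() < deadline:
        if not condition_present():
            return time.monotonic()   # condition cleared early
        time.sleep(0.05)              # poll interval (assumed)
    return deadline                   # window expired: output regardless
```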
- By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
- Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
- The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
- Various examples have been described in this disclosure. These and other examples are within the scope of the following claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/619,775 US20150194165A1 (en) | 2014-01-08 | 2015-02-11 | Limiting notification interruptions |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/150,391 US9037455B1 (en) | 2014-01-08 | 2014-01-08 | Limiting notification interruptions |
US14/619,775 US20150194165A1 (en) | 2014-01-08 | 2015-02-11 | Limiting notification interruptions |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/150,391 Continuation US9037455B1 (en) | 2014-01-08 | 2014-01-08 | Limiting notification interruptions |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150194165A1 true US20150194165A1 (en) | 2015-07-09 |
Family
ID=52272948
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/150,391 Active US9037455B1 (en) | 2014-01-08 | 2014-01-08 | Limiting notification interruptions |
US14/619,775 Abandoned US20150194165A1 (en) | 2014-01-08 | 2015-02-11 | Limiting notification interruptions |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/150,391 Active US9037455B1 (en) | 2014-01-08 | 2014-01-08 | Limiting notification interruptions |
Country Status (3)
Country | Link |
---|---|
US (2) | US9037455B1 (en) |
EP (1) | EP2894560A1 (en) |
CN (1) | CN104765447B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8073681B2 (en) | 2006-10-16 | 2011-12-06 | Voicebox Technologies, Inc. | System and method for a cooperative conversational voice user interface |
US7818176B2 (en) | 2007-02-06 | 2010-10-19 | Voicebox Technologies, Inc. | System and method for selecting and presenting advertisements based on natural language processing of voice-based input |
US8140335B2 (en) | 2007-12-11 | 2012-03-20 | Voicebox Technologies, Inc. | System and method for providing a natural language voice user interface in an integrated voice navigation services environment |
US8495505B2 (en) * | 2008-01-10 | 2013-07-23 | International Business Machines Corporation | Perspective based tagging and visualization of avatars in a virtual world |
US9305548B2 (en) | 2008-05-27 | 2016-04-05 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8326637B2 (en) | 2009-02-20 | 2012-12-04 | Voicebox Technologies, Inc. | System and method for processing multi-modal device interactions in a natural language voice services environment |
JP6065369B2 (en) * | 2012-02-03 | 2017-01-25 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US9750084B2 (en) * | 2014-04-30 | 2017-08-29 | Samsung Electronics Co., Ltd. | Apparatus, method, and system for desynchronizing notifications across multiple devices |
US9898459B2 (en) | 2014-09-16 | 2018-02-20 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US9626703B2 (en) | 2014-09-16 | 2017-04-18 | Voicebox Technologies Corporation | Voice commerce |
CN107003999B (en) | 2014-10-15 | 2020-08-21 | 声钰科技 | System and method for subsequent response to a user's prior natural language input |
US10614799B2 (en) | 2014-11-26 | 2020-04-07 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
US10431214B2 (en) | 2014-11-26 | 2019-10-01 | Voicebox Technologies Corporation | System and method of determining a domain and/or an action related to a natural language input |
WO2016157658A1 (en) * | 2015-03-31 | 2016-10-06 | ソニー株式会社 | Information processing device, control method, and program |
US10417447B2 (en) * | 2015-06-15 | 2019-09-17 | Arris Enterprises Llc | Selective display of private user information |
KR20170099188A (en) * | 2016-02-23 | 2017-08-31 | 엘지전자 주식회사 | Driver Assistance Apparatus and Vehicle Having The Same |
DE102016202968B4 (en) * | 2016-02-25 | 2024-05-02 | Bayerische Motoren Werke Aktiengesellschaft | Acoustic reproduction of a digital audio medium in a motor vehicle |
US10055006B2 (en) * | 2016-03-29 | 2018-08-21 | Microsoft Technology Licensing, Llc | Reducing system energy consumption through event trigger coalescing |
US10331784B2 (en) | 2016-07-29 | 2019-06-25 | Voicebox Technologies Corporation | System and method of disambiguating natural language processing requests |
US11140116B2 (en) | 2017-01-31 | 2021-10-05 | Samsung Electronics Co., Ltd | Method for providing notification to uncover execution screen and electronic apparatus for performing same |
US20180350360A1 (en) * | 2017-05-31 | 2018-12-06 | Lenovo (Singapore) Pte. Ltd. | Provide non-obtrusive output |
WO2019005227A1 (en) | 2017-06-30 | 2019-01-03 | Google Llc | Methods, systems, and media for voice-based call operations |
WO2019005233A1 (en) | 2017-06-30 | 2019-01-03 | Google Llc | METHODS, SYSTEMS, AND MEDIA FOR CONNECTING AN IoT DEVICE TO A CALL |
US10248379B2 (en) | 2017-07-27 | 2019-04-02 | Motorola Solutions, Inc. | Automatic and selective context-based gating of a speech-output function of an electronic digital assistant |
US10498685B2 (en) * | 2017-11-20 | 2019-12-03 | Google Llc | Systems, methods, and apparatus for controlling provisioning of notifications based on sources of the notifications |
US11340962B2 (en) * | 2018-09-11 | 2022-05-24 | Apple Inc. | Multiple notification user interface |
US11164577B2 (en) | 2019-01-23 | 2021-11-02 | Cisco Technology, Inc. | Conversation aware meeting prompts |
US12046235B2 (en) * | 2021-07-29 | 2024-07-23 | Lenovo (Singapore) Pte. Ltd. | Unmuted microphone notification |
US12090924B2 (en) * | 2021-12-21 | 2024-09-17 | Gm Cruise Holdings Llc | Conversation detector to insert audible announcements |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62220998A (en) * | 1986-03-22 | 1987-09-29 | 工業技術院長 | Voice recognition equipment |
US5765130A (en) * | 1996-05-21 | 1998-06-09 | Applied Language Technologies, Inc. | Method and apparatus for facilitating speech barge-in in connection with voice recognition systems |
JP3507686B2 (en) * | 1998-01-16 | 2004-03-15 | 日本電気株式会社 | Communication device having voice recognition function and communication method |
JPH11224179A (en) * | 1998-02-05 | 1999-08-17 | Fujitsu Ltd | Interactive interface system |
US6393272B1 (en) * | 1999-06-02 | 2002-05-21 | Agere Systems Guardian Corp. | Wireless answer and hold feature |
US20030055768A1 (en) * | 1999-07-02 | 2003-03-20 | Anaya Ana Gabriela | Alert delivery and delivery performance in a monitoring system |
GB9928011D0 (en) * | 1999-11-27 | 2000-01-26 | Ibm | Voice processing system |
US7243130B2 (en) * | 2000-03-16 | 2007-07-10 | Microsoft Corporation | Notification platform architecture |
US8701027B2 (en) * | 2000-03-16 | 2014-04-15 | Microsoft Corporation | Scope user interface for displaying the priorities and properties of multiple informational items |
US20020087649A1 (en) * | 2000-03-16 | 2002-07-04 | Horvitz Eric J. | Bounded-deferral policies for reducing the disruptiveness of notifications |
US7444383B2 (en) * | 2000-06-17 | 2008-10-28 | Microsoft Corporation | Bounded-deferral policies for guiding the timing of alerting, interaction and communications using local sensory information |
AU2002246550A1 (en) * | 2000-11-30 | 2002-08-06 | Enterprise Integration Group, Inc. | Method and system for preventing error amplification in natural language dialogues |
WO2002052546A1 (en) * | 2000-12-27 | 2002-07-04 | Intel Corporation | Voice barge-in in telephony speech recognition |
US7283808B2 (en) * | 2001-01-18 | 2007-10-16 | Research In Motion Limited | System, method and mobile device for remote control of a voice mail system |
GB0120672D0 (en) * | 2001-08-24 | 2001-10-17 | Mitel Knowledge Corp | Intermediate voice and DTMF detector device for improved speech recognition utilisation and penetration |
US7069221B2 (en) * | 2001-10-26 | 2006-06-27 | Speechworks International, Inc. | Non-target barge-in detection |
US7162421B1 (en) * | 2002-05-06 | 2007-01-09 | Nuance Communications | Dynamic barge-in in a speech-responsive system |
US7024353B2 (en) * | 2002-08-09 | 2006-04-04 | Motorola, Inc. | Distributed speech recognition with back-end voice activity detection apparatus and method |
US20060223502A1 (en) * | 2003-04-22 | 2006-10-05 | Spinvox Limited | Method of providing voicemails to a wireless information device |
KR100546758B1 (en) * | 2003-06-30 | 2006-01-26 | 한국전자통신연구원 | Apparatus and method for determining rate in mutual encoding of speech |
US7392188B2 (en) * | 2003-07-31 | 2008-06-24 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method enabling acoustic barge-in |
US20060123360A1 (en) * | 2004-12-03 | 2006-06-08 | Picsel Research Limited | User interfaces for data processing devices and systems |
US20060262719A1 (en) * | 2005-05-18 | 2006-11-23 | Binshi Cao | Method of blank-and-burst signaling |
US20060286993A1 (en) * | 2005-06-20 | 2006-12-21 | Motorola, Inc. | Throttling server communications in a communication network |
US8305939B2 (en) * | 2005-10-13 | 2012-11-06 | International Business Machines Corporation | Selective teleconference interruption |
US8121653B2 (en) * | 2005-11-19 | 2012-02-21 | Massachusetts Institute Of Technology | Methods and apparatus for autonomously managing communications using an intelligent intermediary |
JP4154616B2 (en) * | 2006-03-14 | 2008-09-24 | 日本電気株式会社 | Advertisement insertion method and server for PoC and PoC extended communication system |
US7606536B1 (en) * | 2006-04-03 | 2009-10-20 | Rockwell Collins, Inc. | Temporal co-site interference reduction |
WO2009090702A1 (en) * | 2008-01-17 | 2009-07-23 | Mitsubishi Electric Corporation | On-vehicle guidance apparatus |
US10375223B2 (en) * | 2008-08-28 | 2019-08-06 | Qualcomm Incorporated | Notifying a user of events in a computing device |
US8412525B2 (en) * | 2009-04-30 | 2013-04-02 | Microsoft Corporation | Noise robust speech classifier ensemble |
WO2011029048A2 (en) * | 2009-09-04 | 2011-03-10 | Massachusetts Institute Of Technology | Method and apparatus for audio source separation |
US8356316B2 (en) * | 2009-12-17 | 2013-01-15 | At&T Intellectual Property I, Lp | Method, system and computer program product for an emergency alert system for audio announcement |
US8793611B2 (en) * | 2010-01-06 | 2014-07-29 | Apple Inc. | Device, method, and graphical user interface for manipulating selectable user interface objects |
US8600754B2 (en) * | 2010-04-19 | 2013-12-03 | Qualcomm Incorporated | System and method of providing voice updates from a navigation system that recognizes an active conversation |
US8619965B1 (en) * | 2010-05-07 | 2013-12-31 | Abraham & Son | On-hold processing for telephonic systems |
US8350681B2 (en) * | 2010-06-02 | 2013-01-08 | Research In Motion Limited | System and method for escalating event alerts |
JP2012141449A (en) * | 2010-12-28 | 2012-07-26 | Toshiba Corp | Voice processing device, voice processing system and voice processing method |
US20120311045A1 (en) * | 2011-05-31 | 2012-12-06 | Dany Sylvain | Notification services to one or more subscriber devices |
US9891800B2 (en) * | 2011-06-17 | 2018-02-13 | Nokia Technologies Oy | Method and apparatus for providing a notification mechanism |
DE112012005270T5 (en) | 2011-12-14 | 2014-10-30 | International Business Machines Corporation | Message exchange system, information processing unit and message exchange method and program |
US8731912B1 (en) * | 2013-01-16 | 2014-05-20 | Google Inc. | Delaying audio notifications |
US20140288939A1 (en) * | 2013-03-20 | 2014-09-25 | Navteq B.V. | Method and apparatus for optimizing timing of audio commands based on recognized audio patterns |
- 2014-01-08 US US14/150,391 patent/US9037455B1/en active Active
- 2014-12-30 EP EP14200560.2A patent/EP2894560A1/en not_active Withdrawn
- 2015-01-08 CN CN201510009355.1A patent/CN104765447B/en active Active
- 2015-02-11 US US14/619,775 patent/US20150194165A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5373330A (en) * | 1993-01-19 | 1994-12-13 | Smart Vcr Limited Partnership | Remote-controlled VCR using an associated TV for audible feedback |
US20010042136A1 (en) * | 2000-03-17 | 2001-11-15 | David Guedalia | On-the -fly message notification system and methodology |
US8452266B2 (en) * | 2004-07-12 | 2013-05-28 | Research In Motion Limited | Delayed user notification of events in a mobile device |
US8023661B2 (en) * | 2007-03-05 | 2011-09-20 | Simplexgrinnell Lp | Self-adjusting and self-modifying addressable speaker |
US20090238386A1 (en) * | 2007-12-25 | 2009-09-24 | Personics Holding, Inc | Method and system for event reminder using an earpiece |
US20090240497A1 (en) * | 2007-12-25 | 2009-09-24 | Personics Holding, Inc. | Method and system for message alert and delivery using an earpiece |
US20120215537A1 (en) * | 2011-02-17 | 2012-08-23 | Yoshihiro Igarashi | Sound Recognition Operation Apparatus and Sound Recognition Operation Method |
Cited By (173)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11979836B2 (en) | 2007-04-03 | 2024-05-07 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US11900936B2 (en) | 2008-10-02 | 2024-02-13 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US12361943B2 (en) | 2008-10-02 | 2025-07-15 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11043088B2 (en) | 2009-09-30 | 2021-06-22 | Apple Inc. | Self adapting haptic device |
US11605273B2 (en) | 2009-09-30 | 2023-03-14 | Apple Inc. | Self-adapting electronic device |
US12094328B2 (en) | 2009-09-30 | 2024-09-17 | Apple Inc. | Device having a camera used to detect visual cues that activate a function of the device |
US9934661B2 (en) | 2009-09-30 | 2018-04-03 | Apple Inc. | Self adapting haptic device |
US10475300B2 (en) | 2009-09-30 | 2019-11-12 | Apple Inc. | Self adapting haptic device |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US12165635B2 (en) | 2010-01-18 | 2024-12-10 | Apple Inc. | Intelligent automated assistant |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10013058B2 (en) | 2010-09-21 | 2018-07-03 | Apple Inc. | Touch-based user interface with haptic feedback |
US10120446B2 (en) | 2010-11-19 | 2018-11-06 | Apple Inc. | Haptic input device |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US9911553B2 (en) | 2012-09-28 | 2018-03-06 | Apple Inc. | Ultra low travel keyboard |
US9997306B2 (en) | 2012-09-28 | 2018-06-12 | Apple Inc. | Ultra low travel keyboard |
US9436430B2 (en) * | 2013-01-18 | 2016-09-06 | Denso Corporation | Audio output control device |
US20150363155A1 (en) * | 2013-01-18 | 2015-12-17 | Denso Corporation | Audio output control device |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US11636869B2 (en) | 2013-02-07 | 2023-04-25 | Apple Inc. | Voice trigger for a digital assistant |
US12277954B2 (en) | 2013-02-07 | 2025-04-15 | Apple Inc. | Voice trigger for a digital assistant |
US12009007B2 (en) | 2013-02-07 | 2024-06-11 | Apple Inc. | Voice trigger for a digital assistant |
US11557310B2 (en) | 2013-02-07 | 2023-01-17 | Apple Inc. | Voice trigger for a digital assistant |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US12073147B2 (en) | 2013-06-09 | 2024-08-27 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US9779592B1 (en) | 2013-09-26 | 2017-10-03 | Apple Inc. | Geared haptic feedback element |
US9928950B2 (en) | 2013-09-27 | 2018-03-27 | Apple Inc. | Polarized magnetic actuators for haptic response |
US9886093B2 (en) | 2013-09-27 | 2018-02-06 | Apple Inc. | Band with haptic actuators |
US10126817B2 (en) | 2013-09-29 | 2018-11-13 | Apple Inc. | Devices and methods for creating haptic effects |
US10651716B2 (en) | 2013-09-30 | 2020-05-12 | Apple Inc. | Magnetic actuators for haptic response |
US10236760B2 (en) | 2013-09-30 | 2019-03-19 | Apple Inc. | Magnetic actuators for haptic response |
US10459521B2 (en) | 2013-10-22 | 2019-10-29 | Apple Inc. | Touch surface for simulating materials |
US10276001B2 (en) | 2013-12-10 | 2019-04-30 | Apple Inc. | Band attachment mechanism with haptic response |
US10545604B2 (en) | 2014-04-21 | 2020-01-28 | Apple Inc. | Apportionment of forces for multi-touch input devices of electronic devices |
US12067990B2 (en) | 2014-05-30 | 2024-08-20 | Apple Inc. | Intelligent assistant for home automation |
US12118999B2 (en) | 2014-05-30 | 2024-10-15 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10069392B2 (en) | 2014-06-03 | 2018-09-04 | Apple Inc. | Linear vibrator with enclosed mass assembly structure |
US12200297B2 (en) | 2014-06-30 | 2025-01-14 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10490035B2 (en) | 2014-09-02 | 2019-11-26 | Apple Inc. | Haptic notifications |
US9830782B2 (en) | 2014-09-02 | 2017-11-28 | Apple Inc. | Haptic notifications |
US20160095083A1 (en) * | 2014-09-26 | 2016-03-31 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling notification in electronic device |
US9848076B2 (en) * | 2014-09-26 | 2017-12-19 | Samsung Electronics Co., Ltd | Electronic device and method for controlling notification in electronic device |
US10353467B2 (en) | 2015-03-06 | 2019-07-16 | Apple Inc. | Calibration of haptic devices |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US12236952B2 (en) | 2015-03-08 | 2025-02-25 | Apple Inc. | Virtual assistant activation |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
US10481691B2 (en) | 2015-04-17 | 2019-11-19 | Apple Inc. | Contracting and elongating materials for providing input and output for an electronic device |
US11402911B2 (en) | 2015-04-17 | 2022-08-02 | Apple Inc. | Contracting and elongating materials for providing input and output for an electronic device |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US12333404B2 (en) | 2015-05-15 | 2025-06-17 | Apple Inc. | Virtual assistant in a communication session |
US12154016B2 (en) | 2015-05-15 | 2024-11-26 | Apple Inc. | Virtual assistant in a communication session |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US12204932B2 (en) | 2015-09-08 | 2025-01-21 | Apple Inc. | Distributed personal assistant |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US12386491B2 (en) | 2015-09-08 | 2025-08-12 | Apple Inc. | Intelligent automated assistant in a media environment |
US10566888B2 (en) | 2015-09-08 | 2020-02-18 | Apple Inc. | Linear actuators for use in electronic devices |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10609677B2 (en) | 2016-03-04 | 2020-03-31 | Apple Inc. | Situationally-aware alerts |
WO2017151916A3 (en) * | 2016-03-04 | 2017-10-19 | Apple Inc. | Situationally-aware haptic alerts |
US10039080B2 (en) | 2016-03-04 | 2018-07-31 | Apple Inc. | Situationally-aware alerts |
US10268272B2 (en) | 2016-03-31 | 2019-04-23 | Apple Inc. | Dampening mechanical modes of a haptic actuator using a delay |
US10118696B1 (en) | 2016-03-31 | 2018-11-06 | Steven M. Hoffberg | Steerable rotating projectile |
US11230375B1 (en) | 2016-03-31 | 2022-01-25 | Steven M. Hoffberg | Steerable rotating projectile |
US10809805B2 (en) | 2016-03-31 | 2020-10-20 | Apple Inc. | Dampening mechanical modes of a haptic actuator using a delay |
US12223282B2 (en) | 2016-06-09 | 2025-02-11 | Apple Inc. | Intelligent automated assistant in a home environment |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US12175977B2 (en) | 2016-06-10 | 2024-12-24 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US12293763B2 (en) | 2016-06-11 | 2025-05-06 | Apple Inc. | Application integration with a digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US12197817B2 (en) | 2016-06-11 | 2025-01-14 | Apple Inc. | Intelligent device arbitration and control |
EP3443442A1 (en) * | 2016-06-12 | 2019-02-20 | Apple Inc. | Devices, methods, and graphical user interfaces for providing haptic feedback |
US12260234B2 (en) | 2017-01-09 | 2025-03-25 | Apple Inc. | Application integration with a digital assistant |
US10938767B2 (en) * | 2017-03-14 | 2021-03-02 | Google Llc | Outputting reengagement alerts by a computing device |
WO2018169572A1 (en) * | 2017-03-14 | 2018-09-20 | Google Llc | Outputting reengagement alerts by a computing device |
US10210111B2 (en) * | 2017-04-10 | 2019-02-19 | Dell Products L.P. | Systems and methods for minimizing audio glitches when incurring system management interrupt latency |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11538469B2 (en) | 2017-05-12 | 2022-12-27 | Apple Inc. | Low-latency intelligent automated assistant |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US12254887B2 (en) | 2017-05-16 | 2025-03-18 | Apple Inc. | Far-field extension of digital assistant services for providing a notification of an event to a user |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US12026197B2 (en) | 2017-05-16 | 2024-07-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10622538B2 (en) | 2017-07-18 | 2020-04-14 | Apple Inc. | Techniques for providing a haptic output and sensing a haptic input using a piezoelectric body |
US11712637B1 (en) | 2018-03-23 | 2023-08-01 | Steven M. Hoffberg | Steerable disk or ball |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US12211502B2 (en) | 2018-03-26 | 2025-01-28 | Apple Inc. | Natural assistant interaction |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US12386434B2 (en) | 2018-06-01 | 2025-08-12 | Apple Inc. | Attention aware virtual assistant dismissal |
US12080287B2 (en) | 2018-06-01 | 2024-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US12061752B2 (en) | 2018-06-01 | 2024-08-13 | Apple Inc. | Attention aware virtual assistant dismissal |
US11438078B2 (en) * | 2018-07-24 | 2022-09-06 | Comcast Cable Communications, Llc | Controlling vibration output from a computing device |
US12136956B2 (en) | 2018-07-24 | 2024-11-05 | Comcast Cable Communications, Llc | Controlling vibration output from a computing device |
US11757539B2 (en) | 2018-07-24 | 2023-09-12 | Comcast Cable Communications, Llc | Controlling vibration output from a computing device |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US10599223B1 (en) | 2018-09-28 | 2020-03-24 | Apple Inc. | Button providing force sensing and/or haptic output |
US12367879B2 (en) | 2018-09-28 | 2025-07-22 | Apple Inc. | Multi-modal inputs for voice commands |
US10691211B2 (en) | 2018-09-28 | 2020-06-23 | Apple Inc. | Button providing force sensing and/or haptic output |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
US12136419B2 (en) | 2019-03-18 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
US11705130B2 (en) * | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US20230290352A1 (en) * | 2019-05-06 | 2023-09-14 | Apple Inc. | Spoken notifications |
WO2020226784A1 (en) * | 2019-05-06 | 2020-11-12 | Apple Inc. | Spoken notifications |
US11217251B2 (en) * | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US12154571B2 (en) * | 2019-05-06 | 2024-11-26 | Apple Inc. | Spoken notifications |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
US20220068278A1 (en) * | 2019-05-06 | 2022-03-03 | Apple Inc. | Spoken notifications |
EP4362440A3 (en) * | 2019-05-06 | 2024-07-24 | Apple Inc. | Spoken notifications |
US12216894B2 (en) | 2019-05-06 | 2025-02-04 | Apple Inc. | User configurable task triggers |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11380470B2 (en) | 2019-09-24 | 2022-07-05 | Apple Inc. | Methods to control force in reluctance actuators based on flux related parameters |
US11763971B2 (en) | 2019-09-24 | 2023-09-19 | Apple Inc. | Methods to control force in reluctance actuators based on flux related parameters |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US12197712B2 (en) | 2020-05-11 | 2025-01-14 | Apple Inc. | Providing relevant data items based on context |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US12301635B2 (en) | 2020-05-11 | 2025-05-13 | Apple Inc. | Digital assistant hardware abstraction |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US12219314B2 (en) | 2020-07-21 | 2025-02-04 | Apple Inc. | User identification using headphones |
US11977683B2 (en) | 2021-03-12 | 2024-05-07 | Apple Inc. | Modular systems configured to provide localized haptic feedback using inertial actuators |
US11809631B2 (en) | 2021-09-21 | 2023-11-07 | Apple Inc. | Reluctance haptic engine for an electronic device |
US12431128B2 (en) | 2022-08-05 | 2025-09-30 | Apple Inc. | Task flow identification based on user intent |
Also Published As
Publication number | Publication date |
---|---|
CN104765447B (en) | 2018-05-25 |
CN104765447A (en) | 2015-07-08 |
US9037455B1 (en) | 2015-05-19 |
EP2894560A1 (en) | 2015-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9037455B1 (en) | Limiting notification interruptions | |
US10469430B2 (en) | Predictive forwarding of notification data | |
CA2929140C (en) | Electronic device and method of determining suggested responses to text-based communications | |
EP2985984B1 (en) | Communication device for presenting the communication-log during a do-not-disturb mode | |
US9203252B2 (en) | Redirecting notifications to a wearable computing device | |
US8601561B1 (en) | Interactive overlay to prevent unintentional inputs | |
US10992779B2 (en) | Limiting alerts on a computing device | |
US10129198B2 (en) | Contextually driven messaging system | |
CN106030506A (en) | Context-Based Audio Triggers | |
US20140007115A1 (en) | Multi-modal behavior awareness for human natural command control | |
US11875274B1 (en) | Coherency detection and information management system | |
US20120210277A1 (en) | Usage based screen management | |
US11716414B2 (en) | Context aware airplane mode | |
KR101833944B1 (en) | Mobile terminal and control method therof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GOOGLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FAABORG, ALEXANDER;HARRIS, TRISTAN;ROBISON, AUSTIN;SIGNING DATES FROM 20131223 TO 20140107;REEL/FRAME:034940/0573 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044144/0001 Effective date: 20170929 |
|
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE THE REMOVAL OF THE INCORRECTLY RECORDED APPLICATION NUMBERS 14/149802 AND 15/419313 PREVIOUSLY RECORDED AT REEL: 44144 FRAME: 1. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:068092/0502 Effective date: 20170929 |