
GB2445436A - Mobile device which can sense its present situation - Google Patents


Info

Publication number
GB2445436A
GB2445436A (application GB0711759A)
Authority
GB
United Kingdom
Prior art keywords
mobile device
light
sensor
sound
emitter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0711759A
Other versions
GB0711759D0 (en)
Inventor
Masao Kajihara
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Symbian Software Ltd
Original Assignee
Symbian Software Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Symbian Software Ltd filed Critical Symbian Software Ltd
Publication of GB0711759D0
Priority to PCT/GB2007/004948 (WO2008075082A1)
Publication of GB2445436A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M19/00Current supply arrangements for telephone systems
    • H04M19/02Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone
    • H04M19/04Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827Portable transceivers
    • H04B1/3833Hand-held transceivers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B17/00Monitoring; Testing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones
    • H04Q7/32
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/605Portable telephones adapted for handsfree use involving control of the receiver volume to provide a dual operational mode at close or far distance from the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/12Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. Transmission Power Control [TPC] or power classes
    • H04W52/02Power saving arrangements
    • H04W52/0209Power saving arrangements in terminal devices
    • H04W52/0251Power saving arrangements in terminal devices using monitoring of local events, e.g. events related to user activity
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)

Abstract

A mobile device which uses sensor components such as cameras and microphones to perform a determination as to the present situation of the device. In one embodiment a camera mounted on the mobile device is able to capture images of the surroundings of the device, and processing of those images is then undertaken to determine a present situation of the device. A sound signal captured by a microphone can be processed to provide an estimate of the device's present situation. A stimulus signal for measurement by the sensor can be provided, and this again is preferably provided by a component which is typically standard in such a device. For example, the mobile device screen can be caused to light, to attempt to light up the device's present surroundings. The mobile device ringer volume may be changed, or the magnitude or frequency of vibration of the device vibrator altered.

Description

Mobile Device and Method of Operation Thereof
Technical Field
The present invention relates to a mobile device, and method of operation thereof, and in particular to a mobile device and method of operation in which the device is able to sense its present situation, and, optionally, adapt its behaviour in dependence thereon.
Background to the Invention
Mobile devices such as mobile communications devices are well known in the art. Figure 1 is a block diagram of the internal elements of such a device, being a conventional mobile telecommunications device 10. Here, the mobile telecommunications device 10 comprises RF processor hardware 32, baseband processor hardware 34, and a power regulator 36, all of which deal with the telecommunications operations of the mobile telecommunications device 10, i.e. using telecommunications protocols to make, for example, voice calls, data connections, and the like. The user interface and applications which run on the smart phone are run by the application processor 38, which runs the graphical user interface, as well as any applications requested by the user, and provides an interface to the telecommunications stack provided by the baseband processor. Also typically provided are secondary communication subsystems, such as, for example, Bluetooth subsystem 40. An infrared subsystem may also be provided.
The mobile telecommunications device 10 is further provided with various memory, such as ROM 42, RAM 44 and user data memory 46. The graphical user interface and any data which is required therefor, such as icon images and the like, are stored in ROM 42. RAM 44 typically stores any applications available on the device 10, as well as associated data. User data memory 46 stores data which is accessible by the user, such as contact data, messages, images, user settings data, and the like.
Concerning the physical user interface of such a typical mobile telecommunications device 10, and in particular those elements thereof which can be used to alert the user to an incoming call or message, the device 10 is typically provided with a speaker 20 for providing an audio output, and a corresponding microphone 22, which provides an audio input into the device. As is well known, the speaker 20 can be used to provide an audio output during the voice or video call, or can, in some devices of the prior art, also provide other audio output, such as for playing digitally encoded music tracks or the like. The microphone 22 is provided to capture a user's voice during voice or video calls, and can also be used by other applications. To provide a video output, a screen 24 is typically provided and it is also common for a video camera 26, capable of capturing still or moving images, to be provided. Very often, the screen 24 is used to display the image presently being captured by the camera 26 i.e. the signal flow is from the camera 26 to the screen 24.
In addition to the above, typical mobile telecommunications devices 10 of the prior art also include a vibrator 29, typically a pancake/coin motor, as is well known in the art. Other components capable of producing a vibration are also known, such as piezoelectric vibration generation devices, which can also be used. The usual operation is for the vibrator to be activated when an incoming call or message is received, to provide a physical movement of the device which can be felt by a user, for example if the device is in a user's pocket or hand.
It has also been proposed within the prior art, although this is not typical, to include an accelerometer 28 within a mobile communications device 10, to detect movement of the device. A prior example of such a mobile telecommunications device incorporating a vibrator and an accelerometer will be discussed next.
Prior Art
United States patent application publication number 2006/0172706 describes a wireless communications device having a vibration motor therein, arranged to vibrate the wireless device for a predetermined period. The wireless device is also provided with an accelerometer, at which an acceleration measurement can be taken during the period when the vibration motor is activated. By combining an onboard accelerometer with an onboard vibrator, wireless devices are then given the means to detect if they are being held by users. Particularly, if a wireless device is on the table or is in the user's pocket and is not being held by anyone, the accelerometer can predictably measure the acceleration pattern that occurs when a vibrator is turned on. However, the acceleration patterns differ when a wireless device is being held by a person and when it is not being held, and in particular the acceleration patterns that are measured by the accelerometer reflect the effective mass seen by the vibrator. When the wireless device is being held, the effective mass is greater as it includes the mass of the wireless device and that of the user's hand and arm. Thus, the wireless device provided with the vibrator and accelerometer can determine if it is being held by turning on its vibrator for a predetermined period of time and by reading the output of its accelerometer.
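By way of illustration, the detection loop just described can be reduced to a few lines. The following is a minimal sketch only: the vibrator and accelerometer objects and their start/stop/read methods are hypothetical driver wrappers, and the period and threshold values are assumed calibration constants, not figures from the publication.

```python
import time
import statistics

VIBRATION_PERIOD_S = 0.5  # the "predetermined period" of vibration (assumed value)
HELD_THRESHOLD = 0.8      # assumed calibrated amplitude boundary (m/s^2)

def is_being_held(vibrator, accelerometer):
    """Vibrate for a fixed period and classify the measured acceleration.
    A held device presents a larger effective mass (device + hand + arm),
    so the measured vibration amplitude is damped relative to a device
    lying free on a table."""
    samples = []
    vibrator.start()
    try:
        deadline = time.monotonic() + VIBRATION_PERIOD_S
        while time.monotonic() < deadline:
            samples.append(abs(accelerometer.read()))  # hypothetical driver call
    finally:
        vibrator.stop()
    # Lower measured amplitude implies greater effective mass, i.e. likely held.
    return statistics.mean(samples) < HELD_THRESHOLD
```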
US 2006/0172706 therefore describes how, using a vibrator and accelerometer, a wireless device is able to determine information pertaining to its surrounding environment, and adjust its operation accordingly. However, the technique of US 2006/0172706 requires the wireless device to be fitted with an accelerometer, a component which is not usually included within such devices, and which adds to the cost. Moreover, the exclusive use of an accelerometer to detect the device's situation means that only a relatively small number of situations can be detected i.e. whether the device is being held or not. Whilst the ability of a device to adapt its operation depending on its surrounding environment as described in US 2006/0172706 is useful, it would be more advantageous if such sensing of its surrounding environment could be performed without the need for additional components within the device, thus saving cost and avoiding component integration difficulties.
Summary of the Invention
Embodiments of the invention provide a mobile device, such as, for example, a mobile telephone, which uses sensor components which are conventionally already found in such mobile devices, such as cameras and microphones, to perform a determination as to the present situation of the device. For example, in one embodiment a camera mounted on the mobile device (and which would typically be provided for other uses) is able to capture images of the surroundings of the device, and processing of those images is then undertaken to determine a present situation of the device. Likewise, in another embodiment a sound signal captured by a microphone can be processed to provide an estimate of the device's present situation. In preferred embodiments a stimulus signal for measurement by the sensor can be provided, and this again is preferably provided by a component which is typically standard in such a device. Thus, for example, to aid situation determination by a camera, the mobile device screen can be caused to light, to attempt to light up the device's present surroundings. Similarly, where a microphone is being used as the sensor device, then the mobile device speaker (which would also be typically provided) can emit a test sound, and attenuation, distortion or other changes in the sound as measured by the microphone used to inform the situation determination. In further embodiments, multiple sensors may be used together, and this allows for additional situations to be discriminated between. In preferred embodiments the ability to distinguish between situations allows for the mobile device behaviour to be adapted in dependence on the discrimination. For example, the mobile device ringer volume may be changed, or the magnitude or frequency of vibration of the device vibrator altered.
In view of the above, from a first aspect the present invention provides a mobile device comprising: a light or sound sensor; and a processor for processing output signals from said light or sound sensor; the arrangement being such that said processor determines from said output signals a particular one of a plurality of predetermined possible situations in which said mobile device is presently to be found. Given that light and sound sensors are typically already provided on mobile devices for other uses, the use of such sensors in the context of the present invention means that no additional device componentry is required.
Preferably, the possible situations comprise at least two or more selected from the group comprising: in an enclosed space, in a user's hand, face up on a surface, face down on a surface. These are typical situations in which a mobile device may be found.
Preferably, the device behaviour is adapted in dependence on the determination of the present situation. The adaptation may take the form, for example, of adapting a ringer volume, or a vibrator vibration intensity or frequency. Alternatively, the screen brightness may be controlled.
In preferred embodiments the light sensor is a camera and the sound sensor is a microphone. Such components are typically already found on mobile devices, and hence the component cost of such a device is kept lower.
Preferably in some embodiments the mobile device has light and sound sensors, said determination being performed in dependence on output signals from both such sensors. Using multiple such sensors allows for discrimination of a larger number of situations than when using just one sensor.
Preferably the mobile device has a second light sensor in addition to said first light sensor, said determination being performed in dependence on output signals from both such sensors. Again, using two such sensors allows for more situations to be discriminated. Preferably, to aid the discrimination, the second light sensor is provided on a different face of said device than said first light sensor.
In preferred embodiments the mobile device further comprises light or sound emitters, the arrangement being such that output signals from said light or sound sensor are obtained whilst said light or sound emitter is emitting. In particular, preferably said light or sound sensor input signal is compared by said processor with an output signal emitted by said light or sound emitter to perform said situation determination. This allows for a more accurate situation determination to be performed.
In preferred embodiments said light emitter is a display screen of said mobile device and said sound emitter is a speaker of said mobile device. Again, these are components typically found on a mobile device, and hence component count and cost can be kept low.
In preferred embodiments the mobile device further comprises: a vibrator; and an accelerometer arranged to measure vibrations of said mobile device caused, at least in part, by said vibrator; the arrangement further being such that said processor receives an accelerometer output signal indicative of the measured vibrations, and said situation determination is performed in dependence thereon. Providing such a vibrator and accelerometer sensor combination in addition to the light and/or sound sensors further increases the number of situations which may be discriminated between.
From another aspect the present invention further provides a mobile device having a plurality of sensors for sensing one or more sensor media; a processor for processing signals produced by said sensors indicative of the present state of the sensor media; the arrangement being such that said processor determines from said produced signals a particular present situation of at least three or more predetermined possible situations in which said mobile device is to be presently found. Thus, using multiple sensors a greater number of mobile device situations can be discriminated between than has heretofore been the case.
In preferred embodiments, said processor determines from said produced signals a particular present situation of at least four predetermined possible situations in which said mobile device is to be presently found. The number of situations is therefore further increased.
Preferably, the predetermined possible situations are selected from the group comprising: in an enclosed space, in a user's hand, face up on a surface, face down on a surface. These are typical situations in which a mobile device may be found.
In embodiments the multiple sensors preferably comprise two or more sensors selected from the group comprising: a first light sensor, a second light sensor, a sound sensor, and a motion sensor. Preferably said first light sensor is a first camera mounted on a first face of the mobile device. Such components are typical in mobile devices. Likewise, preferably said second light sensor is a second camera mounted on a second face of the mobile device. Again, such components are also common, and hence no additional components are used.
Similarly, preferably said sound sensor is a microphone and said motion sensor is an accelerometer.
In preferred embodiments the mobile device further comprises at least one emitter for emitting energy in the form of at least one of the sensor media detected by at least one of the sensors. This allows for an active investigation of the device's present surroundings to be undertaken, thus increasing the ability and accuracy of the discrimination between situations.
Preferably said produced signals are obtained from at least one of said sensors whilst said at least one emitter is emitting energy. More preferably said determination performed by said processor includes comparing a produced signal from at least one of the sensors which senses the sensor medium in which said emitter emits energy with an output signal of the emitter. Again, such functions allow the number of situations which can be discriminated to be higher than has heretofore been the case.
In embodiments the at least one emitter is preferably chosen from a group comprising: a light emitter, a sound emitter, and a motion generator. Preferably the light emitter is a display screen of the mobile device, and preferably the sound emitter is a speaker of the mobile device. Such components are typically provided in mobile devices for other uses, and hence component count and cost is not increased.
From another aspect the invention provides a method of operating a mobile device provided with a light or sound sensor, comprising the steps of: obtaining output signals from said light or sound sensor indicative of the mobile device surroundings; processing said output signals to determine therefrom a particular one of a plurality of predetermined possible situations in which said mobile device is presently to be found.
The same advantages as previously described in respect of the first aspect are obtained. Moreover, the same further features and associated advantages as previously described in respect of the first aspect may also be provided.
From a fourth aspect the invention also provides a method of operating a mobile device having a plurality of sensors for sensing one or more sensor media, comprising the steps of: obtaining signals from said plurality of sensors, said signals being indicative of the present state of the sensor media; and processing said signals to determine therefrom a particular present situation of at least three or more predetermined possible situations in which said mobile device is to be presently found.
The same advantages as previously described in respect of the second aspect are obtained. Moreover, the same further features and associated advantages as previously described in respect of the second aspect may also be provided.
From another aspect the invention additionally provides a computer program or suite of computer programs so arranged such that when executed by a computer processor they cause the computer to perform the steps of any of the third and fourth aspects above. Moreover, additionally provided is a machine readable storage medium storing the computer program or at least one of the suite of computer programs according to the fifth aspect. The machine readable storage medium may be any medium known in the art, such as solid state memory, optical discs, magneto-optical discs, magnetic discs, or the like.
Brief Description of the Drawings
Further features and advantages of the present invention will become apparent from the following description of embodiments thereof, presented by way of example only, and by reference to the accompanying drawings, wherein like reference numerals refer to like parts, and wherein:
Figure 1 is a block diagram of a mobile communications device of the prior art;
Figure 2 is a drawing illustrating a first situation in which such a device may be found; Figure 3 is a drawing illustrating a second situation in which such a device may be found;
Figure 4a is a drawing illustrating a third situation in which such a device may be found; Figure 4b is a drawing illustrating a further situation in which such a device may be found;
Figure 5 is a block diagram of a mobile communications device according to a first embodiment of the invention;
Figure 6 is a diagram illustrating how light emitted from a screen can be reflected into a camera;
Figure 7 is a diagram illustrating how light emitted from a screen can be absorbed by surrounding material;
Figure 8 is a flow diagram of a method of operation of a mobile communications device according to a first embodiment of the invention;
Figure 9 is a diagram illustrating how sound emitted from a speaker can be picked up by a microphone;
Figure 10 is a diagram illustrating how sound emitted via a speaker can be absorbed by surrounding material;
Figure 11 is a flow diagram illustrating a method of operation of a mobile communications device according to a second embodiment of the invention;
Figure 12 is a table illustrating how various outputs and inputs of a mobile communications device are affected by various mobile device situations;
Figure 13 is a flow diagram of a method of operation of a mobile communications device according to another embodiment of the invention; and
Figure 14 is a flow diagram of a method of operation of a mobile communications device according to yet another embodiment of the invention.
Description of the Embodiments
Embodiments of the invention to be described relate to a mobile device, such as a mobile communications device such as a telephone, or other mobile device such as a PDA, or the like, which is able to sense its surroundings, and then, optionally, adapt its behaviour in dependence on the sensed surroundings. Within the description below the focus is on describing the mobile device as being a telephone, but this is not to be taken as a limiting feature, and in other embodiments of the invention the mobile device may be any other type of device, such as, as mentioned, a PDA, or a laptop, media player, or the like.
Figures 2, 3 and 4 depict particular situations in which a mobile communications device 10 may be found. More particularly, Figure 2 illustrates how a mobile communications device 10 can commonly be found in an enclosed space, such as a user's pocket. In this case, the device 10 is in close contact with the material of the enclosed space 16, such that the material substantially surrounds the device 10. The material of the enclosed space 16 would commonly be a type of fabric.
Figure 3 illustrates a second situation in which a mobile device 10 may be found, in this case being held in the hand of a user 12. Typically, this would be when the device is being used, or about to be used.
Figures 4a and 4b illustrate a third situation which is commonly found, that is where the device 10 is simply placed on a surface, such as table top 14. In this case, as shown in Figure 4a the device may be placed face up, or, as shown in Figure 4b, face down.
Depending on which of the above situations a mobile device finds itself in, in embodiments of the invention the device may conveniently adapt its behaviour in dependence on the situation. For example, in the case of the device finding itself in an enclosed situation such as shown in Figure 2, where the device is commonly within a user's pocket, then the ringer of the device may be made to be louder, so that the user can hear the ringer more easily. Likewise, any vibrator provided in the device 10 may also be controlled to cause the device to vibrate with a greater magnitude and/or different (preferably higher) frequency, so that the user feels the vibrations caused by the device more clearly. Similarly, however, because the device is in an enclosed space, and the user cannot see the screen of the device 10, it would not be useful for the screen to be lit whilst in the enclosed space. Therefore, the device 10, upon detecting that it is in an enclosed space 16, may control the screen so that it is not lit. This has the advantage of saving device battery power, which is an important consideration for mobile devices.
When the device finds itself in the second situation shown in Figure 3, wherein the device is in its user's hand, then the user has direct tactile communication with the device 10, which tactile communication can be used to attract the user's attention, for example without having to activate the ringer of the device. Thus, for example, where the device 10 finds itself in a user's hand 12, then the ringer may be controlled so as to be of reduced volume, or to be rendered mute, as not being necessary. Whilst in a user's hand 12, it is likely that the user may be looking at the device, and hence in this case it would be useful for the screen of the device 10 to be lit. Likewise, being in a user's hand, the user will feel any vibrations caused by the vibrator in the device, but given the direct tactile communication between the device and the user's hand, it is not necessary for these vibrations to be of any great magnitude. Thus, for example, the magnitude of vibrations and/or frequency of vibrations produced by a vibrator in the device can be reduced. Again, muting the ring tone, and reducing the magnitude of vibrations produced by the vibrator, saves battery power.
A third situation in which the device may find itself is that of Figure 4a, i.e. face up on a surface 14, such as a table top, or the like. In this case, the device is not in contact with, or necessarily in close proximity to, a user, and hence there is no need to provide a tactile output in the form of the vibrator. Thus, in this situation, the vibrator within the device can be disabled. With respect to the ringer, however, the ringer may be set at its default volume, or may, alternatively, be caused to be louder. Likewise, given that the device is face up, it is possible that the user may be able to see the screen of the device, and hence the screen is preferably lit, or made brighter. In this way, only those outputs of the device which are able to attract the user's attention usefully are used.
In the case of Figure 4b where the device finds itself face down on the surface, then as in the situation of Figure 4a there is no need to activate the vibrator, as the device is not in tactile communication with a user, and the user will be unable to feel such vibrations. Again as with the situation of Figure 4a, however, it is necessary to activate the ringer, either at the default volume, or louder if necessary. However, different from Figure 4a, because the device is face down there is no need to activate the screen, as any light emitted by the screen will be blocked by the surface 14. Thus, in the situation of Figure 4b the screen does not need to be lit, thus further saving power.
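Gathering the adaptations described above into one place, a minimal sketch of a situation-to-output mapping might look as follows. The profile values and the device setter methods are illustrative assumptions rather than anything prescribed here.

```python
from dataclasses import dataclass

@dataclass
class AlertProfile:
    ringer: str        # "mute", "reduced", "default" or "loud"
    vibrator: str      # "off", "reduced" or "strong"
    light_screen: bool

# One profile per situation of Figures 2 to 4, following the behaviour
# described above; the exact levels are illustrative.
PROFILES = {
    "enclosed":  AlertProfile(ringer="loud",    vibrator="strong",  light_screen=False),
    "in_hand":   AlertProfile(ringer="reduced", vibrator="reduced", light_screen=True),
    "face_up":   AlertProfile(ringer="default", vibrator="off",     light_screen=True),
    "face_down": AlertProfile(ringer="default", vibrator="off",     light_screen=False),
}

def adapt_outputs(device, situation):
    """Apply the alert profile for the detected situation (hypothetical device API)."""
    p = PROFILES[situation]
    device.set_ringer(p.ringer)
    device.set_vibrator(p.vibrator)
    device.set_screen_lit(p.light_screen)
```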
Having described the possible modifications to the mobile communication device's behaviour which can be performed in dependence on the sensing of the device's situation or circumstances, several embodiments illustrating how the device may sense its surroundings will now be described. In particular, embodiments of the invention focus on using components which have become standard in mobile telecommunications devices, such as speakers and microphones, and cameras and screens, to enable sensing of the device's surroundings. In this way, reliance on relatively complicated and expensive components such as accelerometers is reduced.
Figure 5 is a block diagram of a mobile communications device 10 according to the embodiments of the invention. The mobile communications device 10 of Figure 5 is identical to the mobile communications device 10 of Figure 1 described previously, but with the difference that stored in the ROM 42 is an attitude detection program which may be run by the application processor 38 either periodically or constantly to detect the attitude i.e. the environment and circumstances of the mobile device 10, to permit adaptation of the device's outputs. The steps performed by the attitude detection program stored in the ROM 42 when run by the application processor 38 in each of the embodiments will be described later. Note that each of the embodiments to be described is based upon the provision of the attitude detection program in the ROM 42, the differences between each embodiment lying in the steps performed by the mobile device 10 under the control of the attitude detection program of each embodiment, when run by the application processor 38.
In view of the above, a first embodiment will be described with respect to Figures 6 to 8. Within the first embodiment, light is used as the sensing medium.
Figures 6 and 7 illustrate how the screen 24 and camera 26 can be used together in the first embodiment to detect the mobile device 10 situation. The first embodiment relies on the ability of the video camera 26 to collect images of the mobile device's situation, and in particular to discriminate whether the device is in a light or a dark place. Additionally, the first embodiment also relies on the use of the screen 24 as a light source, which can be used for lighting the surrounding environs of the device, for viewing by the camera. In particular, light emitted from the screen 24 can reflect off nearby objects and be captured by the camera 26, as well as ambient light, and any information thus obtained can be used to determine the mobile device 10 situation. Such an arrangement is shown in Figure 6, where light emitted by the screen 24 can reflect off any nearby objects, and be captured by the camera 26. In contrast, as shown in Figure 7, when the device 10 finds itself in an enclosed space, and is surrounded by, often dark, material such as that of pocket 16, then in this case the camera 26 will not capture any ambient light, as in the case of Figure 6, and moreover, due to the close proximity of the material 16 to screen 24, any light emitted by the screen 24 is either absorbed by, or reflected with a high degree of attenuation from, the material 16 of the enclosed space, such as a pocket. In this case, even when the screen 24 is lit, the camera 26 is unable to capture a light image. The attitude detection program stored in the ROM 42 of the first embodiment is therefore able to use such image information obtainable from the camera to discriminate between the various situations in which the mobile device may find itself. An example operation of the attitude detection program stored in the ROM 42 in performing such a discrimination is shown in Figure 8.
More particularly, with reference to Figure 8, the attitude detection program stored in the ROM 42 and run on the application processor 38 within the first embodiment, uses the camera and screen to determine the device 10's situation, by first obtaining an image from the camera at step 8.2. At step 8.4 a determination is performed as to whether the image contains light. This determination may, for example, take the form of a thresholding operation looking at the grey scale values of the pixels of the image, and determining an average grey scale value, and comparing that average to a threshold. Where the average grey scale value is above the threshold (where a higher grey scale value indicates white, and a lower grey scale value indicates black) then the image is determined to contain light, i.e. is a light image. Here, the threshold may for example be set to be the median possible greyscale pixel value.
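As a concrete illustration of the thresholding operation of step 8.4, a minimal sketch is given below; it assumes an 8-bit greyscale image supplied as a flat sequence of pixel values, with 128 as the median possible value suggested above.

```python
def image_contains_light(pixels, threshold=128):
    """Return True if the image is 'light': average the greyscale pixel
    values (0 = black, 255 = white) and compare against the threshold."""
    return sum(pixels) / len(pixels) > threshold
```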
In this case, if the image from the camera does contain light, i.e. the average greyscale pixel value is greater than the threshold value, then the device knows that it is not in an enclosed space, and must either be in a user's hand, or on a table top, as determined at step 8.18. With this knowledge, at step 8.20 the mobile device preferably adapts its output such as the ringer and/or vibrator and/or screen to the detected hand or table top situation.
In the first embodiment it is not possible using just a camera to distinguish as to whether the device is either in the user's hand, or on the table top surface, but what is known is that at step 8.20 the device is not in an enclosed space such as a pocket. Therefore, the device knows that there is no need to increase the volume of the ringer, as the ringer will not be attenuated by the enclosed space. Therefore, the ringer volume can be kept at the default volume. Similarly, because the device knows that it is not in an enclosed space, then there is a possibility of the user being able to see the screen of the device, and hence the device can cause the screen to be lit to alert the user to the receipt of a call or message. With respect to the vibrator, we described previously that when in a user's hand the vibrator can be activated at a reduced setting, and when on the table top it need not be activated at all. Within the first embodiment at step 8.20 the device does not know whether it is either in the user's hand or on a table top, and hence the vibrator may be activated at a reduced setting i.e. taking that result which is most likely to attract the user's attention.
Returning to step 8.4, if the image captured by the camera at step 8.2 does not contain any light, i.e. the grey scale thresholding operation gives an average value below the threshold value, then at step 8.6 the device determines that it is necessary to perform a night check, and at step 8.8 lights the screen of the device. This will cause light to be emitted from the screen 24, as shown in Figures 6 and 7. At the same time as the screen is lit, a second image is obtained from the camera at step 8.10, and an evaluation as to whether the image contains light is performed at step 8.12. The evaluation at step 8.12 may be identical to that performed at step 8.4 i.e. a thresholding operation performed on the average grey scale value of the image pixels. However, in this case the threshold may be set at a different, preferably lower, level than the threshold of step 8.4. The reason for this is that in step 8.4 an evaluation is performed as to whether the image contains ambient light i.e. daylight, which can be expected to be at a relatively high level. However, in step 8.12 an evaluation is being performed as to whether the image contains light emitted from the screen 24, and reflected from nearby objects. Given the power of a typical screen 24 in a mobile device, and the amount of light emitted therefrom, the level of light detected at step 8.12 will therefore be relatively low. Hence, a lower threshold will typically be used at step 8.12 than at step 8.4.
If, however, it is determined that the image does contain light at step 8.12 above the set threshold level, then this is likely because light emitted from the screen 24 has reflected off nearby objects i.e. has not been absorbed or highly attenuated by material in close proximity to the screen, as would be the case if the mobile device 10 was in a user's pocket or the like. Thus, the device can then infer at step 8.22 that it is either in the user's hand, or on the table top. In this case, the device's outputs can be adapted at step 8.20, and in the same manner as before.
Returning to step 8.12, if it is determined thereat that the image does not contain any light i.e. that the lower threshold value for the average grey scale of the image pixels is not met, then it is determined at step 8.14 that the phone must be in an enclosed space such as the user's pocket 16. In this case the attitude detection program then adapts the ringer and/or vibrator and/or screen output at step 8.16 to the pocket situation. As described previously, when in an enclosed space such as the user's pocket, the ringer is preferably caused to ring more loudly, and the vibrator to vibrate with a greater magnitude. However, it is not necessary to light the screen to attract the user's attention, as the user will be unable to see such a lit screen.
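The whole decision flow of Figure 8 then reduces to a short routine. The sketch below reuses image_contains_light from above; the camera and screen driver calls and the lower screen-lit threshold value are assumptions for illustration only.

```python
AMBIENT_THRESHOLD = 128  # step 8.4: ambient light check (median greyscale value)
SCREEN_THRESHOLD = 32    # step 8.12: lower threshold for reflected screen light (assumed)

def detect_situation_by_light(camera, screen):
    """Decision flow of Figure 8, using hypothetical camera/screen drivers."""
    if image_contains_light(camera.capture(), AMBIENT_THRESHOLD):  # steps 8.2-8.4
        return "hand_or_table"                                     # step 8.18
    screen.light()                                                 # steps 8.6-8.8: night check
    try:
        lit_image = camera.capture()                               # step 8.10
    finally:
        screen.unlight()
    if image_contains_light(lit_image, SCREEN_THRESHOLD):          # step 8.12
        return "hand_or_table"                                     # step 8.22
    return "enclosed"                                              # step 8.14
```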
Thus, according to the first embodiment, using just the camera 26 as a sensor and the screen 24 as a light source it is possible for the mobile device 10 to detect whether or not it is in an enclosed situation such as a user's pocket, or whether it is either in the user's hand or on a surface. The device 10 can then adapt its user alert outputs in dependence on the determined situation. The great advantage of the first embodiment is that conventional mobile communications devices such as mobile telephones are typically provided with screens and cameras already, and hence no additional componentry is required within the device in order to operate according to the first embodiment. Instead, all that is required is additional software to cause the application processor 38 to control the screen 24 and camera 26, and to process the signals received therefrom. This is provided by the attitude detection program stored in the ROM 42.
A second embodiment will now be described with respect to Figures 9 to 11. In the second embodiment, sound is used as the sensing medium for the mobile device 10, in that the speaker 20 of the mobile device 10 can be caused to emit a test signal, which is then recorded by the microphone 22. Depending on a comparison of the signal recorded by the microphone with the expected test pattern emitted by the speaker, an estimate of the environmental situation of the mobile device, and in particular which of the situations noted previously it finds itself in, can be made.
More particularly, Figure 9 illustrates how the speaker 20 can be controlled to emit sound waves, preferably of known characteristics, such as a test tone or the like. When the device 10 is not in an enclosed space such as a user's pocket, then the sound waves emanate cleanly from the speaker 20, and can be picked up by the microphone 22. Here, the signal detected by the microphone 22 should be substantially similar to the emitted test tone, or at least be a known variation thereof.
In contrast, as shown in Figure 10, when the device 10 is in an enclosed space, then the sound waves emitted by the speaker 20 due to the test tone are absorbed, attenuated, or otherwise distorted by the material, such that the microphone 22 will pick up an attenuated and/or distorted signal corresponding to the test tone. By comparing the signal recorded by the microphone 22 during the emission of the test tone from the speaker 20 with the known test tone, an estimate can be made as to whether the device 10 is located within the enclosed situation, as shown in Figure 10, or in an open situation, as in Figure 9. Figure 11 is a flow diagram illustrating the operation of the attitude detection program in the ROM 42 according to the second embodiment, in performing such a determination.
More particularly, at step 11.2 the attitude detection program stores the test tone patterns, being the "bright" i.e. clean or undistorted test tone pattern itself, as well as information relating to a low attenuated version of the test tone pattern i.e. corresponding to the test tone having undergone a low degree of attenuation and/or distortion; as well as a high attenuated version of the test tone pattern i.e. a version of the test tone pattern which has undergone a high degree of attenuation and/or distortion. The test tone patterns are stored as part of the data of the attitude detection program in the ROM 42.
In order to detect the situation of the device 10 using the speaker and microphone, at step 11.4 the attitude detection program controls the application processor to cause the speaker 20 to emit the bright version of the test tone from the speaker. At the same time, the microphone 22 is controlled to record its input during the period while the test tone is being emitted, at step 11.6. Then, at step 11.8 the recorded input from the microphone 22 is compared with the test tone patterns stored in the ROM 42, and a determination performed at step 11.10 as to which test tone pattern is most similar. How the comparison and determination steps 11.8 and 11.10 are performed is a matter of implementation detail, and is dependent upon the information representing the test tone patterns. For example, where the test tone patterns stored at step 11.2 represent actual signal patterns then pattern matching techniques can be employed, to compare the recorded input with the test tone patterns, and determine the pattern that is most similar. Various conventional pattern matching techniques which may be used in this respect are known in the art, such as those used in speech recognition systems or the like. In other embodiments, a simpler comparison and determination can be performed. For example, the information relating to the low attenuated test tone and high attenuated test tone may simply be a signal threshold level which determines the degree of attenuation of the signal from the bright or unattenuated version. In this case, the average power, or absolute average signal level, of the recorded input can be compared against the low attenuation and high attenuation threshold values, and a determination as to the degree of attenuation of the signal made based on this thresholding operation.
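For the simpler, threshold-based variant just described, the comparison of steps 11.8 and 11.10 might be sketched as follows; the threshold values would be calibration data stored with the attitude detection program, and are assumed here to satisfy high_threshold < low_threshold.

```python
def classify_attenuation(recorded, low_threshold, high_threshold):
    """Compare the average absolute level of the recorded input against
    stored thresholds and report which stored test tone version (bright,
    low attenuated or high attenuated) it most resembles.
    Assumes high_threshold < low_threshold."""
    level = sum(abs(s) for s in recorded) / len(recorded)
    if level <= high_threshold:
        return "high_attenuation"
    if level <= low_threshold:
        return "low_attenuation"
    return "bright"
```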
In another embodiment, a distortion measurement of the signal may be used, rather than power or absolute signal level, with the distortion of the recorded input signal being compared with distortion values for the bright test pattern, the low attenuated test pattern, and the high attenuated test pattern. In further embodiments combinations of these measurements may be used to make the decision.
Howsoever the determination is performed at step 11.10, at step 11.12 an evaluation is performed as to whether the bright pattern is most similar, and if this is not the case, then at step 11.18 an evaluation is performed as to whether the low attenuated test tone pattern was the most similar. If this evaluation returns negative, then at step 11.22 an evaluation as to whether the high attenuated pattern was the most similar is undertaken. Here this must be the case, and hence at step 11.24 the determination is made that, given the high degree of attenuation of the signal, the mobile device is in the enclosed situation such as the user's pocket. In such a case, processing proceeds to step 11.26, wherein the behaviour of the mobile device is then adapted to the pocket situation. In this respect, the adaptation of the mobile device outputs to the pocket situation at step 11.26 can be identical to that of step 8.16 of the first embodiment, described previously. That is, when determined to be in the pocket situation, the vibrator and ringer are caused to be of greater magnitude, and the screen is preferably not lit.
Returning to step 11.18, here an evaluation was performed as to whether the low attenuated pattern was most similar. If this is the case, then this is because it is likely that the phone is either in an enclosed space such as a pocket (but the precise arrangement is such that a high attenuation of the signal has not occurred, although some attenuation or distortion has occurred) or that the phone is face down on a surface such as a table top. In this case, if the phone is face down then the speaker will likely be facing into the surface, and hence any sound waves emitted therefrom and subsequently recorded by the microphone will be distorted and/or attenuated by the surface. Thus, at step 11.20 the determination that the mobile device is either in the face down on table top situation, or in an enclosed situation such as a pocket, is made, but it is not possible to distinguish further between these two situations. In view of these situations, however, in either case there is no need to light the screen of the mobile device, and, if the device is in fact face down on a table top, increasing the magnitude of the ringer and vibrator, as if the phone was in a pocket, will still attract the user's attention (and perhaps even more so). Therefore, in this circumstance where it is not possible to distinguish between the phone being either face down on the table top or in a pocket, it is reasonable to adapt the phone's output to the pocket situation, and hence processing proceeds to step 11.26.
Returning to step 11.12, if it was determined that the bright pattern was most similar, i.e. the recorded input from the microphone substantially reproduced the emitted test tone with very little attenuation or distortion, then here it is possible for the mobile device to determine that it is either in the user's hand, or face up on a table top, and such determination is made at step 11.14. It is not possible to distinguish further between these two situations, but as in the first embodiment it is possible to reconcile this lack of information and adapt the behaviour of the device accordingly to provide an output profile which is suitable to both the hand or table top situation. Therefore, at step 11.16 the device behaviour is adapted to the hand and table top situation, this adaptation being the same as previously described in step 8.20 of Figure 8 in respect of the first embodiment i.e. the screen is caused to be lit, but both the ringer and vibrator outputs can be reduced in magnitude.
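Putting the cascade of Figure 11 together with the classifier sketched earlier gives the following outline; the speaker and microphone driver interface is hypothetical.

```python
def detect_situation_by_sound(speaker, microphone, low_threshold, high_threshold):
    """Decision cascade of Figure 11: emit the bright test tone, record the
    microphone input, and map the degree of attenuation to a situation."""
    speaker.start_test_tone()                  # step 11.4 (hypothetical driver)
    try:
        recorded = microphone.record()         # step 11.6
    finally:
        speaker.stop_test_tone()
    match = classify_attenuation(recorded, low_threshold, high_threshold)  # steps 11.8-11.10
    if match == "bright":
        return "hand_or_face_up"               # step 11.14
    if match == "low_attenuation":
        return "face_down_or_enclosed"         # step 11.20
    return "enclosed"                          # step 11.24
```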
Thus, within the second embodiment a mobile device 10 is able to use sound as a sensing medium, using a speaker to emit a known sound, and then recording that sound through the microphone, as modified by the phone's surrounding environment. By then comparing the recorded sound with the original sound a determination can be performed as to the mobile device's situation, and its behaviour is adapted accordingly. Moreover, as with the first embodiment, no additional componentry is required within the mobile device other than that which is conventionally provided. Instead, again as in the first embodiment, all that is required is additional software, in the form of the attitude detection program stored in the ROM 42 and, in this case, the additional data in the form of the test tone patterns or information against which the recorded signal can be compared.
Within the first and second embodiments described previously which use respectively light and sound as the sensing medium it was possible to distinguish between some of the possible situations as shown in Figures 2 to 4 in which a mobile device might find itself, but not all of the situations. For example, within the first embodiment it was possible to determine that the phone is either in: i) the user's hand, or face up on the table top; or alternatively, ii) in a user's pocket, or face down on the table top. Similarly, in the second embodiment it was possible to determine whether the phone was either: i) in the user's pocket, or face down on the table top; ii) in the pocket; or iii) in the user's hand or face up on the table top. However, a definitive determination between all four possible situations was not possible.
In further embodiments to be described, however, further discrimination between situations becomes possible by combining the sensors, for example to use two or more sensor media, such as sound and light. Using multiple sensors in this manner enables a greater degree of differentiation between the mobile device situations to be performed. Before describing such embodiments, reference is first made to Figure 12 which is a table showing the degree to which sensor combinations experience attenuation or distortion depending on the situation in which the mobile device finds itself. For example, as shown in Figure 12 when the mobile device is in an enclosed situation, the sensor combination of speaker output and microphone input will suffer either a low, or a high degree of attenuation or distortion. However, when the device is in a partially enclosed situation such as in a user's hand, then it is likely that the speaker output/microphone input sensor combination will not suffer any attenuation or distortion. Where the device is in an open situation, such as on the table, attenuation or distortion will be experienced depending on whether the device is face up, or face down. Similar considerations can be made for each of the other sensor combinations, as shown in the table.
Additionally, the table also includes in this case the signal attenuation or distortion which would be suffered by a rear camera input provided on the mobile device. In this respect, it is common for many conventional mobile devices to be provided with two cameras, being one on the front face of the device i.e. the same face as the screen, and another on the rear face of the device i.e. the opposite face to the screen. By "rear camera input" in the table, we mean the input image obtained from the rear camera on the opposite face of the device to the screen. In this respect, when the device is in an enclosed space such as a bag or pocket the rear camera input will be highly attenuated and/or distorted i.e. will be dark. This will also be the case when the device is in a partially enclosed situation in a user's hand, as often the user's hand will obscure the rear camera input, and again a dark image will likely be obtained. However, when the device is in an open situation on a surface such as a table, then here the input will be the opposite of the front camera input, i.e. will be highly attenuated when the phone is in the face up position such that the face on which the rear camera is mounted is down against the surface, and will not be attenuated at all when the mobile device is in the face down position i.e. the face on which the rear camera is mounted is facing upwards.
Using the additional input information provided by the rear camera, it is possible to provide further embodiments, based on either of the first and second embodiments, which allow for further distinction between the device situations. For example, as mentioned, in both the first and second embodiments a conclusion can be drawn at step 8.14 (for the first embodiment), or step 11.20 (for the second embodiment) that the device is either in the user's pocket, or face down on the table top, but no further distinction can be drawn therebetween. However, using additionally the information available from the rear camera input, then a further distinction can be drawn between these two situations. Figure 14 therefore illustrates the additional steps to be performed to provide further embodiments, based on either the first or second embodiment, and which use an input signal from the rear camera to further distinguish between these two situations.
More particularly, with respect to Figure 14, following on from either step 8.14 (when based on the first embodiment using light) or step 11.20 (when based on the second embodiment using sound) wherein a determination has been made that the phone is either in the face down position on a surface, or in the user's pocket, then in order to distinguish between these two situations at step 14.2 an image is obtained from the rear camera on the mobile device, where this is provided. An evaluation is then performed as to whether the image from the rear camera is highly attenuated i.e. is the image dark or light, at step 14.4. As in the first embodiment, this evaluation can be performed by taking the grey scale values of the image pixels, and finding the average grey scale value. This can then be compared with a threshold value predetermined in advance as to whether an image is light or dark. Here, the threshold will typically be the median grey scale value. If it is determined at step 14.4 that the image is highly attenuated i.e. too dark, then the attitude determination program can conclude that, of the two situations i.e. face down on table top or in pocket, it is likely that the phone is in the user's pocket, and hence at step 14.12 the phone behaviour can then be adapted to the pocket situation. This adaptation can preferably take the same form as the adaptation used in the same situation in the first and second embodiments, such as at step 11.26 or step 8.16.
If, however, at step 14.4 it is determined that the image is not highly attenuated, i.e. the image is light, then the attitude determination program can conclude at step 14.6 that the phone is probably face down on the table top, such that the rear camera input is then capturing ambient light. In this case, the program then proceeds, at step 14.8, to adapt the phone to a face down on table top situation. As described previously, here it is not necessary to light the screen, and neither is it necessary to operate the vibrator, as the phone is not in any way in tactile communication with the user. Instead, all that need be activated is the phone ringer, which may either be kept at its default volume or increased in magnitude.
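Putting the two branches together, the Figure 14 refinement might look like the following sketch, reusing is_image_dark from above; to_grey_scale is a hypothetical helper standing in for whatever conversion the camera pipeline provides, and the returned labels are illustrative.

```python
def refine_pocket_or_face_down(rear_camera_frame) -> str:
    """Sketch of the Figure 14 refinement, entered once the primary
    light or sound test (step 8.14 or 11.20) has narrowed the situation
    to 'in pocket' or 'face down on table'."""
    grey = to_grey_scale(rear_camera_frame)  # hypothetical helper
    if is_image_dark(grey):
        # Step 14.12: a dark image means the phone is likely in a pocket.
        return "pocket"
    # Step 14.8: a light image means the rear camera is capturing
    # ambient light, so the phone is likely face down on the table top.
    return "face_down_on_table"
```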
Thus, the further embodiments based on Figure 14 provide the ability to further distinguish between the mobile device situations, and in particular as to whether the device is face down on a table top or in a user's pocket. Moreover, when based upon the first embodiment, the use of the rear camera means that the sensing medium is always light, as the front camera and screen are used to provide the initial determination, and the image captured from the rear camera is then used to provide the secondary determination. Conversely, when based upon the second embodiment, sound is used as the sensing medium to make the primary determination, and light, in terms of the image captured from the rear camera, is then used to make the secondary sensing determination. In both cases, however, multiple sensing devices are used to further distinguish the mobile device situation.
A final embodiment will now be described with respect to Figure 13. Here, within the embodiment to be described, a vibrator and accelerometer sensor combination is used to provide further distinction between the mobile phone situations. By combining the information obtainable by use of a vibrator and accelerometer sensor combination with other sensor combinations, such as the microphone and speaker combination, full distinction between the four possible situations shown previously in Figures 2 to 4 becomes possible. Whilst the use of a vibrator and accelerometer sensor combination was previously disclosed in US 2006/0172706 mentioned previously, therein it was merely used to distinguish between two possible mobile device situations, i.e. whether the device was in the user's hand or not. In the present embodiment, however, by combining the information obtainable from the vibrator and accelerometer sensor combination with the information obtainable from other sensor combinations, it becomes possible to distinguish between more than two different mobile device situations and, as will be shown, in the present embodiment between all four possible situations shown in Figures 2 to 4.
Referring now to Figure 13, in the present embodiment firstly the speaker and microphone sensor combination is used to perform a first determination, the results of which are then refined using the vibrator and accelerometer sensor combination. Therefore, at step 13.2 a test is performed using the speaker and microphone sensor combination. This test is essentially the same as in the second embodiment described previously, and involves emitting a test tone, recording the microphone input signal whilst the test tone is being emitted, and then comparing the recorded signal with test tone patterns to determine the degree of attenuation or distortion of the signal. At step 13.4 the recorded signal is examined to determine whether it is highly attenuated; if this is the case it is because the phone is likely in a user's pocket or other enclosed space, and hence at step 13.6 the phone behaviour is adapted to the pocket situation. In this respect, the adaptation to the pocket situation is preferably as described previously in the other embodiments.
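As an illustration of the step 13.2 comparison, one simple way to quantify attenuation is to compare the RMS energy of the recorded microphone signal against that of the emitted test tone; the sketch below, and in particular its decibel band edges, are assumptions and not values taken from the embodiments.

```python
import numpy as np

def attenuation_db(emitted: np.ndarray, recorded: np.ndarray) -> float:
    """Attenuation of the recorded signal relative to the emitted test
    tone, in decibels, measured as a ratio of RMS energies."""
    def rms(x: np.ndarray) -> float:
        return float(np.sqrt(np.mean(np.square(x))))
    return 20.0 * np.log10(rms(emitted) / max(rms(recorded), 1e-12))

def classify_sound_attenuation(db: float) -> str:
    """Band the measured attenuation as examined at steps 13.4 and 13.8;
    the band edges are illustrative assumptions."""
    if db > 20.0:
        return "high"   # step 13.4 branch: pocket or other enclosed space
    if db > 6.0:
        return "low"    # step 13.8 branch: pocket or face down on table
    return "none"       # step 13.20 branch: hand or face up on table
```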
If it is not determined at step 13.4 that the sound signal recorded by the microphone was highly attenuated or distorted, then at step 13.8 an evaluation is performed as to whether the signal suffered a low degree of attenuation or distortion. If this was the case, then as described previously in respect of the second embodiment, it is possible to conclude that the phone is likely either in a pocket or face down on a table. This determination is made at step 13.10. To distinguish between the situations, as in the embodiment just described it might be possible to use an input image from a rear camera, if provided, to distinguish between the two situations. However, in the present embodiment the accelerometer and vibrator sensor combination is used to distinguish between the two situations, and this is performed at step 13.12. More particularly, here the test involves, as in the prior art, activating the vibrator for a brief period of time, and at the same time using the accelerometer to record the vibration pattern experienced by the device. If the recorded pattern is highly attenuated compared to what was expected, then it is likely that the phone is being held close to the user's body or is in close contact with other material, and hence is likely to be in a pocket or other enclosed space. In contrast, if a low attenuation of the vibration is perceived by the accelerometer, then it is likely that the phone is on the table top. Examination of the vibration recorded by the accelerometer is performed at step 13.14, and if it is determined that there is a high attenuation then at step 13.16 the determination is made that the phone is in the pocket, and the mobile device behaviour is then adapted to the pocket situation. Similarly, if at step 13.14 it is evaluated that there was a low attenuation or distortion of the vibration, then at step 13.18 the conclusion is reached that the phone is likely face down on the table top, and hence the mobile device behaviour is adapted to this situation, as previously described.
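The vibrator and accelerometer test at step 13.12 might then be sketched as follows, reusing the attenuation measure above; the expected vibration pattern, the vibrate_and_record callback and the 10 dB threshold are all illustrative assumptions.

```python
def vibration_attenuated(expected_pattern: np.ndarray,
                         vibrate_and_record) -> bool:
    """Sketch of steps 13.12 to 13.14: drive the vibrator briefly,
    record the resulting vibration with the accelerometer, and judge
    whether it is highly attenuated relative to the expected pattern.

    vibrate_and_record is a hypothetical platform callback that runs
    the vibrator and returns the accelerometer trace captured meanwhile.
    """
    recorded = vibrate_and_record()
    # High attenuation implies damping by the user's body or clothing;
    # low attenuation implies the phone is resting on a hard surface.
    return attenuation_db(expected_pattern, recorded) > 10.0  # assumed
```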
Returning to step 13.8, if it is determined that the sound did not suffer even low attenuation or distortion, then at step 13.20 it must be the case that the recorded sound was substantially similar to the test tone, suffering little or no distortion or attenuation. In this case, as in the second embodiment, it is possible to conclude at step 13.22 that the mobile device is either in the user's hand or face up on a table. However, in the previous embodiments it was not possible to further distinguish between these situations.
In the present embodiment, it is possible to distinguish between these two situations by again using the vibrator and accelerometer sensor combination. Therefore, at step 13.24 the mobile device situation is tested with the vibrator and accelerometer sensor combination, the testing process being substantially as described previously, that is, substantially the same as in the prior art. If the vibrations produced by the vibrator as measured by the accelerometer have suffered a high degree of attenuation, as evaluated at step 13.26, then, as in the prior art, it is possible to deduce that the phone is likely in the user's hand. In this case, as shown at step 13.28, the mobile device behaviour is then adapted to the hand situation as previously described. That is, because the user is in close tactile contact with the device and is likely looking at the device, to alert the user the screen can be lit and a low magnitude of vibration produced by the vibrator. Such alerts are probably enough to alert the user, and hence it is not necessary to activate the ringer.
If step 13.26 does not detect a high degree of attenuation of the vibrations produced by the vibrator as measured by the accelerometer, and there is either no attenuation or a low degree of attenuation, as determined at step 13.30, then it is possible to conclude, as shown at step 13.32, that the phone is likely on the table top face up. In this case, the mobile device behaviour is then adapted to the face-up-on-table situation as described previously, i.e. the vibrator is disabled, the screen is caused to be lit, and the ringer can be activated at either the default or an increased volume.
Thus, within this embodiment, using multiple sensors including the accelerometer it becomes possible to detect all four of the possible mobile device situations noted in Figures 2 to 4 previously. This allows for further and closer control of the mobile device behaviour, as already described. In particular, being able to control the device behaviour in the manners described allows for a more efficient alerting of the user to incoming calls or messages, or the like, whilst adapting the phone behaviour to the situations ensures that no unnecessary output is used, thus saving battery power.
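Gathering the branches of Figure 13 into one place, the complete decision logic might be sketched as below, built from the illustrative helpers above; the situation labels and parameter names are assumptions, not terms from the embodiments.

```python
def determine_situation(test_tone, recorded_sound,
                        expected_vibration, vibrate_and_record) -> str:
    """Sketch of the complete Figure 13 flow: a speaker and microphone
    test, refined where ambiguous by the vibrator and accelerometer
    test, distinguishing all four situations of Figures 2 to 4."""
    band = classify_sound_attenuation(
        attenuation_db(test_tone, recorded_sound))
    if band == "high":
        return "pocket"                          # step 13.6
    damped = vibration_attenuated(expected_vibration, vibrate_and_record)
    if band == "low":
        # Steps 13.10 to 13.18: pocket versus face down on table.
        return "pocket" if damped else "face_down_on_table"
    # Steps 13.20 to 13.32: in hand versus face up on table.
    return "in_hand" if damped else "face_up_on_table"
```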
Various further changes or modifications may be made to the above described embodiments to provide further embodiments. For example, within the previously described embodiments we have described adapting either the vibrator magnitude or simply lighting the screen or not. In further embodiments, however, closer control of the screen may be performed, for example to cause the screen to light more brightly in some situations than in others. For example, where the screen is one of the primary means of alerting the user, such as in the case where the phone is in the user's hand, then the screen may be caused to light more brightly than would otherwise be the default in such a situation, to alert the user.
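One way to realise such situation-dependent behaviour is a simple adaptation table consulted whenever an alert fires; the entries below merely paraphrase the behaviours described above, and the pocket entry in particular is an assumption, as its details are given in earlier embodiments not reproduced here.

```python
# Illustrative adaptation table; the field values are assumptions drawn
# from the behaviours described in the text, not definitive settings.
ALERT_ADAPTATIONS = {
    "in_hand":            {"screen": "bright", "vibrator": "low", "ringer": "off"},
    "face_up_on_table":   {"screen": "lit",    "vibrator": "off", "ringer": "default"},
    "face_down_on_table": {"screen": "off",    "vibrator": "off", "ringer": "default"},
    "pocket":             {"screen": "off",    "vibrator": "on",  "ringer": "loud"},  # assumed
}
```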
Furthermore, various combinations of sensors may be used to provide further embodiments. Within the final embodiment described, a combination of sound and vibration as the sensor media was used to enable detection of all possible mobile phone situations. However, in other embodiments other combinations may be used, such as, for example, light and vibration, or light, sound and vibration. Examination of the information in the table of Figure 12 will indicate to the skilled person the various combinations of sensors which may be put together to distinguish between the mobile phone situations. Of course, in further embodiments additional mobile phone situations may be provided, to be distinguished between.
Various further modifications and variations may be made to provide further embodiments using the same inventive concept, any and all of which are intended to be encompassed by the appended claims.

Claims (1)

    1. A mobile device comprising: a light or sound sensor; and a processor for processing output signals from said light or sound sensor; the arrangement being such that said processor determines from said output signals a particular one of a plurality of predetermined possible situations in which said mobile device is presently to be found.
    2. A mobile device according to claim 1, wherein the possible situations comprise at least two or more selected from the group comprising: in an enclosed space, in a user's hand, face up on a surface, face down on a surface.
    3. A mobile device according to claims 1 or 2, wherein the device behaviour is adapted in dependence on the determination of the present situation.
    4. A mobile device according to any of claims 1 to 3, wherein the light sensor is a camera.
    5. A mobile device according to any of claims 1 to 4, wherein the sound sensor is a microphone.
    6. A mobile device according to any of the preceding claims, wherein the mobile device has light and sound sensors, said determination being performed in dependence on output signals from both such sensors.
    7. A mobile device according to any of the preceding claims, wherein the mobile device has a second light sensor in addition to said first light sensor, said determination being performed in dependence on output signals from both such sensors.
    8. A mobile device according to claim 7, wherein the second light sensor is provided on a different face of said device than said first light sensor.
    9. A mobile device according to any of the preceding claims, comprising a light or sound emitter, the arrangement being such that output signals from said light or sound sensor are obtained whilst said light or sound emitter is emitting.
    10. A mobile device according to claim 9, wherein said light or sound sensor input signal is compared by said processor with an output signal emitted by said light or sound emitter to perform said situation determination.
    11. A mobile device according to claims 9 or 10, wherein said light emitter is a display screen of said mobile device.
    12. A mobile device according to claims 9 to 11, wherein said sound emitter is a speaker of said mobile device.
    13. A mobile device according to any of claims 9 to 12, wherein said mobile device is provided with both a light emitter and a sound emitter.
    14. A mobile device according to any of the preceding claims, wherein said mobile device further comprises: a vibrator; and an accelerometer arranged to measure vibrations of the said mobile device caused, at least in part, by said vibrator; the arrangement further being such that said processor receives an accelerometer output signal indicative of the measured vibrations, and said situation determination is performed in dependence thereon.
    15. A mobile device having a plurality of sensors for sensing one or more sensor media; a processor for processing signals produced by said sensors indicative of the present state of the sensor media; the arrangement being such that said processor determines from said produced signals a particular present situation of at least three or more predetermined possible situations in which said mobile device is to be presently found.
    16. A mobile device according to claim 15, wherein said processor determines from said produced signals a particular present situation of at least four predetermined possible situations in which said mobile device is to be presently found.
    17. A mobile device according to claims 15 or 16, wherein the predetermined possible situations are selected from the group comprising: in an enclosed space, in a user's hand, face up on a surface, face down on a surface.
    18. A mobile device according to claims 15 to 17, wherein the multiple sensors comprise two or more sensors selected from the group comprising: a first light sensor, a second light sensor, a sound sensor, and a motion sensor.
    19. A mobile device according to claim 18, wherein said first light sensor is a first camera mounted on a first face of the mobile device.
    20. A mobile device according to claims 18 or 19, wherein said second light sensor is a second camera mounted on a second face of the mobile device.
    21. A mobile device according to claims 18 to 20, wherein said sound sensor is a microphone.
    22. A mobile device according to claims 18 to 21, wherein said motion sensor is an accelerometer.
    23. A mobile device according to any of claims 15 to 22, and further comprising at least one emitter for emitting energy in the form of at least one of the sensor media detected by at least one of the sensors.
    24. A mobile device according to claim 23, wherein said produced signals are obtained from at least one of said sensors whilst said at least one emitter is emitting energy.
    25. A mobile device according to claims 23 or 24, wherein said determination performed by said processor includes comparing a produced signal from at least one of the sensors which senses the sensor medium in which said emitter emits energy with an output signal of the emitter.
    26. A mobile device according to any of claims 23 to 25, wherein the at least one emitter is chosen from a group comprising: a light emitter, a sound emitter, and a motion generator.
    27. A mobile device according to claim 26, wherein the light emitter is a display screen of the mobile device.
    28. A mobile device according to claim 26 or 27, wherein the sound emitter is a speaker of the mobile device.
    29. A mobile device according to claim 26, 27, or 28, wherein the motion generator is a vibrator of the mobile device.
    30. A method of operating a mobile device provided with a light or sound sensor, comprising the steps of:
    obtaining output signals from said light or sound sensor indicative of the mobile device surroundings;
    processing said output signals to determine therefrom a particular one of a plurality of predetermined possible situations in which said mobile device is presently to be found.
    31. A method of operating a mobile device having a plurality of sensors for sensing one or more sensor media, comprising the steps of:
    obtaining signals from said plurality of sensors, said signals being indicative of the present state of the sensor media; and processing said signals to determine therefrom a particular present situation of at least three or more predetermined possible situations in which said mobile device is to be presently found.
    32. A computer program or suite of computer programs so arranged such that when executed by a computer processor they cause the computer to perform the steps of any of claims 30 and 31.
    33. A machine readable storage medium storing the computer program or at least one of the suite of computer programs according to claim 32.
GB0711759A 2006-12-21 2007-06-18 Mobile device which can sense its present situation Withdrawn GB2445436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2007/004948 WO2008075082A1 (en) 2006-12-21 2007-12-21 Mobile device and method of operation thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB0625642.4A GB0625642D0 (en) 2006-12-21 2006-12-21 Mobile sensor feedback

Publications (2)

Publication Number Publication Date
GB0711759D0 GB0711759D0 (en) 2007-07-25
GB2445436A true GB2445436A (en) 2008-07-09

Family

ID=37734699

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB0625642.4A Ceased GB0625642D0 (en) 2006-12-21 2006-12-21 Mobile sensor feedback
GB0711759A Withdrawn GB2445436A (en) 2006-12-21 2007-06-18 Mobile device which can sense its present situation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB0625642.4A Ceased GB0625642D0 (en) 2006-12-21 2006-12-21 Mobile sensor feedback

Country Status (1)

Country Link
GB (2) GB0625642D0 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070037610A1 (en) * 2000-08-29 2007-02-15 Logan James D Methods and apparatus for conserving battery power in a cellular or portable telephone
US20050037815A1 (en) * 2003-08-14 2005-02-17 Mohammad Besharat Ambient light controlled display and method of operation
US20060172706A1 (en) * 2005-01-31 2006-08-03 Research In Motion Limited User hand detection for wireless devices
EP1841189A1 (en) * 2006-03-31 2007-10-03 T & A Mobile Phones Limited Mobile phone with sensor for detection of user's handling

US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US11009970B2 (en) 2018-06-01 2021-05-18 Apple Inc. Attention aware virtual assistant dismissal
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10984798B2 (en) 2018-06-01 2021-04-20 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10944859B2 (en) 2018-06-03 2021-03-09 Apple Inc. Accelerated task performance

Also Published As

Publication number Publication date
GB0711759D0 (en) 2007-07-25
GB0625642D0 (en) 2007-01-31

Similar Documents

Publication Publication Date Title
GB2445436A (en) Mobile device which can sense its present situation
WO2008075082A1 (en) Mobile device and method of operation thereof
US8401513B2 (en) Proximity sensor, in particular microphone for reception of sound signals in the human audible sound range, with ultrasonic proximity estimation
CN110166890B (en) Audio playback and capture method, device, and storage medium
US20100279661A1 (en) Portable electronic device
CN111857793B (en) Network model training method, device, equipment and storage medium
US9912797B2 (en) Audio tuning based upon device location
CN111048111A (en) Method, device and equipment for detecting rhythm points in audio, and readable storage medium
CN111314560A (en) Method for adjusting sound loudness and communication terminal
CN111586547B (en) Detection method and device for an audio input module, and storage medium
CN108769327A (en) Method and device for a sound-emitting display screen, electronic device and storage medium
EP4583090A1 (en) Screen brightness adjustment method and apparatus, and storage medium and electronic device
CN113542963B (en) Sound mode control method, device, electronic equipment and storage medium
CN108196815A (en) Method for adjusting call sound and mobile terminal
CN110708630A (en) Method, device and equipment for controlling earphone and storage medium
CN111694521A (en) Method, device and system for storing files
WO2022068304A1 (en) Sound quality detection method and device
CN113191198B (en) Fingerprint identification method, fingerprint identification device, mobile terminal and storage medium
CN110659542A (en) Monitoring method and device
CN107705804A (en) Audio device status detection method and mobile terminal
CN112817554A (en) Alert sound control method, alert sound control device, and storage medium
CN108900688A (en) Sound emission control method, device, electronic device and computer-readable medium
CN110392334B (en) A microphone array audio signal adaptive processing method, device and medium
CN107911557A (en) Missed call processing method and mobile terminal
JP5183790B2 (en) Mobile terminal device

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20090219 AND 20090225

WAP Application withdrawn, taken to be withdrawn or refused after publication under section 16(1)