GB2579780A - Media processing method and apparatus - Google Patents
- Publication number
- GB2579780A (application GB1820268.9)
- Authority
- GB
- United Kingdom
- Prior art keywords
- media
- user
- media output
- output device
- media file
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2380/00—Specific applications
- G09G2380/16—Digital picture frames
Abstract
A method for loading a media file onto an output device, wherein a direction from a user device is determined for each output device 202, a representation of a media file is provided on a touchscreen 204, a directional user input is received 206, a difference between the direction of each output device from the user device and the input direction is determined 208, and an output device to receive the media file is identified based on the difference 210. Also disclosed is a method of generating a virtual map of the media output devices by emitting a sound from a user device, receiving a message from each output device indicative of the time taken for the sound to be received at that device, and calculating a distance from the user device to the location of each output device on this basis. Also disclosed is a method of selecting files for playback, wherein a plurality of media files and metadata are received at an output device, a user present near the output device is identified, each of the media files is classified into a playback priority classification based on the metadata and the identified user, and a media file is selected for playback based on the classifications.
Description
Intellectual Property Office Application No. GB1820268.9 RTM Date: 28 May 2019. The following terms are registered trade marks and should be read as such wherever they occur in this document: WiFi, Bluetooth, GPS. Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
MEDIA PROCESSING METHOD AND APPARATUS
The present application relates to methods and devices for displaying and outputting media. In particular, the application relates to processing and displaying media on one or more digital photo frames with communication capability, also known as "smart" photo frames.
Digital photo frames are known and are generally used to display digital photos. Some can rotate the photos on display, on demand or periodically. Some digital photo frames have wireless (e.g. Wi-Fi®) connectivity, via which the photos can be loaded onto the photo frame. Others require a wired or physical connection for loading photos, e.g. from a computer or from a USB stick. Some digital photo frames are not limited to display of photos, but can also play other types of media, e.g. video and/or audio files. Some digital photo frames are capable of motion-sensing as a power-saving feature, to conserve energy when there are no users present to view the frame.
Aspects of the invention are set out in the independent claims and preferable features are set out in the dependent claims.
There is described herein a method for loading a media file onto a media output device using a mobile user device, the method comprising: determining the direction, from a mobile user device, of each of one or more media output devices; providing a representation of the media file on a touchscreen of the mobile user device; receiving a directional user input on the touchscreen to define a user input direction; determining, for each of the one or more media output devices, the difference between the direction of the media output device from the mobile user device and the user input direction; and based on the determined difference(s), identifying one of the one or more media output devices to receive the media file.
This may avoid the need for complex configuration, or for additional hardware peripherals to provide inputs determining where to direct incoming media, and may provide a very intuitive and user-friendly method for loading media files onto a media output device.
The direction of the media output device from the mobile user device may be identified in relation to a line on the mobile user device, such as a line of symmetry through the centre of the touch screen of the mobile device.
This has the advantage of being reliable and intuitive, and of being programmatically achievable in a consistent way across a range of devices.
Preferably, the method further comprises: sending the media file to the selected media output device.
In some embodiments, identifying one of the one or more media output devices to receive the media file comprises: selecting the one of the one or more media output devices having the smallest determined difference between the direction of the media output device from the mobile user device and the user input direction (preferably wherein the one or more media output devices comprises at least two media output devices).
In some embodiments, identifying one of the one or more media output devices to receive the media file comprises: selecting the one of the one or more media output devices to receive the media file if the determined difference between the direction of the media output device from the mobile user device and the user input direction is less than a predetermined direction difference threshold.
Preferably, the direction of the media output device from the mobile user device is measured in degrees. The predetermined direction difference threshold may be around 5 degrees, 10 degrees, 20 degrees or 30 degrees. Preferably the predetermined direction difference threshold is less than 90 degrees.
In some embodiments, the method further comprises: establishing a short-range wireless connection between the mobile user device and each of the one or more media output devices, preferably wherein determining the direction, from a mobile user device, of each of the one or more media output devices is performed using the short-range wireless connection. A short-range wireless connection may be a wireless connection with a range of between 1m and 250m, preferably at least 3m and/or not more than 150m. In some embodiments the range of the short-range wireless connection is between around 10m and 100m. Example protocols for the short-range wireless connection are Wi-Fi™, Bluetooth™ and ZigBee™. Preferably, determining the direction, from a mobile user device, of each of the one or more media output devices comprises: receiving over the short-range wireless connection a compass reading measured by the media output device; and determining the direction based on the compass reading.
In some embodiments, the method further comprises: establishing a short-range wireless connection between the mobile user device and the selected media output device (preferably after selecting the media output device); and sending the media file to the selected media output device from the mobile user device via the short-range wireless connection.
Preferably, the method comprises establishing short range wireless communication between the mobile user device and the output media device via two different short range communication protocols, such as both a WiFi and a Bluetooth connection, and sending the media file over one of the short range communication protocols.
Optionally, determining the direction, from a mobile user device, of each of one or more media output devices comprises: determining the locations of the one or more media output devices, preferably from a virtual map of the space in which the media output devices are located; and measuring the orientation of the mobile user device using a compass (on the mobile user device).
Preferably identifying one of the one or more media output devices to receive the media file comprises: comparing the directional user input to the virtual map; and selecting one of the media output devices as a result of the comparison.
In some embodiments, the method further comprises generating a virtual map of the locations of the one or more media output devices in the space by: emitting a sound from the mobile user device; receiving a message from each media output device indicative of the time taken for the sound to be received at the media output device; and calculating a distance from the mobile user device to the location of each of the one or more media output devices from the time taken for the sound to be received at each media output device.
Preferably the sound wave/pulse has a volume of between 50 decibels and 120 decibels, a duration of between 5 seconds and 20 seconds, and a frequency of around 20 Hz to around 20 kHz.
There is also described a method of generating a virtual map of the locations of one or more media output devices in a space by: emitting a sound from a mobile user device; receiving a message from each media output device indicative of the time taken for the sound to be received at the media output device; and calculating a distance from the user mobile device to the location of each of the one or more media output devices from the time taken for the sound to be received at each media output device.
In some embodiments, the method further comprises: receiving from each media output device data relating to the height, or elevation, of the media output device, such as barometer sensor data; and using the data relating to the height to determine the virtual map.
The method may further comprise: receiving from each media output device data relating to the orientation, e.g. a compass reading, of the media output device; and using the data relating to the orientation to determine the virtual map.
In some embodiments, the directional user input comprises horizontal and vertical directionality, e.g. horizontal directionality by a swipe action on the screen of the user mobile device and vertical directionality by the vertical orientation or tilt in which the user mobile device is held.
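By way of illustration only, the following Python sketch shows one way such a two-part directional input might be combined into a horizontal and a vertical component. The function name, the screen-coordinate convention and the use of a simple pitch angle are assumptions made for this example, not part of the described method.

```python
import math

def combined_input_direction(swipe_dx, swipe_dy, device_pitch_deg):
    """Combine a touchscreen swipe with the handset's tilt.

    swipe_dx, swipe_dy: swipe vector in screen pixels (y grows downwards).
    device_pitch_deg: vertical tilt of the handset, e.g. from its
    accelerometer (0 = lying flat, 90 = held upright).

    Returns (azimuth_deg, elevation_deg): the horizontal swipe direction
    relative to the top of the screen, and a vertical component taken
    from how the handset is held.
    """
    # Negate dy so that an upward swipe maps to an azimuth of 0 degrees.
    azimuth = math.degrees(math.atan2(swipe_dx, -swipe_dy))
    # Vertical directionality comes from the handset's tilt: 0 degrees
    # of elevation when the handset is held upright, aimed at the walls.
    elevation = device_pitch_deg - 90.0
    return azimuth, elevation
```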
Preferably, the method further comprises: sending to the selected media output device media metadata for associating with the media file, the media metadata comprising an identifier of the mobile user device and/or of the user operating the mobile user device.
Optionally, the method further comprises: classifying the media file for media playback based on the media metadata, preferably further comprising: selecting the media file for playback based on the classification.
There is also described herein a method of selecting a media file for playback on a media output device, the method comprising: receiving a plurality of media files and associated media metadata for each media file, the media metadata including information regarding the identity of one or more individuals recorded in the media file; identifying a user present in the vicinity of the media output device; classifying each of the plurality of media files into one of a plurality of playback priority classifications based on the media metadata and the identified user; and selecting from the plurality of media files a media file for playback based on the playback priority classification of the media files. This method may be performed entirely on a media output device. Alternatively, some of the method steps (such as classifying the media files) may be performed on a remote, e.g. cloud server.
In some embodiments, selecting a media file for playback comprises: weighting the plurality of media files according to the playback priority classification of each media file; and then randomly selecting a media file for playback from the weighted plurality of media files, such that media files with a higher playback priority classification have a higher probability of being selected.
Preferably, classifying each media file is based on a comparison of media metadata for the media file with media metadata for the plurality of media files.
Optionally, classifying the media file comprises: identifying linkages between a plurality of individuals recorded in the plurality of media files; classifying the media files into one of a plurality of playback priority classifications based on the identified linkages.
Preferably, classifying each media file comprises: assigning an individual classification to each individual recorded in the media file according to the proportion of the plurality of media files in which both the individual and the identified user are recorded in combination; and assigning to the media file one of the playback priority classifications based on the individual classifications of the one or more individuals recorded in the media file.
Preferably, the step of identifying a user present in the vicinity of the media output device comprises identifying a first user and a second user present in the vicinity of the media output device; and classifying each of the plurality of media files into one of a plurality of playback priority classifications is based on the media metadata and the identified first user and the identified second user.
Optionally the associated media metadata further comprises one or more of: an identifier of the mobile user device and/or of the user operating the mobile user device; an indicator of the location at which the media file was captured; an identifier of one or more individuals who are recorded in the media file; an identified emotional response of a user to previous playback of the media file.
Preferably, classifying each of the plurality of media files into one of a plurality of playback priority classifications is further based on an identified emotional response of the identified user to previous playback of the media file.
There is also described a method for selecting media files for playback on media output devices, the method comprising: playing back a media file on a media output device; receiving an input indicative of the emotion of the user when the media file is played back on the media output device; classifying the emotion of the user into one of a plurality of emotion classifications; associating the emotion classification with the media file as media metadata; and using the media metadata, including the emotion classification, to assign a playback priority classification to the media file.
The method may further comprise: adjusting a media file selection algorithm based on the classified emotion; and classifying the media file into one of a plurality of playback priority classifications based on the media metadata and the identified user.
There is also described herein a non-transient computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out any of the methods described herein.
There is also described herein a mobile user device for loading a media file onto a media output device, the mobile user device comprising: a memory; a user interface comprising a touchscreen; a processor; wherein the mobile user device is arranged to: determine the direction, from a mobile user device, of each of one or more media output devices; provide a representation of the media file on the touchscreen; receive a directional user input on the touchscreen to define a user input direction; determine, for each of the one or more media output devices, the difference between the direction of the media output device from the mobile user device and the user input direction; and based on the determined difference(s), identify one of the one or more media output devices to receive the media file.
There is also described herein a mobile user device for loading a media file onto a media output device, the mobile user device comprising: a memory; a processor; a speaker; and a communication interface for communicating with one or more media output devices; wherein the mobile user device is arranged to: emit a sound from the speaker; receive a message from each media output device indicative of the time taken for the sound to be received at the media output device; and calculate a distance from the user mobile device to the location of each of the one or more media output devices from the time taken for the sound to be received at each media output device.
There is also described herein a media handling device for processing a media file for display on a media output device, the media handling device comprising: a memory; a processor; and wherein the media handling device is arranged to: receive a plurality of media files and associated media metadata for each media file, the media metadata including information regarding the identity of one or more individuals recorded in the media file; identify a user present in the vicinity of the media output device; classify each of the plurality of media files into one of a plurality of playback priority classifications based on the media metadata and the identified user; and select from the plurality of media files a media file for playback based on the playback priority classification of the media files. The media handling device may be integrated into the media output device.
There is also described herein a media output device comprising: a memory; a processor; and a media output interface, such as a screen or speaker; wherein the media output device is configured to: play back a media file via the media output interface; receive an input indicative of the emotion of the user when the media file is played back on the media output device; classify the emotion of the user into one of a plurality of emotion classifications; associate the emotion classification with the media file as media metadata; and use the media metadata, including the emotion classification, to assign a playback priority classification to the media file.
There is also described herein: a media output device comprising: a memory for storing a plurality of media files for playback; a processor; a media output interface, such as a screen or speaker, for outputting the media files; a plurality of sensors for sensing positional data; and a short-range wireless communication interface for communicating with a user mobile device; wherein the media output device is configured to: send the sensed positional data to the user mobile device, preferably in response to a request.
The sensors may include one or more of: a compass; a height sensor/barometer; and a microphone. Preferably the positional data includes a compass direction (or orientation) and/or an air pressure measurement. The positional data may also or alternatively include the time of receipt of a sound pulse.
There is also described herein: a system comprising: a mobile user device, such as described above; and one or more media output devices, such as described above. Preferably the system further includes a cloud server for storing media files for access by the mobile user device and/or the media output device(s).
The media file may be a visual and/or audio media file, such as a photograph, video or audio file. The media output device is generally a digital photo frame capable of displaying photos. It may also be capable of playing videos and/or of playing audio files. In other embodiments, it could be a (smart) television, or a speaker. The short-range wireless network connection could be via, e.g. Wi-Fi or Bluetooth. The media file may be originally stored on the mobile user device, and then sent from the mobile user device to the media output device via the short-range wireless connection. Alternatively, the media file could be stored on a cloud server and the mobile device only contains a representation of the media file.
The direction from the mobile user device is sometimes defined as a direction in relation to an axis passing through the mobile user device, e.g. in relation to an axis passing through the touchscreen.
Any system feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to system aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
BRIEF DESCRIPTION OF THE FIGURES
Methods and systems for processing and display of media are described, by way of example only, in relation to the Figures, wherein:
Figure 1 shows an exemplary system for outputting media files;
Figure 2 shows an exemplary method for loading media files on media output devices;
Figure 3 shows an exemplary method for selecting a media file for playback;
Figure 4 shows a block view of factors that go into media file selection;
Figure 5 shows an exemplary method for building a virtual map of media devices; and
Figure 6 shows an exemplary output media device.
DETAILED DESCRIPTION
Figure 1 shows a system 1 for outputting or playing back media to a user. The system 1 includes a first photo frame 12, a second photo frame 14 and a third photo frame 18, along with a first speaker 16. These are media output devices and can, for example, show photos, play videos and/or play audio. The media output devices 12, 14, 16, 18 are located at a user premises. For example, the user premises may be a house, or it could be a business premises such as an office or a factory. In this example, all of the media output devices 12, 14, 16, 18 are located in the same room. Generally, media output devices are located around the periphery of the room, e.g. hanging from the walls.
Photo frames 12, 14, 18 are configured to show photos. For example, the photo frames can display digital photos and cycle through a set of photos (which may be changed periodically or on a user command). The photo frames 12, 14, 18 are also capable of displaying video files, and include speakers for playing the associated audio part of the video files. In some embodiments the photo frames may be arranged to output audio files which are separate from the picture (photo or video) files being displayed, or may output audio files even when not displaying photos or moving images.
The first speaker 16 is a speaker device which is capable of playing audio files.
Each of the media output devices 12, 14, 16, 18 is capable of short-range wireless communication, for example via Wi-Fi™, Bluetooth™ or ZigBee™. In this embodiment the media output devices 12, 14, 16, 18 are connected to a wireless local area network (WLAN) via access point 50, located at the user premises. Access point 50 is connected to modem 60, which provides connectivity with the internet 70.
System 1 also includes various sensors for detecting conditions at the premises. In this example, system 1 includes a light sensor 20 which is in communication with the first photo frame 12. It also includes a camera 24 which is in communication with the second photo frame 14. System 1 also includes a sound sensor 22 which is in communication with the first speaker 16. In some embodiments the sensors are integrated into each of the media output devices. In other embodiments the sensors may be provided as separate standalone devices. As is described in more detail below, each media output device 12, 14, 16, 18 preferably includes a number of sensors, including one or more of: a microphone, a height sensor (such as a barometer) and a compass.
In this case it may be preferable to provide the sensors with wireless capability so that they can, for example, communicate directly with the media output devices and/or be connected to the WLAN. The system 1 also includes a user mobile device 10. This user mobile device is also capable of short-range wireless communication, e.g. over Wi-Fi™, Bluetooth™ or ZigBee™. The user mobile device 10 may be, for example, a smartphone or a tablet. A user may be able to use their mobile device 10 to upload media onto the media output devices and/or to select which media should be played back by the media output devices. The system 1 also includes a media cloud server 80, which is located on an external network remote from the premises. The media cloud server 80 can be in communication with the devices provided at the user premises via the internet 70 and the WLAN. The connection to the internet 70 also allows a remote user device 90 to connect to the devices at the premises.
In this embodiment each of the media output devices 12, 14, 16, 18 comprises a memory for storing media to be played back. Media files may be loaded onto each of the media output devices 12, 14, 16, 18 from the user mobile device or from an external source such as media cloud server 80. Thus, in some embodiments the media output devices 12, 14, 16, 18 do not need a continuous connection to the internet 70, or to a user mobile device 10, in order to play back the media.
A user can select from the user mobile device 10 which media to load onto which media output device. These media files may be stored on the user mobile device before loading onto the media output devices 12, 14, 16, 18, or the media files may be stored on the remote media cloud server 80, in which case the user simply needs to indicate on the user mobile device 10 which media file is to be loaded onto which media output device, which can result in a message to the media cloud server via the internet 70. The media cloud server 80 may then be able to send the media file to the selected media output device via the modem 60 and the WLAN. Where media files are stored on the media cloud server 80 the media output devices need not store the media files themselves; they may simply retrieve them from media cloud server 80 when required for playback or output.
In one embodiment the user can select which media output device to load the media file onto by selecting a representation of the media file in question and then providing an input indicative of which media output device the media file should be loaded on. In one embodiment this can be done by a directional gesture input on the user mobile device 10 in order to indicate the direction, from the user mobile device, of the media output device onto which the media file should be loaded. For example, a representation of the media file (e.g. an image of the photo or a still from a video file) can be provided on a touch screen of the user mobile device 10. The user may then use a swipe or flick gesture in the direction of the media output device on which the media file is intended to be displayed. The user mobile device 10 may be aware of the relative location (the direction, and potentially the distance) of each of the media output devices at the premises, e.g. from a virtual map (see below for more information). The direction may be measured from a particular orientation on the user mobile device, for example, in relation to a line of symmetry through the centre of the touch screen of the mobile device 10. The user mobile device 10 may then compare the direction of each of the media output devices with the direction of the gesture input by user, in order to work out which media output device to send the media file to. The process is described in more detail below.
Once the required media files are loaded onto the media output devices, or are available on the cloud server for streaming or otherwise transferring to the media output devices, the media output devices may be in a position to select which media to output or play back. It may be possible to use inputs from the sensors 20, 22, 24 in order to determine which media to output at which point. For example, the light sensor 20 can detect light levels in the premises. Based on the light levels, media associated with a certain characteristic, for example ambiance, mood or brightness (e.g. for a visual media file such as a photo or a video), may be selected. Using the camera 24 it may be possible to image the area around the media output devices. For example, the camera 24 may be integrated in the second photo frame 14 and may show a view of the area immediately surrounding (one side of) the second photo frame 14. Thus, the camera 24 may be able to capture an image of any user or individual viewing the second photo frame 14. From this it may be possible to determine the identity of a person in the vicinity of the second photo frame 14 (e.g. via facial recognition). In other embodiments it may be possible to identify a user in the vicinity of the media output devices by detecting the proximity of their user mobile device 10. For example, user mobile device 10 may be associated with a particular user. From the identity of a user viewing, or in the vicinity of, the media output device it is possible to select media for outputting or playing back which is more desirable or preferable for that user to view. For example, photos or videos in which that user appears, or in which that user's close friends or social connections appear, may be displayed preferentially to that user. Further details of how these may be selected are described below.
Figure 2 shows an exemplary method for identifying to which media output device to load or output a media file. The method 200 can be performed on a mobile device, such as user mobile device 10 shown in Figure 1. The method 200 starts by determining 202 the direction of a plurality of output media devices. The direction of each media output device may be determined with respect to the screen of the user mobile device 10. In one embodiment the determination of the direction may be performed using knowledge of the locations of the media output devices, e.g. a predetermined set of location information which may be derived previously or input by a user, such as a virtual map. In some embodiments the direction of each media output device from the mobile device may be determined by the mobile device itself. In some embodiments the mobile device connects to each media output device, e.g. via one or more short-range wireless communication protocols such as Wi-Fi™ or Bluetooth™. Via this connection (or connections) it may be possible to determine the angle between a line on the screen, e.g. a vertical line on the screen, and the direction of the media output device from the user mobile device. For example, the angle of arrival of communications from the media output device at the user mobile device can be determined. In other embodiments, the frame(s) in the room may comprise a barometer sensor to determine the height of the frame relative to other frames in the room and the phone; a compass to determine the direction it is facing; and/or a microphone to receive sound waves/pulses.
At step 204 the user mobile device displays a representation of a media file. For example, in the case of a picture or video the representation may be a thumbnail image of the photograph or of a still from the moving video. The representation may be a smaller representation of a media file that is stored on the mobile user device, or it may be a representation of a media file that is stored on a remote cloud server. In the latter situation the representation of the media file may be provided to the mobile user device via a long-range (e.g. cellular) wireless network or via a Wi-Fi™ or other WLAN.
There may also be provided on the screen of the user mobile device a representation of one or more of the media output devices; for example, a thumbnail representation of a photo frame, a speaker or another media output device may be displayed on the screen, representing the potential media output devices. The location of each representation on the screen may correspond to the location of the media output device in relation to the user mobile device. For example, the location of the representation of the media output device on the screen may be determined from the determination of direction and/or location of each media output device found in step 202.
In step 206 a directional user input is received on the user mobile device. For example, where the user mobile device has a touch screen (preferably wherein the representation of the media file is displayed on the touch screen) the directional user input can be a flick or swipe gesture in the direction of the selected media output device.
At step 208 the difference between the direction of the user input and the direction of the media output device is determined. For example, where there are a plurality of potential media output devices, the difference between the direction of the user input and the direction of each media output device may be determined. The directions of the user input and of the media output devices may be measured in degrees, e.g. degrees of arc (out of 360°), or may be measured in radians.
At step 210 one (or more) of the plurality of media output devices is selected to receive the media file. This identification of one of the media output devices can be determined based on the difference between the direction of the user input and the direction of the media output device that was found in step 208. For example, a media output device may be selected if the determined difference in directions is less than a certain predetermined direction threshold. Such a direction threshold may be around 5°, around 10° or around 30°. For example, the direction threshold could be greater than 5° and less than 20°, or greater than 10° and less than 50°; in some embodiments the direction threshold is at least 5° and not more than 85°. In preferred embodiments the direction threshold is less than 90°. In some embodiments the identification of one of the media output devices to receive the media file is performed by identifying the media output device which has the smallest difference in direction with the user input on the user mobile device. In some embodiments both criteria are applied. For example, a media output device may be identified for receiving the media file only if that media output device has the smallest difference in direction (as determined in step 208) and if that difference is smaller than the direction threshold.
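Purely as an illustrative sketch of steps 208 and 210, and not a definitive implementation, the Python below selects the device whose bearing differs least from the swipe direction, subject to a threshold. The 30° threshold and the device bearings in the example are assumed values.

```python
# Illustrative sketch of steps 208 and 210. Device bearings and the
# threshold value are assumptions for the example.
DIRECTION_THRESHOLD_DEG = 30.0

def angular_difference(a_deg, b_deg):
    """Smallest absolute difference between two bearings, in degrees."""
    diff = abs(a_deg - b_deg) % 360.0
    return min(diff, 360.0 - diff)

def select_output_device(swipe_bearing_deg, device_bearings):
    """device_bearings: dict of device id -> bearing from the phone (deg).

    Returns the device with the smallest angular difference from the
    swipe, provided that difference is below the threshold; else None.
    """
    best_id, best_diff = None, None
    for device_id, bearing in device_bearings.items():
        diff = angular_difference(swipe_bearing_deg, bearing)
        if best_diff is None or diff < best_diff:
            best_id, best_diff = device_id, diff
    if best_diff is not None and best_diff < DIRECTION_THRESHOLD_DEG:
        return best_id
    return None

# A swipe at 12 degrees selects the frame whose bearing is 10 degrees.
print(select_output_device(12.0, {"frame1": 10.0, "frame2": 95.0}))
```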
In other embodiments, the phone may use an internal map which it has stored from the interaction between the frames or media output devices; see Figure 5 for how such a virtual map may be created.
At step 212 the media file can be sent to the identified media output device. For example, the media file may be sent from the user mobile device where it is stored on the user mobile device. In examples where the media file is stored only on a remote server, sending the media file to the identified media output device may comprise the user mobile device sending a message to the remote server indicating that the media file should be sent to the identified media output device. The remote server may then transmit the media file to the identified media output device via the internet and the WLAN.
In some embodiments the media file is not sent to the identified media output device immediately. For example, after step 210 of identifying the media output device, the user mobile device may send an indication that the media file should be made available for output by that media output device. Such indication may be sent to the media output device itself (e.g. via the WLAN) or may be sent to a remote server. When sent to a remote server, the remote server may determine which media output device should display or output each media file and when. In such cases the remote server may send media files to the identified media output device only when actually required for output.
In other embodiments after step 210 the media file may be immediately sent to the output device and displayed immediately on that device or otherwise output on that device.
Media files may also be uploaded to media output devices from sources other than the mobile user device itself. For example, media files may be loaded onto the media output devices from media sharing or storage platforms, for example social media websites such as Facebook and Instagram, or cloud storage sites such as iCloud, Google Drive, OneDrive or Dropbox. In such situations the user may access the media files via a device such as a personal computer or laptop, mobile phone or tablet, e.g. via a web interface. The user may then be able to select which media file should be loaded onto which media output devices, or which media file should be available for playback on which media output devices.
Figure 3 shows an exemplary method 300 for selecting media files to play back or output on a media output device. This method could be performed on a media output device, such as any of the media output devices 12, 14, 16, or 18 shown in Figure 1. Alternatively the method could be performed at a remote server such as media cloud server 80, and instructions could be sent from media cloud server 80 to the respective media output devices instructing each device as to which media file to output or play. In other embodiments a media control manager could be provided at a user premises, in communication with the media output devices; in this case the method could be performed on the media control manager and instructions then sent from it to each of the media output devices as required.
In step 302 the identities of individuals who are present or have been recorded in media files are determined. The individuals may be users; for example, the individuals may be friends and/or family of the user. A plurality of media files may be analysed and the identities of individuals in each media file may be found. The plurality of media files may be all the media files marked as available for playback on a certain media output device. The identity of individuals in media files can be obtained by facial recognition; for example, individuals who have been recorded in a media file or who are present in the media file could be individuals in a photograph or individuals who appear in a video. Facial recognition analyses the digital image or a video frame and compares facial features in order to (uniquely) identify a person. In some embodiments further information may be known about a person, for example their name or relationship to a user of the system. In other embodiments a distinct individual or person may be identified and allocated a unique identifier as part of step 302. In some embodiments individuals in media files may be identified based on metadata associated with the media file; for example, on social networking or media sharing sites the accounts of individuals may be tagged in media files.
For example, in some circumstances a user may have manually marked or tagged a photo or video as including one or more of their social media contacts (or may use some other kind of identifier, e.g. a name). In step 302 it is preferable that all media files which may be displayed or output on a media output device are analysed in order to identify the individuals recorded in, present in, or associated with those media files. In some embodiments voice recognition may be used to determine the individuals recorded in or otherwise associated with a video or audio file.
In step 304 the individuals or persons identified in the media files are classified. The classification of individuals can be based on information relevant to all media files in a plurality of media files. In one example, individuals are classified based on the number of occurrences of that individual in the plurality of media files, i.e. the number of media files in the plurality of media files which are of that individual. In some embodiments individuals are classified based on the number or proportion of media files which contain or are of that individual in combination with a specific user.
In step 306 the presence of a user is identified and the user's identity is obtained.
The presence of the user may be the presence of that user in the vicinity of the media output device, for example that user may be identified as being present within a certain radius of the media output device or may be identified as being present directly in front of the media output device. The identity of a user may be found from facial recognition software. For example, a camera integrated in a media output device, or associated with and in communication with the media output device, may be able to capture images of the area surrounding the media output device. From those images facial recognition may be performed in order to identify the user. In some embodiments the presence of a user may be inferred e.g. from the presence of a user's personal mobile device (e.g. phone or tablet). For example a media output device may receive a wireless message containing an identifier of a user's mobile device. For example if a user's mobile device establishes a wireless connection with the media output device then it can be inferred that the user associated with that mobile device is also present in the vicinity of the media output device.
At step 308 the playback priority of the plurality of media files is classified. This classification is based on any individuals present in the media files and the identity of the present user. Preferably each media file is given a playback priority classification. The playback priority classification for each media file is based on the/any individuals present in that media file. For example, individuals may be given a higher individual classification if they appear or are present in more media files with the present, identified user. Each media file may then be given a higher playback priority classification if one or more individuals in that media file are present in more of the other media files.
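As a minimal sketch of one possible scoring scheme for step 308, in Python; the base score and the use of simple co-occurrence proportions are assumptions made for illustration, not taken from the described method:

```python
def playback_priorities(media, viewer):
    """Score each file by how often the individuals it records co-occur
    with the identified viewer across the whole collection.

    media: dict of file id -> set of individual ids recorded in the file.
    viewer: id of the user identified near the media output device.
    Returns dict of file id -> priority score (higher = shown sooner).
    """
    total = len(media) or 1
    # Count, for each individual, the files they share with the viewer.
    co_occurrence = {}
    for people in media.values():
        if viewer in people:
            for person in people - {viewer}:
                co_occurrence[person] = co_occurrence.get(person, 0) + 1
    # Individual classification: proportion of the collection in which
    # the individual and the viewer are recorded in combination.
    individual_score = {p: n / total for p, n in co_occurrence.items()}

    priorities = {}
    for file_id, people in media.items():
        score = 1.0 if viewer in people else 0.0  # viewer appears themselves
        score += sum(individual_score.get(p, 0.0) for p in people)
        priorities[file_id] = score
    return priorities
```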
Media files may be additionally, or alternatively, classified for playback based on a number of other criteria. For example, media metadata regarding each media file may include one or more of: the location the media was recorded (e.g. a GPS co-ordinate or a place name), the date (and optionally the time) the media was recorded, the author of the media file (e.g. the photographer for a photograph, the musician(s) for a music recording). The author of the media file can be identified by an association with a recording device that originally recorded the media file e.g. a MAC address, or other identifier, of a digital camera or mobile telephone. Each piece of media metadata used for classification may be given a different weighting.
Some of the media files may not include a person or individual (or at least not an identifiable individual, e.g. a photograph of a person from behind), for example photographs of landscapes. In this instance step 308 may involve analysing the media file or its metadata to link the media file to one or more individuals. This may be performed based on analysis of one or more of the remainder of the plurality of media files.
For example, a first media file may include a first individual and the media metadata may indicate the first media file was recorded at a first time in a first location. A second media file may not include any individuals but the media metadata may indicate the second media file was recorded at a second time in a second location. The first and the second location may be compared and the first and the second times compared and if the locations are within a predetermined location threshold and the times are within a predetermined time threshold of one another, it may be determined that the second media file should be associated with the first individual, since it is likely the first individual (who is present in the first media file) was present nearby when the second media file was recorded.
The predetermined location threshold is preferably at least 5m and/or less than 5km, more preferably at least 100m and/or not more than 1km. The predetermined time threshold is preferably at least 1 minute and/or not more than 5 hours, more preferably at least 5 minutes and/or less than 2 hours. In preferred embodiments, the predetermined time threshold is at least 20 minutes and/or less than 1 hour.
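A minimal Python sketch of this space-time linkage, assuming GPS coordinates and timestamps are available in the media metadata; the chosen threshold values sit within the preferred ranges above, and the function names are illustrative only:

```python
from datetime import timedelta
from math import asin, cos, radians, sin, sqrt

LOCATION_THRESHOLD_M = 500.0            # within the preferred 100m-1km range
TIME_THRESHOLD = timedelta(minutes=30)  # within the preferred 20min-1hr range

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

def infer_individuals(unpeopled, peopled_files):
    """Link a file with no identifiable individuals to nearby captures.

    unpeopled: (lat, lon, timestamp) of the file with no individuals.
    peopled_files: iterable of (lat, lon, timestamp, set of individuals).
    Returns the individuals from files captured close in space and time.
    """
    lat, lon, when = unpeopled
    linked = set()
    for p_lat, p_lon, p_when, people in peopled_files:
        if (haversine_m(lat, lon, p_lat, p_lon) <= LOCATION_THRESHOLD_M
                and abs(when - p_when) <= TIME_THRESHOLD):
            linked |= people
    return linked
```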
An example: if a photo is taken of Individual X at 12pm on Brighton beach, and then a photo is taken of Brighton beach without any individual in it at 12:05, the system will associate the second photo with Individual X.
In step 310 the media files are weighted based on the classification of playback priority. For example, all the media files that are available to play back or output on a certain media output device are weighted according to their playback priority classification and provided in a weighted list.
In step 312 a media file is selected for playback. The media file is selected for playback based on the classification of playback priority. The selection of the media file may be random, but media files with a higher weighting or a higher playback priority may be more likely to be selected. By providing a random selection, rather than simply a prioritised list of photos or other media files, the user does not always know or get to learn which media files to expect in which order.
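One way step 312 might look in Python; a sketch only, assuming the weighted list from step 310 is held as a dict of non-negative weights:

```python
import random

def select_for_playback(weighted_files):
    """weighted_files: dict of file id -> playback priority weight.

    Returns a randomly chosen file id, with higher-weighted files
    proportionally more likely to be selected.
    """
    file_ids = list(weighted_files)
    weights = [weighted_files[f] for f in file_ids]
    return random.choices(file_ids, weights=weights, k=1)[0]

# A file weighted 3.0 is chosen three times as often as one weighted 1.0.
print(select_for_playback({"beach.jpg": 3.0, "landscape.jpg": 1.0}))
```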
At step 314 the selected media file is played back or output on the media output device. For example a video may be played, a photo may be displayed, or an audio file output or played.
In step 316 the response to the played media file is analysed. For example, a photograph may be recorded of a user in the vicinity of the output media device and the facial expression of the user may be analysed to identify whether that user is happy, sad or showing another emotion in response to the media file that has been played. The user's response to the played media file may also be monitored by analysing the output of a sound sensor in the vicinity of the media output device, e.g. by detecting the tone of voice of a user, or even by voice recognition to detect words. In other embodiments the response to the played media file can be monitored by receiving an active user interaction, such as a user selecting a like or dislike option, e.g. on the user's mobile device or on or at the user interface of the media output device.
At step 318 the classification of the media file is adjusted based on the monitored response identified in step 316. For example, if a particular user who is present appeared to show a positive emotion on viewing the media, or upon the media file being output (in the case of, for example, an audio file), the playback priority of the media file when that particular user is present may be increased.
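A minimal sketch of steps 316 to 318 in Python; the emotion labels and the multiplicative adjustment factors are assumptions for illustration:

```python
# Assumed adjustment factors: positive reactions raise the priority,
# negative reactions lower it.
EMOTION_FACTORS = {"positive": 1.2, "neutral": 1.0, "negative": 0.8}

def record_reaction(priorities, metadata_log, file_id, viewer, emotion):
    """Store the observed emotion as media metadata and adjust priority.

    priorities: dict of file id -> weight for the current viewer.
    metadata_log: dict of (file id, viewer id) -> last observed emotion.
    """
    metadata_log[(file_id, viewer)] = emotion  # emotion kept as metadata
    priorities[file_id] *= EMOTION_FACTORS.get(emotion, 1.0)
    return priorities
```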
In step 308 the classification of the playback priority may be determined based on a variety of other additional factors. In one example, outputs from one or more sensors in the vicinity of the media output device can be analysed. Such sensors may include a sound or light sensor or a camera. For example, images (still photos or moving videos) may be selected to complement the lighting conditions in the area surrounding the media output device, e.g. the room in which the media output device is located. In one example, images or moving images with colours similar to the ambient colours in the surroundings may be selected, e.g. brighter, lighter or more yellow colours on a sunnier, brighter, lighter day. In some embodiments the playback priority classification may be selected based on the time of day; for example, the time of sunset may be known and images or media files having colours similar to sunrise and sunset may be selected preferentially because they are given a higher playback priority. In some embodiments the tone of noises detected by an acoustic sensor can be used to select the playback priority classification.
Although the steps of method 300 are shown in a specific order, other orders are possible.
Figure 4 is a flow diagram showing some of the factors which go into selection of media files to display on a specific media output device. At step 402 the media files that are available for outputting on the media output device are loaded onto the media output device. Such media files may be loaded from social media 404, from a cloud such as a remote cloud 406, or from a user device 408 such as a user mobile device, e.g. a mobile (smart) phone. At block 410 the sensed ambiance is shown as one of the factors that go into classifying media: for example, lighting and/or sound conditions in the vicinity of the media output device are sensed by one or more sensors, e.g. light and/or acoustic or sound sensors, and this information is sent to the media output device for analysis and/or for classifying media according to the playback prioritisation.
In block 412 it can be seen that user recognition is another factor going into classifying the media by playback prioritisation.
In block 414 the media is classified according to the prioritisation for playing each media file back.
In step 416 the media is selected for play back. The media is selected for playback based on the classification for playback prioritisation of step 414. The selection of the media may be random, but each media file is preferably weighted so that media files with a higher playback prioritisation are more likely to be played back or output on the media output devices.
At step 418 the media files are displayed or otherwise output.
At step 420 the user response to the played back media file is monitored, e.g. via cameras and emotional recognition or by requesting feedback from a user e.g. the selection of various responses to the played back media file.
In some situations it may be identified in step 306 that multiple users, e.g. two users, are present. In such a situation the identities of both users may be used to classify the playback priority of media files in step 308. For example, a combined scoring system may be used.
The playback priority classification and weighting can be configured so that media files in which both users (in the case of two users) are present or recorded are most likely to be output, and/or so that media files featuring people or individuals that appear in more media files with one or both of the users are selected preferentially or given a higher playback priority classification.
A block diagram of an exemplary output media device 600, such as a smart photo frame, is shown in Figure 6. The media output device 600 comprises a processor 602 and a memory 616 for storing media files (such as photographs and videos for display, and sound recordings for output). The media output device 600 has a short-range wireless connection interface 604 for connecting to nearby devices, e.g. over a Wireless Local Area Network (WLAN) such as Wi-Fi or ZigBee, or a Bluetooth or Bluetooth Low Energy connection. In some embodiments the short-range wireless interface 604 provides capability for two or more short-range wireless protocols, such as both Wi-Fi and Bluetooth (e.g. as used by AirDrop). Preferably the range of the short-range wireless interface is greater than 1m and less than 500m, more preferably between around 5m and 200m, more preferably between around 10m and 100m. The media output device 600 has a display 612 and a speaker 618 for output of images and sound, respectively.
Each output media device 600 may comprise one or more sensors for making measurements indicative of its position. The sensors may include one or more of a height sensor 606 (such as a barometer sensor), a compass 610 and a microphone 608. Preferably the media output device includes a clock (not shown). The measurements from the sensors can help determine the positioning/orientation of the device. For example, height may be determined from a barometer, and a compass may indicate the direction the media output device is facing in a room. A microphone 608 is able to detect sound waves / pulses, which may be used for determining relative distance to other devices, such as a user mobile device. Data collected from the sensors 606, 608, 610 may be stored, (semi-)permanently or temporarily, in the memory 616, or may be transmitted straight to another device, such as a mobile user device, without being stored on the media output device 600.
In alternative embodiments, the media output device could have only one of the display 612 and speaker 618 output interfaces. In some embodiments only some of the sensors are present, or other types of sensors may be integrated into the media output device 600.
Figure 5 is a flow diagram showing the steps of a method 500 of determining a virtual map of a plurality of media output devices and loading them with one or more media files, which may for example be performed at the mobile or user device. In some embodiments the user device can locate output media devices from a virtual map of a room comprising one or more output media devices. One or more of the steps 502 to 508 of the method 500 may be performed as part of step 202 of method 200 (described above), or prior to step 202 and step 202 may comprise determining the direction of media output devices from the virtual map created in method 500.
The method 500 may work best if the user mobile device is positioned substantially in the centre of the room. The room generally contains the one or more output media devices intended to be targeted with incoming media from the user device.
In step 502, the one or more output media devices and the user mobile device in the room are linked via a wireless connection, e.g. a short-range wireless connection such as Wi-Fi, ZigBee or Bluetooth. In some embodiments two wireless connections, using different wireless protocols, are established between each media output device and the user mobile device. For example, a connection may be established using a first wireless protocol, e.g. Bluetooth, which may be used to set up a connection over a second wireless protocol, e.g. Wi-Fi. In one embodiment the second wireless connection is a point-to-point or peer-to-peer wireless connection (e.g. Wi-Fi).
At step 504 the user mobile device emits a sound wave/pulse. The sound wave/pulse is emitted for a sound time duration. In one embodiment the sound time duration is approximately 15 seconds; other length sound bursts are contemplated, e.g. having a sound duration of between 3 seconds and 30 seconds, or between 5 seconds and 20 seconds. The sound is generally at audio frequency, for example in the range of 20Hz to 20kHz. The frequency may be in the range of 200Hz to 2kHz. In some embodiments the sound increases in volume, e.g. from 60 to 90 decibels; in other embodiments the sound burst is at a constant volume. The volume is preferably selected so the sound will reach media output devices in one room, but not media output devices in a neighbouring room. Thus generally the volume is between 30 decibels and 150 decibels, preferably between around 50 decibels and 100 decibels, more preferably around 90 decibels. The user mobile device records the time at which it emits the sound pulse, e.g. as an emission time.
This signal is received by the media output device(s) in the room and detected by the microphone 608. In other embodiments the signal is not limited to a sound wave and may be, for example, a light pulse or other wave. Positional data, e.g. the timing of the receipt of the sound or light pulse, is recorded at the media output device, and optionally stored in memory.
In step 506 the user mobile device receives positional data from the media output device 600 via the short range wireless connection set up in step 502, e.g. in the form of a message. The positional data includes the timing of the receipt of the sound/light pulse. To calculate the distance between the user mobile device and the media output device(s), the speed of sound (around 343 m/s) and the time taken for the sound to reach each of the media output devices from the user mobile device are used. The time taken is determined from the emission time recorded at the user mobile device and the receipt time of the sound/light pulse at the respective media output device.
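A minimal sketch of this distance calculation, assuming the emission and receipt timestamps are expressed on a common (synchronised) clock:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature

def distance_to_device(emission_time_s: float, receipt_time_s: float) -> float:
    """Distance in metres from the user device to a media output device,
    computed from the one-way time of flight of the emitted sound pulse.
    Assumes both timestamps are on a common, synchronised clock."""
    time_of_flight = receipt_time_s - emission_time_s
    if time_of_flight <= 0:
        raise ValueError("receipt time must be after emission time")
    return SPEED_OF_SOUND_M_S * time_of_flight

# Example: a pulse detected 11.7 ms after emission places the device ~4 m away.
print(distance_to_device(0.0, 0.0117))  # ~4.01
```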
Preferably the positional data also includes information regarding the height and orientation of the media output device, such as measured by the barometer and compass of the media output device.
Preferably the message from the media output device contains an identifier, such as a MAC address.
In step 508 a virtual map of the location of media output devices in the room is determined. The distance from the user mobile device is determined by calculating the time taken for the sound wave/pulse to reach each output media device (since the speed of sound is known).
The height of each output device can be determined by analysing the barometer reading (this analysis can be performed at the media output device, which would then send the height data to the user device, or the media output device could send the raw barometer data to the user mobile device for analysis).
The barometer readings provide measurements of air pressure (which decreases with altitude). Thus the barometer readings give sufficient information for the user mobile device to determine at least the relative heights of the media output devices to one another. Absolute height readings are not necessarily required.
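For instance, a relative height can be estimated from two pressure readings using the standard barometric formula; the sketch below assumes the international standard atmosphere model, which is sufficient since only relative heights matter here:

```python
def altitude_m(pressure_pa: float) -> float:
    """Approximate altitude for a pressure reading, using the international
    standard atmosphere model (sea-level pressure 101325 Pa)."""
    return 44330.0 * (1.0 - (pressure_pa / 101325.0) ** (1.0 / 5.255))

def relative_height_m(device_pa: float, reference_pa: float) -> float:
    """Height of a media output device relative to a reference reading,
    e.g. the user mobile device's own barometer; absolute accuracy is not
    required because only relative heights feed into the virtual map."""
    return altitude_m(device_pa) - altitude_m(reference_pa)

# Near sea level, roughly 12 Pa corresponds to 1 m of height difference.
print(relative_height_m(101200.0, 101325.0))  # ~10.4 m
```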
The compass measurements provide lateral determination of the media output devices. To aid with identifying the positioning of the frames on the virtual map, it may be assumed that the media output devices each face into the room (e.g. mounted on a wall), towards the user mobile device. Since each media output device reports the direction it is facing (its compass reading) to the user mobile device, the user mobile device can use this information to determine the direction of the media output device. For example, if the media output device reports it is facing South, the user mobile device may determine that the media output device is on the North wall of the room, facing inwards.
The compass direction can be used to identify the orientation of each media output device. The calculated position of each media output device corresponds to a position on the virtual map, which the user mobile device stores.
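Combining the acoustic distance, the reported compass facing and the relative height gives a position on the virtual map. The sketch below illustrates one way to do this under the facing-into-the-room assumption above; the coordinate convention (user device at the origin, y axis pointing North) is an assumption for illustration:

```python
import math

def map_position(distance_m: float, device_facing_deg: float, height_m: float):
    """Place a media output device on the virtual map, with the user device at
    the origin. Assumes the device faces into the room towards the user, so its
    bearing from the user is opposite to its reported compass facing."""
    bearing = math.radians((device_facing_deg + 180.0) % 360.0)
    x = distance_m * math.sin(bearing)  # east-west offset from the user device
    y = distance_m * math.cos(bearing)  # north-south offset from the user device
    return (x, y, height_m)

# A frame 4 m away that reports facing South (180 degrees) maps to (0, 4, h):
# on the North wall, facing inwards, as in the example above.
print(map_position(4.0, 180.0, 1.5))
```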
The more media output devices there are, the more precise the map will be as more distance measurements are taken into account.
In some embodiments, it may be possible to input virtual map data at the user mobile device, for example based on an estimate of the shape of the room and roughly indicating the locations of the media output devices. The virtual map may then be verified by collecting positional data, as set out above.
In some embodiments steps 502 to 508 are performed periodically, or on demand (e.g. each time an application for transmitting media files to the media output devices is opened on the user mobile device, or in response to an input or selection from the user on the user mobile device). In alternative embodiments, some of the analysis (e.g. step 508) may be conducted on a remote server, e.g. a cloud server, to which the requisite data is sent. In other embodiments the method is triggered by receiving a communication from a new media output device, e.g. based on a source identifier in the communication, which may indicate a new media output device has been installed.
In step 510, one or more potential media output devices are determined or identified, e.g. potential devices to which media files may be transferred. In some embodiments the potential media output devices comprise all the media output devices within wireless range which are included in the virtual map. In other embodiments the potential media output devices are a subset of the devices on the virtual map. For example, the orientation (e.g. compass direction) of the user mobile device could be identified and the potential media output devices may include only those media output devices which are in the general direction the user mobile device is pointing, for example within 20 degrees, 30 degrees or 45 degrees of that direction. In some examples media output devices are located in the same direction from the user mobile device but are vertically displaced from each other, which would be shown by height sensor/barometer data.
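A minimal sketch of this angular filtering over the virtual map, assuming each device's bearing from the user has already been computed (e.g. via the map_position sketch above):

```python
def angular_difference_deg(a: float, b: float) -> float:
    """Smallest absolute difference between two compass bearings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def potential_devices(device_bearings: dict, pointing_deg: float,
                      threshold_deg: float = 30.0) -> list:
    """Return the devices whose bearing from the user lies within
    threshold_deg of the direction the user mobile device is pointing."""
    return [device_id for device_id, bearing in device_bearings.items()
            if angular_difference_deg(bearing, pointing_deg) <= threshold_deg]

# With frames due North (0), East (90) and North-East (40), pointing North:
print(potential_devices({"frame_a": 0.0, "frame_b": 90.0, "frame_c": 40.0}, 0.0))
# -> ['frame_a'] with the default 30-degree threshold
```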
In some embodiments, at step 512, a representation of the one or more potential media output devices is displayed to the user on a screen of the mobile user device. This is particularly advantageous where there is more than one potential media output device.
At step 514 a media file to send to the media output device is identified, for example based on a user input selection. In some embodiments a representation of the identified media file is presented on the user device interface, e.g. a thumbnail of the image or video, or a file name.
At step 516 one of the potential media output devices is selected by a user input, e.g. by the user swiping in the direction of the media output device or by pointing the user device at the media output device on which the user wishes to display the media file. The target direction between the user device and the media output device is determined using a compass reading from the compass integrated in the user device and a comparison with the stored virtual map; for example, the selection may be the media output device whose direction aligns most closely with the direction in which the user mobile device is oriented. In some embodiments, where there is a vertical arrangement of media output devices in the targeted direction of the user device, the frame in the vertical middle of the arrangement may be automatically selected (as a default). Another vertically displaced media output device could be selected based on user input, e.g. by tilting the user mobile device (which can be detected by an accelerometer in the mobile device), or by a touch user input (e.g. selecting the representation of the media output device on the touchscreen). The representation of the selected media output device on the screen of the user mobile device may be distinguished from the non-selected potential output devices, such as by highlighting (e.g. by turning green).
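The sketch below illustrates this selection rule (best angular match, with a middle-of-stack default), reusing angular_difference_deg from the previous sketch; the 5-degree "same direction" tolerance is an assumption for illustration:

```python
def select_device(candidates: list, pointing_deg: float):
    """Pick the candidate best aligned with the pointing direction; where
    several candidates share (nearly) the same bearing at different heights,
    default to the vertically middle one.
    Each candidate is a (device_id, bearing_deg, height_m) tuple."""
    best = min(candidates, key=lambda c: angular_difference_deg(c[1], pointing_deg))
    # Candidates stacked in (almost) the same direction as the best match;
    # the 5-degree tolerance is an assumed value, not from the source.
    stack = sorted((c for c in candidates
                    if angular_difference_deg(c[1], best[1]) < 5.0),
                   key=lambda c: c[2])
    return stack[len(stack) // 2]  # middle frame of the vertical arrangement

# Three frames on the same wall at heights 0.8 m, 1.5 m and 2.2 m:
frames = [("low", 0.0, 0.8), ("mid", 1.0, 1.5), ("high", 359.0, 2.2)]
print(select_device(frames, 0.0))  # -> ('mid', 1.0, 1.5)
```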
In some embodiments, a signal may be sent to the media output device to indicate it is selected, which may cause the media output device to indicate that it is ready to receive a media file from the user device, e.g. by displaying an icon or light.
A user input indicative of the selection of the media output device could comprise a user swipe/flick in the direction of the media output device to be selected. The swipe/flick motion could be detected on the representation of the media file displayed on the user interface of the mobile device. To select an output media device on a different wall of the room, the user may point the user device in the direction of that wall and the internal compass of the user device will recognise the change and select another output media device, or plurality of potential media output devices, as appropriate. In other embodiments, the user may swipe/flick on the touchscreen of the mobile user device to select the media output device. The direction of the swipe/flick across the touchscreen may determine the selected media output device (e.g. swiping/flicking from top to bottom of the screen selects a media output device on the opposite wall to the wall the mobile device is facing).
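A minimal sketch of mapping a touchscreen swipe/flick to a compass bearing under this convention (an upward swipe selects in the facing direction, a downward swipe the opposite wall); the screen-pixel coordinate convention is an assumption:

```python
import math

def swipe_to_bearing(dx: float, dy: float, device_heading_deg: float) -> float:
    """Convert a swipe vector (dx to the right, dy downwards, in screen pixels)
    into a compass bearing. An upward swipe (dy < 0) maps to the direction the
    user device is facing; a downward swipe maps to the opposite wall."""
    swipe_angle = math.degrees(math.atan2(dx, -dy))  # -dy: screen y grows downwards
    return (device_heading_deg + swipe_angle) % 360.0

# Device facing East (90): an upward swipe targets East, a downward swipe West.
print(swipe_to_bearing(0.0, -50.0, 90.0))  # 90.0
print(swipe_to_bearing(0.0, 50.0, 90.0))   # 270.0
```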
In some embodiments, it may be preferable to select a media output device other than the middle vertical device in an arrangement of media output devices on the same wall, or in the same or similar compass/horizontal direction within the room. To select another media output device above or below the currently selected one, the user may tilt their user device up or down respectively; the internal accelerometer of the user device detects the motion and correspondingly selects a different media output device in the user's intended direction away from the currently selected device.
In step 518, a media file is transferred from the user device to the media output device that the user device has been used to select, e.g. via the short range wireless connection established at step 502. Although the method 500 has been described as being performed by a user mobile device, in alternative embodiments some of the steps (e.g. steps 502 to 508 for setting up the virtual map) could be performed by a fixed or fixable device, such as a smart device mounted on a wall or the ceiling of the room and having short range communication and sound (or light) emission capability.
While a specific architecture is shown, any appropriate hardware/software architecture may be employed. For example, external communication may be via a wired network connection.
The above embodiments and examples are to be understood as illustrative examples. Further embodiments, aspects or examples are envisaged. It is to be understood that any feature described in relation to any one embodiment, aspect or example may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, aspects or examples, or any combination of any other of the embodiments, aspects or examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
Claims (31)
- CLAIMS
- 1. A method for loading a media file onto a media output device using a mobile user device, the method comprising: determining the direction, from a mobile user device, of each of one or more media output devices; providing a representation of the media file on a touchscreen of the mobile user device; receiving a directional user input on the touchscreen to define a user input direction; determining, for each of the one or more media output devices, the difference between the direction of the media output device from the mobile user device and the user input direction; and based on the determined difference(s), identifying one of the one or more media output devices to receive the media file.
- 2. A method according to claim 1, wherein identifying one of the one or more media output devices to receive the media file comprises: selecting the one of the one or more media output devices having the smallest determined difference between the direction of the media output device from the mobile user device and the user input direction.
- 3. A method according to claim 1 or 2, wherein identifying one of the one or more media output devices to receive the media file comprises: selecting the one of the one or more media output devices to receive the media file if the determined difference between the direction of the media output device from the mobile user device and the user input direction is less than a predetermined direction difference threshold.
- 4. A method according to any preceding claim, further comprising: establishing a short-range wireless connection between the mobile user device and each of the one or more media output devices, preferably wherein determining the direction, from a mobile user device, of each of the one or more media output devices is performed using the short-range wireless connection.
- 5. A method according to claim 4, wherein determining the direction, from a mobile user device, of each of the one or more media output devices comprises: receiving over the short-range wireless connection a compass reading measured by the media output device; and determining the direction based on the compass reading.
- 6. A method according to any preceding claim, further comprising: establishing a short-range wireless connection between the mobile user device and the selected media output device; and sending the media file to the selected media output device from the mobile user device via the short-range wireless connection.
- 7. A method according to claim 6, comprising establishing a short range wireless communication between the mobile user device and the output media device via two different short range communication protocols, such as a WiFi and a Bluetooth connection, and sending the media file over one of the short range communication protocols.
- 8. A method according to any preceding claim, wherein determining the direction, from a mobile user device, of each of one or more media output devices comprises: determining the locations of the one or more media output devices, preferably from a virtual map of the space in which the media output devices are located; and measuring the orientation of the mobile user device using a compass (on the user media device).
- 9. A method according to claim 8, further comprising generating a virtual map of the locations of the one or more media output devices in the space by: emitting a sound from the mobile user device; receiving a message from each media output device indicative of the time taken for the sound to be received at the media output device; and calculating a distance from the user mobile device to the location of each of the one or more media output devices from the time taken for the sound to be received at each media output device.
- 10. A method of generating a virtual map of the locations of one or more media output devices in a space by: emitting a sound from a mobile user device; receiving a message from each media output device indicative of the time taken for the sound to be received at the media output device; and calculating a distance from the user mobile device to the location of each of the one or more media output devices from the time taken for the sound to be received at each media output device.
- 11. A method according to claim 9 or 10, further comprising: receiving from each media output device data relating to the height, or elevation, of the media output device, such as barometer sensor data; and using the data relating to the height to determine the virtual map.
- 12. A method according to any of claims 9 to 11, further comprising: receiving from each media output device data relating to the orientation, e.g. a compass reading, of the media output device; and using the data relating to the orientation to determine the virtual map.
- 13. A method according to any preceding claim, wherein the directional user input comprises horizontal and vertical directionality.
- 14. A method according to any preceding claim, further comprising: sending to the selected media output device media metadata for associating with the media file, the media metadata comprising an identifier of the mobile user device and/or of the user operating the mobile user device.
- 15. A method according to claim 14, further comprising: classifying the media file for media playback based on the media metadata, preferably further comprising: selecting the media file for playback based on the classification.
- 16. A method of selecting a media file for playback on a media output device, the method comprising: receiving a plurality of media files and associated media metadata for each media file, the media metadata including information regarding the identity of one or more individuals recorded in the media file; identifying a user present in the vicinity of the media output device; classifying each of the plurality of media files into one of a plurality of playback priority classifications based on the media metadata and the identified user; and selecting from the plurality of media files a media file for playback based on the playback priority classification of the media files.
- 17. A method according to claim 15 or 16, wherein selecting a media file for playback comprises: weighting the plurality of media files according to the playback priority classification of each media file; and then randomly selecting a media file for playback from the weighted plurality of media files, such that media files with a higher playback priority classification have a higher probability of being selected, preferably wherein classifying each media file is based on a comparison of media metadata for the media file with media metadata for the plurality of media files.
- 18. A method according to any of claims 15 to 17, wherein classifying the media file comprises: identifying linkages between a plurality of individuals recorded in the plurality of media files; classifying the media files into one of a plurality of playback priority classifications based on the identified linkages.
- 19. A method according to any of claims 16 to 17, wherein classifying each media file comprises: assigning an individual classification to each individual recorded in the media file according to the proportion of the plurality of media files in which both the individual and the identified user are recorded in combination; and assigning to the media file one of the playback priority classifications based on the individual classifications of the one or more individuals recorded in the media file.
- 20. A method according to any of claims 16 to 18, wherein: the step of identifying a user present in the vicinity of the media output device comprises identifying a first user and a second user present in the vicinity of the media output device; and classifying each of the plurality of media files into one of a plurality of playback priority classifications is based on the media metadata and the identified first user and the identified second user.
- 21. A method according to any preceding claim, wherein the associated media metadata further comprises one or more of: an identifier of the mobile user device and/or of the user operating the mobile user device; an indicator of the location at which the media file was captured; an identifier of one or more individuals who are recorded in the media file; an identified emotional response of a user to previous playback of the media file.
- 22. A method according to any of claims 16 to 21, wherein classifying each of the plurality of media files into one of a plurality of playback priority classifications is further based on an identified emotional response of the identified user to previous playback of the media file.
- 23. A method for selecting media files for playback on media output devices, the method comprising: playing back a media file on a media output device; receiving an input indicative of the emotion of the user when the media file is played back on the media output device; classifying the emotion of the user into one of a plurality of emotion classifications; associating the emotion classification with the media file as media metadata; and using the media metadata, including the emotion classification, to assign a playback priority classification to the media file.
- 24. A method according to claim 23, further comprising: adjusting a media file selection algorithm based on the classified emotion; and classifying the media file into one of a plurality of playback priority classifications based on the media metadata and the identified user.
- 25. A non-transient computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any preceding claim.
- 26. A mobile user device for loading a media file onto a media output device, the mobile user device comprising: a memory; a user interface comprising a touchscreen; a processor; wherein the mobile user device is arranged to: determine the direction, from a mobile user device, of each of one or more media output devices; provide a representation of the media file on the touchscreen; receive a directional user input on the touchscreen to define a user input direction; determine, for each of the one or more media output devices, the difference between the direction of the media output device from the mobile user device and the user input direction; and based on the determined difference(s), identify one of the one or more media output devices to receive the media file, preferably wherein the mobile user device is further configured to perform the method of any of claims 2 to 9.
- 27. A mobile user device for loading a media file onto a media output device, the mobile user device comprising: a memory; a processor; a speaker; and a communication interface for communicating with one or more media output devices; wherein the mobile user device is arranged to: emit a sound from the speaker; receive a message from each media output device indicative of the time taken for the sound to be received at the media output device; and calculate a distance from the user mobile device to the location of each of the one or more media output devices from the time taken for the sound to be received at each media output device, preferably wherein the mobile user device is further configured to perform the method of any of claims 10 to 15.
- 28. A media handling device for processing a media file for display on a media output device, the media handling device comprising: a memory; a processor; and wherein the media handling device is arranged to: receive a plurality of media files and associated media metadata for each media file, the media metadata including information regarding the identity of one or more individuals recorded in the media file; identify a user present in the vicinity of the media output device; classify each of the plurality of media files into one of a plurality of playback priority classifications based on the media metadata and the identified user; and select from the plurality of media files a media file for playback based on the playback priority classification of the media files, preferably wherein the media handling device is further configured to perform the method of any of claims 17 to 22.
- 29. A media output device comprising: a memory; a processor; and a media output interface; wherein the media output device is configured to: play back a media file via the media output interface; receive an input indicative of the emotion of the user when the media file is played back on the media output device; classify the emotion of the user into one of a plurality of emotion classifications; associate the emotion classification with the media file as media metadata; and use the media metadata, including the emotion classification, to assign a playback priority classification to the media file, preferably wherein the media output device is further configured to perform the method of claim 24.
- 30. A media output device comprising: a memory for storing a plurality of media files for playback; a processor; a media output interface; a plurality of sensors for sensing positional data; and a short-range wireless communication interface for communicating with a user mobile device; wherein the media output device is configured to: send the sensed positional data to the user mobile device, preferably in response to a request.
- 31. A system comprising: a mobile user device, such as a mobile user device according to claim 26 or 27; one or more media output devices, such as a media output device according to one of claims 28 to 30.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1820268.9A GB2579780A (en) | 2018-12-12 | 2018-12-12 | Media processing method and apparatus |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB201820268D0 GB201820268D0 (en) | 2019-01-30 |
| GB2579780A true GB2579780A (en) | 2020-07-08 |
Family
ID=65147345
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1820268.9A Withdrawn GB2579780A (en) | 2018-12-12 | 2018-12-12 | Media processing method and apparatus |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2579780A (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110136544A1 (en) * | 2009-12-08 | 2011-06-09 | Hon Hai Precision Industry Co., Ltd. | Portable electronic device with data transmission function and data transmission method thereof |
| WO2013123697A1 (en) * | 2012-02-21 | 2013-08-29 | 海尔集团公司 | Method for determining sharing device, method and system for file transmission |
| JP2013205945A (en) * | 2012-03-27 | 2013-10-07 | Sharp Corp | Data transmission operation equipment and data transmission control method |
| US20140022183A1 (en) * | 2012-07-19 | 2014-01-23 | General Instrument Corporation | Sending and receiving information |
| GB2525902A (en) * | 2014-05-08 | 2015-11-11 | Ibm | Mobile device data transfer using location information |
Also Published As
| Publication number | Publication date |
|---|---|
| GB201820268D0 (en) | 2019-01-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11917495B2 (en) | Detection of a physical collision between two client devices in a location sharing system | |
| CN112906615B (en) | A scheme for retrieving content items and associating them with real-world objects | |
| US20200314586A1 (en) | Points of interest in a location sharing system | |
| US10600224B1 (en) | Techniques for animating stickers with sound | |
| US9747072B2 (en) | Context-aware notifications | |
| CN105432063B (en) | Digital device and its control method | |
| EP3127097B1 (en) | System and method for output display generation based on ambient conditions | |
| US9426551B2 (en) | Distributed wireless speaker system with light show | |
| US11430211B1 (en) | Method for creating and displaying social media content associated with real-world objects or phenomena using augmented reality | |
| US20140129981A1 (en) | Electronic Device and Method for Handling Tags | |
| US20140038560A1 (en) | System for and method of transmitting communication information | |
| US10097591B2 (en) | Methods and devices to determine a preferred electronic device | |
| CN104508699B (en) | Content transmission method, and system, apparatus and computer-readable recording medium using the same | |
| CN108564274B (en) | Guest room booking method and device and mobile terminal | |
| WO2015010571A1 (en) | Method, system, and device for performing operation for target | |
| CN111383251A (en) | A method, device, monitoring device and storage medium for tracking target object | |
| US20200112838A1 (en) | Mobile device that creates a communication group based on the mobile device identifying people currently located at a particular location | |
| WO2019047130A1 (en) | Display picture display method and terminal | |
| US20180367962A1 (en) | Two-way communication interface for vision-based monitoring system | |
| US20220210265A1 (en) | Setting shared ringtone for calls between users | |
| US20250148320A1 (en) | Departure time estimation in a location sharing system | |
| GB2579780A (en) | Media processing method and apparatus | |
| CN108520022A (en) | A kind of method and apparatus of watermark processing | |
| KR102173727B1 (en) | Method for Sharing Information By Using Sound Signal and Apparatus Thereof | |
| CN109272549B (en) | A method and terminal device for determining the location of an infrared hot spot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |