WO2019228969A1 - Displaying a virtual dynamic light effect - Google Patents
Displaying a virtual dynamic light effect
- Publication number
- WO2019228969A1 (PCT/EP2019/063631)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- lighting device
- image
- light effect
- dynamic light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/105—Controlling the light source in response to determined parameters
- H05B47/115—Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
- H05B47/12—Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by detecting audible sound
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Recommending goods or services
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping
- G06Q30/0643—Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping graphically representing goods, e.g. 3D product representation
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/155—Coordinated control of two or more light sources
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/175—Controlling the light source by remote control
- H05B47/19—Controlling the light source by remote control via wireless transmission
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/175—Controlling the light source by remote control
- H05B47/196—Controlling the light source by remote control characterised by user interface arrangements
- H05B47/1965—Controlling the light source by remote control characterised by user interface arrangements using handheld communication devices
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Definitions
- the invention relates to an electronic device for displaying a virtual light effect.
- the invention further relates to a method of displaying a virtual light effect.
- the invention also relates to a computer program product enabling a computer system to perform such a method.
- Connected lighting has made it possible to easily change the color and brightness of the lights, thereby giving a much richer lighting experience compared to conventional lighting.
- To make connected lighting mainstream, consumers need a better understanding of how connected lighting can positively impact their own home. Thus far, this has been difficult to achieve, as shopping environments are rather different from a living room environment.
- US 9,910,575 B2 discloses a method of assisting a user in selecting a lighting device design.
- the method involves receiving an image of a scene, e.g. a picture taken by a user, and analyzing this image in order to select or generate a lighting device design.
- the selected lighting design and optionally the lighting effect related to the lighting properties of the selected lighting device design may be applied to the scene and shown to the user.
- a position in the scene for placing a lighting device design may be determined based on an analysis of the image and the user may be provided with a visual clue highlighting this position in the image.
- the electronic device comprises at least one processor configured to obtain an image of at least part of an environment, said image being captured with a camera, determine a potential location for a lighting device by analyzing said image, analyze a plurality of temporally sequential segments of content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment, determine a virtual dynamic light effect based on said analysis and said potential location determined for said lighting device, and display said virtual dynamic light effect superimposed over a view on said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content.
- the inventors have recognized that it is beneficial to show a user what dynamic light effects synchronized to content would look like in his own surroundings and that this is best realized with an augmented reality application. Rendering a virtual dynamic light effect synchronized to content being rendered gives a good impression of what dynamic light effects in general look like.
- the content is typically dynamic media content like audio and/or video content.
- the audio and/or video content may be music, a TV program, a movie or a game, for example.
- the image may be captured by the electronic device of the invention or by another device.
- Said at least one processor may be configured to determine one or more aspects of said content rendering device, select said lighting device from a plurality of lighting devices based on said one or more aspects of said content rendering device, and determine said potential location for said lighting device and/or said virtual dynamic light effect further based on one or more characteristics of said selected lighting device.
- Said one or more aspects of said content rendering device may comprise at least one of a category (e.g. TV or smart speaker), a brand, a model number and information identifying supported functions of said content rendering device. This allows selection of a lighting device which can be controlled by said content rendering device or can be controlled based on output generated by said content rendering device. The best location(s) for a lighting device and/or the dynamic light effect generated by a lighting device typically depend on which lighting device has been selected.
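As a rough sketch of how such a selection could be implemented (the catalog, device names and category labels below are illustrative assumptions, not taken from the patent):

```python
# Hypothetical catalog mapping lighting devices to the categories of
# content rendering devices that can control or synchronize with them.
CATALOG = [
    {"name": "gradient-lightstrip", "controllable_by": {"tv"}},
    {"name": "portable-lamp", "controllable_by": {"tv", "smart-speaker"}},
    {"name": "color-bulb", "controllable_by": {"smart-speaker"}},
]

def select_lighting_devices(rendering_device_category):
    """Select candidate lighting devices based on one aspect (here the
    category) of the detected content rendering device."""
    return [d for d in CATALOG
            if rendering_device_category in d["controllable_by"]]

print([d["name"] for d in select_lighting_devices("tv")])
# -> ['gradient-lightstrip', 'portable-lamp']
```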
- Said at least one processor may be configured to determine at least one of said one or more aspects of said content rendering device by analyzing said image. For example, the brand, model number or category of the content rendering device may be determined by comparing features extracted from the image with features stored in a database.
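A minimal sketch of such feature comparison, assuming OpenCV; here ORB descriptors are recomputed from reference photos, whereas a real system would more likely match against precomputed descriptors stored in the database:

```python
import cv2

def identify_device(image_bgr, reference_images):
    """Guess which known content rendering device appears in the captured
    image by matching ORB features against reference images."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, query_desc = orb.detectAndCompute(gray, None)
    best_name, best_score = None, 0
    for name, ref_bgr in reference_images.items():
        _, ref_desc = orb.detectAndCompute(
            cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY), None)
        if query_desc is None or ref_desc is None:
            continue
        matches = matcher.match(query_desc, ref_desc)
        score = sum(1 for m in matches if m.distance < 40)  # "good" matches
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # e.g. a brand/model label, or None
```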
- Said at least one processor may be configured to use a wireless receiver to receive information identifying at least one of said one or more aspects of said content rendering device. This allows the aspect of the content rendering device to be taken into account even if no image of it has been captured. This is especially beneficial if the content rendering device does not have a display, as it would then be more likely that the content rendering device would be hidden from view in a cabinet.
- Said at least one processor may be configured to determine said potential location for said lighting device by determining a location of said content rendering device in said image and determining said potential location for said lighting device based on said location of said content rendering device.
- the processor may determine the potential location for (placing) said lighting device in said image relative to said location of the content rendering device in said image. If said content rendering device comprises a display, this allows said at least one processor to determine said potential location for said lighting device and said virtual dynamic light effect such that said virtual dynamic light effect at least partly surrounds said display, which results in an enhanced display viewing experience.
- Said at least one processor may be configured to determine a location of an already installed further lighting device by analyzing said image and/or determine a potential location for a yet to be installed further lighting device by analyzing said image and determine said potential location for said lighting device based on said location of said already installed further lighting device and/or said potential location determined for said yet to be installed further lighting device. Since spatially overlapping light effects may result in undesired light effects, the locations of further lighting devices may be taken into account when determining the potential location for the lighting device, e.g. to prevent or limit spatial overlap in light emissions.
- Said at least one processor may be configured to determine one or more characteristics of said already installed further lighting device and/or said yet to be installed further lighting device, e.g. light output and/or beam angle, and determine said potential location for said lighting device further based on said one or more determined characteristics. This makes it easier to prevent or limit spatial overlap in light emissions while limiting the number of areas that are unnecessarily unlit.
- Said at least one processor may be configured to determine said potential location for said lighting device by locating one or more suitable free spaces and/or one or more existing lighting devices in said image. By determining a feasible location for the lighting device, the user will likely appreciate the augmented reality demonstration more, because he would be able to actually install the light source at the shown location if he likes the virtual dynamic light effect.
- Said at least one processor may be configured to display an image representing said lighting device superimposed over said environment at said potential location determined for said lighting device. This allows the user to see what the lighting device itself looks like and decide whether its design matches the user’s preferences.
- Said at least one processor may be configured to control said content rendering device to render said temporally sequential segments of content. This makes it possible to use special content that is especially suited for demonstrations. This special content may be stored on the electronic device itself, for example, or alternatively be retrieved from a network location as specified by the processor. Displaying the content on the content rendering device may result in a better user experience.
- Said content rendering device may comprise a display and said at least one processor may be configured to determine a location of said display of said content rendering device and display a visual portion of said temporally sequential segments of content superimposed over said display of said content rendering device. This is especially advantageous if it is not possible to display the content on the content rendering device itself.
- the method comprises obtaining an image of at least part of an environment, said image being captured with a camera, determining a potential location for a lighting device by analyzing said image, analyzing a plurality of temporally sequential segments of content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment, determining a virtual dynamic light effect based on said analysis and said potential location determined for said lighting device, and displaying said virtual dynamic light effect superimposed over a view on said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content.
- Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
- a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage medium storing the computer program are provided.
- a computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
- a non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations comprising: obtaining an image of at least part of an environment, said image being captured with a camera, determining a potential location for a lighting device by analyzing said image, analyzing a plurality of temporally sequential segments of content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment, determining a virtual dynamic light effect based on said analysis and said potential location determined for said lighting device, and displaying said virtual dynamic light effect superimposed over a view on said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content.
- aspects of the present invention may be embodied as a device, a method or a computer program product.
- aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", “module” or “system.”
- Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer.
- aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- more specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- Fig. 1 is a block diagram of an embodiment of the electronic device of the invention
- Fig. 2 is a flow diagram of a first embodiment of the method of the invention
- Fig. 3 is a flow diagram of a second embodiment of the method of the invention
- Fig. 4 shows an example of content and a corresponding dynamic light effect superimposed over a view on an environment
- Fig. 5 is a flow diagram of a third embodiment of the method of the invention.
- Fig. 6 shows a representation of an image depicting a room with a TV placed on a TV bench and a cabinet next to the TV;
- Fig. 7 shows representations of a lighting device and a dynamic light effect superimposed over the image represented in Fig.6;
- Fig. 8 is a flow diagram of a fourth embodiment of the method of the invention.
- Fig. 9 shows a representation of an image depicting a room with a TV placed on a TV bench, a cabinet next to the TV and an installed light source in the upper-left corner;
- Fig. 10 shows a representation of a dynamic light effect superimposed over the image represented in Fig.9.
- Fig. 11 is a block diagram of an exemplary data processing system for performing the method of the invention.
- Fig. 1 shows a first embodiment of the electronic device of the invention.
- the electronic device of the invention (also referred to as the augmented reality device) is a mobile device 1, e.g. a mobile phone, tablet or pair of augmented reality glasses (possibly part of an augmented reality headset).
- the mobile device 1 comprises a processor 3, a transceiver 5, storage means 7, a camera 8 and a display 9.
- the mobile device 1 is connected to a wireless access point 11.
- a television 19 is also connected to the wireless access point 11.
- the wireless access point 11 may further provide access to the Internet 21.
- Fig. 1 shows the home network system after a bridge 13 and lights 15 and 17 have been installed.
- An app, i.e. an application, running on the mobile device 1 can be used to control lights 15 and 17 (e.g. Philips Hue lights) via the bridge 13.
- the bridge 13 is connected to the wireless access point 11, e.g. via an Ethernet link.
- the processor 3 is configured to obtain an image of at least part of an environment. In the embodiment of Fig. 1, the image is captured with the camera 8. The processor 3 is further configured to determine a potential location for lights 15 and 17 by analyzing the image. The processor 3 is further configured to analyze a plurality of temporally sequential segments of content being rendered or to be rendered by the mobile device 1 or by the television 19 and determine a virtual dynamic light effect based on the analysis and the potential location determined for the lights 15 and 17.
- the content may be TV/movie content, music content or game content, for example.
- the processor 3 is further configured to display the virtual dynamic light effect superimposed over a view on the environment while the content is being rendered (displaying information superimposed over a view on the environment is also referred to as augmented reality).
- a current part of the virtual dynamic light effect corresponds to a current segment of the content.
- the color and brightness levels of the virtual dynamic light effect may be chosen to match the audio (e.g. as captured by the mobile device 1) and/or video of the content.
- the color and brightness levels of the virtual dynamic light effect may be chosen to match the audio part of a television program or movie.
- the selected color and brightness levels further depend on user preferences (e.g. derived from the user’s most frequently used light scenes). This is especially beneficial if the content is music. Alternatively, specific light scripts or light effects which have been used often or have been created especially for the rendered content may be used.
- the virtual dynamic light effect may be created by obtaining an image representing the area covered by the dynamic light effect from a database (e.g. where it is stored in relation to the selected lighting device) and scaling it to match the scale of the environment, as reflected in the image.
- pixels of the virtual dynamic light effect may be created dynamically, e.g. based on the beam angle, light output and other aspects of the selected lighting device.
- in one implementation, all pixels of the dynamic light effect have the same color (chromaticity and brightness).
- alternatively, pixels that are closer to the lighting device are brighter than pixels that are farther from the lighting device. For example, a linearly-decreasing light effect may be created.
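A minimal sketch of such a linearly-decreasing effect layer (the source position, base color and falloff radius below are illustrative assumptions):

```python
import numpy as np

def falloff_layer(height, width, source_xy, base_rgb, radius):
    """Per-pixel virtual light effect whose brightness decreases linearly
    with distance from the (virtual) lighting device position."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - source_xy[0], ys - source_xy[1])
    # Full brightness at the source, zero brightness at `radius`.
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0)
    # Same chromaticity everywhere; only the brightness varies.
    layer = weight[..., None] * np.asarray(base_rgb, dtype=np.float32)
    return layer.astype(np.uint8)

# A warm effect centered where the lamp would be placed in the image.
effect = falloff_layer(720, 1280, source_xy=(200, 400),
                       base_rgb=(255, 180, 90), radius=350)
```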
- properties of other objects are taken into account as well, e.g. the color and pattern of a wall. If a 3D model is used, the virtual dynamic light effect may be determined even more accurately. The light effect may be determined in 3D and then converted to one or two 2D images per displayed frame using known techniques, for example.
- the user may be asked to scan the room with his augmented reality device (mobile device 1) in order to find a suitable or better potential location for the lights 15 and 17 and/or in order to create a 3D model of (part of) the room.
- the mobile device 1 may comprise one or more image sensors (e.g. multiple two-dimensional cameras and/or depth cameras) configured to acquire image data (e.g. color/grayscale images, depth images, etc.) representing the environment. Creation of a 3D model of the room based on such depth/image data is known in the art, and will therefore not be discussed in detail.
- This 3D model may be used to more accurately determine the virtual dynamic light effect, for example.
- the room may be scanned automatically during normal user behavior, e.g. if the user is just looking around while wearing augmented reality glasses.
- the processor 3 may be configured to determine the potential location by analyzing images of the room and/or a 3D model of the room.
- the images/3D model may for example be analyzed to determine 'empty' spaces in the image/3D model, where the lighting device can be positioned. Additionally or alternatively, the images/3D model may for example be analyzed to identify one or more objects (e.g. the content rendering device) in the image or in the 3D model, and the potential location for the lighting device may be selected such that the lighting device is positioned relative to these one or more objects.
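For instance, once objects on a surface such as a TV bench have been detected, the 'empty' stretches between them could be found with simple interval arithmetic, as in this 1-D sketch (coordinates and the minimum width are hypothetical):

```python
def free_spaces(surface_span, object_spans, min_width):
    """Return 'empty' stretches along a surface that are wide enough to
    place a lighting device; spans are (left, right) coordinates."""
    left, right = surface_span
    gaps, cursor = [], left
    for x0, x1 in sorted(object_spans):
        if x0 - cursor >= min_width:
            gaps.append((cursor, x0))
        cursor = max(cursor, x1)
    if right - cursor >= min_width:
        gaps.append((cursor, right))
    return gaps

# A TV occupying the middle of the bench leaves space on both sides.
print(free_spaces((0, 200), [(60, 140)], min_width=30))
# -> [(0, 60), (140, 200)]
```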
- the processor 3 may be configured to generate the virtual dynamic light effect by analyzing the dynamic media content.
- the processor may, for example, analyze the dynamic media content to determine characteristics of the dynamic media content, and generate the virtual dynamic light effect based thereon. If, for example, the dynamic media content is audio content (e.g. a song) the characteristics may relate to the pitch, tempo, timbre, etc. of the audio. If, for example, the dynamic media content is video content, the characteristics may relate to the colors of the images of the video or to temporal effects in the video (e.g. explosions, a rising sun, an approaching vehicle, etc.).
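To illustrate the video case, the sketch below derives one average color per temporally sequential segment, assuming OpenCV; the one-second segment length and the plain mean are simplifying assumptions, since real color extraction would likely be more sophisticated:

```python
import cv2
import numpy as np

def segment_colors(video_path, segment_s=1.0):
    """Average color per temporally sequential segment of video content;
    each color can drive the corresponding part of the light effect."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames_per_segment = max(1, int(fps * segment_s))
    colors, frame_means = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_means.append(frame.mean(axis=(0, 1)))  # mean BGR of frame
        if len(frame_means) == frames_per_segment:
            colors.append(np.mean(frame_means, axis=0))
            frame_means = []
    cap.release()
    return colors  # one BGR triple per segment of the content
```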
- the virtual dynamic light effect may be pre-scripted and the processor 3 may be configured to determine the virtual dynamic light effect by obtaining (e.g. downloading) a script of the virtual dynamic light effect, the script being associated with the plurality of temporally sequential segments of said dynamic media content.
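The script format below is purely hypothetical, but it shows how a pre-scripted effect could stay synchronized with the content: each entry is associated with a segment, and the entry for the current playback position drives the currently displayed part of the virtual effect:

```python
import json

# Hypothetical script: one entry per content segment, with a start time,
# a color and a relative brightness.
SCRIPT_JSON = """
[
  {"t": 0.0, "rgb": [255, 120, 40], "bri": 0.8},
  {"t": 1.5, "rgb": [40, 80, 255], "bri": 0.5},
  {"t": 3.0, "rgb": [255, 255, 255], "bri": 1.0}
]
"""

def current_effect(script, playback_pos_s):
    """Return the script entry for the segment at the playback position."""
    active = script[0]
    for entry in script:
        if entry["t"] <= playback_pos_s:
            active = entry
        else:
            break
    return active

script = json.loads(SCRIPT_JSON)
print(current_effect(script, playback_pos_s=2.0))  # -> the 1.5 s entry
```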
- the mobile device 1 comprises one processor 3.
- the mobile device 1 comprises multiple processors.
- the processor 3 of the mobile device 1 may be a general-purpose processor, e.g. from Qualcomm or ARM-based, or an application-specific processor.
- the processor 3 of the mobile device 1 may run an Android or iOS operating system for example.
- the storage means 7 may comprise one or more memory units.
- the storage means 7 may comprise one or more hard disks and/or solid-state memory, for example.
- the storage means 7 may be used to store an operating system, applications and application data, for example.
- the transceiver 5 may use one or more wireless communication technologies to communicate with the wireless access point 11, for example.
- multiple transceivers are used instead of a single transceiver.
- a receiver and a transmitter have been combined into a transceiver 5.
- the processor 3 may use the transceiver 5, for example, to control the television 19 to render certain content.
- the processor 3 may use the transceiver 5 to control the lights 15 and 17 via the bridge 13 after the bridge 13 and the lights 15 and 17 have been installed, for example.
- the processor 3 may use the transceiver 5 to receive information identifying one or more aspects of the content rendering device, for example.
- the camera 8 may comprise a CCD or CMOS sensor, for example.
- the display 9 may comprise an LCD or OLED panel, for example.
- the display 9 may comprise one or more projection components, as used in e.g. Microsoft’s HoloLens augmented reality glasses.
- the display 9 may be a touch screen.
- the mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector.
- the invention may be implemented using a computer program running on one or more processors.
- the content rendering device is a television.
- a smart speaker or personal computer may additionally or alternatively act as content rendering device.
- a personal computer may analyze the content, control the lighting device and provide the same content to a monitor or television.
- a step 101 comprises obtaining an image of at least part of an environment.
- Step 101 comprises capturing the image or receiving a previously captured image, for example.
- a step 103 comprises determining a potential location for a lighting device by analyzing the image.
- Step 103 may further comprise determining a potential direction for the lighting device by analyzing the image.
- a step 105 comprises analyzing a plurality of temporally sequential segments of content being rendered or to be rendered by the electronic device and/or by a content rendering device located in the environment.
- a step 107 comprises determining a virtual dynamic light effect based on the analysis and the potential location determined for the lighting device.
- a step 109 comprises displaying the virtual dynamic light effect superimposed over a view on the environment while the content is being rendered.
- a current part of the virtual dynamic light effect corresponds to a current segment of the content.
- superimposed displaying of step 109 can be implemented by realizing a graphical overlay on top of a captured camera image or by having the graphical overlay rendered on a transparent surface (e.g. AR glasses or a shop window), for example.
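A sketch of the first option, blending a rendered effect layer over a captured camera frame (assuming OpenCV; the blend weight is an arbitrary choice):

```python
import cv2

def composite(camera_frame, effect_layer, weight=0.6):
    """Graphical overlay on top of a captured camera image (step 109):
    additively blend the virtual light effect onto the view."""
    effect = cv2.resize(effect_layer,
                        (camera_frame.shape[1], camera_frame.shape[0]))
    # Additive blending brightens the scene, which roughly mimics light
    # being cast into the room; cv2.addWeighted saturates at 255.
    return cv2.addWeighted(camera_frame, 1.0, effect, weight, 0)
```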
- in a second embodiment of the method of the invention, shown in Fig. 3, step 141 comprises determining a location of the display of the content rendering device.
- Step 141 may comprise locating the edges of a display area or the edges of a display device (e.g. a television or monitor). The former may provide a better result, but since display device bezels are relatively small nowadays, the improvement will typically be relatively small as well.
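One plausible way to locate such edges is a contour-based search for the largest four-point polygon, as in this sketch (assuming OpenCV; the Canny thresholds are guesses and a robust detector would need considerably more care):

```python
import cv2

def find_display_quad(image_bgr):
    """Locate the edges of a display area/device as the largest
    four-point contour in the image, e.g. a television screen."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            return approx.reshape(4, 2)  # four corner points
    return None  # no plausible display quadrilateral found
```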
- Step 145 comprises displaying a visual portion of the temporally sequential segments of content superimposed over the display of the content rendering device. The virtual dynamic light effect is displayed at the same time as (and synchronized with) this visual portion of the content (in step 143).
- This second embodiment is illustrated with the help of Fig.4.
- the television 19 has been placed on a TV bench 43. No content is displayed on the television 19 itself. Instead, content 41 is displayed on the display of the mobile device 1. The content is displayed superimposed over the image 49, which provides a view on the environment.
- Potential locations for lights 15 and 17 have been determined such that the virtual dynamic light effect at least partly surrounds the television (e.g. using detected edges of a display area or display device), while taking into account that there are free spaces next to the television 19 on the TV bench 43. In this embodiment, directions have also been determined for the lights 15 and 17.
- images representing the lights 15 and 17 and virtual dynamic light effects 45 and 47 have also been determined for the lights 15 and 17 and are displayed superimposed over the image 49.
- the content is rendered on the augmented reality device.
- the content is rendered on the content rendering device.
- the virtual dynamic light effect(s) may be determined based on an analysis of (temporally sequential segments of) content already being rendered by the content rendering device or the electronic device of the invention may control the content rendering device to render certain (temporally sequential segments of) content. If the augmented reality device selects the content, it may do so based on user preferences, e.g. music preferences.
- A third embodiment of the method of the invention is shown in Fig. 5.
- step 103 comprises the sub steps 111 and 117 and step 107 comprises the sub step 119.
- Step 111 comprises determining a location of the content rendering device in the image (i.e. by analyzing the image).
- in step 117, the potential location for the lighting device is determined based on the location of the content rendering device. If the content rendering device comprises a display, the potential location for the lighting device (determined in step 117) and the virtual dynamic light effect (determined in step 119) may be determined such that the virtual dynamic light effect at least partly surrounds the display. This is depicted in Fig. 4.
- Step 113 comprises determining one or more aspects of the content rendering device.
- the one or more aspects of the content rendering device may comprise a category, a brand, a model number and/or information identifying supported functions of the content rendering device. These one or more aspects may be determined by analyzing the image and/or by receiving information (e.g. using Bluetooth) identifying at least one of these one or more aspects.
- the augmented reality device may first determine which content rendering devices are nearby and obtain features of these content rendering devices from a database, thereby making it easier to detect those content rendering devices in the image(s).
- Step 115 comprises selecting the lighting device from a plurality of lighting devices based on the one or more aspects of the content rendering device, e.g. from a database. This allows selection of a lighting device which can be controlled by said content rendering device or can be controlled based on output generated by said content rendering device.
- One or more characteristics of the selected lighting device may be taken into account in step 117 and/or in step 119.
- the potential location for the lighting device may be based on the location of the content rendering device and one or more characteristics of the selected lighting device.
- the virtual dynamic light effect may be based on the analysis, the potential location determined for the lighting device and on one or more characteristics of the selected lighting device.
- Determining the potential location for the lighting device in step 103 may additionally or alternatively involve locating one or more suitable free spaces and/or one or more existing lighting devices in the image. Determining the potential location for the lighting device by locating one or more suitable free spaces is illustrated with the help of Figs.6 and 7.
- Fig.6 shows a representation of an image depicting a room with a television 19 placed on a TV bench 43 and a cabinet 61 next to the television 19. Analyzing the image results in detection of a suitable free space on top of cabinet 61.
- Fig.7 shows a representation of an image depicting the lighting device 71 and a virtual dynamic light effect 73 displayed superimposed over the captured image.
- the image representing the lighting device 71 is displayed at the suitable free space on top of cabinet 61, which has been determined as the potential location for the lighting device 71.
- the image representing the lighting device 71 typically needs to be adjusted to match the scale and perspective of the environment as captured by the image so that the lighting device 71 appears to have the proper size and orientation.
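A sketch of that adjustment using a perspective warp, assuming OpenCV, a lamp image with an alpha channel, and a destination quadrilateral already derived from the detected free space:

```python
import cv2
import numpy as np

def place_luminaire(scene_bgr, lamp_bgra, dst_quad):
    """Warp the image representing the lighting device onto the chosen
    spot so it matches the scale and perspective of the environment."""
    h, w = lamp_bgra.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, np.float32(dst_quad))
    warped = cv2.warpPerspective(
        lamp_bgra, M, (scene_bgr.shape[1], scene_bgr.shape[0]))
    alpha = warped[..., 3:4] / 255.0  # lamp's alpha channel as a mask
    out = scene_bgr * (1 - alpha) + warped[..., :3] * alpha
    return out.astype(np.uint8)
```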
- the virtual dynamic light effect 73 has been determined based on this potential location and is displayed around the potential location.
- Determining the potential location for the lighting device by locating one or more existing lighting devices is illustrated with the help of Figs. 9 and 10.
- the image represented in Fig.9 is similar to the image represented in Fig.6, except that an installed lighting device 81 is present in the upper-left corner.
- the lighting device 81 is creating a light effect 83.
- Fig.10 shows the virtual dynamic light effect 93 being superimposed over the light effect 83 of Fig.9.
- the lighting device 81 itself is still represented. This is beneficial if only the light bulb needs to be replaced or if the design of the replacement lighting device is not important.
- an image representing the replacement lighting device may be superimposed over the view on the lighting device 81.
- a fourth embodiment of the method of the invention is shown in Fig.8.
- step 103 comprises a sub step 135.
- Step 131 comprises determining a location of an already installed further lighting device by analyzing the image and/or determining a potential location for a yet to be installed further lighting device by analyzing the image. For example, if step 103 has previously been performed for light 15 of Fig.4 and is now being performed for light 17 of Fig.4, the yet to be installed further lighting device is light 15.
- Step 133 comprises determining one or more characteristics of the already installed further lighting device and/or the yet to be installed further lighting device.
- the already installed lighting device may be part of another device like a television, e.g. the Ambilight lights of a Philips Ambilight television.
- Step 135 comprises determining the potential location for the lighting device, e.g. light 17 of Fig.4, based on the location and one or more characteristics of the already installed further lighting device and/or the potential location and one or more characteristics determined for the yet to be installed further lighting device, e.g. light 15 of Fig.4.
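A toy sketch of the overlap check, approximating each light footprint as a circle whose radius would be derived from characteristics such as light output and beam angle (all names and units below are assumptions):

```python
import math

def footprints_overlap(pos_a, pos_b, radius_a, radius_b, margin=0.0):
    """Two circular light footprints overlap when their centers are
    closer than the sum of their effect radii (plus an optional margin)."""
    return math.dist(pos_a, pos_b) < radius_a + radius_b + margin

def pick_location(candidates, other_lights):
    """Pick the first candidate (position, radius) whose footprint does
    not overlap any already installed or already planned lighting device."""
    for pos, radius in candidates:
        if not any(footprints_overlap(pos, p, radius, r)
                   for p, r in other_lights):
            return pos
    return None  # no overlap-free location among the candidates

# Example: one planned light at (0, 0) with a 1.5 m effect radius.
print(pick_location([((1.0, 0.0), 1.0), ((3.5, 0.0), 1.0)],
                    [((0.0, 0.0), 1.5)]))  # -> (3.5, 0.0)
```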
- the potential location may further be based on the locations of one or more suitable free spaces, the locations of one or more existing lighting devices and/or the location of a content rendering device, as described previously.
- Fig. 11 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 2, 3, 5 and 8.
- the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via a system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
- the memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310.
- the local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code.
- a bulk storage device may be implemented as a hard drive or other persistent data storage device.
- the processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution.
- the processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
- I/O devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system.
- input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like.
- output devices may include, but are not limited to, a monitor or a display, speakers, or the like.
- Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
- the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 11 with a dashed line surrounding the input device 312 and the output device 314).
- a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”.
- input to the device may be provided by a movement of a physical object, such as a stylus or a finger of a user, on or near the touch screen display.
- a network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks.
- the network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks.
- Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
- the memory elements 304 may store an application 318.
- the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices.
- the data processing system 300 may further execute an operating system (not shown in Fig. 11) that can facilitate execution of the application 318.
- the application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
- Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein).
- the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal.
- the program(s) can be contained on a variety of transitory computer-readable storage media.
- Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
- the computer program may be run on the processor 302 described herein.
Landscapes
- Business, Economics & Management (AREA)
- Finance (AREA)
- Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Economics (AREA)
- Development Economics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
An electronic device (1) is configured to obtain an image (49) of at least part of an environment, the image being captured with a camera, and determine a potential location for a lighting device (15,17) by analyzing the image. The electronic device is further configured to analyze a plurality of temporally sequential segments of content (41) being rendered or to be rendered by the electronic device and/or by a content rendering device (19) located in the environment, determine a virtual dynamic light effect (45,47) based on the analysis and the potential location determined for the lighting device and display the virtual dynamic light effect superimposed over a view on the environment while the content is being rendered. A current part of the virtual dynamic light effect corresponds to a current segment of the content.
Description
DISPLAYING A VIRTUAL DYNAMIC LIGHT EFFECT
FIELD OF THE INVENTION
The invention relates to an electronic device for displaying a virtual light effect.
The invention further relates to a method of displaying a virtual light effect. The invention also relates to a computer program product enabling a computer system to perform such a method.
BACKGROUND OF THE INVENTION
Connected lighting has made it possible to easily change the color and brightness of the lights, thereby giving a much richer lighting experience compared to conventional lighting. To make connected lighting mainstream, consumers need a better understanding of how connected lighting can positively impact their own home. Thus far, this has been difficult to achieve, as shopping environments are rather different from a living room environment.
US 9,910,575 B2 discloses a method of assisting a user in selecting a lighting device design. The method involves receiving an image of a scene, e.g. a picture taken by a user, and analyzing this image in order to select or generate a lighting device design. The selected lighting design and optionally the lighting effect related to the lighting properties of the selected lighting device design may be applied to the scene and shown to the user. In an embodiment, a position in the scene for placing a lighting device design may be determined based on an analysis of the image and the user may be provided with a visual clue highlighting this position in the image.
A drawback of the method described in US 9,910,575 B2 is that it only works for static light effects and consumers may still find it hard to imagine what dynamic light effects would look like in their own surroundings. This especially holds for connected lighting solutions, as smart lights have created many new use cases and options for lighting.
SUMMARY OF THE INVENTION
It is a first object of the invention to provide an electronic device for displaying a virtual light effect, which is able to show a user what a dynamic light effect would look like in his own surroundings.
It is a second object of the invention to provide a method of displaying a virtual light effect, which can be used to show a user what a dynamic light effect would look like in his own surroundings.
In a first aspect of the invention, the electronic device comprises at least one processor configured to obtain an image of at least part of an environment, said image being captured with a camera, determine a potential location for a lighting device by analyzing said image, analyze a plurality of temporally sequential segments of content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment, determine a virtual dynamic light effect based on said analysis and said potential location determined for said lighting device, and display said virtual dynamic light effect superimposed over a view on said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content.
The inventors have recognized that it is beneficial to show a user what dynamic light effects synchronized to content would look like in his own surroundings and that this is best realized with an augmented reality application. Rendering a virtual dynamic light effect synchronized to content being rendered gives a good impression of what dynamic light effects in general look like. The content is typically dynamic media content like audio and/or video content. The audio and/or video content may be music, a TV program, a movie or a game, for example. The image may be captured by the electronic device of the invention or by another device.
Said at least one processor may be configured to determine one or more aspects of said content rendering device, select said lighting device from a plurality of lighting devices based on said one or more aspects of said content rendering device, and determine said potential location for said lighting device and/or said virtual dynamic light effect further based on one or more characteristics of said selected lighting device. Said one or more aspects of said content rendering device may comprise at least one of a category (e.g. TV or smart speaker), a brand, a model number and information identifying supported functions of said content rendering device. This allows selection of a lighting device which can be controlled by said content rendering device or can be controlled based on output
generated by said content rendering device. The best location(s) for a lighting device and/or the dynamic light effect generated by a lighting device typically depend on which lighting device has been selected.
Said at least one processor may be configured to determine at least one of said one or more aspects of said content rendering device by analyzing said image. For example, the brand, model number or category of the content rendering device may be determined by comparing features extracted from the image with features stored in a database.
Said at least one processor may be configured to use a wireless receiver to receive information identifying at least one of said one or more aspects of said content rendering device. This allows the aspect of the content rendering device to be taken into account even if no image of it has been captured. This is especially beneficial if the content rendering device does not have a display, as it would then be more likely that the content rendering device would be hidden from view in a cabinet.
Said at least one processor may be configured to determine said potential location for said lighting device by determining a location of said content rendering device in said image and determining said potential location for said lighting device based on said location of said content rendering device. The processor may determine the potential location for (placing) said lighting device in said image relative to said location of the content rendering device in said image. If said content rendering device comprises a display, this allows said at least one processor to determine said potential location for said lighting device and said virtual dynamic light effect such that said virtual dynamic light effect at least partly surrounds said display, which results in an enhanced display viewing experience.
Said at least one processor may be configured to determine a location of an already installed further lighting device by analyzing said image and/or determine a potential location for a yet to be installed further lighting device by analyzing said image and determine said potential location for said lighting device based on said location of said already installed further lighting device and/or said potential location determined for said yet to be installed further lighting device. Since spatially overlapping light effects may result in undesired light effects, the locations of further lighting devices may be taken into account when determining the potential location for the lighting device, e.g. to prevent or limit spatial overlap in light emissions.
Said at least one processor may be configured to determine one or more characteristics of said already installed further lighting device and/or said yet to be installed further lighting device, e.g. light output and/or beam angle, and determine said potential
location for said lighting device further based on said one or more determined characteristics. This makes it easier to prevent or limit spatial overlap in light emissions while limiting the number of areas that are unnecessarily unlit.
Said at least one processor may be configured to determine said potential location for said lighting device by locating one or more suitable free spaces and/or one or more existing lighting devices in said image. By determining a feasible location for the lighting device, the user will likely appreciate the augmented reality demonstration more, because he would be able to actually install the light source at the shown location if he likes the virtual dynamic light effect.
Said at least one processor may be configured to display an image representing said lighting device superimposed over said environment at said potential location determined for said lighting device. This allows the user to see what the lighting device itself looks like and decide whether its design matches the user’s preferences.
Said at least one processor may be configured to control said content rendering device to render said temporally sequential segments of content. This makes it possible to use special content that is especially suited for demonstrations. This special content may be stored on the electronic device itself, for example, or alternatively be retrieved from a network location as specified by the processor. Displaying the content on the content rendering device may result in a better user experience.
Said content rendering device may comprise a display and said at least one processor may be configured to determine a location of said display of said content rendering device and display a visual portion of said temporally sequential segments of content superimposed over said display of said content rendering device. This is especially advantageous if it is not possible to display the content on the content rendering device itself.
In a second aspect of the invention, the method comprises obtaining an image of at least part of an environment, said image being captured with a camera, determining a potential location for a lighting device by analyzing said image, analyzing a plurality of temporally sequential segments of content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment, determining a virtual dynamic light effect based on said analysis and said potential location determined for said lighting device, and displaying said virtual dynamic light effect superimposed over a view on said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content. Said method may be
performed by software running on a programmable device. This software may be provided as a computer program product.
Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage medium storing the computer program, are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations comprising: obtaining an image of at least part of an environment, said image being captured with a camera, determining a potential location for a lighting device by analyzing said image, analyzing a plurality of temporally sequential segments of content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment, determining a virtual dynamic light effect based on said analysis and said potential location determined for said lighting device, and displaying said virtual dynamic light effect superimposed over a view on said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product.
Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable
computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any
combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer
program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:
Fig. 1 is a block diagram of an embodiment of the electronic device of the invention;
Fig. 2 is a flow diagram of a first embodiment of the method of the invention;
Fig. 3 is a flow diagram of a second embodiment of the method of the invention;
Fig. 4 shows an example of content and a corresponding dynamic light effect superimposed over a view on an environment;
Fig. 5 is a flow diagram of a third embodiment of the method of the invention;
Fig. 6 shows a representation of an image depicting a room with a TV placed on a TV bench and a cabinet next to the TV;
Fig. 7 shows representations of a lighting device and a dynamic light effect superimposed over the image represented in Fig.6;
Fig. 8 is a flow diagram of a fourth embodiment of the method of the invention;
Fig. 9 shows a representation of an image depicting a room with a TV placed on a TV bench, a cabinet next to the TV and an installed light source in the upper-left corner;
Fig. 10 shows a representation of a dynamic light effect superimposed over the image represented in Fig.9; and
Fig. 11 is a block diagram of an exemplary data processing system for performing the method of the invention.
Corresponding elements in the drawings are denoted by the same reference numeral.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Fig. 1 shows a first embodiment of the electronic device of the invention. In this first embodiment, the electronic device of the invention (also referred to as the augmented reality device) is a mobile device 1, e.g. a mobile phone, tablet or pair of augmented reality glasses (possibly part of an augmented reality headset). The mobile device 1 comprises a processor 3, a transceiver 5, storage means 7, a camera 8 and a display 9. The mobile device 1 is connected to a wireless access point 11. A television 19 is also connected
to the wireless access point 11. The wireless access point 11 may further provide access to the Internet 21.
The user of the mobile device 1 wants to purchase and install one or more lighting devices. Fig. 1 shows the home network system after a bridge 13 and lights 15 and 17 have been installed. An app (i.e. application) running on the mobile device 1 can be used to control lights 15 and 17 (e.g. Philips Hue lights) via the bridge 13. The bridge 13 is connected to the wireless access point 11, e.g. via an Ethernet link. The bridge 13
communicates with lights 15 and 17 wirelessly, e.g. via Zigbee.
The processor 3 is configured to obtain an image of at least part of an environment. In the embodiment of Fig. 1, the image is captured with the camera 8. The processor 3 is further configured to determine a potential location for lights 15 and 17 by analyzing the image. The processor 3 is further configured to analyze a plurality of temporally sequential segments of content being rendered or to be rendered by the mobile device 1 or by the television 19 and determine a virtual dynamic light effect based on the analysis and the potential location determined for the lights 15 and 17. The content may be TV/movie content, music content or game content, for example.
The processor 3 is further configured to display the virtual dynamic light effect superimposed over a view on the environment while the content is being rendered (displaying information superimposed over a view on the environment is also referred to as augmented reality). A current part of the virtual dynamic light effect corresponds to a current segment of the content. For example, the color and brightness levels of the virtual dynamic light effect may be chosen to match the audio (e.g. as captured by the mobile device 1) and/or video of the content. In an embodiment, the color and brightness levels of the virtual dynamic light effect may be chosen to match the audio part of a television program or movie.
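By way of illustration only, the sketch below shows one way such color and brightness levels could be derived from the video part of a content segment. It is a minimal numpy sketch; the frame format and function name are assumptions, not the claimed implementation:

```python
import numpy as np

def segment_color_and_brightness(frames):
    """Average color and relative brightness over the frames of one content segment.

    frames: iterable of HxWx3 uint8 RGB arrays.
    Returns ((r, g, b), brightness in [0, 1]).
    """
    means = np.stack([f.reshape(-1, 3).mean(axis=0) for f in frames])
    r, g, b = means.mean(axis=0)
    # Perceived luminance (Rec. 601 weights), normalized to [0, 1].
    brightness = (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
    return (int(r), int(g), int(b)), float(brightness)
```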
Optionally, the selected color and brightness levels further depend on user preferences (e.g. derived from the user’s most frequently used light scenes). This is especially beneficial if the content is music. Alternatively, specific light scripts or light effects which have been used often or have been created especially for the rendered content may be used.
The virtual dynamic light effect may be created by obtaining an image representing the area covered by the dynamic light effect from a database (e.g. where it is stored in relation to the selected lighting device) and scaling it to match the scale of the environment, as reflected in the image. Alternatively, pixels of the virtual dynamic light effect may be created dynamically, e.g. based on the beam angle, light output and other aspects of the selected lighting device. In a simple embodiment, all pixels of the dynamic
light effect have the same color (chromaticity and brightness). In a more advanced embodiment, pixels that are closer to the lighting device are brighter than pixels that are farther from the lighting device. For example, a linearly-decreasing light effect may be created. In an even more advanced embodiment, properties of other objects are taken into account as well, e.g. the color and pattern of a wall. If a 3D model is used, the virtual dynamic light effect may be determined even more accurately. The light effect may be determined in 3D and then converted to one or two 2D images per displayed frame using known techniques, for example.
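As a concrete, non-limiting illustration of the linearly-decreasing variant, the sketch below rasterizes an overlay whose brightness falls off linearly with pixel distance from the lighting device's position in the image; the compositing step and all parameter names are assumptions:

```python
import numpy as np

def radial_light_overlay(h, w, center, max_radius, color, peak_alpha=0.8):
    """Overlay whose brightness decreases linearly with distance from the lamp.

    center: (x, y) pixel position of the lighting device in the image.
    Returns an HxWx3 float overlay and an HxW alpha mask for compositing
    over the camera frame.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - center[1], xs - center[0])
    alpha = peak_alpha * np.clip(1.0 - dist / max_radius, 0.0, 1.0)
    overlay = np.zeros((h, w, 3), dtype=np.float32)
    overlay[...] = color  # constant chromaticity; brightness carried by alpha
    return overlay, alpha

# Compositing (sketch): out = alpha[..., None] * overlay + (1 - alpha[..., None]) * frame
```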
The user may be asked to scan the room with his augmented reality device (mobile device 1) in order to find a suitable or better potential location for the lights 15 and 17 and/or in order to create a 3D model of (part of) the room. The mobile device 1 may comprise one or more image sensors (e.g. multiple two-dimensional cameras and/or depth cameras) configured to acquire image data (e.g. color/grayscale images, depth images, etc.) representing the environment. Creation of a 3D model of the room based on such depth/image data is known in the art, and will therefore not be discussed in detail. This 3D model may be used to more accurately determine the virtual dynamic light effect, for example. Alternatively, the room may be scanned automatically during normal user behavior, e.g. if the user is just looking around while wearing augmented reality glasses.
The processor 3 may be configured to determine the potential location by analyzing images of the room and/or a 3D model of the room. The images/3D model may for example be analyzed to determine ‘empty’ spaces in the image/3D model, where the lighting device can be positioned. Additionally or alternatively, the images/3D model may for example be analyzed to identify one or more objects (e.g. the content rendering device) in the image or in the 3D model, and the potential location for the lighting device may be selected such that the lighting device is positioned relative to these one or more objects.
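For illustration, ‘empty’ spaces in a 2D image could be located with an integral-image scan over an occupancy mask. This is a minimal sketch; how the mask is obtained (e.g. from an object detector) and the required window size are assumptions:

```python
import numpy as np

def find_free_space(occupied, need_h, need_w):
    """Top-left (row, col) of the first fully unoccupied need_h x need_w window, or None.

    occupied: HxW boolean mask of pixels covered by detected objects.
    An integral image makes each window test O(1).
    """
    ii = np.pad(occupied.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = occupied.shape
    for r in range(h - need_h + 1):
        for c in range(w - need_w + 1):
            window_sum = (ii[r + need_h, c + need_w] - ii[r, c + need_w]
                          - ii[r + need_h, c] + ii[r, c])
            if window_sum == 0:
                return (r, c)
    return None
```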
The processor 3 may be configured to generate the virtual dynamic light effect by analyzing the dynamic media content. The processor may, for example, analyze the dynamic media content to determine characteristics of the dynamic media content, and generate the virtual dynamic light effect based thereon. If, for example, the dynamic media content is audio content (e.g. a song) the characteristics may relate to the pitch, tempo, timbre, etc. of the audio. If, for example, the dynamic media content is video content, the characteristics may relate to the colors of the images of the video or to temporal effects in the video (e.g. explosions, a rising sun, an approaching vehicle, etc.). Alternatively, the virtual dynamic light effect may be pre-scripted and the processor 3 may be configured to determine
the virtual dynamic light effect by obtaining (e.g. downloading) a script of the virtual dynamic light effect, the script being associated with the plurality of temporally sequential segments of said dynamic media content.
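Such a pre-scripted effect could be as simple as a list of keyframes tied to the content timeline. The format below is purely hypothetical and is shown only to make the idea concrete:

```python
# Hypothetical light script: (segment start time in seconds, RGB color, brightness 0-1).
LIGHT_SCRIPT = [
    (0.0,  (255, 140, 0), 0.4),   # warm dim intro
    (12.5, (255, 0, 0),   0.9),   # explosion scene
    (20.0, (0, 80, 255),  0.6),   # night scene
]

def effect_at(t, script=LIGHT_SCRIPT):
    """Return the keyframe in effect at content time t (last one whose start <= t)."""
    current = script[0]
    for entry in script:
        if entry[0] <= t:
            current = entry
        else:
            break
    return current
```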
In the embodiment of the mobile device 1 shown in Fig. 1, the mobile device 1 comprises one processor 3. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 3 of the mobile device 1 may be a general-purpose processor, e.g. from Qualcomm or ARM-based, or an application-specific processor. The processor 3 of the mobile device 1 may run an Android or iOS operating system for example. The storage means 7 may comprise one or more memory units. The storage means 7 may comprise one or more hard disks and/or solid-state memory, for example. The storage means 7 may be used to store an operating system, applications and application data, for example.
The transceiver 5 may use one or more wireless communication technologies to communicate with the wireless access point 11, for example. In an alternative
embodiment, multiple transceivers are used instead of a single transceiver. In the
embodiment shown in Fig. 1, a receiver and a transmitter have been combined into a transceiver 5. In an alternative embodiment, one or more separate receiver components and one or more separate transmitter components are used. The processor 3 may use the transceiver 5, for example, to control the television 19 to render certain content, e.g.
demonstration content. The processor 3 may use the transceiver 5 to control the lights 15 and 17 via the bridge 13 after the bridge 13 and the lights 15 and 17 have been installed, for example. The processor 3 may use the transceiver 5 to receive information identifying one or more aspects of the content rendering device, for example.
The camera 8 may comprise a CCD or CMOS sensor, for example. The display 9 may comprise an LCD or OLED panel, for example. Alternatively, the display 9 may comprise one or more projection components, as used in e.g. Microsoft’s HoloLens augmented reality glasses. The display 9 may be a touch screen. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors. In the example of Fig. 1, the content rendering device is a television. In an alternative embodiment, a smart speaker or personal computer may additionally or alternatively act as content rendering device. For example, a personal computer may analyze the content, control the lighting device and provide the same content to a monitor or television.
A first embodiment of the method of the invention is shown in Fig.2. A step 101 comprises obtaining an image of at least part of an environment. Step 101 comprises capturing the image or receiving a previously captured image, for example. A step 103 comprises determining a potential location for a lighting device by analyzing the image. Step 103 may further comprise determining a potential direction for the lighting device by analyzing the image. A step 105 comprises analyzing a plurality of temporally sequential segments of content being rendered or to be rendered by the electronic device and/or by a content rendering device located in the environment. A step 107 comprises determining a virtual dynamic light effect based on the analysis and the potential location determined for the lighting device.
A step 109 comprises displaying the virtual dynamic light effect superimposed over a view on the environment while the content is being rendered. A current part of the virtual dynamic light effect corresponds to a current segment of the content. The
superimposed displaying of step 109 can be implemented by realizing a graphical overlay on top of a captured camera image or by having the graphical overlay rendered on a transparent surface (e.g. AR glasses or a shop window), for example.
A second embodiment of the method of the invention is shown in Fig.3. In this second embodiment, the content rendering device comprises a display and two additional steps are present: step 141 and step 145. Steps 109 and 145 are combined into an overarching step 143. Step 141 comprises determining a location of the display of the content rendering device. Step 141 may comprise locating the edges of a display area or the edges of a display device (e.g. a television or monitor). The former may provide a better result, but since display device bezels are relatively small nowadays, the improvement will typically be relatively small as well. Step 145 comprises displaying a visual portion of the temporally sequential segments of content superimposed over the display of the content rendering device. The virtual dynamic light effect is displayed at the same time as (and synchronized with) this visual portion of the content (in step 143). This second embodiment is illustrated with the help of Fig.4.
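Locating the edges of a display device could, for example, be sketched as a search for the largest quadrilateral contour in the camera frame. This OpenCV sketch assumes the screen stands out as a large four-cornered contour; a production implementation would be more robust:

```python
import cv2

def find_display_quad(frame_bgr):
    """Return the 4 corner points of the largest quadrilateral contour, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4:            # first (largest) four-cornered contour
            return approx.reshape(4, 2)
    return None
```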
In the example of Fig.4, the television 19 has been placed on a TV bench 43. No content is displayed on the television 19 itself. Instead, content 41 is displayed on the display of the mobile device 1. The content is displayed superimposed over the image 49, which provides a view on the environment. Potential locations for lights 15 and 17 have been determined such that the virtual dynamic light effect at least partly surrounds the television (e.g. using detected edges of a display area or display device), while taking into account that
there are free spaces next to the television 19 on the TV bench 43. In this embodiment, directions have also been determined for the lights 15 and 17. In addition to the content, images representing the lights 15 and 17 and virtual dynamic light effects 45 and 47
(associated with lights 15 and 17, respectively) are displayed superimposed over the image 49 as well.
In the embodiment of Fig.3 and the example of Fig.4, the content is rendered on the augmented reality device. In an alternative embodiment, the content is rendered on the content rendering device. The virtual dynamic light effect(s) may be determined based on an analysis of (temporally sequential segments of) content already being rendered by the content rendering device or the electronic device of the invention may control the content rendering device to render certain (temporally sequential segments of) content. If the augmented reality device selects the content, it may do so based on user preferences, e.g. music preferences.
A third embodiment of the method of the invention is shown in Fig.5.
Compared to the first embodiment shown in Fig.2, step 103 comprises the sub-steps 111 and 117 and step 107 comprises the sub-step 119. Step 111 comprises determining a location of the content rendering device in the image (i.e. by analyzing the image). In step 117, the potential location for the lighting device is determined based on the location of the content rendering device. If the content rendering device comprises a display, the potential location for the lighting device (determined in step 117) and the virtual dynamic light effect
(determined in step 119) may be determined such that the virtual dynamic light effect at least partly surrounds the display. This is depicted in Fig.4.
Furthermore, compared to the first embodiment shown in Fig.2, additional steps 113 and 115 are present. Step 113 comprises determining one or more aspects of the content rendering device. The one or more aspects of the content rendering device may comprise a category, a brand, a model number and/or information identifying supported functions of the content rendering device. These one or more aspects may be determined by analyzing the image and/or by receiving information (e.g. using Bluetooth) identifying at least one of these one or more aspects. For example, the augmented reality device may first determine which content rendering devices are nearby and obtain features of these content rendering devices from a database, thereby making it easier to detect those content rendering devices in the image(s).
Step 115 comprises selecting the lighting device from a plurality of lighting devices based on the one or more aspects of the content rendering device, e.g. from a database. This allows selection of a lighting device which can be controlled by said content
rendering device or can be controlled based on output generated by said content rendering device.
One or more characteristics of the selected lighting device may be taken into account in step 117 and/or in step 119. In step 117, the potential location for the lighting device may be based on the location of the content rendering device and one or more characteristics of the selected lighting device. In step 119, the virtual dynamic light effect may be based on the analysis, the potential location determined for the lighting device and on one or more characteristics of the selected lighting device.
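The selection of step 115 could be sketched as a simple catalog filter. The catalog structure and compatibility fields below are invented for the example:

```python
# Hypothetical catalog: which lamps can be controlled by which renderer categories.
LAMP_CATALOG = [
    {"model": "strip-01", "beam_angle": 120, "controllable_by": {"tv", "pc"}},
    {"model": "spot-02",  "beam_angle": 25,  "controllable_by": {"pc"}},
    {"model": "bulb-03",  "beam_angle": 200, "controllable_by": {"tv", "pc", "speaker"}},
]

def select_lighting_devices(renderer_category, catalog=LAMP_CATALOG):
    """Return lamps the detected content rendering device can control."""
    return [lamp for lamp in catalog if renderer_category in lamp["controllable_by"]]

# e.g. select_lighting_devices("tv") keeps strip-01 and bulb-03
```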
Determining the potential location for the lighting device in step 103 may additionally or alternatively involve locating one or more suitable free spaces and/or one or more existing lighting devices in the image. Determining the potential location for the lighting device by locating one or more suitable free spaces is illustrated with the help of Figs.6 and 7. Fig.6 shows a representation of an image depicting a room with a television 19 placed on a TV bench 43 and a cabinet 61 next to the television 19. Analyzing the image results in detection of a suitable free space on top of cabinet 61.
Fig.7 shows a representation of an image depicting the lighting device 71 and a virtual dynamic light effect 73 displayed superimposed over the captured image. The image representing the lighting device 71 is displayed at the suitable free space on top of cabinet 61, which has been determined as the potential location for the lighting device 71. The image representing the lighting device 71 typically needs to be adjusted to match the scale and perspective of the environment as captured by the image so that the lighting device 71 appears to have the proper size and orientation. The virtual dynamic light effect 73 has been determined based on this potential location and is displayed around the potential location.
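The scale adjustment could be approximated from an object of known physical size, such as the detected television. The sketch below assumes the TV's physical width is known (e.g. from the database mentioned earlier); this assumption is made for illustration only:

```python
import cv2

def scale_lamp_image(lamp_img, lamp_width_m, tv_px_width, tv_width_m):
    """Resize the lamp image so its on-screen size matches the scene scale.

    Pixels-per-meter is estimated from a detected TV of known physical width.
    """
    px_per_m = tv_px_width / tv_width_m
    target_w = max(1, int(lamp_width_m * px_per_m))
    scale = target_w / lamp_img.shape[1]
    target_h = max(1, int(lamp_img.shape[0] * scale))
    return cv2.resize(lamp_img, (target_w, target_h), interpolation=cv2.INTER_AREA)
```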
Determining the potential location for the lighting device by locating one or more existing lighting devices is illustrated with the help of Figs.9 and 10. The image represented in Fig.9 is similar to the image represented in Fig.6, except that an installed lighting device 81 is present in the upper-left corner. The lighting device 81 is creating a light effect 83. Fig.10 shows the virtual dynamic light effect 93 being superimposed over the light effect 83 of Fig.9. As the light effect 83 is wider than the virtual dynamic light effect 93, at least part of the light effect 83 is replaced with pixels having the same color as the wall. In the example of Fig.10, the lighting device 81 itself is still represented. This is beneficial if only the light bulb needs to be replaced or if the design of the replacement lighting device is not important. Alternatively, an image representing the replacement lighting device may be superimposed over the view on the lighting device 81.
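Replacing the visible part of the old light effect with wall-colored pixels could, in the simplest case, sample the colors just outside the effect and fill the masked region. This is a sketch only; the mask and sampling strategy are assumptions, and cv2.inpaint would be a more robust alternative:

```python
import cv2
import numpy as np

def erase_light_effect(frame_bgr, effect_mask):
    """Fill the old light effect region with the average nearby wall color.

    effect_mask: uint8 mask, 255 where the existing light effect is visible.
    """
    ring = cv2.dilate(effect_mask, np.ones((15, 15), np.uint8)) - effect_mask
    wall_color = frame_bgr[ring > 0].mean(axis=0)  # average color just outside the effect
    out = frame_bgr.copy()
    out[effect_mask > 0] = wall_color.astype(np.uint8)
    return out
```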
A fourth embodiment of the method of the invention is shown in Fig.8.
Compared to the first embodiment shown in Fig.2, additional steps 131 and 133 are present and step 103 comprises a sub-step 135. Step 131 comprises determining a location of an already installed further lighting device by analyzing the image and/or determining a potential location for a yet to be installed further lighting device by analyzing the image. For example, if step 103 has previously been performed for light 15 of Fig.4 and is now being performed for light 17 of Fig.4, the yet to be installed further lighting device is light 15.
Step 133 comprises determining one or more characteristics of the already installed further lighting device and/or the yet to be installed further lighting device. The already installed lighting device may be part of another device like a television, e.g. the Ambilight lights of a Philips Ambilight television.
Step 135 comprises determining the potential location for the lighting device, e.g. light 17 of Fig.4, based on the location and one or more characteristics of the already installed further lighting device and/or the potential location and one or more characteristics determined for the yet to be installed further lighting device, e.g. light 15 of Fig.4. The potential location may further be based on the locations of one or more suitable free spaces, the locations of one or more existing lighting devices and/or the location of a content rendering device, as described previously.
Fig. 11 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 2, 3, 5 and 8.
As shown in Fig. 11, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via the system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system
300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the quantity of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like.
Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 11 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”. In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.
A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
As pictured in Fig. 11, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 11) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to
executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of
ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims
1. An electronic device (1) comprising at least one processor (3) configured to:
- obtain an image (49) of at least part of an environment, said image (49) being captured with a camera (8),
- analyze said image (49) to determine a location of a content rendering device (19) in said image (49),
- determine a potential location for a lighting device (15,17) based on said location of said content rendering device (19),
- analyze a plurality of temporally sequential segments of dynamic media content (41) being rendered or to be rendered by said electronic device (1) and/or by said content rendering device (19) located in said environment,
- determine a virtual dynamic light effect (45,47,73,93) based on said analysis of said plurality of temporally sequential segments of said dynamic media content (41) and said potential location determined for said lighting device (15,17,71,81) and
- display said virtual dynamic light effect (45,47,73,93) superimposed over a view of said environment while said content (41) is being rendered, a current part of said virtual dynamic light effect (45,47,73,93) corresponding to a current segment of said content (41).
2. An electronic device (1) as claimed in claim 1, wherein said at least one processor (3) is configured to:
- determine one or more aspects of said content rendering device (19),
- select said lighting device (15,17,71,81) from a plurality of lighting devices based on said one or more aspects of said content rendering device (19), and
- determine said potential location for said lighting device (15,17,71,81) and/or said virtual dynamic light effect (45,47,73,93) further based on one or more characteristics of said selected lighting device (15,17,71,81).
3. An electronic device (1) as claimed in claim 2, wherein said one or more aspects of said content rendering device (19) comprises at least one of a category, a brand, a
model number and information identifying supported functions of said content rendering device (19).
4. An electronic device (1) as claimed in claim 2 or 3, wherein said at least one processor (3) is configured to determine at least one of said one or more aspects of said content rendering device (19) by analyzing said image.
5. An electronic device (1) as claimed in any one of claims 2 to 4, wherein said at least one processor (3) is configured to use a wireless receiver (5) to receive information identifying at least one of said one or more aspects of said content rendering device (19).
6. An electronic device (1) as claimed in any one of the preceding claims, wherein said content rendering device (19) comprises a display and said at least one processor (3) is configured to determine said potential location for said lighting device (15,17) and said virtual dynamic light effect (45,47) such that said virtual dynamic light effect (45,47) at least partly surrounds said display.
7. An electronic device (1) as claimed in any one of the preceding claims, wherein said at least one processor (3) is configured to:
- determine a location of an already installed further lighting device by analyzing said image and/or determine a potential location for a yet to be installed further lighting device (17) by analyzing said image, and
- determine said potential location for said lighting device (15) based on said location of said already installed further lighting device and/or said potential location determined for said yet to be installed further lighting device (17).
8. An electronic device (1) as claimed in claim 7, wherein said at least one processor (3) is configured to:
- determine one or more characteristics of said already installed further lighting device and/or said yet to be installed further lighting device (17), and
- determine said potential location for said lighting device (15) further based on said one or more determined characteristics.
9. An electronic device (1) as claimed in any one of the preceding claims, wherein said at least one processor (3) is configured to determine said potential location for said lighting device (15,17,71,81) by locating one or more suitable free spaces and/or one or more existing lighting devices (81) in said image.
10. An electronic device (1) as claimed in any one of the preceding claims, wherein said at least one processor (3) is configured to display an image representing said lighting device (15,17,71) superimposed over said environment at said potential location determined for said lighting device (15,17,71).
11. An electronic device (1) as claimed in any one of the preceding claims, wherein said at least one processor (3) is configured to control said content rendering device (19) to render said temporally sequential segments of content (41).
12. An electronic device (1) as claimed in any one of the preceding claims, wherein said content rendering device (19) comprises a display and said at least one processor (3) is configured to:
- determine a location of said display of said content rendering device (19), and
- display a visual portion of said temporally sequential segments of content (41) superimposed over said display of said content rendering device (19).
13. A method of displaying a virtual light effect, comprising:
- obtaining (101) an image of at least part of an environment, said image being captured with a camera;
- analyzing said image to determine a location of a content rendering device (19) in said image (49),
- determining (103) a potential location for a lighting device (15,17) based on said location of said content rendering device (19);
- analyzing (105) a plurality of temporally sequential segments of dynamic media content being rendered or to be rendered by said electronic device and/or by a content rendering device located in said environment;
- determining (107) a virtual dynamic light effect based on said analysis of said plurality of temporally sequential segments of said dynamic media content (41) and said
potential location determined for said lighting device; and
- displaying (109) said virtual dynamic light effect superimposed over a view of said environment while said content is being rendered, a current part of said virtual dynamic light effect corresponding to a current segment of said content.
14. A computer program product storing at least one software code portion, the software code portion, when run on a computer system, being configured for enabling the method of claim 13 to be performed.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP18175458 | 2018-06-01 | | |
| EP18175458.1 | 2018-06-01 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019228969A1 (en) | 2019-12-05 |
Family
ID=62567273
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2019/063631 (WO2019228969A1, Ceased) | Displaying a virtual dynamic light effect | 2018-06-01 | 2019-05-27 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019228969A1 (en) |
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2001095674A1 (en) * | 2000-06-07 | 2001-12-13 | The Delfin Project, Inc. | Method and system of auxiliary illumination for enhancing a scene during a multimedia presentation |
| EP2040472A1 (en) * | 2006-06-13 | 2009-03-25 | Sharp Kabushiki Kaisha | Data transmitting device, data transmitting method, audio-visual environment control device, audio-visual environment control system, and audio-visual environment control method |
| EP2378488A2 (en) * | 2010-04-19 | 2011-10-19 | Ydreams - Informática, S.a. | Various methods and apparatuses for achieving augmented reality |
| US20130198786A1 (en) * | 2011-12-07 | 2013-08-01 | Comcast Cable Communications, LLC. | Immersive Environment User Experience |
| US9910575B2 (en) | 2012-10-24 | 2018-03-06 | Philips Lighting Holding B.V. | Assisting a user in selecting a lighting device design |
| US20140125668A1 (en) * | 2012-11-05 | 2014-05-08 | Jonathan Steed | Constructing augmented reality environment with pre-computed lighting |
| WO2016034546A1 (en) * | 2014-09-01 | 2016-03-10 | Philips Lighting Holding B.V. | Lighting system control method, computer program product, wearable computing device and lighting system kit |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021244918A1 (en) | 2020-06-04 | 2021-12-09 | Signify Holding B.V. | A method of configuring a plurality of parameters of a lighting device |
| US11985748B2 (en) | 2020-06-04 | 2024-05-14 | Signify Holding B.V. | Method of configuring a plurality of parameters of a lighting device |
| US12342440B2 (en) | 2020-06-04 | 2025-06-24 | Signify Holding B.V. | Method of configuring a plurality of parameters of a lighting device |
| CN112435323A (en) * | 2020-11-26 | 2021-03-02 | 网易(杭州)网络有限公司 | Light effect processing method, device, terminal and medium in virtual model |
| CN112435323B (en) * | 2020-11-26 | 2023-08-22 | 网易(杭州)网络有限公司 | Light effect processing method, device, terminal and medium in virtual model |
| US12349257B2 (en) | 2021-01-25 | 2025-07-01 | Signify Holding B.V. | Selecting a set of lighting devices based on an identifier of an audio and/or video signal source |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105278905B (en) | Apparatus and method for displaying object having visual effect | |
| US10210664B1 (en) | Capture and apply light information for augmented reality | |
| CN111414225B (en) | Three-dimensional model remote display method, first terminal, electronic device and storage medium | |
| US20150185825A1 (en) | Assigning a virtual user interface to a physical object | |
| US20150187137A1 (en) | Physical object discovery | |
| US10049490B2 (en) | Generating virtual shadows for displayable elements | |
| CN111833423A (en) | Presentation method, presentation device, presentation equipment and computer-readable storage medium | |
| KR20190035116A (en) | Method and apparatus for displaying an ar object | |
| US11410390B2 (en) | Augmented reality device for visualizing luminaire fixtures | |
| KR20170125618A (en) | Method for generating content to be displayed at virtual area via augmented reality platform and electronic device supporting the same | |
| CN110084204A (en) | Image processing method, device and electronic equipment based on target object posture | |
| US20210397840A1 (en) | Determining a control mechanism based on a surrounding of a remove controllable device | |
| TWI864841B (en) | Control methods, computer-readable media, and controllers | |
| WO2019228969A1 (en) | Displaying a virtual dynamic light effect | |
| US12022162B2 (en) | Voice processing method and apparatus, electronic device, and computer readable storage medium | |
| US11810336B2 (en) | Object display method and apparatus, electronic device, and computer readable storage medium | |
| US9838657B2 (en) | Projection enhancement system | |
| CN110209861A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
| CN115553066B (en) | Determining image analysis areas for entertainment lighting based on distance metrics | |
| US11151797B2 (en) | Superimposing a virtual representation of a sensor and its detection zone over an image | |
| CN109472873B (en) | Three-dimensional model generation method, device and hardware device | |
| KR102443049B1 (en) | Electric apparatus and operation method thereof | |
| CN111340931A (en) | Scene processing method and device, user side and storage medium | |
| WO2025202055A1 (en) | Displaying a representation of a virtual light effect | |
| CN109191238A (en) | Merchandise display method, apparatus, electronic equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19725757; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19725757; Country of ref document: EP; Kind code of ref document: A1 |