CN114783432A - Playing control method of intelligent glasses, intelligent glasses and storage medium - Google Patents
- Publication number
- CN114783432A (application number CN202210287972.8A)
- Authority
- CN
- China
- Prior art keywords
- multimedia device
- smart glasses
- target
- multimedia
- detection module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C11/00—Non-optical adjuncts; Attachment thereof
- G02C11/10—Electronic devices other than hearing aids
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Ophthalmology & Optometry (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to a playback control method for smart glasses, smart glasses, and a storage medium. The playback control method includes: determining a target multimedia device among at least one multimedia device, the target multimedia device being the multimedia device that the wearer of the smart glasses is paying attention to; and setting the audio source of the smart glasses to the target multimedia device.
Description
Technical Field
The present disclosure relates to smart electronic devices, and more particularly, to a play control method for smart glasses, smart glasses, and a storage medium.
Background art
Smart glasses have become a common item in daily life, and users pay increasing attention to the audio-visual experience. However, current smart glasses are difficult to adapt to different application scenarios and, in particular, are not easily used in combination with multimedia devices. It is therefore necessary to provide smart glasses that can be used in combination with multimedia devices and are suitable for various application scenarios.
Disclosure of Invention
The embodiments of the present disclosure provide a play control method for smart glasses, smart glasses, and a storage medium, which enable smart glasses to be used in combination with a multimedia device.
According to a first aspect of the present disclosure, there is provided a play control method for smart glasses, including: determining a target multimedia device among at least one multimedia device, wherein the target multimedia device is the multimedia device that a wearer of the smart glasses is paying attention to; and setting the sound source of the smart glasses to the target multimedia device.
According to a second aspect of the present disclosure, there is provided smart glasses comprising a memory, a processor and a play control program stored on the memory and executable on the processor, the play control program being configured to implement the steps of the play control method according to any one of the first aspect of the present disclosure.
According to a third aspect of the present disclosure, there is provided a storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the playback control method according to any one of the first aspects of the present disclosure.
An advantage of the disclosed smart glasses is that they can automatically determine the multimedia device the wearer is paying attention to and switch their sound source accordingly, so that the smart glasses are suitable for various application scenarios and better meet user needs.
Other features of embodiments of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which is to be read in connection with the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the embodiments of the disclosure.
Fig. 1 shows a flowchart of a play control method of smart glasses according to an embodiment of the present disclosure;
figs. 2-5 show schematic diagrams of examples of smart glasses according to embodiments of the present disclosure; and
fig. 6 illustrates a block diagram of smart glasses according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
The embodiments of the present application disclose smart glasses that can be used in combination with a multimedia device.
In this embodiment, the multimedia device may be an electronic device with a media playing function, such as a television, a projector, a tablet computer, or a game console. The smart glasses can establish a communication connection with the multimedia device so that information, such as a sound source, can be transmitted between them; that is, the multimedia device can transmit the audio of the multimedia content it plays to the smart glasses, and the smart glasses play that audio. The communication connection may be implemented by a wireless method such as Bluetooth, WiFi, or UWB (Ultra-Wideband), or by a wired method.
In the disclosed embodiments, the smart glasses may be glasses with an augmented reality function; that is, the smart glasses are provided with a light engine capable of projecting a virtual interface in front of the wearer's eyes. The smart glasses can direct images to the wearer's retina through optical techniques, thereby projecting the images into the wearer's visual space.
In the embodiment of the present disclosure, at least one acoustic module may be disposed in the smart glasses, and the audio may be played through the acoustic module. The acoustic module may include a speaker or an earphone.
In the embodiment of the present disclosure, at least one microphone may be disposed in the smart glasses to collect sound information around the smart glasses. The intelligent glasses can have a voice recognition function, and a user can control the intelligent glasses in a voice mode.
In one example of the present disclosure, the smart glasses include a first detection module for determining the multimedia device the wearer is paying attention to. The first detection module may include one or more of a UWB module, a camera, a laser sensor, an infrared sensor, an ultrasonic sensor, an acceleration sensor, and a gyro sensor. In one example of the present disclosure, to support the function of the first detection module, an auxiliary device may be provided in a multimedia device within the room; for example, one or more of a UWB module, a laser sensor, an infrared sensor, and an ultrasonic sensor may be provided in the multimedia device and used as the auxiliary device.
In one example of the present disclosure, the smart glasses store information of each room in advance, for example, a size, a shape, an image, and the like of each room.
In one example of the present disclosure, the smart glasses pre-store information of the multimedia device, including at least an identifier of the multimedia device and its function information. The stored information may further include image information of the multimedia device, information of the room to which the multimedia device belongs, the position of the multimedia device in that room, an identifier of an auxiliary device in the multimedia device, and the like.
In one example of the present disclosure, information of the auxiliary device is stored in advance in the smart glasses. For example, the smart glasses may store an identifier of the auxiliary device in advance, the signal transmitted to the smart glasses by the auxiliary device includes the identifier of the auxiliary device, and the smart glasses may distinguish the signals transmitted by the auxiliary devices by the identifiers. For example, the smart glasses store position information of the auxiliary device in advance.
In one example, the first detection module includes a UWB module, the multimedia device is also provided with a UWB module, and the position of the smart glasses relative to the multimedia device can be determined by UWB positioning. For example, the UWB module in the multimedia device transmits a UWB signal, the UWB module in the smart glasses receives it, and the distance and direction of the smart glasses relative to the multimedia device are determined from the signal. As another example, the UWB module in the smart glasses transmits a first UWB signal in a first direction (e.g., to the front of the smart glasses); if a first multimedia device receives the first UWB signal, it responds with a second UWB signal, and after receiving the second UWB signal, the smart glasses acquire the identifier of the auxiliary device from it, thereby determining that the first multimedia device is located in the first direction relative to the smart glasses.
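The distance measurement underlying such a UWB exchange can be sketched as follows. This is a minimal illustration of single-sided two-way ranging; the timing values and function names are assumptions for the example, not details from the patent:

```python
# Hypothetical single-sided two-way ranging (SS-TWR) distance calculation,
# the kind of exchange a UWB module could use to range against a device.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def twr_distance_m(t_round_s: float, t_reply_s: float) -> float:
    """t_round_s: glasses' time from sending the poll to receiving the response.
    t_reply_s: the responder's internal turnaround delay.
    The one-way time of flight is half of what remains after the reply delay."""
    time_of_flight_s = (t_round_s - t_reply_s) / 2.0
    return time_of_flight_s * SPEED_OF_LIGHT_M_PER_S

# 40 ns of round-trip flight time corresponds to roughly 6 m of separation.
distance = twr_distance_m(t_round_s=140e-9, t_reply_s=100e-9)
```

Direction would additionally require angle-of-arrival measurement across multiple antennas, which is omitted here.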
In one example, the first detection module includes a UWB module, and UWB modules are arranged at at least three different positions in the room, so that the position of the smart glasses in the room can be determined accurately. For example, the UWB modules in the room emit UWB signals, the UWB module in the smart glasses receives them, and the position of the smart glasses in the room is determined by triangulation using the pre-stored position information of the room's UWB modules.
In one example, the first detection module includes a UWB module, and at least three multimedia devices in the room are each provided with a UWB module, so that the relative positions of the smart glasses and those multimedia devices can be determined more accurately. For example, the UWB modules of the at least three multimedia devices each transmit UWB signals, the UWB module in the smart glasses receives them, and the position of the smart glasses relative to the multimedia devices is determined by triangulation.
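The triangulation step with three fixed anchors can be sketched in two dimensions as follows. The anchor coordinates and measured distances are invented for the example; a real implementation would also handle measurement noise:

```python
import math

# Illustrative 2-D trilateration: given three UWB anchors at known room
# coordinates and a measured distance to each, solve for the glasses'
# position by linearizing the three circle equations.

def trilaterate(anchors, distances):
    """anchors: [(x1, y1), (x2, y2), (x3, y3)]; distances: [d1, d2, d3]."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Subtracting circle 1 from circles 2 and 3 yields two linear equations
    # in x and y, solved here with Cramer's rule.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# Anchors at three room corners; the glasses are actually at (2, 1.5).
pos = trilaterate([(0, 0), (5, 0), (0, 4)],
                  [math.hypot(2, 1.5), math.hypot(3, 1.5), math.hypot(2, 2.5)])
```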
In one example, the first detection module includes a camera, and the image captured by the camera is compared with pre-stored room images and multimedia device images, so that the current room the wearer is in, the wearer's position in the room, and whether a multimedia device is in front of the wearer (and, if so, which one) can be determined.
In one example, the first detection module includes an acceleration sensor or a gyroscope sensor, such as a three-axis gyroscope, a six-axis gyroscope, a nine-axis gyroscope, an accelerometer, an angle sensor, etc., which can determine the current position of the wearer by detecting the movement of the wearer.
In one example, the first detection module includes a positioning module, such as a GPS positioning module, an inertial navigation positioning module, or the like, by which the current position of the wearer can be determined.
In one example, the first detection module includes one or more of a laser sensor, an infrared sensor, and an ultrasonic sensor. With such a sensor, both the multimedia device in front of the smart glasses and information about the room can be detected. For example, if the first detection module includes a laser sensor, the sensor emits laser light in multiple directions and measures the distance from the light reflected back in each direction, yielding data about the room the wearer is currently in; comparing this data with pre-stored room information determines the room. As another example, the smart glasses send an infrared signal forward, an infrared module in the multimedia device sends a feedback signal after receiving it, and the smart glasses determine from the feedback signal which multimedia device is in front of them.
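The room-matching step above can be sketched as a nearest-profile comparison. The room names, direction count, and distances are illustrative assumptions; a real system would need scan alignment and a match threshold:

```python
# Illustrative sketch: identify the current room by comparing a laser range
# scan (distances measured in several fixed directions) against pre-stored
# per-room profiles, picking the profile with the smallest squared error.

def identify_room(scan, room_profiles):
    """scan: list of measured distances (m), one per direction.
    room_profiles: dict of room name -> stored distances in the same order."""
    def sq_error(profile):
        return sum((m - p) ** 2 for m, p in zip(scan, profile))
    return min(room_profiles, key=lambda name: sq_error(room_profiles[name]))

profiles = {
    "living room": [4.0, 3.0, 4.0, 3.0],  # distances (m) in four directions
    "bedroom": [2.5, 2.0, 2.5, 2.0],
}
room = identify_room([3.9, 3.1, 4.0, 2.9], profiles)
```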
In one example of the present disclosure, the smart glasses include a second detection module. The second detection module is a camera.
In one example, the second detection module includes only one camera, and based on the image captured by the camera, the multimedia devices within the field of view of the camera can be determined.
In one example, the second detection module includes a first camera disposed on the left side of the smart eyewear and a second camera disposed on the right side of the smart eyewear. The field range of the first camera is a first field range, the field range of the second camera is a second field range, and the overlapping area of the first field range and the second field range is a current focus area. Based on the images taken by the first camera and the second camera, the multimedia device in the current focus area can be determined.
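The focus-area test above can be sketched as a check that a device is visible to both cameras at once. Bearings and the field-of-view half-angle are illustrative assumptions:

```python
# Sketch: decide whether a device lies inside the current focus area,
# modeled as the overlap of the two cameras' horizontal fields of view.
# Bearings are in degrees relative to each camera's optical axis.

def in_focus_area(bearing_left_deg: float, bearing_right_deg: float,
                  half_fov_deg: float = 30.0) -> bool:
    """A device is in the focus area only if both cameras can see it,
    i.e. its bearing falls within each camera's field of view."""
    return (abs(bearing_left_deg) <= half_fov_deg and
            abs(bearing_right_deg) <= half_fov_deg)
```

A device almost straight ahead satisfies both checks; one far off to the side is seen by at most one camera and is excluded.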
In one example of the present disclosure, the smart glasses include a third detection module, which is used to assist in determining the target multimedia device. The third detection module may include one or more of a laser sensor, an infrared sensor, and an ultrasonic sensor.
In one example of the present disclosure, the smart glasses include a fourth detection module. The fourth detection module can be used for detecting the hand motion of the wearer. The fourth detection module may include one or more of a camera, a laser sensor, an infrared sensor, an ultrasonic sensor, and a millimeter wave sensor. For example, a camera is provided on the frame of the smart glasses to take images in front of the smart glasses, and analyzing the images can determine the gestures and hand positions of the wearer. For example, a millimeter wave sensor is arranged on the frame of the intelligent glasses, and the position of the hand of the wearer can be determined according to sensing data.
In one example of the present disclosure, the smart glasses and the multimedia device are bound in advance: the identifier, function information, and the like of the multimedia device are stored in the smart glasses, and the multimedia device allows the smart glasses to control it.
The embodiment of the application provides a playing control method of smart glasses, and as shown in fig. 1, the playing control method includes steps S11-S12.
Step S11, determining a target multimedia device among the at least one multimedia device, where the target multimedia device is a multimedia device concerned by the wearer of the smart glasses.
Step S12, setting the sound source of the smart glasses to the target multimedia device.
In one example of the disclosure, the smart glasses may be connected to the target multimedia device via Bluetooth, WiFi, UWB, or another communication method for information transmission between them. Through this connection, the smart glasses can receive the audio information of the target multimedia device, with the sound source of the smart glasses set to the target multimedia device.
In this example, the smart glasses can automatically determine the multimedia device the wearer is paying attention to and then switch their sound source, so that the smart glasses are suitable for multiple application scenarios and better meet user needs.
In the disclosed embodiments, the rear of the smart glasses is the direction toward the wearer, and the front of the smart glasses is the opposite direction. "In front of the smart glasses" in the embodiments of the present disclosure means ahead of the smart glasses and within a preset sector in the left-right direction, for example, within 60 degrees to the left and right of straight ahead.
In one example of the present disclosure, determining a target multimedia device among at least one multimedia device includes: the target multimedia device is determined in the at least one multimedia device through a first detection module arranged in the smart glasses. The first detection module includes at least any one of: UWB module, camera, laser sensor, infrared sensor, ultrasonic sensor, acceleration sensor, gyroscope.
In this example, the smart glasses can determine the multimedia device the wearer is paying attention to through the first detection module and then switch their sound source, which improves the interactive experience and better meets user needs.
In one example of the present disclosure, determining the target multimedia device among the at least one multimedia device includes: detecting the relative position between the smart glasses and each multimedia device, determining from that relative position the multimedia device located directly in front of the smart glasses, and determining that multimedia device as the target multimedia device.
For example, as shown in fig. 2, there are two multimedia devices in a room 1, which are the multimedia device 1 and the multimedia device 2, respectively, a user wears smart glasses, and when it is detected that the multimedia device right in front of the smart glasses is the multimedia device 1, the multimedia device 1 is determined as a target multimedia device.
In this example, the smart glasses can determine the multimedia device the wearer is paying attention to from the relative positions of the smart glasses and the multimedia devices and then switch their sound source, which improves the interactive experience and better meets user needs.
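The "directly in front" selection can be sketched as picking, among devices within the front sector, the one whose bearing is closest to straight ahead. The device names, bearings, and 60° sector width are illustrative assumptions:

```python
# Sketch of choosing the device directly in front of the smart glasses.
# devices maps a device name to its bearing in degrees, where 0 means
# straight ahead and positive/negative mean right/left of the wearer.

def pick_front_device(devices, sector_half_angle_deg=60.0):
    """Return the name of the device closest to straight ahead within the
    front sector, or None if no device is in front of the glasses."""
    in_front = {name: abs(b) for name, b in devices.items()
                if abs(b) <= sector_half_angle_deg}
    if not in_front:
        return None
    return min(in_front, key=in_front.get)

target = pick_front_device({"multimedia device 1": 8.0,
                            "multimedia device 2": -50.0})
```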
In one example of the present disclosure, determining the target multimedia device among the at least one multimedia device includes: detecting the room the smart glasses are currently in and the relative position between the smart glasses and each multimedia device. The multimedia devices in the current room are taken as candidate multimedia devices. If there is exactly one candidate, it is determined as the target multimedia device. If there are multiple candidates, the candidate located in front of the smart glasses is identified from the relative positions between the smart glasses and the candidates and determined as the target multimedia device.
In one example, as shown in fig. 3, there is a multimedia device in the room 2, the multimedia device is a multimedia device 3, the user wears smart glasses, detects that there is a multimedia device (i.e. multimedia device 3) in the room, regards the detected multimedia device 3 as a candidate multimedia device, and determines the multimedia device 3 as the target multimedia device directly since the number of candidate multimedia devices is one.
In another example, as shown in fig. 2, there are two multimedia devices in room 1, multimedia device 1 and multimedia device 2, respectively, the user wears smart glasses, detects that there are 2 multimedia devices in the room where the wearer is located (i.e. multimedia device 1 and multimedia device 2), takes these 2 multimedia devices as candidate multimedia devices, and at the same time, detects the relative positions of the wearer and the candidate multimedia devices, determines that the candidate multimedia device located right in front of the wearer is multimedia device 1, and determines multimedia device 1 as the target multimedia device.
In this example, the target multimedia device may be determined in different ways according to the number of multimedia devices in the room where the smart glasses are located, so that the smart glasses play the audio of the target multimedia device.
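The two branches of this candidate logic can be sketched as follows; the device names and bearings are made up for the example:

```python
# Non-normative sketch of the candidate selection in this example: a single
# device in the wearer's room is the target outright; with several devices,
# fall back to the one closest to straight ahead.

def choose_target(room_devices):
    """room_devices: dict of device name -> bearing in degrees (0 = straight
    ahead) for the devices in the wearer's current room."""
    if not room_devices:
        return None
    if len(room_devices) == 1:
        return next(iter(room_devices))  # single candidate: target directly
    # Multiple candidates: pick the one in front of the glasses.
    return min(room_devices, key=lambda name: abs(room_devices[name]))
```

With one device (fig. 3), it is chosen regardless of bearing; with two (fig. 2), the one in front wins.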
In one example of the present disclosure, determining a target multimedia device among at least one multimedia device includes: and performing auxiliary positioning through a second detection module arranged in the intelligent glasses to determine the target multimedia equipment. That is, in the embodiment of the present disclosure, the target multimedia device may be determined in at least one multimedia device in combination with the detection results of the first detection module and the second detection module.
In one example of the disclosure, the first detection module is a UWB module, the second detection module is a camera, and the target multimedia device is determined in the at least one multimedia device by combining detection results of the first detection module and the second detection module.
In one example, the second detection module includes only one camera, and the current focus area is the camera's field of view. Based on the image captured by the camera, the multimedia devices within that field of view can be determined. For example, with the camera located at the center of the frame of the smart glasses, the UWB module determines that multiple multimedia devices are in front of the wearer, and, combined with the captured image, the multimedia device at the center of the image is determined as the target multimedia device. As another example, multiple multimedia devices in front of the wearer are determined from the captured images, and, combined with the distances measured by the UWB module of the first detection module, the multimedia device closest to the smart glasses is determined as the target multimedia device.
In one example, the second detection module comprises a first camera and a second camera, the field of view range of the first camera is a first field of view range, and the field of view range of the second camera is a second field of view range; the current focus area is located within an overlap region of the first field of view range and the second field of view range. Based on the images taken by the first camera and the second camera, the multimedia devices in the current focus area can be determined. For example, a plurality of multimedia devices located in front of the wearer are determined by the UWB module, and if only one of the multimedia devices is determined to be located in the current focus area in combination with the image captured by the second detection module, the multimedia device located in the current focus area is determined to be the target multimedia device.
In the embodiments of the present disclosure, the smart glasses may further include a third detection module, and the target multimedia device may be determined among the at least one multimedia device by combining the detection results of the first, second, and third detection modules. The third detection module may be one or more of a laser sensor, an infrared sensor, and an ultrasonic sensor.
In one example of the present disclosure, the first detection module is a UWB module and the second detection module is a camera. Determining the target multimedia device among the at least one multimedia device comprises: judging whether the current focus area comprises two or more multimedia devices; if yes, triggering a third detection module arranged in the intelligent glasses to acquire auxiliary information; and determining the target multimedia equipment from two or more multimedia equipment by combining the first detection module, the second detection module and the third detection module.
For example, the UWB module determines that multiple multimedia devices (including a first multimedia device) are in front of the wearer, and the image captured by the second detection module shows that two or more of them are in the current focus area; the third detection module is then triggered to emit a laser signal. The wearer turns their head so that the laser signal points at the desired multimedia device. If the wearer wants the first multimedia device as the target, they turn their head to send a first laser signal to it; the first multimedia device responds by sending back a second laser signal, and after receiving it, the smart glasses acquire the identifier of the first multimedia device from the second laser signal and determine it as the target multimedia device.
In one example, triggering the third detection module disposed in the smart glasses to acquire the auxiliary information may be an automatic trigger, that is, when it is determined that the current focus area includes two or more multimedia devices, the third detection module is automatically triggered to acquire the auxiliary information.
In an example, the third detection module may be triggered by voice: when the current focus area is determined to include two or more multimedia devices, the speaker of the smart glasses issues a voice prompt asking whether the user wants to start the third detection module to acquire auxiliary information. If the voice response collected through the microphone of the smart glasses is "start", the third detection module is triggered to acquire the auxiliary information, and the target multimedia device is determined from the two or more multimedia devices by combining the first, second, and third detection modules.
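The overall disambiguation flow can be sketched as follows. The third-module interaction is abstracted as a callback, since the patent describes several variants (automatic or voice-triggered, laser- or infrared-based); all names here are hypothetical:

```python
# Sketch of the disambiguation flow: an unambiguous focus area resolves
# directly, while two or more candidate devices trigger the third detection
# module to break the tie (e.g. via the laser exchange described above).

def resolve_target(devices_in_focus, trigger_third_module):
    """devices_in_focus: candidate device names inside the focus area.
    trigger_third_module: callback that returns the device the wearer
    selects with the assisting sensor."""
    if len(devices_in_focus) == 1:
        return devices_in_focus[0]  # unambiguous: no assistance needed
    if len(devices_in_focus) >= 2:
        return trigger_third_module(devices_in_focus)
    return None  # nothing in focus

chosen = resolve_target(["device A", "device B"], lambda cands: cands[0])
```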
In one example of the present disclosure, when the sound source of the smart glasses is set to the target multimedia device, the speaker of the target multimedia device is also turned off. That is, the target multimedia device is muted and the smart glasses play the audio, which saves resources, reduces sound interference, and improves the user experience.
In one example of the present disclosure, the acoustic module of the smart glasses includes headphones.
In one example of the present disclosure, setting an audio source of smart glasses as a target multimedia device includes: and setting a sound source of the earphone of the intelligent glasses as the target multimedia equipment.
In one example of the present disclosure, when the sound source of the earphone of the smart glasses is set as the target multimedia device, the speaker of the target multimedia device is also turned off.
For example, a user wears the smart glasses of the present application; when it is detected that the wearer is paying attention to a certain multimedia device, that device is determined as the target multimedia device, and the smart glasses establish a communication connection with it to acquire the sound source. The speaker of the target multimedia device is turned off, and the earphone of the smart glasses plays the acquired sound source. In this way, a private playing mode is realized, in which the sound of the target multimedia device is played through the earphone of the smart glasses. In another example, multiple users each wear smart glasses, and the earphones of each pair play the sound source of the multimedia device; each pair of smart glasses can then be set individually when playing the sound source, for example to adjust the volume, or even to select a different audio track or set a different language.
In this example, by turning off the speaker of the multimedia device and playing its sound source through the earphone of the smart glasses, the earphone plays the audio of the multimedia device the user is paying attention to. This enables private playback of the multimedia device, suits scenarios that require it, improves the user experience, and meets user needs.
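A minimal sketch of this private playing mode follows. The `MultimediaDevice` and `SmartGlasses` classes and their fields are illustrative assumptions; a real system would issue the mute and routing commands over the communication connection established with the target device:

```python
# Hedged sketch: mute the target device's own speaker and route its sound
# source to the glasses' earphone. Per-wearer settings (e.g. volume) stay
# local to each pair of glasses.

class MultimediaDevice:
    def __init__(self, name, stream):
        self.name = name
        self.stream = stream          # handle to the device's audio stream
        self.speaker_on = True

class SmartGlasses:
    def __init__(self):
        self.earphone_source = None   # current sound source of the earphone
        self.volume = 50              # per-wearer setting, adjustable locally

    def enter_private_mode(self, device):
        device.speaker_on = False             # mute the target device
        self.earphone_source = device.stream  # play its audio in the earphone
```

Because each pair of glasses holds its own settings, several wearers can listen to the same device with individual volume (or, in a fuller model, audio track and language) choices, as the example above describes.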
In one example of the present disclosure, setting the sound source of the smart glasses as the target multimedia device includes: switching the sound source of the smart glasses from a first multimedia device to a second multimedia device when the target multimedia device changes from the first multimedia device to the second multimedia device.
For example, as shown in fig. 4, there are two multimedia devices in room 3: multimedia device 4 and multimedia device 5. The user wears the smart glasses. When the wearer is at location A, it is detected that the wearer is paying attention to multimedia device 4; that is, the target multimedia device is multimedia device 4, so the sound source of the smart glasses is multimedia device 4. When the wearer moves from location A to location B, the change in the wearer's position is detected and the device the wearer is paying attention to changes from multimedia device 4 to multimedia device 5; that is, the target multimedia device changes from multimedia device 4 to multimedia device 5, and the sound source of the smart glasses is switched to multimedia device 5.
In this example, the sound source of the smart glasses is switched promptly when the multimedia device the user is paying attention to changes; in other words, the smart glasses switch sound sources according to the user's needs, which makes the method applicable to a variety of scenarios and better satisfies user requirements.
In one example of the present disclosure, the method further includes: controlling the first multimedia device to stop playing when the target multimedia device changes from the first multimedia device to the second multimedia device.
For example, when the multimedia device the wearer of the smart glasses is paying attention to changes from a first multimedia device to a second multimedia device, the sound source of the smart glasses is switched from the first multimedia device to the second, and the first multimedia device is controlled to stop playing.
In this example, when the multimedia device the user is paying attention to changes, the smart glasses automatically turn off the previously attended device, which saves resources, reduces user operations, reduces interference, and improves the user experience.
In one example of the present disclosure, switching the sound source of the smart glasses from the first multimedia device to the second multimedia device includes: acquiring playing information of the first multimedia device, controlling the second multimedia device to continue playing the multimedia content according to the playing information, and switching the sound source of the smart glasses from the first multimedia device to the second multimedia device. The playing information of the first multimedia device includes an identification of the multimedia content being played by the first multimedia device and the playing progress corresponding to that content.
In an example of the present disclosure, the smart glasses may acquire the playing information of the first multimedia device, such as the name of the content being played and its playing progress, through the communication connection with the first multimedia device. When the sound source of the smart glasses is switched from the first multimedia device to the second multimedia device, the second multimedia device can be controlled to continue playing that content according to the previously acquired playing information.
As an example of a scenario this embodiment may implement: when the wearer of the smart glasses watches a television program on the bedroom television, the smart glasses may obtain the playing information of the program. When the wearer moves to the living room to continue watching, that is, when the wearer pays attention to the living-room television, the smart glasses control the living-room television to continue playing the content that was playing on the bedroom television. In one example, the smart glasses may also control the bedroom television to stop playing to save resources.
In this example, when the multimedia device the user is paying attention to changes, the smart glasses automatically instruct the newly attended device to continue playing the content that was playing on the previously attended device, which simplifies user operations and improves the user experience.
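The hand-off described above (read the playing information, continue on the new device, switch the glasses' sound source, stop the old device) might be sketched as follows. The `Player` fields and methods are illustrative assumptions, not an API from the disclosure:

```python
# Hedged sketch of the content hand-off: playing information is the pair
# (content identification, playing progress), as described in the example.

class Player:
    def __init__(self):
        self.content_id = None
        self.progress = 0.0   # seconds into the content
        self.playing = False

    def play(self, content_id, progress=0.0):
        self.content_id, self.progress, self.playing = content_id, progress, True

    def stop(self):
        self.playing = False

def hand_off(glasses, first, second):
    info = (first.content_id, first.progress)   # acquire playing information
    second.play(*info)                          # continue on the new device
    glasses.earphone_source = second            # switch the glasses' sound source
    first.stop()                                # stop the previously attended device
```

Here `glasses` is any object with an `earphone_source` attribute; in the bedroom/living-room example, `first` would be the bedroom television and `second` the living-room television.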
In one example of the present disclosure, the method further includes steps S21 to S23.
Step S21: projecting a virtual control interface corresponding to the target multimedia device.
In one example of the present disclosure, projecting a virtual control interface corresponding to the target multimedia device includes: acquiring a function list corresponding to the target multimedia device, the function list including at least one function item; and projecting, at a preset distance directly in front of the smart glasses and at a preset size, a virtual control interface corresponding to the target multimedia device, the virtual control interface having controls corresponding one-to-one to the function items.
In one example, the preset distance may be a distance convenient for the wearer to operate at, for example between 15 cm and 30 cm. In one example, the preset distance may be set by the wearer.
In one example, the preset size may be the size of an area the wearer can conveniently operate, for example 30 cm long and 25 cm wide. In one example, the preset size may be set by the wearer.
In one example, the preset distance is adapted to the preset size: the larger the virtual control interface, the farther the preset distance. For example, at a preset distance of 100 cm the virtual control interface may be 100 cm long and 60 cm wide; at a preset distance of 30 cm it may be 30 cm long and 15 cm wide.
In one example, the virtual control interface is rectangular. In one example, the shape of the virtual control interface may also be set by the wearer.
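A sketch of step S21's interface construction follows. The layout rule (evenly spaced rectangular buttons, with the projection distance tied to the interface width) is an assumption chosen to mirror the "larger interface, farther distance" example above; the disclosure only requires one control per function item at a preset distance and size:

```python
# Hedged sketch: build the virtual control interface from a device's function
# list. Distances and sizes are in centimetres, matching the examples.

def build_interface(function_items, width_cm=30.0, height_cm=25.0):
    """Return (distance_cm, buttons) for the projected interface."""
    distance_cm = width_cm  # assumed rule: larger interface -> projected farther
    n = len(function_items)
    slot = width_cm / n     # divide the width evenly among the controls
    buttons = [
        {
            "label": item,
            "x_cm": i * slot + slot / 2,  # horizontal centre of button i
            "w_cm": slot,
            "h_cm": height_cm,
        }
        for i, item in enumerate(function_items)
    ]
    return distance_cm, buttons
```

With the three TV function items from the later example, this yields a left, middle, and right button across a 30 cm wide interface.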
Step S22: detecting the hand motion of the wearer.
In one example, the hand motion of the wearer is detected within a target area, which may be the area directly in front of the smart glasses.
In one example, a fourth detection module detects the hand motion of the wearer within a target area, which is the area directly in front of the smart glasses. The fourth detection module may include one or more of a millimeter-wave sensor, an ultrasonic sensor, a camera, a laser sensor, and an infrared sensor. For example, a camera may be provided on the frame of the smart glasses to capture images in front of the glasses; analyzing those images can determine the wearer's gestures and hand positions. As another example, a millimeter-wave sensor may be arranged on the frame, and the position of the wearer's hand determined from its sensing data. In one example, detecting the hand motion of the wearer may consist of detecting the position to which the wearer's finger moves and determining the wearer's operation on the virtual interface from that position. Those skilled in the art can flexibly define the hand actions as needed, such as a click action or a drag action.
Step S23: determining the wearer's operation on the virtual control interface according to the hand motion of the wearer, and controlling the target multimedia device according to that operation.
For example, as shown in fig. 5, the user wears the smart glasses and is paying attention to a television, which is determined to be the target multimedia device. A function list corresponding to the television is acquired; it contains three function items: playing the content backward (rewind), playing the content forward (fast-forward), and switching between play and pause. A virtual control interface corresponding to the television is projected in front of the user, containing three button controls: the left button rewinds, the right button fast-forwards, and the middle button switches between play and pause. When it is determined from the wearer's hand motion that the wearer has clicked the middle button of the virtual control interface, the television is controlled to pause or resume playing.
In this example, the smart glasses project, in front of the wearer, a virtual control interface for the multimedia device the wearer is paying attention to, and the user can control that device by operating the interface, which improves the user experience.
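Step S23's mapping from a detected fingertip position to a device command might be sketched as follows. The button geometry and command names are hypothetical, mirroring the three-button television example above:

```python
# Hedged sketch: hit-test the fingertip's horizontal position against the
# projected buttons, then issue the matched command to the target device.

def click_to_command(buttons, finger_x_cm):
    """buttons: list of (label, left_cm, width_cm); returns the clicked label."""
    for label, left, width in buttons:
        if left <= finger_x_cm <= left + width:
            return label
    return None

# Three-button TV interface from the example: rewind | play/pause | forward.
TV_BUTTONS = [("rewind", 0.0, 10.0), ("play/pause", 10.0, 10.0), ("forward", 20.0, 10.0)]

def control_tv(tv_state, finger_x_cm):
    """Toggle play/pause when the middle button is clicked; return the command."""
    cmd = click_to_command(TV_BUTTONS, finger_x_cm)
    if cmd == "play/pause":
        tv_state["playing"] = not tv_state["playing"]
    return cmd
```

A fuller implementation would also check the fingertip's depth against the preset projection distance before registering a click; that check is omitted here for brevity.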
An embodiment of the present application provides smart glasses including a memory, a processor, and a play control program stored on the memory and executable on the processor, the play control program being configured to implement the steps of the play control method described above and able to achieve the same technical effects; to avoid repetition, the details are not repeated here.
Referring to fig. 6, an embodiment of the present application provides smart glasses including a memory, a processor, an acoustic module, an optical engine, a first detection module, a second detection module, a third detection module, and a fourth detection module. The acoustic module includes a speaker and/or an earphone. The optical engine is used to project the virtual control interface. The functions of the four detection modules are as described above. The smart glasses further include a play control program stored in the memory and executable on the processor, the play control program being configured to implement the steps of any of the play control methods described above and able to achieve the same technical effects; to avoid repetition, the details are not repeated here.
An embodiment of the present application provides a storage medium on which a program or instructions are stored; when executed by a processor, the program or instructions implement the steps of the play control method described in any of the foregoing embodiments and can achieve the same technical effects. To avoid repetition, the details are not repeated here.
The embodiments in the present disclosure are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the smart-glasses and storage-medium embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the corresponding parts of the description of the method embodiments.
Specific embodiments of the present disclosure have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
Embodiments of the present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement aspects of embodiments of the disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations for embodiments of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of embodiments of the present disclosure are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Various aspects of embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (13)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210287972.8A CN114783432A (en) | 2022-03-22 | 2022-03-22 | Playing control method of intelligent glasses, intelligent glasses and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114783432A true CN114783432A (en) | 2022-07-22 |
Family
ID=82426274
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210287972.8A Pending CN114783432A (en) | 2022-03-22 | 2022-03-22 | Playing control method of intelligent glasses, intelligent glasses and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114783432A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115291734A (en) * | 2022-10-08 | 2022-11-04 | 深圳市天趣星空科技有限公司 | Intelligent equipment control method and system based on intelligent glasses |
| CN115696276A (en) * | 2022-09-28 | 2023-02-03 | 维沃移动通信有限公司 | Service transfer method and service transfer device |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102741653A (en) * | 2009-11-24 | 2012-10-17 | 诺基亚公司 | Installation of magnetic signal sources for positioning |
| CN202976430U (en) * | 2012-07-19 | 2013-06-05 | 青岛海尔电子有限公司 | System for achieving pairing, and control terminal and controlled terminal for achieving pairing |
| CN103997570A (en) * | 2014-05-16 | 2014-08-20 | 深圳市欧珀通信软件有限公司 | Pairing method and system for mobile terminals and mobile terminals |
| US20160313973A1 (en) * | 2015-04-24 | 2016-10-27 | Seiko Epson Corporation | Display device, control method for display device, and computer program |
| CN106708255A (en) * | 2016-10-31 | 2017-05-24 | 宇龙计算机通信科技(深圳)有限公司 | Interaction control method and system for virtual interface |
| CN107113354A (en) * | 2014-12-22 | 2017-08-29 | 皇家飞利浦有限公司 | Communication system including headset equipment |
| CN109144263A (en) * | 2018-08-30 | 2019-01-04 | Oppo广东移动通信有限公司 | Social householder method, device, storage medium and wearable device |
| CN111276145A (en) * | 2020-03-10 | 2020-06-12 | 科通工业技术(深圳)有限公司 | Intelligent voice infrared equipment control system and method |
| CN112770166A (en) * | 2020-12-21 | 2021-05-07 | 深圳市欧瑞博科技股份有限公司 | Multimedia intelligent playing method and device, playing equipment and storage medium |
| CN113534715A (en) * | 2021-07-21 | 2021-10-22 | 歌尔科技有限公司 | Intelligent wearable device, and control method and system of target device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11755122B2 (en) | Hand gesture-based emojis | |
| KR102358939B1 (en) | Non-visual feedback of visual change in a gaze tracking method and device | |
| JP6764490B2 (en) | Mediated reality | |
| US20200341551A1 (en) | Systems and methods for providing customizable haptic playback | |
| RU2684189C2 (en) | Adaptive event recognition | |
| US9939896B2 (en) | Input determination method | |
| WO2019216419A1 (en) | Program, recording medium, augmented reality presentation device, and augmented reality presentation method | |
| JP6703627B2 (en) | Device and related methods | |
| KR102463806B1 (en) | Electronic device capable of moving and method for operating thereof | |
| JP6932206B2 (en) | Equipment and related methods for the presentation of spatial audio | |
| US20170329503A1 (en) | Editing animations using a virtual reality controller | |
| CN111970456B (en) | Shooting control method, device, equipment and storage medium | |
| JP6470374B1 (en) | Program and information processing apparatus executed by computer to provide virtual reality | |
| US10678327B2 (en) | Split control focus during a sustained user interaction | |
| CN106774849B (en) | Virtual reality equipment control method and device | |
| CN114783432A (en) | Playing control method of intelligent glasses, intelligent glasses and storage medium | |
| TW201721231A (en) | Portable virtual reality system | |
| US10257482B2 (en) | Apparatus and methods for bringing an object to life | |
| WO2019138661A1 (en) | Information processing device and information processing method | |
| US20210405686A1 (en) | Information processing device and method for control thereof | |
| CN114779924A (en) | Head-mounted display device, method for controlling household device and storage medium | |
| US12210678B2 (en) | Directing a virtual agent based on eye behavior of a user | |
| US12474814B2 (en) | Displaying an environment from a selected point-of-view | |
| WO2017215198A1 (en) | Method and device for controlling work state | |
| CN114860069A (en) | Method for controlling intelligent equipment by intelligent glasses, intelligent glasses and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220722 |