CN106200956A - Multimedia presentation and interaction method for the field of virtual reality - Google Patents
Multimedia presentation and interaction method for the field of virtual reality Download PDF Info
- Publication number
- CN106200956A (application CN201610533620.0A)
- Authority
- CN
- China
- Prior art keywords
- user
- multimedia
- virtual reality
- sound
- life cycle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a multimedia presentation and interaction method for the field of virtual reality, belonging to the technical field of virtual reality. The method comprises the following steps: step 1, defining the types and attributes of multimedia elements in virtual reality; step 2, defining the presentation mode of multimedia elements in the panoramic 3D space of virtual reality and an implementation script; step 3, determining the interaction between the user and the multimedia elements in virtual reality; step 4, recording and analyzing the user's behavioral data during interaction. The method allows content creators to freely define multimedia elements, including video, images, sound, and text, as well as their direction, distance, size, motion trajectory, and life cycle in the panoramic 3D space; it can also record the user's behavioral data for big-data analysis. Meanwhile, the user can freely choose the viewing direction and interact with the multimedia elements.
Description
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a method for presenting and interacting with multimedia in virtual reality applications.
Background art
Virtual reality technology (Virtual Reality, abbreviated VR) is a computer simulation system that can create a virtual world and let users experience it. The system uses a computer to generate a simulated environment: an interactive three-dimensional dynamic scene with entity-behavior simulation based on multi-source information fusion, in which the user is immersed. Virtual reality technology combines simulation technology with computer graphics, human-machine interface technology, multimedia technology, sensing technology, network technology, and other technologies; it is a challenging cross-disciplinary frontier subject and research field. Virtual reality technology mainly involves the simulated environment, perception, action, and sensing devices. The simulated environment consists of real-time, dynamic, three-dimensional stereoscopic panoramic images and sound generated by a computer. Perception means that an ideal VR system should provide all the kinds of perception a person has: in addition to the visual perception generated by computer graphics technology, there are also hearing, touch, force feedback, motion, and so on. Action refers to the user's head rotation, eye movement, gestures, or other human behaviors; the computer processes the data corresponding to the participant's actions, responds to the user's input in real time, and feeds the results back to the user's senses. Sensing devices are virtual reality interaction devices, such as a head-tracking display device, or simply stereo headphones with a head-tracking sensor.

There are multiple ways to implement head tracking; a common approach is to use multiple sensors. A motion sensor suite usually includes an accelerometer, a gyroscope, and a magnetometer. Each kind of sensor has its own inherent strengths and weaknesses for motion tracking and absolute orientation. Therefore, a common practice is "sensor fusion": combining the signals from each sensor to produce a more accurate motion detection result.
Multimedia is a combination of media, usually including forms such as text, sound, and images. In computer systems, multimedia refers to a human-computer interactive medium for information exchange and dissemination that combines two or more media. The media used include text, pictures, photos, sound, animation, and film, as well as the interactive functions provided by the program.
In the field of virtual reality, because the user is placed in a 360-degree panoramic 3D space, the ways multimedia is presented and interacted with can differ greatly from traditional methods. In the prior art, there is considerable research on multimedia presentation, but research on multimedia interaction is insufficient, so the process of the user interacting with multimedia elements is not smooth, which harms the user experience. In addition, the prior art generally ignores the value of the information contained in the user's behavioral data during interaction with the virtual reality space and its multimedia elements; this behavioral data is not fully utilized, which makes it difficult to analyze the user's individual characteristics.

In view of this, a standard solution for multimedia presentation and interaction in the field of virtual reality is needed.
Summary of the invention
The object of the present invention is to provide a multimedia presentation and interaction method applied to the field of virtual reality.
To achieve the above object, the multimedia presentation and interaction method for the field of virtual reality according to the present invention comprises the following steps:
Step 1, defining the types and attributes of multimedia elements in virtual reality;
Step 2, defining the presentation mode of multimedia elements in the panoramic 3D space of virtual reality and an implementation script;
Step 3, determining the interaction between the user and the multimedia elements in virtual reality;
Step 4, recording and analyzing the user's behavioral data during interaction.
The types of multimedia elements in virtual reality described in step 1 include video, image, sound, model, text, or bullet comments (danmaku). Video includes main storyline video, branch storyline video, or picture-in-picture video; images include poster pictures, in-scene prompt pictures, or product pictures; sound includes panoramic 3D music, sound effects, or voice prompts; models include computer-graphics 3D models added to the scene; text includes subtitles, in-scene prompt text, or shopping-guide text; bullet comments are comments made by users during interaction, including text bullet comments or voice bullet comments.
The attributes of multimedia elements in virtual reality described in step 1 include: the unique identifier of the element, the storage path of the element, the name of the element, the type of the element, the distance of the element, the size of the element, the azimuth (horizontal angle) of the element, the elevation of the element, the life-cycle start of the element, the life-cycle end of the element, and whether the element needs to be triggered to start.
The presentation mode of the multimedia elements in the panoramic 3D space of virtual reality described in step 2 is determined by the attributes of the multimedia elements; a program parses the attribute data to determine the time, position, or size at which a multimedia element appears in the scene.
The implementation script described in step 2 uses JSON format or XML format.
The implementation script described in step 2 defines two attributes: the distance of the element and the size of the element. For a picture, the size of the element defines the original size of the picture, and the distance of the element defines how far away the picture is. For a sound, the size of the element defines the area of the sound source, the distance of the element defines how far away the sound source is, and the volume of the sound is inversely proportional to the square of the distance to the sound source.
The implementation script described in step 2 defines the azimuth and elevation of the element. A picture is rendered into the panoramic 3D visual space by OpenGL; a sound is rendered into the panoramic 3D sound field by HRTF technology.
The implementation script described in step 2 also defines the life-cycle start and life-cycle end of the element. For a picture, the life-cycle start means the picture begins to be displayed, and the life-cycle end means the picture stops being displayed; for a sound, the life-cycle start means the sound begins to play, and the life-cycle end means the sound stops playing.
The implementation script described in step 2 defines whether an element needs to be triggered to start; multimedia elements that need to be triggered are interactive elements. The interaction between the user and the multimedia elements in virtual reality described in step 3 includes the user triggering an interactive element so that it is presented within its life cycle.
The triggering includes Bluetooth controller triggering, voice triggering, gesture triggering, eye triggering, and gaze triggering. A necessary condition for triggering is that the user's line of sight is within the trigger region; the trigger region is determined by the range the interactive element covers in the panoramic 3D space. A program determines whether the user's line of sight intersects the trigger region. The user then starts the interactive element by pressing a Bluetooth controller trigger, by voice command, by gesture recognition technology, by eye-tracking technology, or by gazing for a certain time, so that the interactive element is presented within its life cycle. Gesture recognition technology includes optical tracking, sensor tracking, or a combination of optical and sensor tracking.
The recording and analysis of the user's behavioral data during interaction described in step 4 comprises the following steps: first, record the position and angle of the user's head, or of movable body parts other than the head, throughout the interaction; from the position and angle at each time point, together with the positions of the multimedia elements in the scene, derive the regions and multimedia elements the user is interested in; then record the multimedia elements the user triggers during interaction. The user's behavioral data is recorded, according to a predefined script, in a designated database for analysis or settlement.
The present invention has the following advantages. Compared with the prior art, the multimedia presentation and interaction method for the field of virtual reality according to the present invention allows content creators to freely define multimedia elements, including video, images, sound, and text, as well as their direction, distance, size, motion trajectory, and life cycle in the panoramic 3D space; it can also record the user's behavioral data for big-data analysis. Meanwhile, the user can freely choose the viewing direction and interact with the multimedia elements.
Brief description of the drawings
Fig. 1 is a flow diagram of the multimedia presentation and interaction method for the field of virtual reality according to the present invention.
Detailed description of the invention
The following embodiments are used to illustrate the present invention, but do not limit the scope of the present invention.
As shown in Fig. 1, the multimedia presentation and interaction method for the field of virtual reality according to the present invention comprises the following steps:
Step 1, defining the types and attributes of multimedia elements in virtual reality;
Step 2, defining the presentation mode of multimedia elements in the panoramic 3D space of virtual reality and an implementation script;
Step 3, determining the interaction between the user and the multimedia elements in virtual reality;
Step 4, recording and analyzing the user's behavioral data during interaction.
The types of multimedia elements in virtual reality described in step 1 include video, image, sound, model, text, bullet comments, and so on. Video includes main storyline video, branch storyline video, picture-in-picture video, etc.; images include poster pictures, in-scene prompt pictures, product pictures, etc.; sound includes panoramic 3D music, sound effects, voice prompts, etc.; a model mainly refers to a computer graphics (CG, Computer Graphics) 3D model added to the scene; text refers to subtitles, in-scene prompt text, shopping-guide text, etc. (2D text or 3D text); bullet comments refer to comments made by users during interaction, which can be text bullet comments or voice bullet comments (including fixed and real-time bullet comments).
The attributes of multimedia elements in virtual reality described in step 1 include:
ID: the unique identifier of the element;
Src: the storage path of the element;
Name: the name of the element;
Type: the type of the element;
Distance: the distance of the element;
Size: the size of the element;
Azimuth: the horizontal angle (azimuth) of the element;
Elevation: the elevation of the element;
StartTime: the life-cycle start of the element;
EndTime: the life-cycle end of the element;
Trigger: whether the element needs to be triggered to start.
The presentation mode of the multimedia elements in the panoramic 3D space of virtual reality described in step 2 is determined by the attributes of the multimedia elements; a program parses the attribute data to determine the time, position, size, and so on at which a multimedia element appears in the scene.
The implementation script described in step 2 can use JSON format or XML format. Taking a JSON-format script as an example, a short JSON script is given below, covering the presentation of one picture and the playback of one sound.
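The JSON script itself did not survive in this text. The sketch below is a hypothetical reconstruction, expressed as Python parsing a JSON string: the element names (image1, sound1) and their times, distance, and azimuth are taken from the description that follows, while the file paths, the remaining values, and the exact nesting are assumptions.

```python
import json

# Hypothetical reconstruction of the script described in the text: one picture
# (image1, life cycle 1 s to 10 s, user-triggered) and one sound (sound1,
# 20 s to 30 s, 1 m away at 110 degrees azimuth). Field names follow the
# attribute list above; paths and unlisted values are invented.
script = json.loads("""
{
  "elements": [
    {"ID": 1, "src": "res/image1.png", "name": "image1", "type": 1,
     "distance": 2.0, "size": 1.0, "azimuth": 30, "elevation": 0,
     "StartTime": 1, "EndTime": 10, "trigger": 1},
    {"ID": 2, "src": "res/sound1.wav", "name": "sound1", "type": 3,
     "distance": 1.0, "size": 0.1, "azimuth": 110, "elevation": 0,
     "StartTime": 20, "EndTime": 30, "trigger": 0}
  ]
}
""")

# A program parses these attributes to decide when, where, and how large
# each element appears in the panoramic 3D scene.
by_name = {e["name"]: e for e in script["elements"]}
print(by_name["sound1"]["StartTime"])  # 20
```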
The JSON-format script defines, for the picture element and the sound element, the unique identifier "ID", the storage path "src", the name "name", and the type "type". The enumeration numbers for "type" are predefined as:
| Type id | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Element type | Picture | Video | Sound | Text | Model | Bullet comment |
The implementation script also defines the distance "distance" and size "size" of each multimedia element; these two attributes are parsed differently for pictures and for sounds. For a picture, "size" defines the original size of the picture, and "distance" defines how far away the picture is: the farther the distance, the smaller the picture's apparent size, and vice versa. For a sound, "size" defines the area of the sound source: the smaller the value, the more the sound focuses toward a point source; for example, a person speaking can be approximated as a point source located at the mouth. Conversely, the larger the value, the larger the area of the sound source, becoming diffuse, like thunder rolling across an overcast sky. "distance" defines how far away the sound source is; the volume of the sound is inversely proportional to the square of the distance, i.e., the farther the distance, the lower the volume.
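The inverse-square rule for sound volume can be sketched in a few lines; the normalization (volume 1.0 at 1 meter) is an assumption for illustration:

```python
def volume_at(distance_m, volume_at_1m=1.0):
    """Volume inversely proportional to the square of the source distance."""
    return volume_at_1m / (distance_m ** 2)

print(volume_at(2.0))  # 0.25: doubling the distance quarters the volume
```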
The implementation script defines the azimuth "azimuth" (horizontal angle of the element) and the elevation "elevation" (elevation of the element). A picture is rendered into the panoramic 3D visual space by OpenGL; a sound can be rendered into the panoramic 3D sound field by HRTF technology.
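Rendering by azimuth and elevation implies converting the script's spherical attributes into Cartesian coordinates before handing them to an OpenGL or HRTF renderer. A minimal sketch, with an assumed axis convention (x right, y up, z forward; engines differ):

```python
import math

def spherical_to_cartesian(distance, azimuth_deg, elevation_deg):
    """Convert the script's (distance, azimuth, elevation) attributes into
    x/y/z coordinates for placement in the panoramic 3D space."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)  # right
    y = distance * math.sin(el)                 # up
    z = distance * math.cos(el) * math.cos(az)  # forward
    return x, y, z
```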
The implementation script defines the life-cycle start "StartTime" and life-cycle end "EndTime" of each element, which determine the start and end times of the multimedia element, i.e., its life cycle. For a picture, the life-cycle start means the picture begins to be displayed, and the life-cycle end means it stops being displayed; for a sound, the life-cycle start means the sound begins to play, and the life-cycle end means it stops playing. For example, in the JSON-format script described above, the sound sound1 starts playing at 20 s and stops at 30 s; the sound source is 1 meter away, located at 110 degrees, to the right rear.
The "trigger" attribute defined in the implementation script determines whether an element is presented linearly or is started by a user trigger. For example, in the JSON-format script described above, the life cycle of the picture image1 runs from 1 s to 10 s, but because its "trigger" value is 1, i.e., it is started by a user trigger, image1 will not be presented if the user does not trigger it; only if the user triggers it effectively within its life cycle will image1 be presented within its life cycle.
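The presentation rule just described, linear elements appearing throughout their life cycle while trigger elements appear only after an effective trigger inside it, can be sketched as follows (field names follow the attribute list above):

```python
def is_presented(element, now_s, triggered_at_s=None):
    """Decide whether an element is presented at time now_s.
    element: dict with StartTime, EndTime, trigger (0 or 1);
    triggered_at_s: time of the user's trigger, if any."""
    in_life_cycle = element["StartTime"] <= now_s <= element["EndTime"]
    if not in_life_cycle:
        return False
    if element["trigger"] == 0:          # linear element: always presented
        return True
    # trigger element: needs an effective trigger within its life cycle
    return (triggered_at_s is not None
            and element["StartTime"] <= triggered_at_s <= now_s)

image1 = {"StartTime": 1, "EndTime": 10, "trigger": 1}
print(is_presented(image1, 5))      # False: never triggered
print(is_presented(image1, 5, 3))   # True: triggered at 3 s
```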
The attributes of the multimedia elements and the implementation script thus determine which elements are started by user triggers, i.e., those with "trigger" = 1. These multimedia elements are interactive; in other words, they are interactive elements. The interaction between the user and the multimedia elements in virtual reality described in step 3 includes the user triggering an interactive element so that it is presented within its life cycle.
Triggering can be accomplished in multiple ways, including Bluetooth controller triggering, voice triggering, gesture triggering, eye triggering, and gaze triggering. A necessary condition for triggering is that the user's line of sight is within the trigger region; that is, the user needs to be looking at the interactive element. The trigger region can be determined by the range the interactive element covers in the panoramic 3D space. A program determines whether the user's line of sight intersects the trigger region. The user can then start the interactive element by pressing a Bluetooth controller trigger, by voice command, by gesture recognition technology, by eye-tracking technology, or by gazing for a certain time, so that the interactive element is presented within its life cycle. Gesture triggering is implemented by gesture recognition. Common gesture recognition technologies include optical tracking, sensor tracking, or a combination of the two.
In practical applications, triggering an interactive element can be, for example, confirming a purchase, or selecting a plot branch in a film.

The user's behavioral data when interacting with multimedia elements in the virtual reality space is valuable. For example, in a virtual shopping scene, the regions and products the user is interested in can be analyzed.
The recording and analysis of the user's behavioral data during interaction described in step 4 comprises the following steps. First, the position (x, y, z) and angle (azimuth, elevation) of the user's head throughout the interaction need to be recorded; this information can be obtained from the sensors of the virtual reality device. From the position and angle of the head at each time point, the direction of the user's gaze can be obtained; combined with the positions of the multimedia elements in the scene, the regions and multimedia elements the user is interested in can be derived. It can be seen that if the virtual reality device is equipped with corresponding peripherals, such as gesture recognition or eye tracking, the user interaction information recorded by the system of the present invention is not limited to the position of the head.

Then, the multimedia elements the user triggers during interaction can be recorded. For example, in virtual reality shopping, a triggered multimedia element can correspond to a product the user puts into the shopping cart.
The user's behavioral data can be recorded, according to a predefined script, in a designated database for analysis or settlement.
Although the present invention has been described in detail using general descriptions and specific embodiments, some modifications or improvements can be made on the basis of the present invention, as will be apparent to those skilled in the art. Therefore, such modifications or improvements made without departing from the spirit of the present invention all fall within the scope of protection claimed by the present invention.
Claims (10)
1. A multimedia presentation and interaction method for the field of virtual reality, characterized in that the method comprises the following steps:
Step 1, defining the types and attributes of multimedia elements in virtual reality;
Step 2, defining the presentation mode of multimedia elements in the panoramic 3D space of virtual reality and an implementation script;
Step 3, determining the interaction between the user and the multimedia elements in virtual reality;
Step 4, recording and analyzing the user's behavioral data during interaction.
2. The method of claim 1, characterized in that the types of multimedia elements in virtual reality described in step 1 include video, image, sound, model, text, or bullet comments; video includes main storyline video, branch storyline video, or picture-in-picture video; images include poster pictures, in-scene prompt pictures, or product pictures; sound includes panoramic 3D music, sound effects, or voice prompts; models include computer-graphics 3D models added to the scene; text includes subtitles, in-scene prompt text, or shopping-guide text; bullet comments are comments made by users during interaction, including text bullet comments or voice bullet comments.
3. The method of claim 2, characterized in that the attributes of multimedia elements in virtual reality described in step 1 include: the unique identifier of the element, the storage path of the element, the name of the element, the type of the element, the distance of the element, the size of the element, the azimuth of the element, the elevation of the element, the life-cycle start of the element, the life-cycle end of the element, and whether the element needs to be triggered to start.
4. The method of claim 3, characterized in that the presentation mode of the multimedia elements in the panoramic 3D space of virtual reality described in step 2 is determined by the attributes of the multimedia elements; a program parses the attribute data to determine the time, position, or size at which a multimedia element appears in the scene.
5. The method of claim 4, characterized in that the implementation script described in step 2 uses JSON format or XML format.
6. The method of claim 5, characterized in that the implementation script described in step 2 defines two attributes, the distance of the element and the size of the element; for a picture, the size of the element defines the original size of the picture, and the distance of the element defines how far away the picture is; for a sound, the size of the element defines the area of the sound source, the distance of the element defines how far away the sound source is, and the volume of the sound is inversely proportional to the square of the distance to the sound source.
7. The method of claim 6, characterized in that the implementation script described in step 2 defines the azimuth and elevation of the element; a picture is rendered into the panoramic 3D visual space by OpenGL, and a sound is rendered into the panoramic 3D sound field by HRTF technology; the implementation script described in step 2 also defines the life-cycle start and life-cycle end of the element; for a picture, the life-cycle start means the picture begins to be displayed, and the life-cycle end means it stops being displayed; for a sound, the life-cycle start means the sound begins to play, and the life-cycle end means it stops playing.
8. The method of claim 7, characterized in that the implementation script described in step 2 defines whether an element needs to be triggered to start, and multimedia elements that need to be triggered are interactive elements; the interaction between the user and the multimedia elements in virtual reality described in step 3 includes the user triggering an interactive element so that it is presented within its life cycle.
9. The method of claim 8, characterized in that the triggering includes Bluetooth controller triggering, voice triggering, gesture triggering, eye triggering, and gaze triggering; a necessary condition for triggering is that the user's line of sight is within the trigger region; the trigger region is determined by the range the interactive element covers in the panoramic 3D space; a program determines whether the user's line of sight intersects the trigger region; the user then starts the interactive element by pressing a Bluetooth controller trigger, by voice command, by gesture recognition technology, by eye-tracking technology, or by gazing for a certain time, so that the interactive element is presented within its life cycle; gesture recognition technology includes optical tracking, sensor tracking, or a combination of optical and sensor tracking.
10. The method of claim 9, characterized in that the recording and analysis of the user's behavioral data during interaction described in step 4 comprises the following steps: first, record the position and angle of the user's head, or of movable body parts other than the head, throughout the interaction; from the position and angle at each time point, together with the positions of the multimedia elements in the scene, derive the regions and multimedia elements the user is interested in; then record the multimedia elements the user triggers during interaction; the user's behavioral data is recorded, according to a predefined script, in a designated database for analysis or settlement.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610533620.0A CN106200956A (en) | 2016-07-07 | 2016-07-07 | Multimedia presentation and interaction method for the field of virtual reality |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610533620.0A CN106200956A (en) | 2016-07-07 | 2016-07-07 | Multimedia presentation and interaction method for the field of virtual reality |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106200956A true CN106200956A (en) | 2016-12-07 |
Family
ID=57473422
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610533620.0A Pending | Multimedia presentation and interaction method for the field of virtual reality | 2016-07-07 | 2016-07-07 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106200956A (en) |
Cited By (24)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106604200A (en) * | 2017-01-19 | 2017-04-26 | 浪潮(苏州)金融技术服务有限公司 | Audio data processing method and apparatus |
| CN106804006A (en) * | 2017-03-07 | 2017-06-06 | 杭州当虹科技有限公司 | A kind of VR panoramic videos barrage comments on put-on method and system |
| CN107135237A (en) * | 2017-07-07 | 2017-09-05 | 三星电子(中国)研发中心 | A realization method and device for presenting target enhanced information |
| CN107168521A (en) * | 2017-04-10 | 2017-09-15 | 北京小鸟看看科技有限公司 | Viewing guidance method, device and head-mounted display apparatus |
| CN107197339A (en) * | 2017-04-10 | 2017-09-22 | 北京小鸟看看科技有限公司 | Display control method, device and the head-mounted display apparatus of film barrage |
| CN107229332A (en) * | 2017-05-24 | 2017-10-03 | 福州岂微网络科技有限公司 | The edit methods and terminal of a kind of virtual reality interaction content |
| CN107229340A (en) * | 2017-06-29 | 2017-10-03 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
| CN107300972A (en) * | 2017-06-15 | 2017-10-27 | 北京小鸟看看科技有限公司 | The method of comment in display device is worn, device and wear display device |
| CN107357416A (en) * | 2016-12-30 | 2017-11-17 | 长春市睿鑫博冠科技发展有限公司 | A kind of human-computer interaction device and exchange method |
| WO2018019272A1 (en) * | 2016-07-29 | 2018-02-01 | 成都理想境界科技有限公司 | Method and apparatus for realizing augmented reality on the basis of plane detection |
| CN107750036A (en) * | 2017-10-31 | 2018-03-02 | 北京酷我科技有限公司 | A kind of method for the simulation panorama audio that can customize |
| CN108460839A (en) * | 2017-02-20 | 2018-08-28 | 王素萍 | A kind of editing machine of AR applications |
| WO2018227502A1 (en) * | 2017-06-15 | 2018-12-20 | Tencent Technology (Shenzhen) Company Limited | System and method of instantly previewing immersive content |
| CN109240496A (en) * | 2018-08-24 | 2019-01-18 | 中国传媒大学 | A kind of acousto-optic interactive system based on virtual reality |
| CN109308742A (en) * | 2018-08-09 | 2019-02-05 | 重庆爱奇艺智能科技有限公司 | A kind of method and apparatus running 2D application in the 3D scene of virtual reality |
| WO2019076264A1 (en) * | 2017-10-19 | 2019-04-25 | 华为技术有限公司 | Text display method and device in virtual reality, and virtual reality apparatus |
| CN110597392A (en) * | 2019-07-31 | 2019-12-20 | 上海上业信息科技股份有限公司 | Interaction method based on VR simulation world |
| CN111756616A (en) * | 2019-03-28 | 2020-10-09 | 南宁富桂精密工业有限公司 | Method and device for setting multi-user virtual reality chat environment |
| CN111797820A (en) * | 2020-09-09 | 2020-10-20 | 北京神州泰岳智能数据技术有限公司 | Video data processing method and device, electronic equipment and storage medium |
| CN111815747A (en) * | 2020-07-03 | 2020-10-23 | 姚福来 | Method for constructing movie and television characters by using 3D models of different objects |
| CN112929685A (en) * | 2021-02-02 | 2021-06-08 | 广州虎牙科技有限公司 | Interaction method and device for VR live broadcast room, electronic equipment and storage medium |
| CN113709543A (en) * | 2021-02-26 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Video processing method and device based on virtual reality, electronic equipment and medium |
| CN116233382A (en) * | 2022-01-07 | 2023-06-06 | 深圳看到科技有限公司 | Three-dimensional scene interaction video generation method and generation device based on scene elements |
| CN116861034A (en) * | 2023-07-03 | 2023-10-10 | 北京河图联合创新科技有限公司 | Space-time data processing method and device in meta-universe scene and electronic equipment |
- 2016-07-07: CN application CN201610533620.0A, published as CN106200956A, status: Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090241092A1 (en) * | 2008-03-24 | 2009-09-24 | Nokia Corporation | Apparatus, methods, and computer program products providing improved application development for electronic devices |
| CN102884490A (en) * | 2010-03-05 | 2013-01-16 | 索尼电脑娱乐美国公司 | Maintaining multiple views on a shared stable virtual space |
| CN104298722A (en) * | 2014-09-24 | 2015-01-21 | 张鸿勋 | Multimedia interaction system and method |
| CN104506894A (en) * | 2014-12-22 | 2015-04-08 | 合一网络技术(北京)有限公司 | Method and device for evaluating multi-media resources |
| CN105425967A (en) * | 2015-12-16 | 2016-03-23 | 中国科学院西安光学精密机械研究所 | Sight tracking and human eye region-of-interest positioning system |
| CN105487673A (en) * | 2016-01-04 | 2016-04-13 | 京东方科技集团股份有限公司 | Man-machine interactive system, method and device |
Non-Patent Citations (1)
| Title |
|---|
| He Zhengguo, Zhang Yufu: "Research and Implementation of an Interactive Virtual Reality Multimedia Classroom Based on VRML", Journal of Liupanshui Teachers College * |
Cited By (34)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018019272A1 (en) * | 2016-07-29 | 2018-02-01 | 成都理想境界科技有限公司 | Method and apparatus for realizing augmented reality on the basis of plane detection |
| CN107357416A (en) * | 2016-12-30 | 2017-11-17 | 长春市睿鑫博冠科技发展有限公司 | Human-computer interaction device and interaction method |
| CN106604200A (en) * | 2017-01-19 | 2017-04-26 | 浪潮(苏州)金融技术服务有限公司 | Audio data processing method and apparatus |
| CN108460839A (en) * | 2017-02-20 | 2018-08-28 | 王素萍 | Editor for AR applications |
| CN106804006A (en) * | 2017-03-07 | 2017-06-06 | 杭州当虹科技有限公司 | VR panoramic video bullet-screen comment delivery method and system |
| CN107197339A (en) * | 2017-04-10 | 2017-09-22 | 北京小鸟看看科技有限公司 | Display control method and device of film bullet screen, and head-mounted display equipment |
| CN107168521A (en) * | 2017-04-10 | 2017-09-15 | 北京小鸟看看科技有限公司 | Film viewing guidance method and device, and head-mounted display equipment |
| CN107168521B (en) * | 2017-04-10 | 2020-06-23 | 北京小鸟看看科技有限公司 | Film viewing guide method and device and head-mounted display equipment |
| CN107197339B (en) * | 2017-04-10 | 2019-12-31 | 北京小鸟看看科技有限公司 | Display control method and device of film bullet screen and head-mounted display equipment |
| CN107229332A (en) * | 2017-05-24 | 2017-10-03 | 福州岂微网络科技有限公司 | Editing method and terminal for virtual reality interactive content |
| CN107300972A (en) * | 2017-06-15 | 2017-10-27 | 北京小鸟看看科技有限公司 | Method and device for displaying comments in a head-mounted display device, and head-mounted display device |
| WO2018227502A1 (en) * | 2017-06-15 | 2018-12-20 | Tencent Technology (Shenzhen) Company Limited | System and method of instantly previewing immersive content |
| US10901499B2 (en) | 2017-06-15 | 2021-01-26 | Tencent Technology (Shenzhen) Company Limited | System and method of instantly previewing immersive content |
| CN107229340A (en) * | 2017-06-29 | 2017-10-03 | 联想(北京)有限公司 | Information processing method and electronic equipment |
| CN107229340B (en) * | 2017-06-29 | 2020-04-24 | 联想(北京)有限公司 | Information processing method and electronic equipment |
| CN107135237A (en) * | 2017-07-07 | 2017-09-05 | 三星电子(中国)研发中心 | Realization method and device for presenting target augmented information |
| WO2019076264A1 (en) * | 2017-10-19 | 2019-04-25 | 华为技术有限公司 | Text display method and device in virtual reality, and virtual reality apparatus |
| US11394947B2 (en) | 2017-10-19 | 2022-07-19 | Huawei Technologies Co., Ltd. | Text display method and apparatus in virtual reality, and virtual reality device |
| CN107750036A (en) * | 2017-10-31 | 2018-03-02 | 北京酷我科技有限公司 | Customizable simulated panoramic audio method |
| CN109308742A (en) * | 2018-08-09 | 2019-02-05 | 重庆爱奇艺智能科技有限公司 | Method and apparatus for running a 2D application in a virtual reality 3D scene |
| CN109240496A (en) * | 2018-08-24 | 2019-01-18 | 中国传媒大学 | Sound and light interactive system based on virtual reality |
| CN109240496B (en) * | 2018-08-24 | 2021-07-16 | 中国传媒大学 | A sound and light interactive system based on virtual reality |
| CN111756616A (en) * | 2019-03-28 | 2020-10-09 | 南宁富桂精密工业有限公司 | Method and device for setting multi-user virtual reality chat environment |
| CN110597392A (en) * | 2019-07-31 | 2019-12-20 | 上海上业信息科技股份有限公司 | Interaction method based on VR simulation world |
| CN110597392B (en) * | 2019-07-31 | 2023-06-23 | 上海上业信息科技股份有限公司 | Interaction method based on VR simulation world |
| CN111815747A (en) * | 2020-07-03 | 2020-10-23 | 姚福来 | Method for constructing movie and television characters by using 3D models of different objects |
| CN111797820A (en) * | 2020-09-09 | 2020-10-20 | 北京神州泰岳智能数据技术有限公司 | Video data processing method and device, electronic equipment and storage medium |
| CN112929685A (en) * | 2021-02-02 | 2021-06-08 | 广州虎牙科技有限公司 | Interaction method and device for VR live broadcast room, electronic equipment and storage medium |
| CN112929685B (en) * | 2021-02-02 | 2023-10-17 | 广州虎牙科技有限公司 | Interaction method and device for VR live broadcast room, electronic device and storage medium |
| CN113709543A (en) * | 2021-02-26 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Video processing method and device based on virtual reality, electronic equipment and medium |
| CN113709543B (en) * | 2021-02-26 | 2024-06-25 | 腾讯科技(深圳)有限公司 | Video processing method and device based on virtual reality, electronic equipment and medium |
| CN116233382A (en) * | 2022-01-07 | 2023-06-06 | 深圳看到科技有限公司 | Three-dimensional scene interaction video generation method and generation device based on scene elements |
| CN116233382B (en) * | 2022-01-07 | 2024-09-20 | 深圳看到科技有限公司 | Three-dimensional scene interaction video generation method and generation device based on scene elements |
| CN116861034A (en) * | 2023-07-03 | 2023-10-10 | 北京河图联合创新科技有限公司 | Space-time data processing method and device in meta-universe scene and electronic equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106200956A (en) | Multimedia presentation and interaction method in the field of virtual reality | |
| AU2021261950B2 (en) | Virtual and augmented reality instruction system | |
| CN105339867B (en) | Object display with visual realism | |
| CN105612478B (en) | User interface programmatic scaling | |
| KR102357633B1 (en) | Conversation detection | |
| US9317486B1 (en) | Synchronizing playback of digital content with captured physical content | |
| US11430186B2 (en) | Visually representing relationships in an extended reality environment | |
| CN104471511A (en) | Touchless user interface | |
| CN109191940B (en) | Interaction method based on a smart device, and smart device | |
| KR20140128428A (en) | Method and system of providing interactive information | |
| TW200541330A (en) | Method and system for real-time interactive video | |
| CN104375778A (en) | Intelligent interactive aquarium display system | |
| US20210166461A1 (en) | Avatar animation | |
| CN109191939B (en) | Three-dimensional projection interaction method based on a smart device, and smart device | |
| US12515137B2 (en) | In-environment reporting of abuse in a virtual environment | |
| CN111414506A (en) | Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium | |
| Cho et al. | Realityreplay: Detecting and replaying temporal changes in situ using mixed reality | |
| CN105074752A (en) | 3D mobile and connected TV ad trafficking system | |
| US10417356B1 (en) | Physics modeling for interactive content | |
| CN113282167B (en) | Interaction method and device of head-mounted display equipment and head-mounted display equipment | |
| Tseng | Intelligent augmented reality system based on speech recognition | |
| Zhang et al. | Augmenting conversations with comic-style word balloons | |
| CN110850976A (en) | Virtual reality projection and retrieval system based on environment perception | |
| CN117519466A (en) | Control method, computer device and storage medium for augmented reality device | |
| CN103810932A (en) | Virtual starry sky teaching device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20161207 |