US20180335832A1 - Use of virtual-reality systems to provide an immersive on-demand content experience
- Publication number
- US20180335832A1 (application US15/599,346)
- Authority
- US
- United States
- Prior art keywords
- virtual
- user
- content
- demand content
- users
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
- G06T19/006—Mixed reality
Definitions
- the present disclosure relates generally to video content distribution, and more particularly, but not exclusively, to presentation of shared on-demand content in a virtual-reality system.
- Virtual-reality headsets are becoming cheaper and more readily available to the everyday user, and they could soon be a common electronic device in many households. These headsets allow users to experience content in a way that makes them feel as though they are actually participating in the content. Such hands-on experiences have been utilized in video gaming, movie watching, personal interactions, and other interactive and immersive-type content. However, since only one person is typically wearing the headset at a given time, such experiences can be rather lonesome and secluded, which minimizes the joy of interacting with others while interacting with the content. It is with respect to these and other considerations that the embodiments herein have been made.
- embodiments are directed toward systems and methods of providing an interactive atmosphere for sharing on-demand content among a plurality of users in a virtual-reality environment, and in particular to a virtual theater environment.
- Each of a plurality of users utilizes a content receiver or virtual-reality headset, or a combination thereof, to receive on-demand content that is shared between the users.
- Each respective content receiver collects virtual-reality information associated with the user of that respective content receiver.
- This virtual-reality information includes movement information that describes the movement of the user and look information that identifies a virtual look of the user.
- the content receivers share this virtual-reality information with each other so that each content receiver can generate a virtual theater environment specific for its respective user.
- the virtual theater environment includes a plurality of seats, a virtual screen, and a stage from the perspective of the respective user.
- the shared on-demand content is displayed on the virtual screen and virtual renderings of the other users are displayed in the virtual theater environment based on the movement information of those particular users.
- as each user physically moves his or her body, each virtual theater environment adjusts to accommodate these movements throughout the virtual theater environment.
- This virtual theater environment allows users to consume the same on-demand content together and to interact with each other and the content itself.
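- In code, this per-user virtual-reality information can be pictured as a small record pairing look information with movement information. The sketch below is one possible shape for that record; the field names, types, and coordinate conventions are illustrative assumptions, not details taken from this disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class LookInfo:
    # How this user wants to appear to others: a digitized rendering of
    # their own face, an avatar, a celebrity impression, etc.
    representation: str = "digitized_face"
    asset_id: str = ""  # hypothetical handle to the chosen rendering asset

@dataclass
class MovementInfo:
    # Physical movement sampled from cameras or headset sensors.
    head_orientation: Vec3 = (0.0, 0.0, 0.0)  # yaw, pitch, roll
    joint_positions: Dict[str, Vec3] = field(default_factory=dict)

@dataclass
class VirtualRealityInfo:
    user_id: str
    look: LookInfo = field(default_factory=LookInfo)
    movement: MovementInfo = field(default_factory=MovementInfo)
```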
- FIG. 1 illustrates a context diagram for providing audiovisual content to a user via a virtual-reality headset in accordance with embodiments described herein;
- FIG. 2 illustrates an example environment of a user utilizing a virtual-reality headset in accordance with embodiments described herein;
- FIGS. 3A-3C show example virtual theater environments that are being presented to users in accordance with embodiments described herein;
- FIG. 3D shows an alternative example virtual theater environment that is being presented to users in accordance with embodiments described herein;
- FIG. 4 illustrates a logical flow diagram generally showing one embodiment of a process performed by an on-demand content server to coordinate shared on-demand content for users in a virtual theater environment in accordance with embodiments described herein;
- FIG. 5 illustrates a logical flow diagram generally showing one embodiment of a process performed by a content receiver to generate the virtual theater environment for presenting the shared on-demand content to a user in accordance with embodiments described herein;
- FIG. 6 shows a system diagram that describes one implementation of computing systems for implementing embodiments described herein.
- FIG. 1 illustrates a context diagram for providing audiovisual content to a user via a virtual-reality headset in accordance with embodiments described herein.
- Example 100 may include content provider 104 , information provider 106 , content distributor 102 , communication networks 110 , on-demand content server 118 , and user premises 120 a - 120 c.
- content providers 104 generate, aggregate, and/or otherwise provide audiovisual content that is provided to one or more users.
- content providers are referred to as “channels.”
- Examples of content providers 104 may include, but are not limited to, film studios, television studios, network broadcasting companies, independent content producers, such as AMC, HBO, Showtime, or the like, or other entities that provide content for user consumption.
- a content provider may also include individuals that capture personal or home videos, and distribute these videos to others over various online media-sharing websites or other distribution mechanisms.
- the content provided by content providers 104 may be referred to as the program content, which may include movies, sitcoms, reality shows, talk shows, game shows, documentaries, infomercials, news programs, sports broadcasts, or the like.
- program content may also include commercials or other television advertisements. It should be noted that the commercials may be added to the program content by the content providers 104 or the content distributor 102 .
- Information provider 106 may create and distribute data or other information that describes or supports audiovisual content. Generally, this data is related to the content provided by content provider 104 . For example, this data may include metadata, program name, closed-caption authoring, and placement within the content, timeslot data, pay-per-view and related data, or other information that is associated with the content.
- a content distributor 102 may combine or otherwise associate the data from information provider 106 and the content from content provider 104 , which may be referred to as the distributed content. However, other entities may also combine or otherwise associate the content and other data together.
- Content distributor 102 may provide the content, whether content obtained from content provider 104 and/or data from information provider 106 , to a user through a variety of different distribution mechanisms. For example, in some embodiments, content distributor 102 may provide content and data to one or more users' content receivers 122 a - 122 c through communication network 110 on communication links 111 a - 111 c , respectively. In other embodiments, the content and data may be sent through uplink 112 , which goes to satellite 114 and back to satellite antennas 116 a - 116 c , and to the content receivers 122 a - 122 c via communication links 113 a - 113 c , respectively. It should be noted that some content receivers may receive content via satellite 114 , while other content receivers receive content via communication network 110 .
- On-demand content server 118 communicates with the content receivers 122 a - 122 c or virtual-reality headsets 124 a - 124 c of each respective user via communication network 110 to coordinate shared on-demand content among multiple users, as described herein. Briefly, the on-demand content server 118 receives virtual-reality information for each user and provides it to each other user along with the on-demand content so that the content receivers 122 a - 122 c or virtual-reality headsets 124 a - 124 c of each respective user can generate a respective virtual theater environment for the shared on-demand content. In various embodiments, the on-demand content server 118 or the functionality of the on-demand content server 118 is part of or otherwise incorporated into the content distributor 102 or the content provider 104 , or it may be a separate device.
- Communication network 110 may be configured to couple various computing devices to transmit content/data from one or more devices to one or more other devices.
- communication network 110 may be the Internet, X.25 networks, or a series of smaller or private connected networks that carry the content and other data.
- Communication network 110 may include one or more wired or wireless networks.
- Content receivers 122 a - 122 c are devices that receive content from content distributor 102 or on-demand content server 118 , and they provide the content to virtual-reality headsets 124 a - 124 c , respectively, for presentation to their respective user.
- Examples of content receivers 122 a - 122 c include, but are not limited to, a set-top box, a cable connection box, a computer, or other content or television receivers.
- the content receivers 122 a - 122 c can be configured to receive the content from the content distributor 102 or the on-demand content server 118 via communication network 110 and communication links 111 a - 111 c , respectively, or via satellite antennas 116 a - 116 c and communication links 113 a - 113 c , respectively.
- The following is a brief discussion of the functionality of content receiver 122 a and virtual-reality headset 124 a . It should be noted that content receivers 122 b - 122 c and virtual-reality headsets 124 b - 124 c perform similar functionality.
- Content receiver 122 a is configured to provide content to a user's virtual-reality headset 124 a , or to other display devices, such as a television, monitor, projector, etc. In various embodiments, content receiver 122 a communicates with virtual-reality headset 124 a via communication link 126 a to provide on-demand content to a user, as described herein.
- Communication link 126 a may be a wired connection or wireless connection, such as Bluetooth, Wi-Fi, or other wireless communication protocol.
- the content receiver 122 a generates a virtual theater environment, as described herein, and provides it to the virtual-reality headset 124 a to be displayed to a user. In other embodiments, the content receiver 122 a provides on-demand content to the virtual-reality headset 124 a , but does not generate the virtual theater environment. In yet other embodiments, the content receiver 122 a receives the virtual theater environment from the on-demand content server 118 and provides it to the virtual-reality headset 124 a for display to a user.
- virtual-reality information is shared among multiple users to generate a virtual theater environment for each user.
- the content receiver 122 a collects, obtains, generates, or otherwise determines the virtual-reality information for the user of the virtual-reality headset 124 a from the virtual-reality headset 124 a , or from one or more cameras or other sensors (not illustrated), or a combination thereof, as described in more detail herein.
- the content receiver 122 a utilizes this virtual-reality information to generate the virtual theater environment, or it can provide the information to the on-demand content server 118 or to the virtual-reality headset 124 a to generate the virtual theater environment.
- the content receiver 122 a provides the virtual-reality information to the on-demand content server 118 so that it can be shared with other content receivers 122 b - 122 c or virtual-reality headsets 124 b - 124 c to generate virtual-reality theater environments for each respective user, as described herein.
- the virtual-reality headset 124 a is configured to display a virtual theater environment to a user of the virtual-reality headset 124 a .
- Virtual-reality headset 124 a may be an all-in-one virtual-reality headset or it may be a combination of multiple separate electronic devices, such as a smartphone and a head-mounting apparatus.
- the virtual-reality headset 124 a receives the virtual theater environment from the content receiver 122 a via communication link 126 a and displays it to a user. In other embodiments, the virtual-reality headset 124 a receives on-demand content from the content receiver 122 a and generates the virtual reality environment itself before displaying it to the user. In at least one such embodiment, the virtual-reality headset 124 a obtains virtual-reality information associated with other users from the on-demand content server 118 via the content receiver 122 a . In other embodiments, the virtual-reality headset 124 a may communicate with content distributor 102 or on-demand content server 118 via communication network 110 and communication link 115 a independent of and separate from content receiver 122 a .
- the virtual-reality headset 124 a obtains virtual-reality information associated with other users from the on-demand content server 118 via communication network 110 and communication link 115 a .
- the virtual-reality headset 124 a provides the virtual-reality information of the user of the virtual-reality headset 124 a to the on-demand content server 118 so that it can be shared with other content receivers 122 b - 122 c or virtual-reality headsets 124 b - 124 c , as described herein.
- content receiver 122 a is separate from or independent of the virtual-reality headset 124 a , such as is illustrated in FIG. 1 . In various other embodiments, content receiver 122 a may be part of or integrated with the virtual-reality headset 124 a.
- content receivers 122 b - 122 c and virtual-reality headsets 124 b - 124 c include similar functionality.
- content receivers 122 b - 122 c can receive content from content distributor 102 via satellite 114 and antennas 116 b - 116 c and communication links 113 b - 113 c , respectively, and communicate with content distributor 102 or on-demand content server 118 via communication network 110 and communication links 111 b - 111 c , respectively.
- virtual-reality headsets 124 b - 124 c can communicate with content receivers 122 b - 122 c via communication links 126 b - 126 c , respectively, or communicate with on-demand content server 118 via communication network 110 and communication links 115 b - 115 c , respectively.
- FIG. 2 illustrates an example environment of a user utilizing a virtual-reality headset in accordance with embodiments described herein.
- Environment 200 is an example of a user premises, such as first user premises 120 a in FIG. 1 .
- Environment 200 includes user 202 sitting in front of a plurality of cameras 224 a - 224 b .
- Each camera 224 captures images of the user 202 , which are utilized to track the physical movement of the user 202 throughout the environment 200 .
- Although FIG. 2 illustrates only two cameras 224 a and 224 b , more or fewer cameras may also be used.
- cameras may be embedded in or part of other electronic devices, such as, but not limited to, smartphone 204 , laptop 206 , or content receiver 122 . Each of these devices may also capture images of the user 202 for tracking movement of the user 202 .
- camera 224 a captures at least one image of user 202 prior to the user putting on the virtual-reality headset 124 .
- Facial recognition techniques are utilized to digitize the user's face. This digitized rendering of the user's face is provided to other users as the virtual rendering of the user 202 .
- the user 202 may select an avatar, celebrity impression, or other character representation to use as the virtual rendering of the user, rather than the digitized version of his or her own face.
- each device that includes a camera communicates with content receiver 122 or virtual-reality headset 124 via a wired or wireless communication connection to provide captured images to the content receiver 122 or virtual-reality headset 124 for processing.
- image recognition techniques are utilized on the captured images to identify different body parts of the user 202 , and differences from one image to the next indicate and characterize movement of those body parts. This movement information is utilized to subsequently adjust the perspective of a virtual theater environment presented to the user 202 and to move or animate the virtual rendering of the user 202 for other users.
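- As a concrete illustration of this image-difference approach, the sketch below uses frame differencing with OpenCV to find regions that changed between two consecutive camera frames. The disclosure names no specific library or parameters; the use of `cv2`, the blur kernel, and the area threshold are all assumptions.

```python
import cv2

def detect_motion(prev_frame, curr_frame, min_area=500):
    """Return bounding boxes of regions that moved between two frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # Blur to suppress sensor noise before differencing.
    prev_gray = cv2.GaussianBlur(prev_gray, (21, 21), 0)
    curr_gray = cv2.GaussianBlur(curr_gray, (21, 21), 0)
    delta = cv2.absdiff(prev_gray, curr_gray)
    _, thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    thresh = cv2.dilate(thresh, None, iterations=2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```

In a full system, the detected regions would feed a body-part tracker so that individual limb movements, not just changed pixels, are reported.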
- FIGS. 3A-3C show example virtual theater environments that are being presented to users in accordance with embodiments described herein.
- FIG. 3A shows a top-down view of a virtual theater environment 300 .
- the virtual theater environment 300 includes front wall 302 , side walls 314 a and 314 b , and back wall 318 .
- On front wall 302 is a virtual screen 304 that displays on-demand content to the user watching the virtual theater environment.
- the virtual theater environment 300 also includes a virtual stage 306 positioned in front of the front wall 302 and a plurality of virtual seats 308 positioned between the stage 306 and back wall 318 .
- users can sit in the virtual seats 308 or move throughout the virtual theater environment 300 to watch the virtual screen 304 .
- the virtual theater environment 300 includes a second virtual screen, such as screen 320 on back wall 318 .
- users can still watch the on-demand content even if they are not facing the screen 304 on the front wall.
- the number and positions of the screens 304 and 320 , seats 308 , and stage 306 may be different from what is illustrated.
- FIG. 3B shows virtual theater environment 300 a .
- Virtual theater environment 300 a is an embodiment of virtual theater environment 300 in FIG. 3A , but from the perspective of a first user, e.g., a user of virtual-reality headset 124 a in first user premises 120 a in FIG. 1 , sitting in one of virtual seats 308 .
- virtual theater environment 300 a includes a front wall 302 and side walls 314 a - 314 b.
- the virtual theater environment 300 a also includes a virtual stage 306 that is positioned between the virtual screen 304 and the plurality of virtual seats 308 .
- the virtual theater environment 300 a also includes a virtual rendering of other users that are participating in the shared on-demand content.
- character 310 is a virtual rendering of a second user
- character 312 is a virtual rendering of a third user, where the second and third users are remote from the first user.
- the second user may be a user of virtual-reality headset 124 b in second user premises 120 b in FIG. 1
- the third user may be a user of virtual-reality headset 124 c in third user premises 120 c in FIG. 1
- a virtual theater environment is generated and presented to each user participating in the shared on-demand content but from their own perspective.
- the virtual theater environment 300 a is from the perspective of the first user and the virtual theater environment 300 b in FIG. 3C is from the perspective of the third user.
- character 310 i.e., the virtual rendering of the second user
- the movement of character 310 is created based on the physical movements of the second user, such as the second user physically walking in place.
- Character 312 , i.e., the virtual rendering of the third user, is standing on the virtual stage 306 facing the virtual seats 308 . Character 312 may have reached the stage in a manner similar to the second user and character 310 .
- the second and third users can begin to physically move, act, dance, or perform other physical movements that translate into characters 310 and 312 moving, acting, dancing, etc. on the virtual stage 306 .
- the virtual theater environment allows the first user to watch the characters 310 and 312 of the second and third users move or interact with the on-demand content being shown on the virtual screen 304 .
- FIG. 3C shows virtual theater environment 300 b .
- Virtual theater environment 300 b is an embodiment of virtual theater environment 300 in FIG. 3A , but from the perspective of the third user standing on the virtual stage 306 , as mentioned above.
- the virtual theater environment 300 b includes side walls 314 a - 314 b , similar to the virtual theater environment 300 a in FIG. 3B , and also a back wall 318 .
- users can move throughout the virtual theater environment 300 b .
- the third user has moved onto the stage 306 . While on the stage 306 , the third user can be facing the screen 304 on the front wall 302 or some other direction. If the third user is facing the screen 304 , then the third user can watch the on-demand content on the screen 304 . But if the third user is not facing the screen 304 , then the on-demand content can be displayed to the third user in another manner. For example, the on-demand content can be displayed on a second screen 320 that is on back wall 318 , as illustrated in FIG. 3C . In this way, the third user can enjoy the on-demand content even though the third user is on the virtual stage 306 and not looking at screen 304 .
- the user can select to have other content displayed on screen 320 .
- the user can select other content that is related or un-related to the on-demand content being shown on screen 304 .
- the on-demand content being displayed on screen 304 may be a musical
- the content displayed on screen 320 may be the words, notes, or sheet music for the current song in the musical so that the user can sing along with the current song.
- advertisements are displayed on the screen 320 .
- the user can make an “in-app” purchase to remove the advertisements from the second screen 320 , to display the on-demand content, or to display other content.
- screen 320 or back wall 318 may display other scenery or graphics.
- the screen 320 or back wall 318 may display an outside landscape or some other scenery other than the inside of the theater.
- the virtual theater environment may simulate a theater without a back wall 318 or some other setting.
- the character 310 of the second user is walking down to the virtual stage 306 along the side wall 314 b .
- character 316 is a virtual rendering of the first user sitting in the virtual seats 308 .
- FIGS. 3B and 3C illustrate the virtual theater environments from the perspective of being through the eyes of the user, e.g., first person point of view
- the user or virtual rendering of the user may be displayed in the virtual theater environment, e.g., third person point of view. In this way, the user can watch their own movements with respect to other users or the on-demand content.
- the user can select between the first person and third person points of view.
- the users can move throughout the virtual theater environment, watch the virtual renderings of other users move throughout the virtual theater environment, and view the on-demand content that is shared between the users.
- Although FIGS. 3A-3C show a virtual theater environment that simulates or resembles a movie theater, embodiments are not so limited, and the virtual theater environment may simulate or represent other types of settings, landscapes, or environments.
- FIG. 3D shows an alternative example virtual theater environment that is being presented to users in accordance with embodiments described herein.
- virtual theater environment 300 c shows a western-themed environment from the perspective of the first user of the first user premises 120 a in FIG. 1 .
- the virtual theater environment 300 c includes a western-styled building.
- characters 310 and 312 representing the second and third users of the second and third user premises 120 b - 120 c , respectively, are shown in the virtual theater environment 300 c based on the users' particular virtual renderings and movements.
- the on-demand content is displayed on a screen 320 on the building 322 .
- the virtual renderings of the various users can interact with one another and re-enact, sing along, or otherwise interact with the on-demand content.
- the illustration in FIG. 3D is one example of an alternative virtual theater environment and other scenes or landscapes are also envisaged.
- process 400 described in conjunction with FIG. 4 may be implemented by or executed on one or more computing devices, such as content distributor 102 or on-demand content server 118 in FIG. 1 ; and process 500 described in conjunction with FIG. 5 may be implemented by or executed on one or more computing devices, such as content receiver 122 or virtual-reality headset 124 in FIG. 1 .
- FIG. 4 illustrates a logical flow diagram generally showing one embodiment of a process performed by an on-demand content server to coordinate shared on-demand content for users in a virtual theater environment in accordance with embodiments described herein.
- Process 400 begins, after a start block, at block 402 , where a request for shared on-demand content is received from a content receiver of a first user.
- the first user may be presented with a list of on-demand content that can be shared among a plurality of users. From this list, the first user can select the on-demand content to share, and the content receiver of the first user sends the selection to the server.
- the request also includes a list of one or more other users with which the first user would like to share the on-demand content.
- These other users may be friends of the first user, where the first user selects which friends to invite to share the on-demand content, such as via a list of friends determined from a social network account of the first user.
- Process 400 proceeds to block 404 , where an on-demand invitation is provided to the other users selected by the first user.
- the server sends an email to the other users from which they can select a link to accept the invitation.
- the server provides the on-demand invitation to the other users via the content receivers associated with the other users.
- the content receivers may display the invitation in a graphical user interface on a corresponding television from which the other users can accept or decline the on-demand invitation.
- the corresponding content receivers then send a message to the on-demand content server indicating the acceptance or non-acceptance of the on-demand invitation.
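- The invitation round trip can be pictured as two small JSON messages: the server's invitation and the receiver's accept/decline reply. This is a minimal sketch; the message fields and the use of JSON are assumptions, since the disclosure does not define a wire format.

```python
import json
import uuid

def make_invitation(content_id: str, host_user: str, invitee: str) -> str:
    """Build the on-demand invitation sent to an invited user's receiver."""
    return json.dumps({
        "type": "on_demand_invitation",
        "invitation_id": str(uuid.uuid4()),
        "content_id": content_id,
        "from_user": host_user,
        "to_user": invitee,
    })

def make_response(invitation_id: str, receiver_uid: str, accept: bool) -> str:
    """Build the reply; it carries the unique content-receiver identifier
    that the server later uses to secure the content stream."""
    return json.dumps({
        "type": "invitation_response",
        "invitation_id": invitation_id,
        "receiver_uid": receiver_uid,
        "accepted": accept,
    })
```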
- Process 400 continues at block 406 , where the server receives an acceptance of the on-demand invitation from at least one of the other users.
- each acceptance includes a unique identifier of the content receiver associated with the other user that accepted the invitation. This unique identifier is utilized by the server to encrypt or otherwise secure the on-demand content so that it is viewed by only those users for which it is intended, i.e., the user associated with the content receiver that accepted the invitation.
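- The disclosure does not say how the unique identifier secures the content, so the sketch below assumes one common construction: derive a per-receiver symmetric key from a server-held master secret plus the receiver's identifier, and encrypt each content segment under that key using the `cryptography` package.

```python
import base64
import hashlib

from cryptography.fernet import Fernet

def key_for_receiver(master_secret: bytes, receiver_uid: str) -> bytes:
    """Derive a per-receiver Fernet key bound to the unique receiver ID."""
    digest = hashlib.sha256(master_secret + receiver_uid.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)  # Fernet expects a base64 32-byte key

def encrypt_segment(segment: bytes, master_secret: bytes, receiver_uid: str) -> bytes:
    """Encrypt one content segment so only the intended receiver can view it."""
    return Fernet(key_for_receiver(master_secret, receiver_uid)).encrypt(segment)

def decrypt_segment(token: bytes, master_secret: bytes, receiver_uid: str) -> bytes:
    return Fernet(key_for_receiver(master_secret, receiver_uid)).decrypt(token)
```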
- the first user may be charged a specific price to view the on-demand content. However, that price may be reduced relative to the number of other users that accept the invitation to join in viewing the on-demand content and is thus based on the total number of users that are to view the on-demand content. Similarly, the price charged to the other users for joining and viewing the content may be reduced relative to the total number of users viewing the on-demand content.
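- As a worked example of that pricing rule, the function below trims the per-user price by a fixed percentage for each additional viewer, down to a floor. The 10% rate and 50% floor are invented parameters; the disclosure states only that the price is reduced relative to the total number of viewers. With a 10.00 base price and four viewers, each would pay 7.00 under these assumptions.

```python
def price_per_user(base_price: float, total_viewers: int,
                   discount_per_extra_viewer: float = 0.10,
                   floor_fraction: float = 0.5) -> float:
    """One possible schedule: each additional viewer reduces the per-user
    price by a fixed fraction of the base price, down to a floor."""
    discount = discount_per_extra_viewer * max(total_viewers - 1, 0)
    fraction = max(1.0 - discount, floor_fraction)
    return round(base_price * fraction, 2)
```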
- Process 400 proceeds next to block 408 , where virtual-reality information is received for each user, i.e., the first user and the other users that accepted the shared on-demand invitation.
- each content receiver associated with the users obtains or determines the virtual-reality information and provides it to the server, which is discussed in more detail below in conjunction with FIG. 5 .
- the virtual-reality information for each user includes various different types of information associated with that particular user.
- the virtual-reality information includes look information and movement information.
- the look information identifies, defines, or otherwise characterizes how the particular user would like to be viewed by the other users.
- the look information may identify an avatar, character, celebrity impression, digital rendering of the second user, or other visual representation of the particular user.
- each user selects how they would like other users to view them, which is then characterized as the look information.
- the movement information identifies or defines how the particular user is moving in the real world while viewing the on-demand content. For example, the movement information may indicate when the user raises his or her right arm or if the user is attempting to walk.
- a user's movements can be tracked, such as by tracking changes in camera images taken over time; gyroscopes, accelerometers, rotary sensors, or other sensors attached to the user's body; thermal sensors; or other movement detection systems.
- the virtual-reality information also includes audio of the user speaking, singing, or otherwise vocalizing.
- the audio associated with a particular user is received as a separate audio stream from the content receiver associated with that particular user.
- Process 400 continues next at block 410 , where a virtual rendering of each respective user is generated based on the virtual-reality information associated with that respective user.
- Generation of each virtual rendering includes generating a graphical representation of the respective user in a real-time manner based on the look and movement information received for the respective user. Accordingly, as a respective user physically moves, those movements are provided to the server as part of the virtual-reality information, which is then utilized to animate the virtual rendering of that respective user so that the virtual rendering mirrors or mimics the physical movements of the respective user.
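- One minimal reading of this step in code: keep a rendering object per user and fold each incoming batch of joint updates into its last known pose, with a little smoothing to hide network jitter. The class, the smoothing factor, and the joint naming are assumptions for illustration.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

class VirtualRendering:
    """Graphical stand-in for one user, mirrored from their movement info."""

    def __init__(self, look_asset: str, smoothing: float = 0.5):
        self.look_asset = look_asset   # avatar or digitized face chosen by the user
        self.smoothing = smoothing     # 0 = frozen, 1 = snap straight to new pose
        self.joints: Dict[str, Vec3] = {}

    def apply_movement(self, updates: Dict[str, Vec3]) -> None:
        # Blend each reported joint toward its new position so the rendering
        # mirrors the user's physical movement without visible jitter.
        a = self.smoothing
        for name, new in updates.items():
            old = self.joints.get(name, new)
            self.joints[name] = tuple(o + a * (n - o) for o, n in zip(old, new))
```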
- Process 400 proceeds to decision block 412 , where a determination is made whether a request to augment a user's virtual rendering is received.
- the first user or the other users can input one or more augmentations to a look of another user.
- a user can input that another user's virtual rendering is to include a blue necktie or a bowler hat.
- these augmentations may be considered as “in app” purchases such that the requesting user would have to pay money to augment the virtual rendering of another user.
- a user may input an augmentation request via a menu or preselected option prior to providing the on-demand content.
- the virtual theater environment includes a virtual concession stand where a user can select which augmentations to add to which users throughout the presentation of the on-demand content.
- users may be provided with the opportunity to prevent their virtual rendering from being augmented.
- the first user may make an “in app” purchase so that other users cannot modify the virtual rendering of the first user.
- decision block 412 may be optional and may not be performed. If an augmentation request is received, process 400 flows to block 414 ; otherwise, process 400 flows to block 416 .
- a user's virtual rendering is augmented based on the received request.
- this augmentation includes modifying the look information associated with that particular user.
- process 400 flows from decision block 412 to block 416 .
- the on-demand content and the virtual renderings of each user are provided to the content receivers of the users.
- the on-demand content is provided to the content receivers of each user as an audiovisual content data stream and the virtual renderings as metadata or another data stream that is separate from the on-demand content.
- any audio received from users is also provided to the content receivers of the users so that users can talk to one another.
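- Sending the on-demand content as one stream and the renderings and audio as others can be modeled as channel-tagged frames multiplexed over a single connection. The channel numbering and framing below are illustrative assumptions only.

```python
import json
import struct

# Assumed channels: 0 = on-demand A/V content, 1 = virtual-rendering
# metadata, 2 = per-user voice audio.
def frame_message(channel: int, payload: bytes) -> bytes:
    """Prefix a payload with its channel and length so the receiver can
    demultiplex the streams."""
    return struct.pack(">BI", channel, len(payload)) + payload

def rendering_update(user_id: str, joints: dict) -> bytes:
    """Encode one user's movement update as a metadata frame."""
    body = json.dumps({"user": user_id, "joints": joints}).encode("utf-8")
    return frame_message(1, body)
```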
- Process 400 continues at decision block 418 , where a determination is made whether the on-demand content has ended. If the on-demand content has not ended, then process 400 loops to block 408 to continue streaming the on-demand content to the users and to receive updated virtual-reality information for each user. In this way, each user's physical movements are captured and provided at block 408 in real time as the on-demand content is being provided to each user, which enables the content receivers to generate virtual theater environments in real time with the on-demand content and the virtual renderings of the users and their movements. If the on-demand content has ended, process 400 terminates or otherwise returns to a calling process to perform other actions.
- FIG. 5 illustrates a logical flow diagram generally showing one embodiment of a process performed by a content receiver to generate the virtual theater environment for presenting the shared on-demand content to a user in accordance with embodiments described herein.
- Process 500 is performed by the content receiver of each user that is to view the shared on-demand content, such as the first user and the other users that accepted the shared on-demand content invitation described above in conjunction with FIG. 4 . Accordingly, the user of the particular content receiver performing process 500 is referred to as the local user, and users of other content receivers are referred to as remote users.
- Process 500 begins, after a start block, at block 502 , where a virtual theater environment is determined for a local user.
- the local user is presented with a list of various different theater-type environments from which to choose as the virtual theater environment. For example, the local user could choose a small 10-seat movie theater, a 1000-person Broadway theatre, or some other virtualized theater.
- the first user in FIG. 4 that requests the shared on-demand content may also select the virtual theater environment so that each other user does not have to make such a selection.
- Process 500 proceeds to block 504 , where virtual-reality information associated with the local user is determined.
- the virtual-reality information includes look information and movement information associated with the local user, as well as audio received from the user.
- the look information identifies, defines, or otherwise characterizes how the particular user would like to be viewed by the other users.
- one or more of the cameras are utilized to capture an image of the local user's face, such as before the local user puts on the virtual-reality headset. From this image, a digital rendering of the local user is created, such as by using facial recognition techniques to identify facial characteristics that are used to create a virtual representation of the local user.
- the local user selects some other virtual look, rather than the digital rendering of themselves, such as an avatar, character, celebrity impression, or other visual representation of the local user.
- the movement information identifies or defines how the particular user is moving in the real world while viewing the on-demand content.
- the movement information associated with the local user is determined based on multiple images captured over time by each of a plurality of cameras.
- the local user's body is identified in the images using image recognition techniques. Differences in the positioning of the user's body between images indicate movement of the local user.
- other types of sensors may also be utilized, such as thermal sensors; gyroscopes, accelerometers, rotary sensors, or other sensors attached to the user's body; or other movement detection systems.
- Process 500 continues at block 506 , where the local user's virtual-reality information is provided to the server.
- the content receiver provides the virtual-reality information, or changes thereof, periodically, at predetermined times or intervals, or when there are changes to the virtual-reality information. In this way, the user's movements are tracked in real time and provided to the server, where the server can update the virtual rendering of the local user, which is to be provided to the other content receivers.
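- A sketch of that update policy: ship a new virtual-reality sample when the user has measurably moved or a heartbeat interval has passed, and never faster than a rate cap. The thresholds and intervals here are invented for illustration.

```python
import time

class MovementUploader:
    """Forward virtual-reality updates to the server on change or heartbeat."""

    def __init__(self, send, min_interval=0.05, heartbeat=1.0, epsilon=0.01):
        self.send = send                  # callable that ships one update
        self.min_interval = min_interval  # rate cap, seconds
        self.heartbeat = heartbeat        # max silence between updates, seconds
        self.epsilon = epsilon            # minimum change that counts as movement
        self._last_sent = 0.0
        self._last_state = None

    def offer(self, state):
        """state: tuple of tracked values, e.g. head yaw/pitch/roll."""
        now = time.monotonic()
        moved = (self._last_state is None or
                 any(abs(a - b) > self.epsilon
                     for a, b in zip(state, self._last_state)))
        stale = now - self._last_sent >= self.heartbeat
        if (moved and now - self._last_sent >= self.min_interval) or stale:
            self.send(state)
            self._last_state = state
            self._last_sent = now
```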
- Process 500 proceeds next to block 508 , where on-demand content is received from the server.
- the on-demand content may be received from the server via a streamed audiovisual file.
- Process 500 continues next at block 510 , where virtual renderings of the remote users are received.
- the virtual renderings may be received from the server as metadata to the on-demand content or as a data stream.
- the content receiver may generate the virtual renderings of the remote users, rather than the server at block 410 in FIG. 4 .
- the content receiver receives the virtual-reality information associated with the remote users and then generates the virtual renderings of the remote users based on the received virtual-reality information.
- Process 500 proceeds to block 512 , where the virtual theater environment is generated for the local user.
- the virtual theater is generated such that the virtual renderings of the remote users are positioned within the virtual theater and the on-demand content is displayed on a virtual screen in the virtual theater.
- the virtual theater environment is generated from the perspective of the local user in accordance with the local user's movement information.
- the virtual theater environment consists of a front virtual screen, a plurality of seats, and a stage between the front screen and the seats.
- the local user's movements are not limited to just looking around the virtual theater. Rather, the local user can move between seats to see the virtual screen from a different perspective or angle.
- the local user can also walk among the seats, down alleys in the theater, and onto the stage.
- the local user can act out, perform, or conduct other movements via their virtual rendering standing on the stage or in some other area of the virtual theater environment.
- the remote users can watch the virtual rendering of the local user move on the stage in front of the virtual screen, or elsewhere in the virtual theater.
- the local user and the remote users can interact with each other in more ways than just listening to each other or seeing an avatar of each user. Rather, the users can reenact the on-demand content that they are watching while the on-demand content is playing, as if they are actually in a real theater.
- the look of the virtual theater also moves and adjusts with the user.
- the virtual theater environment includes a second virtual screen on the back wall behind the seats on which the on-demand content is shown, similar to what is shown in FIG. 3C . In this way, the local user does not have to miss any of the on-demand content while on stage.
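- Rendering the theater from the local user's perspective ultimately reduces to building a view transform from the tracked head pose. The sketch below shows one conventional construction (angles in radians, Y-up world, camera looking down -Z); these conventions are assumptions, not details from the disclosure.

```python
import numpy as np

def view_matrix(position, yaw, pitch):
    """Build a view matrix from a tracked head position and orientation."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    forward = np.array([cy * cp, sp, sy * cp])   # direction the user faces
    right = np.array([-sy, 0.0, cy])             # forward crossed with world up
    up = np.cross(right, forward)
    rot = np.stack([right, up, -forward])        # world axes -> camera axes
    m = np.eye(4)
    m[:3, :3] = rot
    m[:3, 3] = -rot @ np.asarray(position, dtype=float)
    return m
```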
- Process 500 then proceeds to block 514 , where the virtual theater environment is presented to the local user.
- the virtual-reality headset displays the virtual theater environment to the local user.
- the content receiver transmits the virtual theater environment, such as in a wireless video stream, to the virtual-reality headset for display to the local user.
- Process 500 continues at decision block 516 , where a determination is made whether the on-demand content has ended. In various embodiments, this determination is based on an end-of-file message received by the content receiver. In other embodiments, the end of the on-demand content may be identified based on a manual command provided by the local user, such as by activating a stop or pause button or changing the television channel away from the on-demand content. If the on-demand content has ended, process 500 terminates or otherwise returns to a calling process to perform other actions.
- process 500 loops to block 504 to continue to determine virtual-reality information of the local user, such as the movement information; receive the on-demand content; receive the virtual renderings and their movements of remote users; and generate the virtual theater environment for presentation to the local user.
- FIG. 6 shows a system diagram that describes one implementation of computing systems for implementing embodiments described herein.
- System 600 includes content receiver 122 , content distributor 102 , content provider 104 , information provider 106 , and on-demand content server 118 .
- System 600 also includes virtual-reality headset 124 and cameras 224 .
- Content receiver 122 receives content and virtual-reality information for other users from content distributor 102 or on-demand content server 118 and generates a virtual theater environment for presentation to a user via virtual-reality headset 124 , as described herein.
- the content receiver 122 analyzes image data received from cameras 224 to generate virtual-reality information for the local user of the content receiver 122 and provides it to the content distributor 102 or the on-demand content server 118 for providing to the other users, as described herein.
- One or more general-purpose or special-purpose computing systems may be used to implement content receiver 122 . Accordingly, various embodiments described herein may be implemented in software, hardware, firmware, or in some combination thereof. As mentioned above, the content receiver 122 and the virtual-reality headset 124 may be separate devices or they may be incorporated into a single device. Similarly, the content distributor 102 and the on-demand content server 118 may be separate devices or they may be incorporated into a single device.
- Content receiver 122 may include memory 630 , one or more central processing units (CPUs) 644 , display interface 646 , other I/O interfaces 648 , other computer-readable media 650 , and network connections 652 .
- Memory 630 may include one or more various types of non-volatile and/or volatile storage technologies. Examples of memory 630 may include, but are not limited to, flash memory, hard disk drives, optical drives, solid-state drives, various types of random access memory (RAM), various types of read-only memory (ROM), other computer-readable storage media (also referred to as processor-readable storage media), or the like, or any combination thereof. Memory 630 may be utilized to store information, including computer-readable instructions that are utilized by CPU 644 to perform actions, including embodiments described herein.
- Memory 630 may have stored thereon virtual-reality system 632 , which includes user movement module 634 and virtual theater generator module 636 .
- the user movement module 634 may employ embodiments described herein to utilize image data captured by cameras 224 to determine and track body movement of the user of the virtual-reality headset 124 .
- Virtual theater generator 636 employs embodiments described herein to utilize on-demand content and virtual renderings of other users, or virtual-reality information of other users, to generate the virtual theater environment for presentation on the virtual-reality headset 124 to the user of the system 600 .
- Memory 630 may also store other programs 640 and other data 642 .
- other data 642 may include predetermined virtual renderings of one or more users or other information.
- Display interface 646 is configured to provide content to a display device, such as virtual-reality headset 124 .
- Network connections 652 are configured to communicate with other computing devices, such as content distributor 102 or on-demand content server 118 , via communication network 110 .
- Other I/O interfaces 648 may include a keyboard, audio interfaces, other video interfaces, or the like.
- Other computer-readable media 650 may include other types of stationary or removable computer-readable media, such as removable flash drives, external hard drives, or the like.
- the virtual-reality headset 124 includes a display device for presenting the virtual theater environment to the user.
- the virtual-reality headset 124 includes other computing components similar to content receiver 122 (e.g., a memory, processor, I/O interfaces, etc.), but are not illustrated here for convenience.
- the virtual-reality headset 124 includes the components and functionality of the content receiver 122 .
- Content distributor 102 , content provider 104 , information provider 106 , on-demand content server 118 , and content receiver 122 may communicate via communication network 110 .
- the content distributor 102 , content provider 104 , information provider 106 , and on-demand content server 118 include processors, memory, network connections, and other computing components that enable the server computer devices to perform actions as described herein, but are not illustrated here for convenience.
Description
- Non-limiting and non-exhaustive embodiments are described with reference to the accompanying drawings. In the drawings, like reference numerals refer to like parts throughout the various figures unless otherwise specified.
- For a better understanding of the present invention, reference will be made to the following Detailed Description, which is to be read in association with the accompanying drawings.
- The following description, along with the accompanying drawings, sets forth certain specific details in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that the disclosed embodiments may be practiced in various combinations, without one or more of these specific details, or with other methods, components, devices, materials, etc. In other instances, well-known structures or components that are associated with the environment of the present disclosure, including but not limited to the communication systems and networks, have not been shown or described in order to avoid unnecessarily obscuring descriptions of the embodiments. Additionally, the various embodiments may be methods, systems, media, or devices. Accordingly, the various embodiments may be entirely hardware embodiments, entirely software embodiments, or embodiments combining software and hardware aspects.
- Throughout the specification, claims, and drawings, the following terms take the meaning explicitly associated herein, unless the context clearly dictates otherwise. The term “herein” refers to the specification, claims, and drawings associated with the current application. The phrases “in one embodiment,” “in another embodiment,” “in various embodiments,” “in some embodiments,” “in other embodiments,” and other variations thereof refer to one or more features, structures, functions, limitations, or characteristics of the present disclosure, and are not limited to the same or different embodiments unless the context clearly dictates otherwise. As used herein, the term “or” is an inclusive “or” operator, and is equivalent to the phrases “A or B, or both” or “A or B or C, or any combination thereof,” and lists with additional elements are similarly treated. The term “based on” is not exclusive and allows for being based on additional features, functions, aspects, or limitations not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” include singular and plural references.
-
FIG. 1 illustrates a context diagram for providing audiovisual content to a user via a virtual-reality headset in accordance with embodiments described herein. Example 100 may includecontent provider 104,information provider 106,content distributor 102,communication networks 110, on-demand content server 118, and user premises 120 a-120 c. - Typically,
content providers 104 generate, aggregate, and/or otherwise provide audiovisual content that is provided to one or more users. Sometimes, content providers are referred to as “channels.” Examples ofcontent providers 104 may include, but are not limited to, film studios, television studios, network broadcasting companies, independent content producers, such as AMC, HBO, Showtime, or the like, or other entities that provide content for user consumption. A content provider may also include individuals that capture personal or home videos, and distribute these videos to others over various online media-sharing websites or other distribution mechanisms. The content provided bycontent providers 104 may be referred to as the program content, which may include movies, sitcoms, reality shows, talk shows, game shows, documentaries, infomercials, news programs, sports broadcasts, or the like. In this context, program content may also include commercials or other television advertisements. It should be noted that the commercials may be added to the program content by thecontent providers 104 or thecontent distributor 102. -
Information provider 106 may create and distribute data or other information that describes or supports audiovisual content. Generally, this data is related to the content provided bycontent provider 104. For example, this data may include, for example, metadata, program name, closed-caption authoring, and placement within the content, timeslot data, pay-per-view and related data, or other information that is associated with the content. In some embodiments, acontent distributor 102 may combine or otherwise associate the data frominformation provider 106 and the content fromcontent provider 104, which may be referred to as the distributed content. However, other entities may also combine or otherwise associate the content and other data together. -
Content distributor 102 may provide the content, whether content obtained fromcontent provider 104 and/or data frominformation provider 106, to a user through a variety of different distribution mechanisms. For example, in some embodiments,content distributor 102 may provide content and data to one or more users'content receivers 122 a-122 c throughcommunication network 110 on communication links 111 a-111 c, respectively. In other embodiments, the content and data may be sent throughuplink 112, which goes tosatellite 114 and back to satellite antennas 116 a-116 c, and to thecontent receivers 122 a-122 c via communication links 113 a-113 c, respectively. It should be noted that some content receives may receive content viasatellite 114, while other content receivers receive content viacommunication network 110. - On-
demand content server 118 communicates with thecontent receivers 122 a-122 c or virtual-reality headsets 124 a-124 c of each respective user viacommunication network 110 to coordinate shared on-demand content among multiple users, as described herein. Briefly, the on-demand content server 118 receives virtual-reality information for each user and provides it to each other user along with the on-demand content so that thecontent receivers 122 a-122 c or virtual-reality headsets 124 a-124 c of each respective user can generate a respective virtual theater environment for the shared on-demand content. In various embodiments, the on-demand content server 118 or the functionality of the on-demand content server 118 is part of or otherwise incorporated into thecontent distributor 102 or thecontent provider 104, or it may be a separate device. -
Communication network 110 may be configured to couple various computing devices to transmit content/data from one or more devices to one or more other devices. For example, communication network 110 may be the Internet, X.25 networks, or a series of smaller or private connected networks that carry the content and other data. Communication network 110 may include one or more wired or wireless networks.
Content receivers 122a-122c receive content from content distributor 102 or on-demand content server 118 and provide it to virtual-reality headsets 124a-124c, respectively, for presentation to their respective users. Examples of content receivers 122a-122c include, but are not limited to, set-top boxes, cable connection boxes, computers, or other content or television receivers. The content receivers 122a-122c can be configured to receive the content from the content distributor 102 or the on-demand content server 118 via communication network 110 and communication links 111a-111c, respectively, or via satellite antennas 116a-116c and communication links 113a-113c, respectively.
The following is a brief discussion of the functionality of content receiver 122a and virtual-reality headset 124a. It should be noted that content receivers 122b-122c and virtual-reality headsets 124b-124c perform similar functionality.
Content receiver 122a is configured to provide content to a user's virtual-reality headset 124a, or to other display devices, such as a television, monitor, or projector. In various embodiments, content receiver 122a communicates with virtual-reality headset 124a via communication link 126a to provide on-demand content to a user, as described herein. Communication link 126a may be a wired or wireless connection, such as Bluetooth, Wi-Fi, or another wireless communication protocol.
In some embodiments, the content receiver 122a generates a virtual theater environment, as described herein, and provides it to the virtual-reality headset 124a to be displayed to a user. In other embodiments, the content receiver 122a provides on-demand content to the virtual-reality headset 124a but does not generate the virtual theater environment. In yet other embodiments, the content receiver 122a receives the virtual theater environment from the on-demand content server 118 and provides it to the virtual-reality headset 124a for display to a user.
As described herein, virtual-reality information is shared among multiple users to generate a virtual theater environment for each user. In some embodiments, the content receiver 122a collects, obtains, generates, or otherwise determines the virtual-reality information for the user of the virtual-reality headset 124a from the virtual-reality headset 124a, from one or more cameras or other sensors (not illustrated), or a combination thereof, as described in more detail herein. The content receiver 122a utilizes this virtual-reality information to generate the virtual theater environment, or it can provide it to the on-demand content server 118 or to the virtual-reality headset 124a to generate the virtual theater environment. In any event, the content receiver 122a provides the virtual-reality information to the on-demand content server 118 so that it can be shared with other content receivers 122b-122c or virtual-reality headsets 124b-124c to generate virtual theater environments for each respective user, as described herein.
The virtual-reality headset 124a is configured to display a virtual theater environment to a user of the virtual-reality headset 124a. Virtual-reality headset 124a may be an all-in-one virtual-reality headset, or it may be a combination of multiple separate electronic devices, such as a smartphone and a head-mounting apparatus.
In some embodiments, the virtual-reality headset 124a receives the virtual theater environment from the content receiver 122a via communication link 126a and displays it to a user. In other embodiments, the virtual-reality headset 124a receives on-demand content from the content receiver 122a and generates the virtual theater environment itself before displaying it to the user. In at least one such embodiment, the virtual-reality headset 124a obtains virtual-reality information associated with other users from the on-demand content server 118 via the content receiver 122a. In other embodiments, the virtual-reality headset 124a may communicate with content distributor 102 or on-demand content server 118 via communication network 110 and communication link 115a, independent of and separate from content receiver 122a. For example, in some embodiments, the virtual-reality headset 124a obtains virtual-reality information associated with other users from the on-demand content server 118 via communication network 110 and communication link 115a. In yet other embodiments, the virtual-reality headset 124a provides the virtual-reality information of its user to the on-demand content server 118 so that it can be shared with other content receivers 122b-122c or virtual-reality headsets 124b-124c, as described herein.
In various embodiments, content receiver 122a is separate from or independent of the virtual-reality headset 124a, as illustrated in FIG. 1. In various other embodiments, content receiver 122a may be part of or integrated with the virtual-reality headset 124a.
Although the foregoing description provides details of content receiver 122a and virtual-reality headset 124a, content receivers 122b-122c and virtual-reality headsets 124b-124c include similar functionality. For example, content receivers 122b-122c can receive content from content distributor 102 via satellite 114, antennas 116b-116c, and communication links 113b-113c, respectively, and can communicate with content distributor 102 or on-demand content server 118 via communication network 110 and communication links 111b-111c, respectively. Similarly, virtual-reality headsets 124b-124c can communicate with content receivers 122b-122c via communication links 126b-126c, respectively, or communicate with on-demand content server 118 via communication network 110 and communication links 115b-115c, respectively.
FIG. 2 illustrates an example environment of a user utilizing a virtual-reality headset in accordance with embodiments described herein. Environment 200 is an example of a user premises, such as first user premises 120a in FIG. 1. Environment 200 includes user 202 sitting in front of a plurality of cameras 224a-224b. Each camera 224 captures images of the user 202, which are utilized to track the physical movement of the user 202 throughout the environment 200. Although FIG. 2 illustrates only two cameras 224a and 224b, more or fewer cameras may be used. Similarly, cameras may be embedded in or part of other electronic devices, such as, but not limited to, smartphone 204, laptop 206, or content receiver 122. Each of these devices may also capture images of the user 202 for tracking movement of the user 202.
In some embodiments, camera 224a, or another camera, captures at least one image of user 202 prior to the user putting on the virtual-reality headset 124. Facial recognition techniques are utilized to digitize the user's face, and this digitized rendering of the user's face is provided to other users as the virtual rendering of the user 202. In other embodiments, the user 202 may select an avatar, celebrity impression, or other character representation to use as the virtual rendering, rather than the digitized version of his or her own face.
In various embodiments, each device that includes a camera communicates with content receiver 122 or virtual-reality headset 124 via a wired or wireless communication connection to provide captured images to the content receiver 122 or virtual-reality headset 124 for processing. In particular, image recognition techniques are utilized on the captured images to identify different body parts of the user 202, and differences from one image to the next indicate and characterize movement of those body parts. This movement information is utilized to adjust the perspective of the virtual theater environment presented to the user 202 and to move or animate the virtual rendering of the user 202 for other users.
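As a concrete (and purely hypothetical) illustration of the frame-differencing idea described above, the sketch below compares body keypoints found in two consecutive camera images. The keypoint names and coordinates are invented, and the body-part detection itself is assumed to happen in an earlier image recognition step:

```python
def movement_deltas(prev_keypoints, curr_keypoints):
    """Describe how far each detected body part moved between two frames.

    Each argument maps a body-part name to its (x, y) pixel position,
    as produced by whatever image recognition step located the parts.
    """
    deltas = {}
    for part, (x, y) in curr_keypoints.items():
        if part in prev_keypoints:
            px, py = prev_keypoints[part]
            deltas[part] = (x - px, y - py)
    return deltas

# Example: between frames, the right wrist rose 40 pixels.
prev = {"right_wrist": (320, 400), "head": (300, 120)}
curr = {"right_wrist": (322, 360), "head": (301, 119)}
print(movement_deltas(prev, curr))  # {'right_wrist': (2, -40), 'head': (1, -1)}
```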
FIGS. 3A-3C show example virtual theater environments that are being presented to users in accordance with embodiments described herein. FIG. 3A shows a top-down view of a virtual theater environment 300. In the illustrated example, the virtual theater environment 300 includes front wall 302, side walls 314a-314b, and back wall 318. On front wall 302 is a virtual screen 304 that displays on-demand content to the user watching the virtual theater environment. The virtual theater environment 300 also includes a virtual stage 306 positioned in front of the front wall 302 and a plurality of virtual seats 308 positioned between the stage 306 and back wall 318. As described in more detail below, users can sit in the virtual seats 308 or move throughout the virtual theater environment 300 to watch the virtual screen 304. Similarly, users can interact with other users on the virtual stage 306 or in other areas of the environment 300. In some embodiments, the virtual theater environment 300 includes a second virtual screen, such as screen 320 on back wall 318. In this way, users can still watch the on-demand content even if they are not facing the screen 304 on the front wall. It should be noted that the number and positions of the screens 304 and 320, seats 308, and stage 306 may be different from what is illustrated.
FIG. 3B shows virtual theater environment 300a. Virtual theater environment 300a is an embodiment of virtual theater environment 300 in FIG. 3A, but from the perspective of a first user, e.g., a user of virtual-reality headset 124a in first user premises 120a in FIG. 1, sitting in one of virtual seats 308. As described above, virtual theater environment 300a includes a front wall 302 and side walls 314a-314b.
On the front wall 302 is a virtual screen 304 that is displaying on-demand content that is shared between the first user and one or more other users. The virtual theater environment 300a also includes a virtual stage 306 that is positioned between the virtual screen 304 and the plurality of virtual seats 308.
Along with the on-demand content, the virtual theater environment 300a also includes a virtual rendering of other users that are participating in the shared on-demand content. In this illustration, character 310 is a virtual rendering of a second user and character 312 is a virtual rendering of a third user, where the second and third users are remote from the first user. For example, the second user may be a user of virtual-reality headset 124b in second user premises 120b in FIG. 1, and the third user may be a user of virtual-reality headset 124c in third user premises 120c in FIG. 1. As discussed elsewhere herein, a virtual theater environment is generated and presented to each user participating in the shared on-demand content, but from their own perspective. For example, the virtual theater environment 300a is from the perspective of the first user, and the virtual theater environment 300b in FIG. 3C is from the perspective of the third user.
As discussed elsewhere herein, the physical movement of a user is tracked and translated into changes in the virtual theater environment and used to move or animate the virtual rendering of the user in the virtual theater environments of other users. As illustrated in FIG. 3B, character 310, i.e., the virtual rendering of the second user, is walking along the side wall 314b towards the stage 306. The movement of character 310 is created based on the physical movements of the second user, such as the second user physically walking in place. Character 312, i.e., the virtual rendering of the third user, is standing on the virtual stage 306 facing the virtual seats 308. Character 312 may have reached the stage in a manner similar to the second user and character 310. Once on the stage 306, the second and third users can begin to physically move, act, dance, or perform other physical movements that translate into characters 310 and 312 moving on the virtual stage 306. The virtual theater environment allows the first user to watch the characters 310 and 312 while also viewing the on-demand content on the virtual screen 304.
FIG. 3C shows virtual theater environment 300b. Virtual theater environment 300b is an embodiment of virtual theater environment 300 in FIG. 3A, but from the perspective of the third user standing on the virtual stage 306, as mentioned above. As illustrated, the virtual theater environment 300b includes side walls 314a-314b, similar to the virtual theater environment 300a in FIG. 3B, and also a back wall 318.
As described herein, users can move throughout the virtual theater environment 300b. In this example illustration, the third user has moved onto the stage 306. While on the stage 306, the third user can be facing the screen 304 on the front wall 302 or some other direction. If the third user is facing the screen 304, then the third user can watch the on-demand content on the screen 304. But if the third user is not facing the screen 304, then the on-demand content can be displayed to the third user in another manner. For example, the on-demand content can be displayed on a second screen 320 that is on back wall 318, as illustrated in FIG. 3C. In this way, the third user can enjoy the on-demand content even though the third user is on the virtual stage 306 and not looking at screen 304.
In other embodiments, the user can select to have other content displayed on screen 320. For example, the user can select other content that is related or unrelated to the on-demand content being shown on screen 304. As an illustrative example, the on-demand content being displayed on screen 304 may be a musical, and the content displayed on screen 320 may be the words, notes, or sheet music for the current song in the musical so that the user can sing along with the current song. In yet other embodiments, advertisements are displayed on the screen 320. In one such embodiment, the user can make an "in-app" purchase to remove the advertisements from the second screen 320, to display the on-demand content, or to display other content.
In some other embodiments, screen 320 or back wall 318 may display other scenery or graphics. For example, the screen 320 or back wall 318 may display an outside landscape or some other scenery other than the inside of the theater. In this way, the virtual theater environment may simulate a theater without a back wall 318, or some other setting. As mentioned above, the character 310 of the second user is walking down to the virtual stage 306 along the side wall 314b. Also illustrated is character 316, which is a virtual rendering of the first user sitting in the virtual seats 308.
Although FIGS. 3B and 3C illustrate the virtual theater environments from the perspective of being through the eyes of the user, e.g., a first-person point of view, embodiments are not so limited. In other embodiments, the user or virtual rendering of the user may be displayed in the virtual theater environment, e.g., a third-person point of view. In this way, the user can watch their own movements with respect to other users or the on-demand content. In various embodiments, the user can select between the first-person and third-person points of view.
By employing embodiments described herein, the users can move throughout the virtual theater environment, watch the virtual renderings of other users do the same, and view the on-demand content that is shared between the users.
Although FIGS. 3A-3C show a virtual theater environment that simulates or resembles a movie theater, embodiments are not so limited, and the virtual theater environment may simulate or represent other types of settings, landscapes, or environments.
FIG. 3D shows an alternative example virtual theater environment that is being presented to users in accordance with embodiments described herein. As illustrated, virtual theater environment 300c shows a western-themed environment from the perspective of the first user of the first user premises 120a in FIG. 1. In this example, the virtual theater environment 300c includes a western-styled building 322. Similar to FIGS. 3B and 3C illustrated above, characters 310 and 312 are presented in the virtual theater environment 300c based on the users' particular virtual renderings and movements. Moreover, similar to what is described herein, the on-demand content is displayed on a screen 320 on the building 322. Accordingly, the virtual renderings of the various users can interact with one another and re-enact, sing along with, or otherwise interact with the on-demand content. The illustration in FIG. 3D is one example of an alternative virtual theater environment, and other scenes or landscapes are also envisaged.
The operation of certain aspects will now be described with respect to FIGS. 4 and 5. In at least one of various embodiments, process 400 described in conjunction with FIG. 4 may be implemented by or executed on one or more computing devices, such as content distributor 102 or on-demand content server 118 in FIG. 1; and process 500 described in conjunction with FIG. 5 may be implemented by or executed on one or more computing devices, such as content receiver 122 or virtual-reality headset 124 in FIG. 1.
FIG. 4 illustrates a logical flow diagram generally showing one embodiment of a process performed by an on-demand content server to coordinate shared on-demand content for users in a virtual theater environment in accordance with embodiments described herein. Process 400 begins, after a start block, at block 402, where a request for shared on-demand content is received from a content receiver of a first user. In various embodiments, the first user may be presented with a list of on-demand content that can be shared among a plurality of users. From this list, the first user can select the on-demand content to share, and the content receiver of the first user sends the selection to the server.
In various embodiments, the request also includes a list of one or more other users with which the first user would like to share the on-demand content. These other users may be friends of the first user, where the first user selects which friends to invite to share the on-demand content, such as via a list of friends determined from a social network account of the first user.
Process 400 proceeds to block 404, where an on-demand invitation is provided to the other users selected by the first user. In some embodiments, the server sends an email to the other users, from which they can select a link to accept the invitation. In other embodiments, the server provides the on-demand invitation to the other users via the content receivers associated with the other users. For example, the content receivers may display the invitation in a graphical user interface on a corresponding television, from which the other users can accept or decline the on-demand invitation. The corresponding content receivers then send a message to the on-demand content server indicating the acceptance or non-acceptance of the on-demand invitation.
Process 400 continues at block 406, where the server receives an acceptance of the on-demand invitation from at least one of the other users. In some embodiments, each acceptance includes a unique identifier of the content receiver associated with the other user that accepted the invitation. This unique identifier is utilized by the server to encrypt or otherwise secure the on-demand content so that it is viewed by only those users for which it is intended, i.e., the user associated with the content receiver that accepted the invitation. In some embodiments, the first user may be charged a specific price to view the on-demand content. However, that price may be reduced relative to the number of other users that accept the invitation to join in viewing the on-demand content, and is thus based on the total number of users that are to view the on-demand content. Similarly, the price charged to the other users for joining and viewing the content may be reduced relative to the total number of users viewing the on-demand content.
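The pricing behavior described above lends itself to a simple formula. The following sketch is only one possible reading: the per-viewer discount step and the price floor are pure assumptions, since the disclosure only says the price is reduced relative to the total number of viewers:

```python
def shared_viewing_price(base_price, total_viewers,
                         per_viewer_discount=0.10, floor_fraction=0.40):
    """Reduce each participant's price as more users join the viewing.

    The 10% step and the 40%-of-base floor are invented for this
    example; any schedule that decreases with viewer count would fit.
    """
    discount = min(per_viewer_discount * (total_viewers - 1),
                   1.0 - floor_fraction)
    return round(base_price * (1.0 - discount), 2)

print(shared_viewing_price(9.99, 1))  # 9.99: watching alone, no discount
print(shared_viewing_price(9.99, 4))  # 6.99: three accepted invitations
```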
Process 400 proceeds next to block 408, where virtual-reality information is received for each user, i.e., the first user and the other users that accepted the shared on-demand invitation. In at least one embodiment, each content receiver associated with the users obtains or determines the virtual-reality information and provides it to the server, which is discussed in more detail below in conjunction with FIG. 5.
Briefly, the virtual-reality information for each user includes various different types of information associated with that particular user. For example, the virtual-reality information includes look information and movement information. The look information identifies, defines, or otherwise characterizes how the particular user would like to be viewed by the other users. For example, the look information may identify an avatar, character, celebrity impression, digital rendering of the particular user, or other visual representation of the particular user. In various embodiments, each user selects how they would like other users to view them, which is then characterized as the look information.
The movement information identifies or defines how the particular user is moving in the real world while viewing the on-demand content. For example, the movement information may indicate when the user raises his or her right arm or if the user is attempting to walk. There are many different ways that a user's movements can be tracked, such as by tracking changes in camera images taken over time; gyroscopes, accelerometers, rotary sensors, or other sensors attached to the user's body; thermal sensors; or other movement detection systems.
The virtual-reality information also includes audio of the user speaking, singing, or otherwise vocalizing. In some embodiments, the audio associated with a particular user is received as a separate audio stream from the content receiver associated with that particular user.
Process 400 continues next at block 410, where a virtual rendering of each respective user is generated based on the virtual-reality information associated with that respective user. Generation of each virtual rendering includes generating a graphical representation of the respective user in a real-time manner based on the look and movement information received for the respective user. Accordingly, as a respective user physically moves, those movements are provided to the server as part of the virtual-reality information, which is then utilized to animate the virtual rendering of that respective user so that the virtual rendering mirrors or mimics the physical movements of the respective user.
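The virtual-reality information received at block 408 and consumed at block 410 could travel as one record per user per update. The container below is a hypothetical shape for that record; the field names are inventions of this sketch, not terms from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class VirtualRealityInfo:
    """Hypothetical per-user record combining look, movement, and audio."""
    user_id: str
    look: dict        # e.g. {"kind": "avatar", "asset": "classic"}
    movement: dict    # e.g. {"right_wrist": (2, -40)} for the latest frame
    audio_chunk: bytes = b""  # most recent slice of the user's audio stream

update = VirtualRealityInfo(
    user_id="user-2",
    look={"kind": "digitized-face", "mesh_id": 42},
    movement={"right_arm": (0, -35)},
)
```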
Process 400 proceeds to decision block 412, where a determination is made whether a request to augment a user's virtual rendering is received. In some embodiments, the first user or the other users can input one or more augmentations to a look of another user. For example, a user can input that another user's virtual rendering is to include a blue necktie or a bowler hat. In at least one embodiment, these augmentations may be considered "in-app" purchases, such that the requesting user would have to pay money to augment the virtual rendering of another user. In some embodiments, a user may input an augmentation request via a menu or preselected option prior to the on-demand content being provided. In other embodiments, the virtual theater environment includes a virtual concession stand where a user can select which augmentations to add to which users throughout the presentation of the on-demand content.
In some embodiments, users may be provided with the opportunity to prevent their virtual rendering from being augmented. For example, the first user may make an "in-app" purchase so that other users cannot modify the virtual rendering of the first user.
In various embodiments, decision block 412 may be optional and may not be performed. If an augmentation request is received, process 400 flows to block 414; otherwise, process 400 flows to block 416.
At block 414, a user's virtual rendering is augmented based on the received request. In various embodiments, this augmentation includes modifying the look information associated with that particular user. After block 414, process 400 proceeds to block 416.
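One way to picture block 414 is as a small transformation of the look information, with a guard for users who paid to lock their rendering. The accessories field and the flag names below are assumptions of this sketch:

```python
def augment_look(look, augmentation, purchase_ok=True):
    """Apply a requested augmentation (say, a blue necktie) to a look.

    Returns the look unchanged if the target user locked their
    rendering or if the requester's "in-app" purchase did not clear.
    """
    if not purchase_ok or look.get("augmentation_locked"):
        return look
    updated = dict(look)
    updated["accessories"] = list(look.get("accessories", [])) + [augmentation]
    return updated

look = {"kind": "avatar", "asset": "classic"}
print(augment_look(look, {"item": "necktie", "color": "blue"}))
# {'kind': 'avatar', 'asset': 'classic', 'accessories': [{'item': 'necktie', 'color': 'blue'}]}
```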
If, at decision block 412, an augmentation request is not received, process 400 flows from decision block 412 to block 416. At block 416, the on-demand content and the virtual renderings of each user are provided to the content receivers of the users. In various embodiments, the on-demand content is provided to the content receivers of each user as an audiovisual content data stream, and the virtual renderings are provided as metadata or another data stream that is separate from the on-demand content. Moreover, any audio received from users is also provided to the content receivers of the users so that users can talk to one another.
Process 400 continues at decision block 418, where a determination is made whether the on-demand content has ended. If the on-demand content has not ended, then process 400 loops to block 408 to continue streaming the on-demand content to the users and to receive updated virtual-reality information for each user. In this way, each user's physical movements are captured and provided at block 408 in real time as the on-demand content is being provided to each user, which enables the content receivers to generate virtual theater environments in real time with the on-demand content and the virtual renderings of the users and their movements. If the on-demand content has ended, process 400 terminates or otherwise returns to a calling process to perform other actions.
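The loop formed by blocks 408 through 418 can be condensed into a few lines. Continuing the illustrative server sketch from earlier (poll_vr_info() and the chunked content stream are invented for this example, not part of the disclosure), the session simply runs until the content stream is exhausted:

```python
def run_shared_session(server, session_id, content_chunks):
    """Illustrative blocks 408-418: exchange virtual-reality updates
    while streaming content, ending when the content ends."""
    for chunk in content_chunks:  # decision block 418: loop until done
        for receiver in server.sessions[session_id]:
            vr_info = receiver.poll_vr_info()                     # block 408
            server.relay_vr_info(session_id, receiver, vr_info)
            receiver.send({"type": "content", "payload": chunk})  # block 416
```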
FIG. 5 illustrates a logical flow diagram generally showing one embodiment of a process performed by a content receiver to generate the virtual theater environment for presenting the shared on-demand content to a user in accordance with embodiments described herein. Process 500 is performed by the content receiver of each user that is to view the shared on-demand content, such as the first user and the other users that accepted the shared on-demand content invitation described above in conjunction with FIG. 4. Accordingly, the user of the particular content receiver performing process 500 is referred to as the local user, and users of other content receivers are referred to as remote users.
Process 500 begins, after a start block, at block 502, where a virtual theater environment is determined for a local user. In some embodiments, the local user is presented with a list of various different theater-type environments from which to choose as the virtual theater environment. For example, the local user could choose a small 10-seat movie theater, a 1000-person Broadway theater, or some other virtualized theater. In some embodiments, the first user in FIG. 4 that requests the shared on-demand content may also select the virtual theater environment so that each other user does not have to make such a selection.
Process 500 proceeds to block 504, where virtual-reality information associated with the local user is determined. As mentioned herein, the virtual-reality information includes look information and movement information associated with the local user, as well as audio received from the user.
The look information identifies, defines, or otherwise characterizes how the particular user would like to be viewed by the other users. In some embodiments, one or more of the cameras are utilized to capture an image of the local user's face, such as before the local user puts on the virtual-reality headset. From this image, a digital rendering of the local user is created, such as by using facial recognition techniques to identify facial characteristics that are used to create a virtual representation of the local user. In other embodiments, the local user selects some other virtual look, rather than the digital rendering of themselves, such as an avatar, character, celebrity impression, or other visual representation of the local user.
The movement information identifies or defines how the particular user is moving in the real world while viewing the on-demand content. In various embodiments, the movement information associated with the local user is determined based on multiple images captured over time by each of a plurality of cameras. The local user's body is identified in the images using image recognition techniques, and differences in the positioning of the user's body between images indicate movement of the local user. In other embodiments, other types of sensors may also be utilized, such as thermal sensors; gyroscopes, accelerometers, rotary sensors, or other sensors attached to the user's body; or other movement detection systems.
Process 500 continues at block 506, where the local user's virtual-reality information is provided to the server. In some embodiments, the content receiver provides the virtual-reality information, or changes thereof, periodically, at predetermined times or intervals, or when there are changes to the virtual-reality information. In this way, the user's movements are tracked in real time and provided to the server, where the server can update the virtual rendering of the local user, which is to be provided to the other content receivers.
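A client-side sketch of block 506 might throttle uploads so the server sees changes promptly without being flooded. The 50 ms interval and the transport callback below are assumptions for the sketch, not values from the disclosure:

```python
import time

class VRInfoUplink:
    """Send the local user's virtual-reality information to the server
    only when it changes, and no more often than a minimum interval."""

    def __init__(self, send, min_interval=0.05):  # roughly 20 updates/second
        self.send = send              # callback that talks to the server
        self.min_interval = min_interval
        self.last_sent = None
        self.last_time = 0.0

    def update(self, vr_info):
        now = time.monotonic()
        if vr_info != self.last_sent and now - self.last_time >= self.min_interval:
            self.send(vr_info)
            self.last_sent = vr_info
            self.last_time = now
```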
Process 500 proceeds next to block 508, where on-demand content is received from the server. As mentioned above, the on-demand content may be received from the server via a streamed audiovisual file.
Process 500 continues next at block 510, where virtual renderings of the remote users are received. As mentioned above, the virtual renderings may be received from the server as metadata to the on-demand content or as a data stream. In some other embodiments, the content receiver may generate the virtual renderings of the remote users, rather than the server at block 410 in FIG. 4. In such an embodiment, the content receiver receives the virtual-reality information associated with the remote users and then generates the virtual renderings of the remote users based on the received virtual-reality information.
Process 500 proceeds to block 512, where the virtual theater environment is generated for the local user. The virtual theater is generated such that the virtual renderings of the remote users are positioned within the virtual theater and the on-demand content is displayed on a virtual screen in the virtual theater. In various embodiments, the virtual theater environment is generated from the perspective of the local user in accordance with the local user's movement information.
As illustrated above in conjunction with FIGS. 3B and 3C, the virtual theater environment consists of a front virtual screen, a plurality of seats, and a stage between the front screen and the seats. The local user's movements are not limited to just looking around the virtual theater. Rather, the local user can move between seats to see the virtual screen from a different perspective or angle. The local user can also walk among the seats, down aisles in the theater, and onto the stage. The local user can act out, perform, or conduct other movements via their virtual rendering standing on the stage or in some other area of the virtual theater environment. Since information associated with the user's movements is provided to the server and then provided to the remote users, the remote users can watch the virtual rendering of the local user move on the stage in front of the virtual screen, or elsewhere in the virtual theater. In this way, the local user and the remote users can interact with each other in more ways than just listening to each other or seeing an avatar of each user. Rather, the users can reenact the on-demand content that they are watching while the on-demand content is playing, as if they were actually in a real theater.
As the local user moves around the virtual theater based on the local user's movement information determined at block 504, the look of the virtual theater also moves and adjusts with the user. In some embodiments, when the local user is "on the stage" and can look up at the seats, the virtual theater environment includes a second virtual screen on the back wall behind the seats on which the on-demand content is shown, similar to what is shown in FIG. 3C. In this way, the local user does not have to miss any of the on-demand content while on stage.
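The two-screen behavior can be reduced to a choice based on the local user's heading. The 90-degree threshold below is an assumption of this sketch; the screen identifiers merely echo reference numerals 304 and 320:

```python
def visible_screen(user_yaw_degrees, front_screen="screen-304",
                   back_screen="screen-320"):
    """Pick the virtual screen that should carry the on-demand content.

    Heading 0 means the user faces the front wall; anything more than
    90 degrees off-axis switches the content to the back-wall screen.
    """
    yaw = abs(user_yaw_degrees) % 360.0
    facing_front = yaw <= 90.0 or yaw >= 270.0
    return front_screen if facing_front else back_screen

print(visible_screen(15))   # screen-304: seated, facing the front wall
print(visible_screen(180))  # screen-320: on stage, facing the seats
```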
Process 500 then proceeds to block 514, where the virtual theater environment is presented to the local user. In some embodiments, where the content receiver is part of a virtual-reality headset, the virtual-reality headset displays the virtual theater environment to the local user. In other embodiments, where the content receiver is separate from the virtual-reality headset, the content receiver transmits the virtual theater environment, such as in a wireless video stream, to the virtual-reality headset for display to the local user.
Process 500 continues at decision block 516, where a determination is made whether the on-demand content has ended. In various embodiments, this determination is based on an end-of-file message received by the content receiver. In other embodiments, the end of the on-demand content may be identified based on a manual command provided by the local user, such as activating a stop or pause button or changing the television channel away from the on-demand content. If the on-demand content has ended, process 500 terminates or otherwise returns to a calling process to perform other actions. If the on-demand content has not ended, process 500 loops to block 504 to continue to determine virtual-reality information of the local user, such as the movement information; receive the on-demand content; receive the virtual renderings of remote users and their movements; and generate the virtual theater environment for presentation to the local user.
FIG. 6 shows a system diagram that describes one implementation of computing systems for implementing embodiments described herein. System 600 includes content receiver 122, content distributor 102, content provider 104, information provider 106, and on-demand content server 118. System 600 also includes virtual-reality headset 124 and cameras 224.
Content receiver 122 receives content and virtual-reality information for other users from content distributor 102 or on-demand content server 118 and generates a virtual theater environment for presentation to a user via virtual-reality headset 124, as described herein. In various embodiments, the content receiver 122 analyzes image data received from cameras 224 to generate virtual-reality information for the local user of the content receiver 122 and provides it to the content distributor 102 or the on-demand content server 118 for providing to the other users, as described herein.
One or more general-purpose or special-purpose computing systems may be used to implement content receiver 122. Accordingly, various embodiments described herein may be implemented in software, hardware, firmware, or in some combination thereof. As mentioned above, the content receiver 122 and the virtual-reality headset 124 may be separate devices, or they may be incorporated into a single device. Similarly, the content distributor 102 and the on-demand content server 118 may be separate devices, or they may be incorporated into a single device.
Content receiver 122 may include memory 630, one or more central processing units (CPUs) 644, display interface 646, other I/O interfaces 648, other computer-readable media 650, and network connections 652.
Memory 630 may include one or more various types of non-volatile and/or volatile storage technologies. Examples of memory 630 may include, but are not limited to, flash memory, hard disk drives, optical drives, solid-state drives, various types of random access memory (RAM), various types of read-only memory (ROM), other computer-readable storage media (also referred to as processor-readable storage media), or the like, or any combination thereof. Memory 630 may be utilized to store information, including computer-readable instructions that are utilized by CPU 644 to perform actions, including embodiments described herein.
Memory 630 may have stored thereon virtual-reality system 632, which includes user movement module 634 and virtual theater generator module 636. The user movement module 634 may employ embodiments described herein to utilize image data captured by cameras 224 to determine and track body movement of the user of the virtual-reality headset 124. Virtual theater generator 636 employs embodiments described herein to utilize on-demand content and virtual renderings of other users, or virtual-reality information of other users, to generate the virtual theater environment for presentation on the virtual-reality headset 124 to the user of the system 600.
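As a final illustration, the division of labor around module 636 could look like the following; the class name, method names, and the composed frame structure are invented for this sketch:

```python
class VirtualTheaterGenerator:
    """Hypothetical counterpart to virtual theater generator module 636."""

    def __init__(self, environment="movie-theater"):
        self.environment = environment  # chosen at block 502

    def compose_frame(self, content_frame, remote_renderings, local_view):
        # Combine the on-demand video frame, the remote users' virtual
        # renderings, and the local user's current perspective into one
        # scene description for the headset to display.
        return {
            "environment": self.environment,
            "screen": content_frame,
            "characters": remote_renderings,
            "camera": local_view,
        }
```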
Memory 630 may also store other programs 640 and other data 642. For example, other data 642 may include predetermined virtual renderings of one or more users or other information.
Display interface 646 is configured to provide content to a display device, such as virtual-reality headset 124. Network connections 652 are configured to communicate with other computing devices, such as content distributor 102 or on-demand content server 118, via communication network 110. Other I/O interfaces 648 may include a keyboard, audio interfaces, other video interfaces, or the like. Other computer-readable media 650 may include other types of stationary or removable computer-readable media, such as removable flash drives, external hard drives, or the like.
The virtual-reality headset 124 includes a display device for presenting the virtual theater environment to the user. In various embodiments, the virtual-reality headset 124 includes other computing components similar to those of content receiver 122 (e.g., a memory, processor, I/O interfaces, etc.), which are not illustrated here for convenience. Moreover, in some embodiments, the virtual-reality headset 124 includes the components and functionality of the content receiver 122.
Content distributor 102, content provider 104, information provider 106, on-demand content server 118, and content receiver 122 may communicate via communication network 110. The content distributor 102, content provider 104, information provider 106, and on-demand content server 118 include processors, memory, network connections, and other computing components that enable these server computer devices to perform actions as described herein, but those components are not illustrated here for convenience.
The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/599,346 US10289193B2 (en) | 2017-05-18 | 2017-05-18 | Use of virtual-reality systems to provide an immersive on-demand content experience |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/599,346 US10289193B2 (en) | 2017-05-18 | 2017-05-18 | Use of virtual-reality systems to provide an immersive on-demand content experience |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180335832A1 | 2018-11-22 |
US10289193B2 US10289193B2 (en) | 2019-05-14 |
Family
ID=64269643
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/599,346 Active 2037-07-01 US10289193B2 (en) | 2017-05-18 | 2017-05-18 | Use of virtual-reality systems to provide an immersive on-demand content experience |
Country Status (1)
Country | Link |
---|---|
US (1) | US10289193B2 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018144890A1 (en) * | 2017-02-03 | 2018-08-09 | Warner Bros. Entertainment, Inc. | Rendering extended video in virtual reality |
US10418813B1 (en) | 2017-04-01 | 2019-09-17 | Smart Power Partners LLC | Modular power adapters and methods of implementing modular power adapters |
GB2596588B (en) * | 2020-07-03 | 2022-10-26 | Sony Interactive Entertainment Inc | Data processing apparatus and method |
US12282604B2 (en) | 2022-08-31 | 2025-04-22 | Snap Inc. | Touch-based augmented reality experience |
US12322052B2 (en) | 2022-08-31 | 2025-06-03 | Snap Inc. | Mixing and matching volumetric contents for new augmented reality experiences |
US12417593B2 (en) | 2022-08-31 | 2025-09-16 | Snap Inc. | Generating immersive augmented reality experiences from existing images and videos |
US12267482B2 (en) | 2022-08-31 | 2025-04-01 | Snap Inc. | Controlling and editing presentation of volumetric content |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3890781B2 (en) | 1997-10-30 | 2007-03-07 | 株式会社セガ | Computer-readable storage medium, game device, and game image display method |
US6409599B1 (en) | 1999-07-19 | 2002-06-25 | Ham On Rye Technologies, Inc. | Interactive virtual reality performance theater entertainment system |
US6795972B2 (en) | 2001-06-29 | 2004-09-21 | Scientific-Atlanta, Inc. | Subscriber television system user interface with a virtual reality media space |
US20080307473A1 (en) | 2007-06-06 | 2008-12-11 | Virtual Worlds Ppv, Llc | Virtual worlds pay-per-view |
US8560387B2 (en) | 2007-06-07 | 2013-10-15 | Qurio Holdings, Inc. | Systems and methods of providing collaborative consumer-controlled advertising environments |
US8130219B2 (en) | 2007-06-11 | 2012-03-06 | Autodesk, Inc. | Metadata for avatar generation in virtual environments |
US8869197B2 (en) | 2008-10-01 | 2014-10-21 | At&T Intellectual Property I, Lp | Presentation of an avatar in a media communication system |
US8848024B2 (en) | 2011-03-08 | 2014-09-30 | CSC Holdings, LLC | Virtual communal television viewing |
US9268406B2 (en) * | 2011-09-30 | 2016-02-23 | Microsoft Technology Licensing, Llc | Virtual spectator experience with a personal audio/visual apparatus |
US9823738B2 (en) | 2014-07-31 | 2017-11-21 | Echostar Technologies L.L.C. | Virtual entertainment environment and methods of creating the same |
US9396588B1 (en) | 2015-06-30 | 2016-07-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Virtual reality virtual theater system |
US10657701B2 (en) * | 2016-06-30 | 2020-05-19 | Sony Interactive Entertainment Inc. | Dynamic entering and leaving of virtual-reality environments navigated by different HMD users |
2017-05-18: US application US15/599,346 filed; granted as US10289193B2 (status: Active)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10559060B2 (en) * | 2017-01-19 | 2020-02-11 | Korea Advanced Institute Of Science And Technology | Method and apparatus for real time image distortion compensation in immersive theater system |
US11812251B2 (en) | 2019-10-18 | 2023-11-07 | Msg Entertainment Group, Llc | Synthesizing audio of a venue |
US12058510B2 (en) * | 2019-10-18 | 2024-08-06 | Sphere Entertainment Group, Llc | Mapping audio to visual images on a display device having a curved screen |
US12101623B2 (en) | 2019-10-18 | 2024-09-24 | Sphere Entertainment Group, Llc | Synthesizing audio of a venue |
US20230256332A1 (en) * | 2022-02-16 | 2023-08-17 | Sony Interactive Entertainment Inc. | Massively multiplayer local co-op and competitive gaming |
CN115624740A (en) * | 2022-09-30 | 2023-01-20 | 小派科技(上海)有限责任公司 | Virtual reality equipment, control method, device and system thereof, and interaction system |
Also Published As
Publication number | Publication date |
---|---|
US10289193B2 (en) | 2019-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10289193B2 (en) | Use of virtual-reality systems to provide an immersive on-demand content experience | |
JP7594642B2 (en) | Simulating local experiences by live streaming a shareable perspective of a live event | |
US11436803B2 (en) | Insertion of VR spectator in live video of a live event | |
US11729435B2 (en) | Content distribution server, content distribution method and content distribution program | |
JP6759451B2 (en) | Systems and methods to reduce the impact of human tracking device occlusion | |
CN106803966B (en) | A kind of multi-person network live broadcast method, device and electronic equipment thereof | |
CN109891899B (en) | Video content switching and synchronization system and method for switching between multiple video formats | |
CN105430455B (en) | information presentation method and system | |
US20120060101A1 (en) | Method and system for an interactive event experience | |
WO2016009865A1 (en) | Information processing device and method, display control device and method, reproduction device and method, programs, and information processing system | |
CN109416931A (en) | Device and method for eye tracking | |
CN110730340B (en) | Virtual audience display method, system and storage medium based on lens transformation | |
US12323642B2 (en) | Computer program, server device, terminal device, and method | |
CN112313962B (en) | Content distribution server, content distribution system, content distribution method, and program | |
KR102542070B1 (en) | System and method for providing virtual reality contents based on iptv network | |
JP7591163B2 (en) | Information processing system and information processing method | |
KR101915065B1 (en) | Live streaming system for virtual reality contents and operating method thereof | |
KR20230161804A (en) | Metaverse cloud streaming system and method using avatar | |
JP7148827B2 (en) | Information processing device, video distribution method, and video distribution program | |
Puopolo et al. | The future of television: Sweeping change at breakneck speed |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: ECHOSTAR TECHNOLOGIES L.L.C., COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARDY, CHRISTOFER;INAMA, JOHN;REEL/FRAME:042696/0826 Effective date: 20170517 |
 | AS | Assignment | Owner name: DISH TECHNOLOGIES L.L.C., COLORADO Free format text: CHANGE OF NAME;ASSIGNOR:ECHOSTAR TECHNOLOGIES L.L.C.;REEL/FRAME:045518/0495 Effective date: 20180202 |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | AS | Assignment | Owner name: U.S. BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, MINNESOTA Free format text: SECURITY INTEREST;ASSIGNORS:DISH BROADCASTING CORPORATION;DISH NETWORK L.L.C.;DISH TECHNOLOGIES L.L.C.;REEL/FRAME:058295/0293 Effective date: 20211126 |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |