
WO2018184735A1 - Concept for virtual participation in live events - Google Patents

Concept for virtual participation in live events

Info

Publication number
WO2018184735A1
WO2018184735A1 (PCT/EP2018/025091)
Authority
WO
WIPO (PCT)
Prior art keywords
user
signal
terminal
output
video material
Prior art date
Application number
PCT/EP2018/025091
Other languages
German (de)
English (en)
Inventor
Karl-Heinz Charly GRAF
Christian Weissig
Arne FINN
Rodrigo DIAZ
Original Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Fans United Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., Fans United Gmbh filed Critical Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Publication of WO2018184735A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41415Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6587Control parameters, e.g. trick play commands, viewpoint selection

Definitions

  • the present invention relates to a concept for virtual participation in live events.
  • Live broadcasts of live events exist in a variety of formats.
  • Typical live broadcasts on radio and television do not allow the recipients of the live broadcasts to virtually participate in the particular live event, in the sense that they could interact with anyone on site.
  • There are live events in which viewers are enabled to share their euphoria, opinions or the like via text messages displayed live on scoreboards to the persons on site, but even this possibility does not give the remote viewers the impression or feeling of being present at the live event.
  • The object of the present invention is to provide such a concept, namely a concept for virtual participation of a user in a live event, so that the user gets the chance to experience the live event more authentically.
  • A more authentic virtual participation of a user in a live event is achieved when the live event is recorded by means of a camera device from a predetermined position within a spectator area of the live event, while at the same time the user is allowed to output, via his or her terminal, playback signals to one or more viewers in the spectator area.
  • The viewer's perspective provided by streaming at least a portion of the recorded video material gives the user the feeling of being directly "present". Having multiple viewers visible in the spectator area results in a more authentic experience of the live event for the user.
  • The authenticity increases when the live event is recorded from perspectives that are characterized not merely by spectacular telephoto shots, locations otherwise unavailable to on-site viewers, or aerial recordings, but from perspectives in which on-site viewers make up a non-vanishing share of the recorded visual field.
  • A more authentic virtual participation of a user in a live event is made possible by a device that uses a camera device for recording the environment of the camera device and an output interface for streaming at least a portion of the video material obtained by the camera device, and that is additionally provided with an information output device which outputs a reproduction signal, depending on an information signal received from the user's terminal via an input interface of the device, to one or more persons in the environment of the device.
  • The output of the reproduction signal may be such that, for the one or more persons in the environment, the reproduction signal and the camera position appear to originate from the same direction, at least in projection along one spatial direction, such as in vertical projection.
  • A field of view of the camera device covers more than 180° in a predetermined plane, such as the horizontal plane, and the playback signal is output so that it seems to originate from the camera position in this plane, or from a position on a vertical to that plane through the camera position.
  • The coincidence between the camera position and the perceived position of the output of the user-initiated playback signal increases the authenticity of the virtual participation in the live event for the user, because the on-site viewers who respond to the user's playback signal will very likely turn toward the position corresponding to the perceived output location of the playback signal, and this position coincides, at least from the perspective of the external user, with his own virtual position in the live event, namely the camera position.
  • The user thus gets the feeling that his actions, i.e. the playback signal, and the reactions of the persons, such as spectators, relate to each other locally, and the same applies to the local persons themselves. This increases the authenticity.
  • FIG. 1 shows a schematic block diagram of a system for a virtual participation of a user in a live event, the system comprising a device for on-site use as well as a device implementable in a user's terminal according to an embodiment
  • Fig. 2 is a block diagram of an exemplary external user terminal in which the user-side device is implementable
  • FIG. 3 is a schematic three-dimensional representation of a construction of an on-site device comprising the camera device and the information output device of the on-site device, which are installed in a columnar form in this embodiment;
  • Fig. 4 schematically shows an image of the field of view taken by the camera device in the case of full audience occupation when the camera device is positioned in the viewer area;
  • Fig. 5 is a schematic view showing a viewing direction-dependent control of the reproducing direction of the reproduced signal of the user according to an embodiment
  • Fig. 6 is a schematic representation of an alternative viewing-direction-dependent output of the playback signal according to an embodiment;
  • FIG. 7 is a schematic block diagram of a system for virtual participation of a user in a live event, such as a football game, with an exemplary representation of an embedding in infrastructures such as social networks.
  • The system of FIG. 1 is indicated generally by the reference numeral 10 and includes an on-site device 12 to be operated at the live event, and a device 18 installable in a terminal 14 of the user 16.
  • The user 16 is thus a person who does not need to be at the location 20 where the live event takes place; he or she can be somewhere else.
  • the terminal device 14 can be, for example, a computer, a mobile telephone or another electronic device of the user 16.
  • Fig. 2 shows an example of such a terminal 14 in more detail. It comprises a processor 20, a screen 22, one or more loudspeakers 24 and a user input interface 26.
  • the user input interface 26 comprises for example a microphone 26a, a camera 26c, a touch function 26f of the screen 22, a keyboard 26b, a mouse 26g, a motion detection device 26d, a gesture interface 26e or combinations thereof.
  • All units could be housed in a head-up display having the screen functionality of the display 22, the speaker functionality of the speaker 24, and motion detection functionality as the user input interface 26, as well as the processor 20.
  • the processor 20 is connected to all the units 22, 24 and 26.
  • the device 18 belonging to the system 10 could be, for example, an application running on the processor 20 or a computer program running on the processor 20.
  • The terminal 14 could be a proprietary device dedicated to the system 10, in which case the device 18 would be permanently installed in the terminal 14, so that, as it were, the terminal 14 itself would be part of the system 10.
  • The installability can thus refer to installation in software or electrically in hardware.
  • the user 16 is not himself at the location 20. He may rather be at home or elsewhere beyond the location of the event.
  • the location 20 may be, for example, a football stadium, another stadium, a theater, an opera house or the like.
  • The system 10 is able to ensure that the user 16 can at least virtually participate in the live event taking place at the location 20, such as a football game, another sporting event, a play, an opera or the like.
  • The device 12 is intended to be operated on-site at the live event. More precisely, the device 12 comprises a camera device 28, an output interface 30, an input interface 32 and an information output device 34, the camera device 28 and the information output device 34 being intended to be positioned in situ. It should be noted that the device may also include a controller 80 with which, and via which, for example, all components of the device 12 are connected to one another. In part, functionalities of the components of the device 12, e.g. the interfaces 30 and 32 and the optional controller 80, may be implemented on a processor, for example as one or more programs.
  • The positioning of the camera device 28 is performed such that the camera device 28, adapted to record its surroundings, makes the recording of the live event from a particular position 38 within a viewer area 36 of the live event location 20.
  • The output interface 30 streams at least a portion of the video material captured by the camera device 28 to the terminal 14 of the user 16, or to the device 18, to give the user 16 a visual impression of the captured environment.
  • the user 16 gets a fairly authentic impression, namely, as if he himself is among the spectators 40.
  • The camera device 28 generates, as a result of its recording, video material 42 that is received by the output interface 30.
  • The camera apparatus 28 may include a plurality of cameras or a single camera.
  • The video material may be a panoramic video formed by composing the videos of different cameras of the camera apparatus 28; a composite video of another type which, for example, covers no panoramic field of view; or a set of multiple videos from cameras of the camera device 28 that look in different directions, thus recording different parts of the environment of the camera device 28 and together covering a field of view - with mutual overlap, abutting one another, or with gaps.
  • The camera device 28 thus has a specific field of view, which it captures - with or without gaps - with one or more cameras, and the video material 42 represents this field of view.
  • The field of view is representatively shown at 44.
  • the output interface 30 may stream to the user 16 all of the video material 42 relative to the entire field of view 44, or only a portion 46 thereof, namely a portion of the field of view 44 having a limited spatial extent.
  • the streaming itself takes place, for example, via the Internet 49, for example by means of HTTP.
  • One possibility would be, for example, that the output interface 30 provides the video material 42 for download by the device 18, for example by means of a suitable streaming or download protocol, such as streaming or download using DASH.
  • The device 18 determines the cutout 46 to which the streaming is restricted: the cutout 46 is selected such that it contains a viewing section 48 which is displayed directly to the user 16 on the screen 22 of the terminal 14.
  • Alternatively, it would be possible for the output interface 30 to transmit the video material 42 to the terminal 14 as a broadcast or multicast signal, i.e. to the terminal 14 of the user 16 in one operation together with the terminals of other users, with the respective terminals, such as the terminal 14, then extracting and displaying the viewing section 48 from the respective broadcast or multicast signal.
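The viewport extraction just described can be sketched as follows. This is an illustrative Python example, not part of the patent; the function name and the frame representation as nested lists of pixels are assumptions, and a real player would operate on decoded video frames:

```python
def crop_viewport(frame, yaw_deg, hfov_deg=90.0):
    """Extract the viewing section (cf. section 48) from one equirectangular
    360-degree panorama frame, given the user's virtual viewing direction."""
    width = len(frame[0])
    # Horizontal pixel position corresponding to the viewing direction.
    center = int((yaw_deg % 360.0) / 360.0 * width)
    half = int(hfov_deg / 360.0 * width / 2)
    # Column indices, wrapping around the 360-degree seam of the panorama.
    cols = [(center + dx) % width for dx in range(-half, half)]
    return [[row[c] for c in cols] for row in frame]
```

In the multicast case this crop would run on each terminal; in the unicast case the server could apply the same logic before streaming the cutout 46.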
  • The video material 42 streamed from the output interface 30 to the terminal 14, or the section 46 thereof, is illustrated by the arrow 50 in Fig. 1. This direction of transmission is not the only one. Rather, the input interface 32 is adapted to receive an information signal 52 from the terminal 14. The transmission of the signal 52, like the transmission of the video material 50, could take place via the Internet 49.
  • The information signal 52 is a signal generated by the device 18, depending on user inputs which the user 16 has made via the user input interface 26, as previously mentioned; it contains, for example, a message to one or more of the viewers 40, possibly in the form of a text message, a video and/or a sound recording.
  • a reproduction signal may be coded, e.g. a text message, an audio message or a video message or a combination of the same.
  • the playback signal is then obtained, for example, by decoding.
  • the information signal 52 could also encode gestures that the user 16 has entered.
  • The user 16 may have used, as user input interface 26 for generating the information signal 52 in the device 18, a camera of the terminal 14, a keyboard of the terminal 14 and/or a microphone of the terminal 14.
  • The input interface 32 receives the information signal 52 from the terminal 14 or the device 18, respectively, and the information output device 34 outputs the reproduction signal, which depends on the information signal 52, to one or more persons in the vicinity of the camera device, i.e. to one or more viewers 40.
  • FIG. 3 illustrates that the information output device 34 and the camera device 28 are preferably arranged in a predetermined manner relative to each other, such as mechanically fixed relative to each other, disposed in a common housing, or supported by a common frame.
  • The camera device 28 may be configured as a camera system with a plurality of cameras and a frame of mirrors 60, the mirrors 60 being arranged, for example, like the lateral surface areas of a pyramid; via the mirrors, the cameras of the camera device 28 look away from an axis of symmetry 62 of the mirror frame 60, as indicated in Fig. 3 by way of example with a dashed line 64.
  • In this case the camera device 28 would be designed as a panoramic camera system able to cover, as field of view 44, a solid angle range covering 360° in the plane perpendicular to the axis of symmetry 62 of the structure 60 and, perpendicular thereto, an opening angle of greater than 30°, such as 30° to 150°, or an opening angle in a range of 45° to 120°.
  • The camera device 28 and the information output device 34 are arranged at the same location at least along the axial direction 62, i.e. the projections of the camera device 28 and the information output device 34 along the axis 62 coincide with one another.
  • camera device 28 and information output device 34 are mounted on a common axis 62.
  • the information output device 34 may include a visual display 66 and / or a speaker array 68. Both the display 66 and the speaker array 68 are configured for omnidirectional display and audio reproduction, respectively.
  • A display area of the display 66 is cylindrical or polygonal to allow display to all sides from the axis 62.
  • The speaker array 68 is an array of speakers whose radiating directions point in different, discrete directions away from the axis 62.
  • The loudspeakers of the array 68 are arranged, for example, along a circle or along a polygon about the axis 62, and the display surface of the display 66 could likewise be arranged symmetrically about the axis 62.
  • It could, for example, form the lateral surface of a cylinder, prism or pyramid, wherein not necessarily the complete lateral surface must be designed as a display surface; for example, a predetermined part, such as less than 40% or less than 20%, of the cylinder, prism or pyramid surface may not belong to the display but, for example, to the housing. It is sufficient if the visual display 66 and/or the loudspeaker array 68 can emit in discrete directions that, projected along the axis 62, span the field of view 44. Omnidirectionality is not required; for the information output device 34 it suffices that playback is possible in directions within the spanned field of view 44.
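The selection of discrete radiating directions around the axis can be sketched as follows. This is an illustrative Python example, not part of the patent; the function name, the evenly spaced circular arrangement and the beam-width parameter are assumptions:

```python
def select_speakers(target_deg, num_speakers=12, beam_deg=30.0):
    """Pick which speakers of a circular array (cf. array 68) to drive so
    that the playback appears to come from the target azimuth."""
    spacing = 360.0 / num_speakers
    active = []
    for i in range(num_speakers):
        speaker_deg = i * spacing
        # Smallest angular distance between speaker and target direction.
        diff = abs((speaker_deg - target_deg + 180.0) % 360.0 - 180.0)
        if diff <= beam_deg / 2:
            active.append(i)
    return active
```

The same index computation could address segments of a polygonal display surface instead of loudspeakers.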
  • In Fig. 3 it is shown by way of example that the display 66 is located above the camera device 28 while the speaker array 68 is located below the camera device 28, but the mutual position along the axis 62 could also be different.
  • The aligned arrangement along the axis 62 ensures that, from the environment captured by the camera device 28, the output of the playback signal by the information output device 34 is perceived as coming from a position which, at least in projection along the spatial direction 62, matches the camera position of the camera device 28. More specifically, regardless of where a person is in the field of view 44 of the camera device 28, that person will perceive the playback signal as coming from a direction intersecting the axis 62, i.e. from a position at least above or below the camera device 28, or even from the same direction, i.e. the direction in which the camera device 28 is spatially positioned relative to the person.
  • The assembly 70 housing the camera device 28 and the information output device 34 is designed as a frame or housing and has a substantially columnar shape; more particularly, it has a standing surface 72 configured as a foot or anchorage of the assembly 70, in order to set up the assembly 70 on the floor so that the axis 62, for example, corresponds to the vertical.
  • the assembly 70 may be configured to be firmly anchored to the ground, such as in-set or the like.
  • The structure 70 may be designed so that it can be reversibly placed and fixed on the base 72, e.g. by means of screws or the like.
  • the standing surface 72 does not necessarily have to be present, so that the structure 70 could also be fixed to a ceiling or wire ropes, for example.
  • FIG. 3 further shows that optionally further components of the system 10 could be accommodated in the structure 70.
  • An audio capture device 74, such as a microphone array or a single microphone, could be housed in the assembly 70 to pick up the audio scene originating from the field of view 44 of the camera device 28 at a position substantially on the axis 62.
  • the output interface 30 would thus receive an audio presentation from the audio capture device 74, namely the result of the audio capture of the environment of the camera device 28, and would send an audio signal 76 derived from the audio presentation to the terminal 14 and the device 18, respectively.
  • the audio signal 76 could be, for example, a multi-channel or multi-object signal, such as an MPEG surround or SAOC signal.
  • The audio signal 76 may be configured to represent the spatial audio scene arriving from the field of view 44 toward the camera device 28.
  • the device 18 could play the audio signal 76 to the user 16 via the one or more speakers 24.
  • It would be possible for the device 18 to process the audio signal 76 so that the audio scene is adapted to the direction in which the viewing section 48 lies.
  • The audio signal 76, like the video material 50, could be transmitted by broadcast or multicast to a plurality of users.
  • the audio signal 76 as well as the video material 50 could be downloaded from the device 18 by means of a streaming protocol.
  • Alternatively, the audio signal 76 could already be selectively downloaded by the device 18, i.e. such that the audio signal 76 already represents the audio scene adapted to the direction 48.
  • The audio signal 76 could be a stereo signal corresponding to the audio scene originating from the field of view 44, in which the left and right channels are aligned relative to the direction of the viewing section 48.
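Aligning a stereo channel pair to the current viewing direction can be sketched as a simple constant-power pan. This is an illustrative Python example, not part of the patent; the function name and the clamping of rear sources to the nearest side are assumptions:

```python
import math

def pan_gains(source_deg, view_deg):
    """Constant-power stereo gains for an audio source at azimuth source_deg
    when the user looks toward view_deg, so that left/right follow section 48."""
    # Source angle relative to the current viewing direction, in [-180, 180).
    rel = (source_deg - view_deg + 180.0) % 360.0 - 180.0
    # Map [-90, 90] degrees to a pan position in [0, 1]; clamp rear sources.
    pan = (max(-90.0, min(90.0, rel)) + 90.0) / 180.0
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right
```

A source straight ahead thus receives equal gains, while a source 90° to the right is panned fully to the right channel.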
  • FIG. 3 further shows that the assembly 70, as another component of the device 12, could include a gesture input interface 78 comprising, for example, a set of cameras looking in discrete, different directions away from the axis 62 and capable of recognizing gestures that originate from persons in the field of view 44.
  • FIG. 3 shows, by way of example, that the assembly 70 may also house interfaces to terminals of persons in the vicinity of the camera device, such as viewers 40, e.g. wireless interfaces such as a Wi-Fi interface, a Bluetooth interface or the like. This would also allow the persons 40 in the vicinity of the camera device or assembly 70 to enter commands, such as the just-mentioned command to set up or end a videoconference with a particular user 16.
  • the camera device 28 is positioned at a certain height h above the ground, such as, for example, with respect to the ground level of the audience area.
  • the height h is determined, for example, by the distance of the camera device 28 from the standing surface 72.
  • The height h is, for example, between 1 and 2.5 m (inclusive), since then, for the user 16, the field of view 44 appears to be viewed from a height which corresponds to the usual viewing heights or perspectives of persons or on-site spectators 40.
  • In the case of full audience occupation, the share of the field of view 44 taken up by spectators, represented in Fig. 4 by the crosshatched area 84, amounts to more than 20% of the area of the field of view 44; in Fig. 4, more than 50% is shown. This gives a very authentic feeling for the user 16.
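The spectator share of the field of view can be computed from a per-pixel segmentation. This is an illustrative Python example, not part of the patent; the function name and the boolean-mask input (True = pixel shows a spectator, cf. the crosshatched area 84) are assumptions:

```python
def spectator_share(mask):
    """Fraction of the recorded field of view occupied by spectators,
    given a per-pixel boolean mask for one frame."""
    total = sum(len(row) for row in mask)
    occupied = sum(sum(1 for px in row if px) for row in mask)
    return occupied / total
```

A camera position meeting the criterion above would yield a value greater than 0.2.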
  • FIG. 5 details the possibility already mentioned above that the information output device 34 can be configured such that the reproduction of the reproduction signal can be output in a directionally selective manner, ie directionally selective in one direction within the field of view 44 of the camera device 28.
  • The information output device 34 would be capable, for example, of outputting the playback signal in different directions away from the axis 62.
  • the output could include displaying the playback signal optically in the particular direction, or outputting the playback signal via a speaker in the appropriate direction.
  • The output takes place in a direction corresponding to the current viewing section 48 of the field of view 44 displayed to the user 16 via the terminal 14, for example via its screen 22.
  • Fig. 5 shows, by way of example, a hatched area 86 of the information output device 34 which is activated in order to output the playback signal of the user 16 into the environment of the camera device.
  • The device 18 is designed, for example, to determine, via the user input interface 26 and depending on the user 16, a virtual viewing direction 88 with which the user 16 looks into the scene in the field of view 44.
  • The user input interface 26 which the device 18 uses for this purpose is, for example, a motion sensor of the terminal 14.
  • The device 18 generates a direction signal 90 depending on the virtual line of sight 88 and sends it to the device 12, where it is received by the input interface 32, for example. The transmission can again take place via the Internet 49.
  • the direction signal 90 encodes, for example, the line of sight 88. It would also be possible for the direction signal 90 to be the signal that defines the section 46 of the video material sent by the device 12 to the device 18 or the terminal 14 of the user 16.
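Encoding the virtual line of sight as a direction signal can be sketched as follows. This is an illustrative Python example, not part of the patent; the function name, the JSON payload format and the field names are assumptions:

```python
import json

def make_direction_signal(yaw_deg, pitch_deg, user_id):
    """Encode the virtual line of sight (cf. direction signal 90) as a
    compact JSON message for transmission to the input interface 32."""
    payload = {
        "user": user_id,
        "yaw": round(yaw_deg % 360.0, 1),                    # horizontal direction
        "pitch": round(max(-90.0, min(90.0, pitch_deg)), 1),  # clamped elevation
    }
    return json.dumps(payload)
```

The same message could simultaneously serve as the request that defines the section 46 of the video material to be streamed.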
  • The direction-selective output of the playback signal in a direction corresponding to the virtual line of sight 88 of the user 16 has the advantage that persons or viewers 40 who are not in the viewing section 48 of the user 16 do not perceive the playback signal of the user 16, or perceive it less strongly, and therefore cannot wrongly relate it to themselves, whereby the authenticity is improved.
  • This approach allows a larger number of users to interact simultaneously with the persons acting as spectators 40 in the field of view 44 of the camera.
  • The individual communications between the individual users on the one hand and the viewers 40 on the other hand can be "distributed" along the horizontal or transverse extension direction of the field of view 44, i.e. projected along the direction 62.
  • A selection process may be carried out to decide which user 16 can output his playback signal in the appropriate direction and which cannot. This selection process might be automatic, such as based on certain priority rights of the users 16 or other characteristics associated with the users 16, such as the number of likes in social networks or the like. Additionally or alternatively, a selection may be made manually by the operator of the system 10.
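Such an automatic selection among competing users can be sketched as a simple ranking. This is an illustrative Python example, not part of the patent; the function name, the candidate record fields and the tie-breaking by likes are assumptions:

```python
def select_users(candidates, max_active=4):
    """Rank competing users by priority rights and social-network likes and
    grant the top-ranked ones the right to output their playback signal."""
    # Higher priority first; among equal priorities, more likes first.
    ranked = sorted(candidates,
                    key=lambda u: (u["priority"], u["likes"]),
                    reverse=True)
    return [u["name"] for u in ranked[:max_active]]
```

A manual operator selection could then be applied as a final stage on top of this ranking.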
  • Fig. 5 shows that the playback signal originating from a particular user 16 can be output via the information output device 34 in a certain direction.
  • The directional selectivity is reflected, for example, in that the playback signal is output via one or more speakers of the information output device 34 which radiate substantially in the direction 88.
  • the playback signal is displayed in a portion of a display of the information output device 34 which faces in the direction 88 or is most visible from the direction 88.
  • In the audio reproduction of the reproduction signal, it is of course possible to superimpose a plurality of reproduction signals on each other.
  • a local distribution of the display area could be used instead of an overlay.
  • Parts or all of the available display area of the information output device 34 could be split in order to be assigned to a subset of one or more users 16 from all users 16 associated with the device 12, the assignment also being made such that users 16 are assigned to those parts of the display area which face the viewing direction 88 of the respective user.
  • The playback signal displayed in such a portion 86 of a display could be a video recorded via a camera of the terminal 14 functioning as user input interface 26, or a text message which the user 16 has entered, for example, via a microphone with speech recognition of the terminal 14 or via a keyboard of the terminal 14 as user input device 26. Also in the audio playback of the playback signals of the users 16, a selection can take place in order to limit the number of mutually superimposed audio playback signals. The latter playback signals may have been input via a microphone of the terminal 14 as user input device 26.
  • The information output device 34 could comprise speakers distributed in the spectator area, and instead of a directionally selective audio output, a location-selective audio output would take place in the sense that those speakers (or that speaker) would be activated for reproducing the reproduction signal of a specific user 16 which, viewed from the camera device 28, lie in the viewing direction 88 or in the viewing section 48 at which the user 16 is currently looking.
  • In this case the playback signal need not come from the assembly 70.
  • For the sake of completeness, the last-mentioned situation is again indicated in Fig. 6, where, among the speakers of a loudspeaker array comprised by the information output device 34, that part which lies in the region at which the virtual viewing direction 88 points, or in the region which, projected along the direction 62, overlaps with the viewing section 48, is supplied with the playback signal originating from the user 16.
  • these loudspeakers are indicated by hatching.
  • Instead of speaker channels, other types of playback locations could also be distributed, such as screens at each audience seat or the like.
  • the above embodiments can be used, for example, in the live broadcast of football matches. This will be illustrated in the following with reference to FIG. 7.
  • the embodiments described above are based on the typical perception habits and rites of a fan during a stadium visit.
  • the above embodiments provide an integrated camera and communication system 10.
  • A camera position is selected that lies in a viewer area 36. It could be a specially selected fan block in which viewers who belong to a particular fan community 100 are located. Users, such as the user 16, are given the opportunity to participate in the fan community 100 via the communication and interaction options described above.
  • the camera device 28 can be designed as a 360 ° panoramic recording system.
  • Voice communication 76/52, text communication 52, video communication 52 and / or gesture control 102 may be used.
  • The external fan 16, who could also be referred to as a satellite fan, has the option of using the device 18, which is for example an app on his terminal 14, to navigate in the video material generated by the camera device 28, such as a 360° video, possibly supplemented with a live audio stream 76, from a stadium position in the middle of the fans 40. He sees the surrounding fans as shown in Fig. 4.
  • The device 18 could simultaneously provide access to a chat platform 106 with text, voice and image interaction capabilities.
  • Chat comments, voice messages, pictures or short video clips could be sent by a fan 16 to the chat platform 106 and optionally also to one of the displays 66, e.g. the display 66 in the setup 70 shown in FIG. 3.
  • The information signals 52 from various users 16 that define playback signals could be subjected to a selection as described above. The selection could take place in a two-stage procedure.
  • An automatic control 108 could filter out unwanted comments, for example by keyword search; these are then discarded.
  • The number of likes or other parameters 109 from a social network associated with the users of the competing playback signals may also be used for such a selection 108.
  • All posts can be rated with likes.
  • This rating could also be done by gestures that are captured via the gesture interface 78 mentioned above. These ratings can then be used as decision support for the automatic selection, but also for the manual selection of subsequent playback signals from external users.
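The two-stage selection (automatic keyword filtering followed by like-based ranking) could be sketched, for example, as follows; the message format, the banned-word list and the function name are hypothetical stand-ins for the automatic control 108 and the parameter 109:

```python
def select_playback_signals(messages, banned_words, top_n=3):
    """Two-stage selection sketch: stage 1 discards messages containing
    banned keywords, stage 2 ranks the remainder by like count and
    keeps the top_n candidates for playback."""
    stage1 = [m for m in messages
              if not any(w in m["text"].lower() for w in banned_words)]
    stage2 = sorted(stage1, key=lambda m: m["likes"], reverse=True)
    return stage2[:top_n]

msgs = [
    {"text": "Great goal!", "likes": 12},
    {"text": "spam spam", "likes": 40},
    {"text": "Defense, wake up!", "likes": 7},
]
print([m["text"] for m in select_playback_signals(msgs, {"spam"}, top_n=2)])
# → ['Great goal!', 'Defense, wake up!']
```

Note that the keyword filter runs first, so a banned message cannot win merely by accumulating likes; a manual review step could be placed between the two stages.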
  • Selected external users 16 may be invited to a videoconference-like connection with the device 12.
  • The information signal 52 of a user 16 could comprise a video and an audio signal; the user 16 would then be assigned a specific part 86 of the display 66, and his audio signal would be output via the loudspeaker array 68, namely via those areas facing substantially in the virtual line of sight 88. Together with the transmission of the video 50 and the audio recording 76 from the device 12 to the terminal 14, this creates a kind of videoconference.
  • The latter transmission from the device 12 to the device 18 may be handled differently for the duration of such a videoconference, such as via another channel.
  • The eye contact between the videoconference participants, i.e. the user 16 on the one hand and the one or more viewers 40 within the field of view on the other, can be improved by recording parts of the field of view 44 redundantly, but with possibly higher density, so that on the screen 22 of the user 16 the videoconference counterpart is displayed virtually as if looking at the user 16 directly.
  • A multi-view video analysis or a teleprompter can also be used.
  • Voice recognition, automatic translation and gesture control can be integrated. That is, text messages as playback signals derived from a user 16 via information signals 52 may be subjected to voice recognition and automatic translation before being displayed or played back. Conversely, gesture control could be enabled both for viewers 40 and for a user 16 on the terminal 14 side.
  • The system 10 may be equipped with an interface, such as a wireless or Bluetooth interface, to which external devices such as a computer can be connected.
  • A functionality that has not yet been described in the preceding embodiments, but which may optionally be integrated, relates to a snapshot functionality of the device 12 for the user 16.
  • The device 18 sends a command to the device 12, whereupon the device 12 extracts a time excerpt from the video material 42 and sends it to an address associated with the user 16 or the terminal 14.
  • The address may be, for example, an email address of the user 16, a particular social media account of the user 16 in a social network, or the like.
  • The command sent by the device 18 to the device 12 is, for example, provided with a time stamp corresponding to the time stamp of the viewing section 48 currently displayed at the terminal 14 of the user 16.
  • The device 12, for example, relies on a buffer in which the video material 42 is temporarily stored for some time.
  • The time excerpt 116 extracted in this way can either be a single image of the field of view 44 or an image sequence of the field of view 44 with a predetermined duration. Since the time stamp provided with the command is derived from the time stamps of the frames of the video material 42 itself, this functionality is neutral with respect to end-to-end latency.
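A latency-neutral snapshot service of this kind could, for example, be built on a time-stamped ring buffer: frames of the video material are kept together with their time stamps, so that a command carrying the time stamp of the currently displayed viewing section can be served regardless of transmission delay. The class and its API below are an illustrative sketch, not the disclosed implementation:

```python
from collections import deque

class FrameBuffer:
    """Ring buffer sketch for a snapshot functionality: frames are stored
    with their time stamps, and a snapshot command is answered by looking
    up the requested time stamp rather than the current playback time."""
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # (timestamp, frame) pairs

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def extract(self, timestamp, duration=0):
        """Single image (duration == 0) or an image sequence of the given
        duration, starting at the requested time stamp."""
        return [f for t, f in self.frames
                if timestamp <= t <= timestamp + duration]

buf = FrameBuffer(capacity=100)
for t in range(10):
    buf.push(t, f"frame{t}")
print(buf.extract(4))               # single image → ['frame4']
print(buf.extract(4, duration=2))   # short sequence → ['frame4', 'frame5', 'frame6']
```

Because the lookup is keyed by the frame time stamps themselves, the excerpt matches what the user was seeing even when the command arrives after a network round trip.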
  • The on-site device 12 may be provided with a haptic interface, e.g. a touch screen.
  • Users 16 may be allowed gaze-based navigation, which navigates not only the viewing section 48 within the field of view 44, i.e. the viewing direction of a virtual camera, but possibly also the listening direction of the microphone array 74, the playback direction of the loudspeaker array 68 and/or the display direction of the display 66, as described above.
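Driving several directional subsystems from a single gaze direction could, as a minimal sketch with hypothetical setter callbacks standing in for the virtual camera, the microphone array and the loudspeaker array, look like this:

```python
def apply_gaze_navigation(view_dir_deg, devices):
    """Propagate one viewing direction to every registered directional
    subsystem; `devices` maps a name to a setter callback. The direction
    is normalized to [0, 360) before being applied."""
    normalized = view_dir_deg % 360.0
    for name, setter in devices.items():
        setter(normalized)
    return normalized

state = {}
apply_gaze_navigation(450.0, {
    "viewport": lambda d: state.__setitem__("viewport", d),
    "mic_beam": lambda d: state.__setitem__("mic_beam", d),
    "speaker_zone": lambda d: state.__setitem__("speaker_zone", d),
})
print(state)
# → {'viewport': 90.0, 'mic_beam': 90.0, 'speaker_zone': 90.0}
```

Coupling all subsystems to one direction keeps the picture, the listening direction and the playback direction consistent with where the user is virtually looking.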
  • The user 16 has a feeling of being there on-site during a live event, e.g. a stadium feeling in the case of a football game.
  • This gives viewers such as the user 16 a sense of belonging to the fan community on site, which may result in a closer bond with the clubs, e.g. the football clubs.
  • The clubs could, for example, be operators of the system 10 and thereby have the opportunity to increase their fan communities significantly.
  • These new fans 16 are personally addressable, giving the club the opportunity to engage in personalized marketing.
  • Although some aspects have been described in the context of a device, it will be understood that these aspects also constitute a description of the corresponding method, so that a block or a component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.
  • Some or all of the method steps may be performed by (or using) a hardware apparatus, such as a microprocessor, a programmable computer or an electronic circuit. In some embodiments, some or more of the most important method steps may be performed by such an apparatus.
  • Embodiments of the invention may be implemented in hardware or in software.
  • The implementation may be performed using a digital storage medium, such as a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disk, or another magnetic or optical memory, on which electronically readable control signals are stored that can cooperate, or do cooperate, with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer-readable.
  • Some embodiments of the invention include a data carrier having electronically readable control signals capable of interacting with a programmable computer system such that one of the methods described herein is performed.
  • Embodiments of the present invention may be implemented as a computer program product having a program code, the program code being operative to perform one of the methods when the computer program product runs on a computer.
  • The program code may also be stored, for example, on a machine-readable carrier.
  • Other embodiments include the computer program for performing one of the methods described herein, the computer program being stored on a machine-readable carrier.
  • An embodiment of the method according to the invention is thus a computer program which has a program code for performing one of the methods described herein when the computer program runs on a computer.
  • A further embodiment of the method according to the invention is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for performing one of the methods described herein is recorded.
  • The data carrier, the digital storage medium or the computer-readable medium is typically tangible and/or non-transitory.
  • A further embodiment of the method according to the invention is thus a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
  • The data stream or the sequence of signals may be configured, for example, to be transferred via a data communication connection, for example via the Internet.
  • Another embodiment includes a processing device, such as a computer or a programmable logic device, that is configured or adapted to perform one of the methods described herein.
  • Another embodiment includes a computer on which the computer program is installed to perform one of the methods described herein.
  • Another embodiment according to the invention comprises a device or system adapted to transmit a computer program for performing at least one of the methods described herein to a receiver.
  • The transmission can take place, for example, electronically or optically.
  • The receiver may be, for example, a computer, a mobile device, a storage device, or a similar device.
  • The device or system may include a file server for transmitting the computer program to the receiver.
  • In some embodiments, a programmable logic device (e.g., a field programmable gate array, FPGA) may be used to perform some or all of the functionalities of the methods described herein.
  • In some embodiments, a field programmable gate array may cooperate with a microprocessor to perform one of the methods described herein.
  • Generally, the methods are performed by some hardware apparatus. This may be universal hardware, such as a computer processor (CPU), or hardware specific to the method, such as an ASIC.
  • The devices described herein may be implemented, for example, using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
  • The devices described herein, or any components of the devices described herein, may be implemented at least in part in hardware and/or in software (computer program).
  • The methods described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention provides a more authentic virtual participation of a user in a live event during the live event, comprising: recording the live event by means of a camera device from a position located within a spectator area of the live event, while simultaneously allowing the user, by means of the user's terminal, to provide playback signals to one or a plurality of spectators in the spectator area; and/or a device equipped with a camera device for recording the surroundings of the camera device and an output interface for outputting at least part of the video material captured by the camera device, the device further being provided with an information output device which outputs a playback signal, dependent on an information signal received via an input interface from the user's terminal, to one or a plurality of persons in the surroundings of the device.
PCT/EP2018/025091 2017-04-06 2018-04-04 Concept de participation virtuelle à des événements en direct WO2018184735A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP17165208.4 2017-04-06
EP17165208 2017-04-06

Publications (1)

Publication Number Publication Date
WO2018184735A1 true WO2018184735A1 (fr) 2018-10-11

Family

ID=58606022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/025091 WO2018184735A1 (fr) 2017-04-06 2018-04-04 Concept de participation virtuelle à des événements en direct

Country Status (1)

Country Link
WO (1) WO2018184735A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040254982A1 (en) * 2003-06-12 2004-12-16 Hoffman Robert G. Receiving system for video conferencing system
EP1564992A1 (fr) * 2004-02-13 2005-08-17 Seiko Epson Corporation Procédé et système pour enregistrement de données pour la vidéoconférence
EP2290969A1 (fr) * 2009-05-12 2011-03-02 Huawei Device Co., Ltd. Système de téléprésence, procédé et dispositif de capture vidéo
US20140278377A1 (en) * 2013-03-15 2014-09-18 International Business Machines Corporation Automatic note taking within a virtual meeting
EP3016381A1 (fr) * 2014-10-31 2016-05-04 Thomson Licensing Système de téléconférence vidéo


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473168A (zh) * 2021-07-02 2021-10-01 北京达佳互联信息技术有限公司 直播方法及系统、便携设备执行的直播方法及便携设备
CN113473168B (zh) * 2021-07-02 2023-08-08 北京达佳互联信息技术有限公司 直播方法及系统、便携设备执行的直播方法及便携设备
CN115379134A (zh) * 2022-10-26 2022-11-22 四川中绳矩阵技术发展有限公司 一种图像采集装置、方法、设备及介质
CN115379134B (zh) * 2022-10-26 2023-02-03 四川中绳矩阵技术发展有限公司 一种图像采集装置、方法、设备及介质

Similar Documents

Publication Publication Date Title
US11184362B1 (en) Securing private audio in a virtual conference, and applications thereof
US10218762B2 (en) System and method for providing a real-time three-dimensional digital impact virtual audience
DE69803168T2 (de) Videokonferenzsystem
EP2922237B1 (fr) Procédé, dispositif et système de commande d'une conférence
US8259155B2 (en) Providing perspective-dependent views to video conference participants
DE102015100911B4 (de) Verbesserte Kommunikation zwischen entfernten Teilnehmern/Teilnehmerinnen mittels erweiterter und virtueller Realität
EP2622853B1 (fr) Système de visioconférence bidirectionnel
US11743430B2 (en) Providing awareness of who can hear audio in a virtual conference, and applications thereof
DE69521369T2 (de) Anzeigeverfahren für einen gemeinsamen virtuellen Raum und Vorrichtung unter Verwendung dieses Verfahrens
AU2023229565B2 (en) A web-based videoconference virtual environment with navigable avatars, and applications thereof
EP2352290B1 (fr) Méthode et dispositif pour aligner des signaux audio et vidéo pendant une vidéconférence
DE102020124815A1 (de) System und vorrichtung für benutzergesteuerte virtuelle kamera für volumetrisches video
DE112019003189T5 (de) Informationsverarbeitungsvorrichtung, informationsverarbeitungsverfahren und informationsverarbeitungssystem
WO2021207156A1 (fr) Intégration d'un audio à distance dans un lieu de performance
DE102014004069A1 (de) Verfahren, Softwareprodukt und Vorrichtung zur Steuerung einer Konferenz
WO2018184735A1 (fr) Concept de participation virtuelle à des événements en direct
WO2024230511A1 (fr) Procédé et appareil d'interaction virtuelle, dispositif et support
DE202020002406U1 (de) apparativer Zuschauerstellvertreter - "Stadionavatar"
US20230388454A1 (en) Video conference apparatus, video conference method and computer program using a spatial virtual reality environment
DE102012007441B4 (de) Virtuelle Transportmaschine: Task-Ins-Cubicle
CN117793279A (zh) 数据处理方法、装置、电子设备及存储介质
KR20220090751A (ko) 증강현실 방송 서비스를 제공하는 양방향 방송 시스템 및 방법
KR102676182B1 (ko) 현장 서비스 제공 방법 및 시스템
JP2010028299A (ja) 会議撮影画像処理方法及び会議装置等
DE102008056158B4 (de) Verfahren zur Steuerung und Installation eines audiovisuellen Kommunikationsgerätes mit einfacher Handhabung, eine besondere Kamerasteuerungseinrichtung und Verwendung für die Unterstützung einer Person

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18718689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18718689

Country of ref document: EP

Kind code of ref document: A1