CN113923512B - Method, device and computing equipment for processing video of events without live audiences
- Publication number
- CN113923512B CN113923512B CN202111194186.5A CN202111194186A CN113923512B CN 113923512 B CN113923512 B CN 113923512B CN 202111194186 A CN202111194186 A CN 202111194186A CN 113923512 B CN113923512 B CN 113923512B
- Authority
- CN
- China
- Prior art keywords
- simulated
- audience
- event
- video
- seat
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—Two-dimensional [2D] image generation
- G06T11/20—Drawing from basic elements
- G06T11/26—Drawing of charts or graphs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling for computer graphics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4398—Processing of audio elementary streams involving reformatting operations of audio signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a method, a device and a computing device for processing video of an event without a live audience. The method comprises: building an event venue model corresponding to the event video, and generating a simulated audience set corresponding to the event video; fusing the event venue model with the simulated audience set to generate a simulated venue live audience model corresponding to the event video; identifying highlight segments in the event video, and generating simulated audience audio corresponding to the highlight segments according to the simulated venue live audience model; and embedding the simulated audience audio into the highlight segments in the event video. With this scheme, network spectators hear a simulated live-audience sound effect when watching the highlights of a video of an event without a live audience. The simulated sound effect closely matches the actual event situation and realistically reproduces the atmosphere of a live audience, which improves the viewing experience of network spectators and the user retention rate of the platform playing such event videos.
Description
Technical Field
The invention relates to the technical field of video processing, in particular to a method and a device for processing live-free audience event video and computing equipment.
Background
Sporting events and similar competitions are favored by many viewers for their excitement, fast pace, and the like. Such events traditionally involve two categories of spectators. One category is live spectators, who watch the event in person at the event venue; the other is network spectators, who watch event videos such as live streams or recorded broadcasts over the network.
Due to epidemics and other factors, many events are now held without a live audience. Lacking the atmosphere created by a live audience, the viewing value of videos of such events is greatly reduced, the viewing experience of network spectators is degraded, and the user retention rate of the event video playing platform drops accordingly.
Disclosure of Invention
The present invention has been made in view of the above problems, and its object is to provide a method, an apparatus and a computing device for processing video of an event without a live audience that overcome, or at least partially solve, the above problems.
According to one aspect of the present invention, there is provided a method for processing video of an event without a live audience, comprising:
building an event venue model corresponding to the event video, and generating a simulated audience set corresponding to the event video;
fusing the event venue model with the simulated audience set to generate a simulated venue live audience model corresponding to the event video;
Identifying a highlight in the event video, and generating simulated audience audio corresponding to the highlight according to the simulated venue live audience model;
embedding the simulated audience audio into the highlight in the event video.
In an alternative embodiment, the event venue model includes at least one simulated seat, each of the simulated seats having corresponding seat information; the simulated audience set comprises at least one simulated audience, and each simulated audience has corresponding audience information;
the fusing the event venue model with the simulated audience set further comprises:
Binding the simulated seats in the event venue model with simulated spectators in the set of simulated spectators.
In an alternative embodiment, the event venue model further includes at least one simulated video capture device, each simulated video capture device having corresponding device position information; the seat information includes seat position information;
The generating simulated audience audio corresponding to the highlight from the simulated venue live audience model further comprises:
for any simulated audience, acquiring a standard audio corpus of the simulated audience according to the audience information of the simulated audience;
calculating the distance between the simulated seat bound to the simulated audience and the simulated video capture device according to the seat position information of the bound simulated seat and the device position information of the simulated video capture device, and correcting the standard audio corpus of the simulated audience according to the distance to obtain a corrected audio corpus of the simulated audience;
And generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience.
In an alternative embodiment, the identifying the highlight in the event video further comprises: identifying a highlight in the event video and a highlight degree of the highlight;
the generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience further comprises: generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience and the highlight degree of the highlight.
In an alternative embodiment, the seat information includes a seat category; the audience information includes country information;
The binding the simulated seats in the event venue model with simulated spectators in the set of simulated spectators further comprises:
determining target simulated seats in the event venue model whose seat category is core seat;
identifying, from the simulated audience set, target simulated spectators whose country information matches the country information of the competitors corresponding to the event video;
acquiring a preset number of the target simulated spectators, and binding the preset number of target simulated spectators to the target simulated seats;
binding the simulated spectators that are not currently bound to simulated seats to the non-target simulated seats.
In an alternative embodiment, the generating the simulated audience set corresponding to the event video further includes:
Acquiring event information corresponding to the event video;
Acquiring historical live-audience events whose similarity to the event information is higher than a preset similarity threshold;
Acquiring live audience information of the historical live audience event;
and generating a simulated audience set corresponding to the event video according to the live audience information.
In an alternative embodiment, the event video is an event live stream;
the identifying highlight clips in the event video further comprises: identifying highlight segments in the segments to be played of the event live stream.
According to another aspect of the present invention, there is provided an apparatus for processing video of an event without a live audience, comprising:
The venue model construction module is used for constructing an event venue model corresponding to the event video;
the simulated audience generation module is used for generating a simulated audience set corresponding to the event video;
The fusion module is used for fusing the event venue model with the simulated audience set to generate a simulated venue live audience model corresponding to the event video;
The identifying module is used for identifying the highlight in the event video;
the audio generation module is used for generating simulated audience audio corresponding to the highlight according to the simulated venue live audience model;
An embedding module for embedding the simulated audience audio into the highlight in the event video.
In an alternative embodiment, the event venue model includes at least one simulated seat, each of the simulated seats having corresponding seat information; the simulated audience set comprises at least one simulated audience, and each simulated audience has corresponding audience information;
The fusion module is further to: binding the simulated seats in the event venue model with simulated spectators in the set of simulated spectators.
In an alternative embodiment, the event venue model further includes at least one simulated video capture device, each simulated video capture device having corresponding device position information; the seat information includes seat position information;
the audio generation module is further configured to: for any simulated audience, acquiring a standard audio corpus of the simulated audience according to the audience information of the simulated audience;
calculating the distance between the simulated seat bound to the simulated audience and the simulated video capture device according to the seat position information of the bound simulated seat and the device position information of the simulated video capture device, and correcting the standard audio corpus of the simulated audience according to the distance to obtain a corrected audio corpus of the simulated audience;
And generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience.
In an alternative embodiment, the identification module is further configured to: identifying a highlight in the event video and a highlight degree of the highlight;
the audio generation module is further configured to: generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience and the highlight degree of the highlight.
In an alternative embodiment, the seat information includes a seat category; the audience information includes country information;
the fusion module is further configured to: determining target simulated seats in the event venue model whose seat category is core seat;
identifying, from the simulated audience set, target simulated spectators whose country information matches the country information of the competitors corresponding to the event video;
acquiring a preset number of the target simulated spectators, and binding the preset number of target simulated spectators to the target simulated seats;
binding the simulated spectators that are not currently bound to simulated seats to the non-target simulated seats.
In an alternative embodiment, the simulated audience generation module is further to:
Acquiring event information corresponding to the event video;
Acquiring historical live-audience events whose similarity to the event information is higher than a preset similarity threshold;
Acquiring live audience information of the historical live audience event;
and generating a simulated audience set corresponding to the event video according to the live audience information.
In an alternative embodiment, the event video is an event live stream;
The identification module is further configured to: identifying highlight segments in the segments to be played of the event live stream.
According to yet another aspect of the present invention, there is provided a computing device comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the above method for processing video of an event without a live audience.
According to still another aspect of the present invention, there is provided a computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the above method for processing video of an event without a live audience.
In the invention, an event venue model corresponding to the event video is constructed, and a simulated audience set corresponding to the event video is generated; the event venue model is fused with the simulated audience set to generate a simulated venue live audience model corresponding to the event video; highlight segments in the event video are identified, and simulated audience audio corresponding to the highlight segments is generated according to the simulated venue live audience model; and the simulated audience audio is embedded into the highlight segments in the event video. With this scheme, network spectators hear a simulated live-audience sound effect when watching the highlights of a video of an event without a live audience. The simulated sound effect closely matches the actual event situation and realistically reproduces the atmosphere of a live audience, which improves the viewing experience of network spectators and the user retention rate of the platform playing such event videos.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a flow chart illustrating a method for processing video of an event without a live audience according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a two-dimensional structure of a stadium model according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of a method for generating a simulated audience set according to an embodiment of the present invention;
FIG. 4 is a flow chart of a method for fusing a stadium model with a simulated audience set according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a two-dimensional structure of a simulated venue live audience model according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of a highlight identification method according to an embodiment of the present invention;
Fig. 7 is a schematic flow chart of a method for obtaining audio of a simulated audience according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a simulated audience audio control page provided by an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an apparatus for processing video of an event without a live audience according to an embodiment of the present invention;
FIG. 10 illustrates a schematic diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a flow chart illustrating a method for processing video of an event without a live audience according to an embodiment of the present invention.
In the embodiment of the invention, the event video refers to event video corresponding to the event without live audience, and the event video can be an event live stream of the event without live audience, recorded broadcast video of the event without live audience, and the like.
For a video of an event without a live audience, the embodiment of the invention obtains a simulated venue live audience model corresponding to the video through steps S110-S130, so as to simulate the distribution of an on-site audience in the event venue; simulates, through step S150, the sound effect that a live audience would make when watching an event highlight; and finally embeds the simulated sound effect into the corresponding highlight of the event video through step S160. In this way, network spectators hear a simulated live-audience sound effect when watching the highlights of the event video, and the simulated sound effect is vivid and closely matches the actual event situation.
As shown in fig. 1, the method comprises the steps of:
step S110, building an event venue model corresponding to the event video.
Specifically, event venue information corresponding to event videos is obtained, modeling is conducted based on the event venue information, and an event venue model corresponding to the event videos without live audience is generated. The embodiment of the invention does not limit the specific modeling mode.
In an alternative embodiment, in order to make the constructed event venue model reproduce the actual event venue more faithfully and to facilitate the accurate subsequent determination of the distribution of the simulated audience in the event venue model, the event venue model constructed in the embodiment of the present invention includes at least one simulated seat, each of which has corresponding seat information. The seat information of each simulated seat in the event venue model is obtained from the seat information of the actual seats in the actual event venue, so the constructed event venue model matches the actual event venue closely. The seat information of a simulated seat includes its seat position information, through which the position of the simulated seat in the event venue model can be uniquely determined.
Further optionally, the embodiment of the invention assigns seat categories to the simulated seats according to the event type, so that the distribution of the simulated audience is more realistic. The seat information of a simulated seat therefore further includes its seat category, where the seat category includes: core seat and non-core seat. When a match has two participants, the event venue model typically includes simulated seats whose seat category is core seat. For example, in a table tennis event, the area behind the two participants, i.e., the simulated seats whose seat category is core seat, is typically occupied by spectators whose nationality matches that of the participants. When a match has many participants, the event venue model typically includes no core seats; for example, in a stadium-type skating event, all simulated seats are of the same type and are non-core seats.
In yet another alternative implementation, in order to further enhance the fidelity of the subsequently embedded simulated live-audience sound effect, the event venue model constructed in the embodiment of the present invention further includes at least one simulated video capture device, each of which has corresponding device position information. Each simulated video capture device corresponds to a video capture device in the actual event venue; the video capture devices in the actual event venue are used to capture images and audio of the live event.
In addition, the venue dimensions (venue length, width, height, etc.), venue type (e.g., indoor closed or outdoor open), and/or the dimensions of the playing field in the venue (e.g., field length, width, etc.) of the constructed event venue model are determined based on the corresponding information of the actual event venue.
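By way of illustration only, the event venue model described above could be represented with data structures like the following Python sketch; the class names, fields, and coordinate convention are assumptions and not part of the claimed invention.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SimulatedSeat:
    seat_id: str
    position: tuple                       # seat position information, e.g. (x, y, z) in venue coordinates
    category: str = "non-core"            # seat category: "core" or "non-core"
    bound_audience_id: Optional[str] = None  # filled in during fusion (step S130)

@dataclass
class SimulatedCaptureDevice:
    device_id: str
    position: tuple                       # device position information in venue coordinates

@dataclass
class EventVenueModel:
    venue_length: float
    venue_width: float
    venue_height: float
    venue_type: str                       # e.g. "indoor closed" or "outdoor open"
    seats: List[SimulatedSeat] = field(default_factory=list)
    capture_devices: List[SimulatedCaptureDevice] = field(default_factory=list)
```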
In yet another alternative embodiment, the event venue model corresponding to the constructed event video and the model adjustment portal may be displayed, so as to facilitate viewing and adjustment of the constructed event venue model, and the like. For example, fig. 2 shows a schematic two-dimensional structure of a stadium model according to an embodiment of the present invention. It should be understood herein that, in the embodiment of the present invention, a two-dimensional structure diagram is adopted to simplify the event venue model, and in an actual implementation process, the constructed event venue model may be a two-dimensional model, a three-dimensional model or the like.
As shown in fig. 2, each square block corresponds to a simulated seat. The shaded square blocks represent simulated seats whose seat category is core seat, and the white square blocks represent simulated seats whose seat category is non-core seat; the circular blocks represent simulated video capture devices. When a block is clicked, the seat information of the corresponding simulated seat, or the device information of the corresponding simulated video capture device, can be further displayed. When a block is clicked or long-pressed, an adjustment entry for the corresponding simulated seat or simulated video capture device can be displayed, so that an inaccurate simulated seat or simulated video capture device can be conveniently adjusted.
Further optionally, this step may be performed before the event video is played, so as to improve the overall execution efficiency of the method.
Step S120, generating a simulated audience set corresponding to the event video.
The simulated audience set is a simulation of live audience in an event corresponding to the live audience-free event video.
In an alternative embodiment, to enhance the realism of the simulated audience set, the steps shown in FIG. 3 may be used to generate a simulated audience set corresponding to the live free event video. As shown in fig. 3, the method includes the following steps S121 to S124:
step S121, obtaining event information corresponding to the event video.
Wherein the event information includes event type and/or category of the game, etc. For example, the event type may be a world tournament, the game category may be table tennis, or the like.
Step S122, obtaining a historical live audience event with the similarity to the event information higher than a preset similarity threshold.
Historical live-audience events refer to events that have already ended and that had a live audience. Specifically, event information of historical live-audience events is obtained from a database storing historical event data, and the similarity between the event information of each acquired historical live-audience event and the event information corresponding to the event video acquired in step S121 is calculated. The embodiment of the invention does not limit the specific similarity calculation method; for example, the text similarity or the semantic similarity between pieces of event information may be calculated.
Historical live-audience events whose similarity to the event information corresponding to the event video is higher than the preset similarity threshold are then screened out according to the similarity calculation result. For example, if the event information corresponding to the current event video without a live audience is a world championship table tennis match, the screened historical live-audience events may be historical world championship table tennis events that were watched by live spectators.
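The patent does not prescribe a particular similarity measure, so the following sketch illustrates the screening step with a simple token-overlap (Jaccard) similarity; the record format and the threshold value are assumptions made for illustration.

```python
def jaccard_similarity(info_a: str, info_b: str) -> float:
    """Token-overlap similarity between two pieces of event information."""
    tokens_a, tokens_b = set(info_a.lower().split()), set(info_b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def screen_historical_events(current_event_info: str,
                             historical_events: list,
                             threshold: float = 0.6) -> list:
    """Keep historical live-audience events whose similarity to the
    current event information exceeds the preset threshold."""
    return [e for e in historical_events
            if jaccard_similarity(current_event_info, e["event_info"]) > threshold]
```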
Step S123, the live audience information of the historical live audience events is acquired.
The live audience information includes at least one of the following: the historical seat occupancy rate, the historical proportions of spectators from different countries, the historical proportions of spectators of different genders, the historical proportions of spectators in different age groups, and the like.
Step S124, generating a simulated audience set corresponding to the event video according to the live audience information.
Because the screened historical live-audience events are highly similar to the event without a live audience corresponding to the event video, the live audience information of these historical live-audience events can be used to accurately simulate the audience set of the current event without a live audience.
The generated simulated audience set comprises at least one simulated audience, and each simulated audience has corresponding audience information. The audience information includes at least one of: country information, gender information, age group information, and the like. The ratio of the number of simulated spectators in the simulated audience set to the number of simulated seats matches the historical seat occupancy rate obtained in step S123; the proportions of simulated spectators from different countries match the historical proportions of spectators from different countries in step S123; the proportions of simulated spectators of different genders match the historical proportions of spectators of different genders in step S123; and the proportions of simulated spectators in different age groups match the historical proportions of spectators in different age groups in step S123.
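A possible sketch of this generation step is shown below: the simulated audience set is sampled so that its size and demographic proportions follow the historical live audience information. The dictionary-based proportions, the random sampling, and the example figures are illustrative assumptions.

```python
import random

def generate_simulated_audience_set(num_seats: int,
                                    occupancy_rate: float,
                                    country_ratios: dict,
                                    gender_ratios: dict,
                                    age_group_ratios: dict,
                                    seed: int = 0) -> list:
    """Generate simulated spectators whose count matches the historical seat
    occupancy rate and whose attributes follow the historical proportions."""
    rng = random.Random(seed)
    num_spectators = round(num_seats * occupancy_rate)
    countries, country_w = list(country_ratios), list(country_ratios.values())
    genders, gender_w = list(gender_ratios), list(gender_ratios.values())
    ages, age_w = list(age_group_ratios), list(age_group_ratios.values())
    audience = []
    for i in range(num_spectators):
        audience.append({
            "audience_id": f"aud_{i}",
            "country": rng.choices(countries, weights=country_w)[0],
            "gender": rng.choices(genders, weights=gender_w)[0],
            "age_group": rng.choices(ages, weights=age_w)[0],
        })
    return audience

# Example (illustrative numbers): 56 seats with a 34% historical occupancy rate.
audience_set = generate_simulated_audience_set(
    56, 0.34,
    {"China": 0.4, "Japan": 0.3, "Other": 0.3},
    {"male": 0.55, "female": 0.45},
    {"20-30": 0.5, "30-40": 0.3, "40+": 0.2},
)
```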
Step S130, fusing the event venue model with the simulated audience set to generate a simulated venue live audience model corresponding to the event video.
After the event venue model is fused with the simulated audience set, the distribution of each simulated audience in the event venue can be simulated, i.e., the simulated venue live audience model corresponding to the event video is generated.
In an alternative embodiment, fusing the event venue model with the simulated audience set specifically means binding the simulated seats in the event venue model to the simulated spectators in the simulated audience set. Each simulated audience is bound to one simulated seat; a simulated seat may be bound to one simulated audience or remain unbound.
When the event venue model includes simulated seats whose seat category is core seat, the event venue model and the simulated audience set can be fused through steps S131-S134 shown in fig. 4:
step S131, determining a target simulated seat with the seat category as a core seat in the event venue model.
As shown in fig. 2, the target simulated seat is a simulated seat corresponding to a hatched square block in the figure.
Step S132, identifying target simulated spectators with country information matched with country information of the competitor corresponding to the event video from the simulated spectator set.
For example, the competitors are a Chinese team and a Japanese team, so the country information of the competitors is China and Japan, respectively. The simulated spectators whose country is China (hereinafter, Chinese simulated spectators) and the simulated spectators whose country is Japan (hereinafter, Japanese simulated spectators) are acquired from the simulated audience set. The Chinese simulated spectators and the Japanese simulated spectators are the target simulated spectators.
Step S133, obtaining a preset number of target simulation audiences, and binding the preset number of target simulation audiences with the target simulation seats.
After the target simulated spectators are screened out, a preset number of them are selected and each is bound to a target simulated seat. The proportion of the preset number to the target simulated seats is higher than the proportion of the remaining target simulated spectators to the non-target simulated seats; that is, the simulated spectators whose nationality matches the participants are concentrated in the core seating area.
For example, the participants are a Chinese team and a Japanese team, and the simulated audience set includes 10 Chinese simulated spectators and 9 Japanese simulated spectators. With a preset number of 14, 7 Chinese simulated spectators and 7 Japanese simulated spectators are selected; the 7 Chinese simulated spectators are bound one by one to any 7 target simulated seats on the left side of fig. 2, and the 7 Japanese simulated spectators are bound one by one to any 7 target simulated seats on the right side of fig. 2. The ratio of the preset number to the target simulated seats is 7/8, i.e., the proportion of target simulated spectators in the core seat area is 7/8. The remaining target simulated spectators, who are bound to non-target simulated seats, are 3 Chinese simulated spectators and 2 Japanese simulated spectators; with 48 non-target simulated seats in fig. 2, the proportion of Chinese simulated spectators in the non-core seat area is 3/48 and the proportion of Japanese simulated spectators in the non-core seat area is 2/48.
Step S134, binding the simulation spectators of the current unbound simulation seats with the non-target simulation seats.
After the preset number of target simulated spectators are bound to the target simulated seats, the remaining simulated spectators (i.e., those not yet bound to a simulated seat) are bound to the non-target simulated seats, for example at random.
Fig. 5 shows a schematic two-dimensional structure of a simulated venue live audience model according to an embodiment of the present invention. Fig. 5 builds on fig. 2 by binding simulated spectators to the simulated seats. In fig. 5, "C" represents a Chinese simulated spectator, "J" a Japanese simulated spectator, "G" a German simulated spectator, "A" an American simulated spectator, "K" a Korean simulated spectator, and "R" a Russian simulated spectator.
For event venue models that contain no simulated seats whose seat category is core seat, each simulated audience can simply be bound to a simulated seat at random.
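Steps S131-S134 could be implemented along the lines of the following sketch; the dictionary-based seat and audience records, the shuffling, and the fixed preset number are assumptions made for illustration.

```python
import random

def bind_audience_to_seats(seats: list, audience: list,
                           participant_countries: set,
                           preset_number: int, seed: int = 0) -> None:
    """Bind a preset number of target simulated spectators (nationality matching
    a participant) to core seats, then bind the rest to non-core seats."""
    rng = random.Random(seed)
    core_seats = [s for s in seats if s["category"] == "core" and s.get("audience") is None]
    other_seats = [s for s in seats if s["category"] != "core" and s.get("audience") is None]
    target = [a for a in audience if a["country"] in participant_countries]
    rng.shuffle(target)

    # Step S133: bind up to the preset number of target spectators to core seats.
    bound_ids = set()
    for seat, spectator in zip(core_seats, target[:preset_number]):
        seat["audience"] = spectator["audience_id"]
        bound_ids.add(spectator["audience_id"])

    # Step S134: bind the remaining (currently unbound) spectators to non-core seats.
    remaining = [a for a in audience if a["audience_id"] not in bound_ids]
    rng.shuffle(remaining)
    for seat, spectator in zip(other_seats, remaining):
        seat["audience"] = spectator["audience_id"]
```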
Step S140, identifying a highlight in the event video.
And if the event video is an event live stream, identifying a highlight in the segments to be played in the event live stream.
In an alternative highlight identification approach, a highlight identification method as shown in fig. 6 may be employed. As shown in fig. 6, the method includes the following steps S141 to S143:
Step S141, a highlight identification model is constructed in advance.
The embodiment of the invention does not limit the specific structure of the highlight identification model. For example, the highlight identification model may be built with a 3D convolution algorithm: a plurality of consecutive frames are stacked into a cube, and a 3D convolution kernel is run over the cube. With this structure, each feature map in a convolution layer is connected to multiple adjacent frames in the previous layer, so that motion information is captured and highlights in dynamic video can be accurately identified.
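As a hedged illustration of such a model, the PyTorch sketch below applies 3D convolutions over a clip of stacked consecutive frames; the layer sizes, the 16-frame clip length, and the binary highlight/non-highlight output are assumptions, since the patent does not fix a network architecture.

```python
import torch
import torch.nn as nn

class HighlightRecognitionModel(nn.Module):
    """Stacks consecutive frames into a clip tensor (N, C, T, H, W) and applies
    3D convolutions so that motion information across adjacent frames is captured."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # 3D kernel over time and space
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)      # highlight vs. non-highlight

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip)
        return self.classifier(x.flatten(1))

# Example: a clip of 16 consecutive RGB frames at 112x112 resolution.
model = HighlightRecognitionModel()
scores = model(torch.randn(1, 3, 16, 112, 112))
```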
Step S142, obtaining a historical event video, marking the historical event video to generate a training sample, and carrying out model training on the constructed highlight identification model by using the training sample to obtain a trained highlight identification model.
The acquired historical event videos are segmented into video segments, and each video segment is labeled. During labeling, the overall highlight degree and/or action difficulty of a video segment can be labeled manually, or labeling can be performed automatically based on the decibel changes of the audio in the video segment.
After labeling, for each video segment, a preset number of image frames are extracted from the video segment, and the extracted image frames are input into a constructed highlight identification model for model training. And when the preset ending condition is met, ending training and obtaining a trained highlight identification model.
Step S143, obtaining a segment to be played of the event live stream, inputting the segment into the trained highlight identification model, and obtaining the time period information of the highlight output by the highlight identification model.
The highlight of the event video can be identified in real time by using the trained highlight identification model. The highlight-segment recognition model may output period information of the recognized highlight segment. The time period information may specifically be a start point and an end point of the highlight.
Further alternatively, the highlight identification model may also output the highlight degree of each highlight segment.
In yet another alternative highlight identification approach, a highlight annotation entry is provided. An annotator watches the segment to be played in the interface and annotates the highlights in it through the annotation entry. Further alternatively, the type of highlight can also be annotated through the annotation entry; for example, the type can be a shot, a goal, a record-breaking moment, a sprint, a shutout, a match point, a game point, and the like. In addition, the highlight degree of the current highlight can be determined according to a mapping relation between highlight types and highlight degrees.
And step S150, generating simulated audience audio corresponding to the highlight according to the simulated venue live audience model.
The simulated venue live audience model simulates the distribution of each simulated audience in the event venue. Based on this distribution, the standard audio corpora of the simulated spectators are mixed to generate the simulated audience audio corresponding to the highlight; that is, the simulated audience audio corresponding to the highlight is a simulated sound effect of a live audience watching the highlight.
In an alternative embodiment, the method shown in FIG. 7 may be used to obtain simulated audience audio corresponding to highlight segments. As shown in fig. 7, the method includes the following steps S151 to S153:
step S151, for any simulated audience, obtaining a standard audio corpus of the simulated audience according to the audience information of the simulated audience.
A corpus is pre-constructed, in which different audio corpora, such as applause and cheering sounds, are stored. The audio corpora can be obtained from related platforms or synthesized with sound simulation equipment. Each audio corpus has corresponding corpus information, which includes at least one of the following: a corpus identifier, a storage address, a language, a gender, an age group, a cheer type, and the like. The corpus identifier uniquely identifies the audio corpus; the storage address is used to retrieve the audio corpus; the language is the language of the audio corpus; the gender and age group are those of the speaker of the audio corpus; and the cheer type may be, for example, an encouragement chant, a cheer, and the like.
For any simulated audience, a standard audio corpus is acquired from the constructed corpus: the audio corpora whose corpus information matches the audience information of the simulated audience form the standard audio corpus of that simulated audience. In this process, the audience information of the simulated audience is matched against the corpus information in the corpus. For example, if the audience information of simulated audience A is Chinese, 20-30 years old, male, then the corpora whose language is Chinese, whose age group is 20-30, and whose gender is male are obtained from the corpus as the standard audio corpus of simulated audience A.
Optionally, to improve the efficiency of the search, the standard audio corpus may be looked up according to the priority of the pieces of audience information of the simulated audience, with the priority being: country > gender > age group. For example, audio corpora matching the country information of the simulated audience are searched for first; if none can be found, a system default audio corpus is used. If audio corpora matching the country information are found, the corpora matching the gender of the simulated audience are further screened out from them, then the corpora matching the age group of the simulated audience, and so on.
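One way to realize this priority-based lookup (country > gender > age group) is sketched below; the corpus record format, the use of the language field for the country match, and the fallback corpus are assumptions.

```python
def find_standard_corpus(audience_info: dict, corpus_records: list,
                         default_corpus: dict) -> dict:
    """Look up a corpus with priority country > gender > age group, falling back
    to a system default corpus when no country match exists."""
    # Country match first (here assumed to be expressed via the corpus language field).
    candidates = [c for c in corpus_records if c["language"] == audience_info["country"]]
    if not candidates:
        return default_corpus
    narrowed = [c for c in candidates if c["gender"] == audience_info["gender"]]
    candidates = narrowed or candidates      # keep country matches if gender narrows to nothing
    narrowed = [c for c in candidates if c["age_group"] == audience_info["age_group"]]
    candidates = narrowed or candidates
    return candidates[0]
```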
Step S152, calculating the distance between the simulated seat bound to the simulated audience and the simulated video capture device according to the seat position information of the bound simulated seat and the device position information of the simulated video capture device, and correcting the standard audio corpus of the simulated audience according to the distance to obtain the corrected audio corpus of the simulated audience.
As an alternative embodiment, the standard audio corpora of each simulated audience obtained in step S151 may be directly mixed to generate the simulated audience audio. However, in this way, the mixed simulated audience audio differs greatly from the actual situation. Based on this, the embodiment of the present invention further corrects the standard audio corpus of the simulated audience through step S152, and obtains the simulated audience audio through the corrected audio corpus.
Specifically, because the distances between live spectators and a video capture device differ, the loudness of each spectator as captured by the device also differs. Therefore, for any simulated audience, the embodiment of the invention corrects the standard audio corpus of the simulated audience according to the distance between the simulated seat bound to the simulated audience and the simulated video capture device, obtaining the corrected audio corpus of the simulated audience. The corrected audio corpus can be obtained by the following Equation 1-1:
F_i = S_i / 340 * f_i    (Equation 1-1)
where F_i is the decibel value of the corrected audio corpus, f_i is the decibel value of the standard audio corpus, and S_i is the distance between the simulated seat bound to the simulated audience and the simulated video capture device.
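Applied per corpus, Equation 1-1 could look like the minimal sketch below; the Euclidean-distance helper and the metric units are assumptions, while the scaling itself follows the equation as stated.

```python
import math

def seat_to_device_distance(seat_pos: tuple, device_pos: tuple) -> float:
    """Euclidean distance between a bound simulated seat and the simulated video capture device."""
    return math.dist(seat_pos, device_pos)

def corrected_decibel(standard_decibel: float, distance: float) -> float:
    """Equation 1-1: F_i = S_i / 340 * f_i, where f_i is the decibel value of the
    standard audio corpus and S_i is the seat-to-device distance."""
    return distance / 340.0 * standard_decibel

# Example: a spectator seated 25 m from the capture device, standard corpus at 70 dB.
f_corrected = corrected_decibel(70.0, seat_to_device_distance((10, 20, 5), (0, 0, 2)))
```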
Step S153, generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience.
And mixing the corrected audio corpora of each simulated audience to generate simulated audience audio corresponding to the highlight.
Optionally, to further enhance the fidelity of the simulated audience audio, the embodiment of the invention identifies both the highlight in the event video and its highlight degree, and generates the simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience and the highlight degree of the highlight: the higher the highlight degree, the higher the decibel value of the simulated audience audio.
Optionally, simulated audience audio corresponding to the highlight may also be generated based on the modified audio corpus of each simulated audience and the dynamic factor. For example, in a closed venue, the dynamic factor may be 1 because the wind speed and other environments have less impact on the audio acquisition; in outdoor venues, the dynamic factor may be less than 1 because of the greater impact of wind speed and other environments on the audio acquisition.
Optionally, the simulated audience audio corresponding to the highlight can also be generated based on the corrected audio corpus of each simulated audience and a mixing duration parameter. Each corrected audio corpus may correspond to a mixing duration parameter, which may be determined randomly within a preset range. The mixing duration parameter determines the length to which the corrected audio corpus is truncated, and the truncated corrected audio corpora are then mixed to generate the simulated audience audio.
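The mixing described above could be sketched with NumPy as follows; the decibel-to-gain conversion and the way the highlight degree and dynamic factor scale the mix are illustrative assumptions about how the described parameters might be combined.

```python
import random
import numpy as np

def mix_simulated_audience_audio(corrected_corpora: list,
                                 sample_rate: int = 44100,
                                 highlight_degree: float = 1.0,
                                 dynamic_factor: float = 1.0,
                                 duration_range: tuple = (2.0, 4.0),
                                 seed: int = 0) -> np.ndarray:
    """Truncate each corrected corpus to a randomly chosen mixing duration,
    scale it by highlight degree and dynamic factor, and sum into one track."""
    rng = random.Random(seed)
    max_len = int(duration_range[1] * sample_rate)
    mix = np.zeros(max_len, dtype=np.float32)
    for corpus in corrected_corpora:              # corpus: {"samples": np.ndarray, "decibel": float}
        duration = rng.uniform(*duration_range)   # mixing duration parameter
        n = min(int(duration * sample_rate), len(corpus["samples"]), max_len)
        gain = (corpus["decibel"] / 100.0) * highlight_degree * dynamic_factor
        mix[:n] += corpus["samples"][:n] * gain
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix      # avoid clipping
```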
In addition, optionally, as shown in fig. 8, a simulated audience audio control page can be provided to the user, in which the user can adjust parameters such as the mixing duration, the highlight degree, and the dynamic factor. The simulated audience audio is then generated based on the adjusted parameters.
Step S160, embedding the simulated audience audio into the highlight in the event video.
Specifically, the event video is parsed into event images and event audio, the timestamp corresponding to the highlight is determined, and the simulated audience audio is embedded into the event audio at the position corresponding to the timestamp to obtain the mixed event audio. The mixed event audio is then combined with the event images to generate an event video containing the simulated audience audio.
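Step S160 might be realized as in the sketch below, which overlays the simulated audience audio onto the already demultiplexed event audio track at the highlight's start timestamp; demultiplexing from and remultiplexing with the event images (e.g. via an external tool) are assumed to happen outside this function, and the array-based audio representation is an assumption.

```python
import numpy as np

def embed_simulated_audio(event_audio: np.ndarray,
                          simulated_audio: np.ndarray,
                          highlight_start_s: float,
                          sample_rate: int = 44100) -> np.ndarray:
    """Overlay the simulated audience audio onto the event audio at the position
    corresponding to the highlight's start timestamp."""
    mixed = event_audio.copy()
    start = min(int(highlight_start_s * sample_rate), len(mixed))
    end = min(start + len(simulated_audio), len(mixed))
    mixed[start:end] += simulated_audio[:end - start]
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed  # re-normalize to avoid clipping

# The mixed audio track would then be re-multiplexed with the event images to
# produce the event video containing the simulated audience audio.
```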
In addition, in an alternative embodiment, a competitor in the event video may be identified, and the segment into which the simulated audience audio is to be embedded may be determined based on the competitor's characteristic information. For example, if it is identified that a certain player Z is in a waiting phase and the characteristic information of player Z indicates that the player welcomes applause for encouragement, the segment in which player Z is waiting can be determined as a segment into which simulated audience audio should be embedded, and the simulated audience audio is embedded into that segment. This further improves the viewing experience of the audience.
In summary, the embodiment of the invention constructs an event venue model and a simulated audience set corresponding to a video of an event without a live audience, and fuses them to obtain a simulated venue live audience model that simulates the distribution of an on-site audience in the event venue; simulated audience audio corresponding to a highlight is then obtained based on the simulated venue live audience model and embedded into the highlight of the video. With this scheme, network spectators hear a simulated live-audience sound effect when watching the highlights of a video of an event without a live audience. The simulated sound effect closely matches the actual event situation and realistically reproduces the atmosphere of a live audience, which improves the viewing experience of network spectators and the user retention rate of the platform playing such event videos.
Fig. 9 is a schematic structural diagram of an apparatus for processing video of an event without a live audience according to an embodiment of the present invention.
As shown in fig. 9, the apparatus 900 for processing video of an event without a live audience includes: a venue model building module 910, a simulated audience generation module 920, a fusion module 930, an identification module 940, an audio generation module 950, and an embedding module 960.
A venue model construction module 910, configured to construct a venue model corresponding to the event video;
a simulated audience generation module 920, configured to generate a simulated audience set corresponding to the event video;
A fusion module 930, configured to fuse the event venue model with the set of simulated spectators, so as to generate a simulated venue live spectator model corresponding to the event video;
an identification module 940 for identifying highlight clips in the event video;
An audio generation module 950, configured to generate simulated audience audio corresponding to the highlight according to the simulated venue live audience model;
an embedding module 960 for embedding the simulated audience audio into the highlight in the event video.
In an alternative embodiment, the event venue model includes at least one simulated seat, each of the simulated seats having corresponding seat information; the simulated audience set comprises at least one simulated audience, and each simulated audience has corresponding audience information;
The fusion module is further to: binding the simulated seats in the event venue model with simulated spectators in the set of simulated spectators.
In an alternative embodiment, the event venue model further includes at least one simulated video capture device, each simulated video capture device having corresponding device position information; the seat information includes seat position information;
the audio generation module is further configured to: for any simulated audience, acquiring a standard audio corpus of the simulated audience according to the audience information of the simulated audience;
calculating the distance between the simulated seat bound to the simulated audience and the simulated video capture device according to the seat position information of the bound simulated seat and the device position information of the simulated video capture device, and correcting the standard audio corpus of the simulated audience according to the distance to obtain a corrected audio corpus of the simulated audience;
And generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience.
In an alternative embodiment, the identification module is further configured to: identifying a highlight in the event video and a highlight degree of the highlight;
the audio generation module is further configured to: generating simulated audience audio corresponding to the highlight based on the corrected audio corpus of each simulated audience and the highlight degree of the highlight.
In an alternative embodiment, the seat information includes a seat category; the audience information includes country information;
the fusion module is further configured to: determining target simulated seats in the event venue model whose seat category is core seat;
identifying, from the simulated audience set, target simulated spectators whose country information matches the country information of the competitors corresponding to the event video;
acquiring a preset number of the target simulated spectators, and binding the preset number of target simulated spectators to the target simulated seats;
binding the simulated spectators that are not currently bound to simulated seats to the non-target simulated seats.
In an alternative embodiment, the simulated audience generation module is further to:
Acquiring event information corresponding to the event video;
Acquiring historical live-audience events whose similarity to the event information is higher than a preset similarity threshold;
Acquiring live audience information of the historical live audience event;
and generating a simulated audience set corresponding to the event video according to the live audience information.
In an alternative embodiment, the event video is an event live stream;
The identification module is further configured to: identifying highlight segments in the segments to be played of the event live stream.
The specific implementation process of the device may refer to the description of the corresponding parts in the above method embodiments, which is not repeated herein.
With the embodiment of the invention, network spectators hear a simulated live-audience sound effect when watching the highlights of a video of an event without a live audience. The simulated sound effect closely matches the actual event situation and realistically reproduces the atmosphere of a live audience, which improves the viewing experience of network spectators and the user retention rate of the platform playing such event videos.
Embodiments of the present invention provide a non-volatile computer storage medium having stored thereon at least one executable instruction for performing the method for processing video of an event without a live audience in any of the above method embodiments.
FIG. 10 illustrates a schematic diagram of a computing device provided by an embodiment of the present invention. The specific embodiments of the present invention are not limited to a particular implementation of a computing device.
As shown in fig. 10, the computing device may include: a processor 1002, a communication interface 1004, a memory 1006, and a communication bus 1008.
Wherein: the processor 1002, communication interface 1004, and memory 1006 communicate with each other via a communication bus 1008. Communication interface 1004 is used for communicating with network elements of other devices, such as clients or other servers. The processor 1002 is configured to execute the program 1010, and may specifically perform the relevant steps in the above-described embodiment of the method for processing video of a live audience-free event.
In particular, program 1010 may include program code including computer operating instructions.
The processor 1002 may be a central processing unit CPU, or an Application-specific integrated Circuit ASIC (Application SPECIFIC INTEGRATED Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included by the computing device may be the same type of processor, such as one or more CPUs; but may also be different types of processors such as one or more CPUs and one or more ASICs.
Memory 1006 for storing programs 1010. The memory 1006 may include high-speed RAM memory or may further include non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functionality of some or all of the components according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.
Claims (8)
1. A method for processing video of an event without a live audience, comprising:
Constructing an event venue model corresponding to the event video, wherein the event venue model comprises at least one simulated seat and at least one simulated video acquisition device, each simulated seat has corresponding seat position information, and each simulated video acquisition device has corresponding device position information;
Generating a simulated audience set corresponding to the event video, wherein the simulated audience set comprises at least one simulated audience, and each simulated audience has corresponding audience information;
Binding the simulated seats in the event venue model with simulated spectators in the simulated spectator set to generate a simulated venue live spectator model corresponding to the event video;
Identifying highlight segments in the event video; for any simulated audience, acquiring a standard audio corpus of the simulated audience according to the audience information of the simulated audience; calculating the distance between the simulated seat bound to the simulated audience and the simulated video acquisition device according to the seat position information of the simulated seat bound to the simulated audience and the device position information of the simulated video acquisition device, and correcting the standard audio corpus of the simulated audience according to the distance to obtain a corrected audio corpus of the simulated audience; generating simulated audience audio corresponding to the highlight segment based on the corrected audio corpus of each simulated audience;
embedding the simulated audience audio into the highlight segment in the event video.
2. The method of claim 1, wherein the identifying highlight segments in the event video further comprises: identifying a highlight segment in the event video and the excitement level of the highlight segment;
the generating simulated audience audio corresponding to the highlight segment based on the corrected audio corpus of each simulated audience further comprises: generating simulated audience audio corresponding to the highlight segment based on the corrected audio corpus of each simulated audience and the excitement level of the highlight segment.
3. The method of claim 1, wherein each simulated seat has a corresponding seat category, and the audience information includes country information;
the binding the simulated seats in the event venue model with simulated spectators in the simulated spectator set further comprises:
determining the simulated seats in the event venue model whose seat category is core seat as target simulated seats;
identifying, from the simulated spectator set, target simulated spectators whose country information matches the country information of the competitors corresponding to the event video;
acquiring a preset number of the target simulated spectators, and binding the preset number of target simulated spectators to the target simulated seats;
binding the simulated spectators that are currently not bound to a simulated seat to the non-target simulated seats.
4. The method according to any one of claims 1-3, wherein the generating a simulated audience set corresponding to the event video further comprises:
Acquiring event information corresponding to the event video;
Acquiring historical live audience events with the similarity with the event information higher than a preset similarity threshold;
Acquiring live audience information of the historical live audience event;
and generating a simulated audience set corresponding to the event video according to the live audience information.
5. The method according to any one of claims 1-3, wherein the event video is an event live stream;
the identifying highlight segments in the event video further comprises: identifying highlight segments in the segments of the event live stream that are awaiting playback.
6. A device for processing video of an event without a live audience, comprising:
the venue model construction module, used for constructing an event venue model corresponding to the event video, wherein the event venue model comprises at least one simulated seat and at least one simulated video acquisition device, each simulated seat has corresponding seat information, the seat information includes seat position information, and each simulated video acquisition device has corresponding device position information;
The simulated audience generation module is used for generating a simulated audience set corresponding to the event video, wherein the simulated audience set comprises at least one simulated audience, and each simulated audience has corresponding audience information;
The fusion module is used for binding the simulated seats in the event venue model with the simulated spectators in the simulated spectator set to generate a simulated venue live spectator model corresponding to the event video;
The identification module is used for identifying highlight segments in the event video;
The audio generation module is used for: for any simulated audience, acquiring a standard audio corpus of the simulated audience according to the audience information of the simulated audience; calculating the distance between the simulated seat bound to the simulated audience and the simulated video acquisition device according to the seat position information of the simulated seat bound to the simulated audience and the device position information of the simulated video acquisition device, and correcting the standard audio corpus of the simulated audience according to the distance to obtain a corrected audio corpus of the simulated audience; and generating simulated audience audio corresponding to the highlight segment based on the corrected audio corpus of each simulated audience;
An embedding module, used for embedding the simulated audience audio into the highlight segment in the event video.
7. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
The memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the method for processing video of events without live audiences according to any one of claims 1-5.
8. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the method for processing video of events without live audiences according to any one of claims 1-5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111194186.5A CN113923512B (en) | 2021-10-13 | 2021-10-13 | Method, device and computing equipment for processing video of events without live audiences |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113923512A CN113923512A (en) | 2022-01-11 |
| CN113923512B (en) | 2024-07-16 |
Family
ID=79239971
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111194186.5A Active CN113923512B (en) | 2021-10-13 | 2021-10-13 | Method, device and computing equipment for processing video of events without live audiences |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113923512B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105635834A (en) * | 2015-12-20 | 2016-06-01 | 天脉聚源(北京)科技有限公司 | Competition result displaying method and device |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20090006371A (en) * | 2007-07-11 | 2009-01-15 | 야후! 인크. | Method and system for providing virtual co-existence to broadcast viewers or listeners in an online broadcasting system (virtual co-presence) |
| US8360835B2 (en) * | 2007-10-23 | 2013-01-29 | I-Race, Ltd. | Virtual world of sports competition events with integrated betting system |
| CN105263038B (en) * | 2015-09-24 | 2018-08-24 | 天脉聚源(北京)科技有限公司 | The method and apparatus of dynamic displaying virtual auditorium |
| JP6461850B2 (en) * | 2016-03-31 | 2019-01-30 | 株式会社バンダイナムコエンターテインメント | Simulation system and program |
| US10621784B2 (en) * | 2017-09-29 | 2020-04-14 | Sony Interactive Entertainment America Llc | Venue mapping for virtual reality spectating of live events |
| US11625987B2 (en) * | 2019-03-12 | 2023-04-11 | Fayble, LLC | Systems and methods for generation of virtual sporting events |
| US11731047B2 (en) * | 2019-03-12 | 2023-08-22 | Fayble, LLC | Systems and methods for manipulation of outcomes for virtual sporting events |
| US11128925B1 (en) * | 2020-02-28 | 2021-09-21 | Nxp Usa, Inc. | Media presentation system using audience and audio feedback for playback level control |
| CN113395540A (en) * | 2021-06-09 | 2021-09-14 | 广州博冠信息科技有限公司 | Virtual broadcasting system, virtual broadcasting implementation method, device and equipment, and medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113923512A (en) | 2022-01-11 |
Similar Documents
| Publication | Title |
|---|---|
| KR101535579B1 (en) | Augmented reality interaction implementation method and system |
| CN107707931B (en) | Method and device for generating interpretation data according to video data, method and device for synthesizing data and electronic equipment |
| CN111552869B (en) | Method and device for displaying housing information |
| US10521963B1 (en) | Methods and systems for representing a pre-modeled object within virtual reality data |
| CN110727341A (en) | Event augmentation based on augmented reality effects |
| EP2444971A2 (en) | Centralized database for 3-D and other information in videos |
| TW202123178A (en) | Method for realizing lens splitting effect, device and related products thereof |
| CN108337573A (en) | A kind of implementation method that race explains in real time and medium |
| US11978484B2 (en) | Systems and methods for generating and presenting virtual experiences |
| US20170048597A1 (en) | Modular content generation, modification, and delivery system |
| CN107638690B (en) | Augmented reality implementation method, device, server and medium |
| CN105632263A (en) | Augmented reality-based music enlightenment learning device and method |
| CN109120990B (en) | Live broadcast method, device and storage medium |
| CN109408672A (en) | A kind of article generation method, device, server and storage medium |
| CN113923512B (en) | Method, device and computing equipment for processing video of events without live audiences |
| JP6450305B2 (en) | Information acquisition apparatus, information acquisition method, and information acquisition program |
| CN105894084A (en) | Theater box office people counting method, device and system |
| CN113379514A (en) | Information recommendation method and device, electronic equipment and medium |
| KR102045347B1 (en) | Surppoting apparatus for video making, and control method thereof |
| CN113676775A (en) | Method for implanting advertisement in video and game by using artificial intelligence |
| CN118612499A (en) | Image generation method, device, equipment, storage medium and computer program product |
| CN114120196B (en) | Highlight video processing method and device, storage medium and electronic equipment |
| Acir | Video games, virtual reality and augmented reality applications in tourism promotion and marketing |
| Milanović | Musicians' attitudes towards concert activities in the first year of the pandemic: Belgrade context |
| CN109391849B (en) | Processing method and system, multimedia output device and memory |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |