CN114268815B - Video quality determining method, device, electronic equipment and storage medium - Google Patents
- Publication number: CN114268815B (application CN202111537652.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- determined
- collocation
- type
- quality
- Prior art date
- Legal status: Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The disclosure relates to a video quality determining method and apparatus, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring a video to be determined and video feature information of the video to be determined; acquiring element reference information corresponding to a first type collocation element under the condition that the video to be determined contains the first type collocation element, wherein the element reference information represents a plurality of historical operation data corresponding to videos related to the first type collocation element in a preset historical time window; and processing the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of a user operating the video to be determined. Because the quality data of the video is obtained by combining the video feature information of the video with the element reference information corresponding to the first type collocation element contained in the video, the accuracy of the quality data of the video can be improved.
Description
Technical Field
The present disclosure relates to video quality determination technologies, and in particular, to a video quality determination method, apparatus, electronic device, and storage medium.
Background
In the video field, a recommendation system performs personalized recommendation according to consumption data of users on videos (such as the number of likes, favorites, comments, and the like).
In the related art, a newly uploaded video has no accumulated consumption data, so it is difficult to recommend it quickly. Cold start is therefore generally performed based on the video content itself, that is, whether the video is a high-quality video is judged according to attribute information of the video content, and high-quality videos are then recommended. However, judging whether a video is high-quality directly from attribute information of its content is inaccurate, which in turn degrades the recommendation effect.
Disclosure of Invention
The disclosure provides a video quality determining method, a video quality determining device, an electronic device and a storage medium, so as to at least solve the problem of poor accuracy of video quality prediction in the related art. The technical scheme of the present disclosure is as follows:
According to a first aspect of an embodiment of the present disclosure, there is provided a video quality determining method, including:
acquiring a video to be determined and video characteristic information of the video to be determined;
acquiring element reference information corresponding to a first type collocation element under the condition that the video to be determined contains the first type collocation element; the element reference information represents a plurality of historical operation data corresponding to the video related to the first type collocation element in a preset historical time window;
And processing the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of operating the video to be determined by a user.
Optionally, the acquiring the video feature information of the video to be determined includes:
Acquiring duration information and/or picture size information of the video to be determined, and determining video attribute information of the video to be determined according to the duration information and/or the picture size information;
Acquiring picture resolution information and/or picture collocation information of the video to be determined, and determining video picture quality information of the video to be determined according to the picture resolution information and/or the picture collocation information;
and determining the video characteristic information of the video to be determined according to the video attribute information and the video picture quality information.
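Taken together, the steps above amount to assembling a small feature record from two groups of information. The following is a minimal, hypothetical sketch of that assembly; all field names (`duration_s`, `aspect_ratio`, and so on) are illustrative assumptions and do not appear in the disclosure.

```python
# Hypothetical sketch of assembling the video feature information described
# above. All field names are assumptions for illustration only.

def build_video_features(duration_s, width_px, height_px, resolution_p, collocation_score):
    # Video attribute information: from duration and/or picture size (aspect ratio).
    attributes = {
        "duration_s": duration_s,
        "aspect_ratio": width_px / height_px,
    }
    # Video picture quality information: from resolution and/or picture collocation.
    picture_quality = {
        "resolution_p": resolution_p,
        "collocation_score": collocation_score,
    }
    # The video feature information combines both groups.
    return {**attributes, **picture_quality}
```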
Optionally, the target video quality determination model is trained according to the following steps:
Obtaining a plurality of sample videos having a plurality of different types of sample collocation elements, and video characteristic information of each sample video, wherein each sample video carries a label representing whether the video is a high-quality video or not;
obtaining element reference information corresponding to each type of sample collocation element, wherein the element reference information represents a plurality of historical operation data corresponding to sample videos related to the type of sample collocation element in a preset historical time window;
respectively processing element reference information corresponding to each type of sample collocation element and video characteristic information of sample videos with the type of sample collocation element through a video quality determination model, and updating model parameters of the video quality determination model by combining labels carried by a plurality of sample videos with the type of sample collocation element until the video quality determination model meets convergence conditions, and ending training;
And determining a video quality determination model at the end of training as the target video quality determination model.
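The training steps above can be read as data preparation for a binary classifier: one training row per (sample video, sample-element type) pair, labeled by whether the video is high-quality. The sketch below shows only that row assembly; the dictionary layout and field names are assumptions, not part of the disclosure.

```python
def build_training_set(sample_videos, element_reference):
    """Pair each sample video's feature information with the element
    reference information of each type of sample collocation element it
    carries; the high-quality label is the training target (a sketch
    under assumed data structures)."""
    X, y = [], []
    for video in sample_videos:
        for element_type in video["element_types"]:
            # One row: element reference information followed by video features.
            row = list(element_reference[element_type]) + list(video["features"])
            X.append(row)
            y.append(1 if video["is_high_quality"] else 0)
    return X, y
```

The resulting `X`/`y` could then be fed to any binary classifier trained until a convergence condition is met, as the steps describe.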
Optionally, after acquiring the video to be determined, the method further comprises:
Inquiring whether the video to be determined has user behavior history data or not;
Obtaining element reference information corresponding to the first type collocation element comprises the following steps:
And when the video to be determined does not have the user behavior history data or the number of the user behavior history data of the video to be determined is smaller than a preset number, acquiring element reference information corresponding to the first type collocation element according to the first type collocation element.
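The gating condition above — no user behavior history data, or fewer records than a preset number — reduces to a one-line check. The preset number used here is an arbitrary assumed value.

```python
def needs_element_reference(history_record_count, preset_min=10):
    # Fall back to element reference information when the video to be
    # determined has no user behavior history data, or fewer records than
    # a preset number (10 is an assumed value, not from the disclosure).
    return history_record_count < preset_min
```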
Optionally, acquiring element reference information corresponding to the first type collocation element includes at least one of the following:
acquiring the times of video shooting by using the first type collocation element;
acquiring the exposure times and/or playing times of other videos with the first type collocation elements;
Acquiring the times of operation behaviors generated by a plurality of users on other videos with the first type collocation elements;
acquiring the number of other videos with the first type collocation element in a video database;
The number of users using the first type collocation element is obtained.
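The five statistics listed above are all aggregations over a preset historical time window. A dependency-free sketch follows; the per-event log schema (`element_id`, `kind`, `user_id`, `time`) is an assumption made for illustration.

```python
from datetime import timedelta

def element_reference_info(events, element_id, now, window_days=7):
    """Aggregate hypothetical per-event logs into the element reference
    statistics listed above, over a preset historical time window."""
    start = now - timedelta(days=window_days)
    recent = [e for e in events
              if e["element_id"] == element_id and e["time"] >= start]
    return {
        # times videos were shot with the element
        "shoot_count": sum(1 for e in recent if e["kind"] == "shoot"),
        # exposure and play counts of other videos with the element
        "exposure_count": sum(1 for e in recent if e["kind"] == "exposure"),
        "play_count": sum(1 for e in recent if e["kind"] == "play"),
        # operation behaviors (likes, comments, forwards) on those videos
        "operation_count": sum(1 for e in recent
                               if e["kind"] in ("like", "comment", "share")),
        # distinct users who used the element (each user counted once)
        "shooter_count": len({e["user_id"] for e in recent
                              if e["kind"] == "shoot"}),
    }
```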
Optionally, before the video feature information of the video to be determined and the element reference information corresponding to the first type of collocation element are processed by the target video quality determination model, the method further includes:
Acquiring element reference information corresponding to a second type collocation element under the condition that the video to be determined contains the second type collocation element;
the processing, by the target video quality determining model, the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element to obtain quality data of the video to be determined includes:
Processing the video characteristic information of the video to be determined and element reference information corresponding to the first type collocation element through a target video quality determination model to obtain a first quality score of the video to be determined;
processing the video characteristic information of the video to be determined and element reference information corresponding to the second type collocation element through the target video quality determination model to obtain a second quality score of the video to be determined;
And determining the quality data of the video to be determined according to the first quality score and the second quality score.
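The disclosure leaves open how the first and second quality scores are merged into one quality data value. Purely as an assumption, one plausible rule is a weighted average:

```python
def combine_quality_scores(first_score, second_score, weight=0.5):
    # The combination rule is not fixed by the disclosure; a weighted
    # average (equal weights by default) is one plausible assumption.
    return weight * first_score + (1 - weight) * second_score
```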
According to a second aspect of an embodiment of the present disclosure, there is provided a video pushing method, including:
Determining quality data of candidate videos according to the method of the first aspect;
and pushing the candidate video when the quality data of the candidate video is larger than a preset threshold value.
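The video pushing method of the second aspect reduces to a threshold filter over candidate videos. A minimal sketch, in which the scoring callable and the threshold value are assumptions:

```python
def select_videos_to_push(candidate_videos, quality_of, preset_threshold=0.5):
    # Push a candidate video only when its quality data is greater than the
    # preset threshold (0.5 is an assumed value, not from the disclosure).
    return [v for v in candidate_videos if quality_of(v) > preset_threshold]
```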
According to a third aspect of the embodiments of the present disclosure, there is provided a video quality determining apparatus, including:
The first acquisition module is configured to acquire a video to be determined and video characteristic information of the video to be determined;
The second acquisition module is configured to acquire element reference information corresponding to a first type collocation element under the condition that the video to be determined comprises the first type collocation element; the element reference information represents a plurality of historical operation data corresponding to the video related to the first type collocation element in a preset historical time window;
the processing module is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of a user operating the video to be determined.
Optionally, the first acquisition module includes:
The first acquisition sub-module is configured to acquire duration information and/or picture size information of the video to be determined, and determine video attribute information of the video to be determined according to the duration information and/or the picture size information;
The second acquisition submodule is configured to acquire picture resolution information and/or picture collocation information of the video to be determined, and to determine video picture quality information of the video to be determined according to the picture resolution information and/or the picture collocation information;
and the first determination submodule is configured to determine video characteristic information of the video to be determined according to the video attribute information and the video picture quality information.
Optionally, the apparatus further comprises:
A first obtaining module configured to obtain a plurality of sample videos having a plurality of different types of sample collocation elements and video feature information of each of the sample videos, each sample video carrying a label indicating whether the video is a high-quality video;
The second obtaining module is configured to obtain element reference information corresponding to each type of sample collocation element, wherein the element reference information represents a plurality of historical operation data corresponding to sample videos related to the type of sample collocation element in a preset historical time window;
The training module is configured to process element reference information corresponding to each type of sample collocation element and video characteristic information of the sample video with the type of sample collocation element through the video quality determination model, and update model parameters of the video quality determination model by combining labels carried by a plurality of sample videos with the type of sample collocation element until the video quality determination model meets convergence conditions, and finish training;
a first determination module configured to determine a video quality determination model at the end of training as the target video quality determination model.
Optionally, after the first acquisition module, the apparatus further includes:
the query module is configured to query whether the video to be determined has user behavior history data or not;
the second acquisition module includes:
The third obtaining sub-module is configured to obtain element reference information corresponding to the first type collocation element according to the first type collocation element when the video to be determined does not have user behavior history data or the number of the user behavior history data of the video to be determined is smaller than a preset number.
Optionally, the second acquisition module includes at least one of:
A fourth obtaining sub-module configured to obtain the number of times of video shooting using the first type collocation element;
A fifth obtaining sub-module configured to obtain the exposure times and/or play times of other videos with the first type collocation element;
A sixth obtaining sub-module configured to obtain a number of times that a plurality of users have operational behaviors on other videos having the first type of collocation element;
A seventh obtaining sub-module configured to obtain a number of other videos having the first type collocation element in a video database;
an eighth acquisition sub-module is configured to acquire the number of users using the first type collocation element.
Optionally, before the processing module, the apparatus further comprises:
the third acquisition module is configured to acquire element reference information corresponding to a second type collocation element under the condition that the video to be determined comprises the second type collocation element;
The processing module comprises:
The first processing submodule is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain a first quality score of the video to be determined;
the second processing submodule is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the second type collocation element through the target video quality determination model to obtain a second quality score of the video to be determined;
and a second determination submodule configured to determine quality data of the video to be determined according to the first quality score and the second quality score.
According to a fourth aspect of embodiments of the present disclosure, there is provided a video pushing apparatus, including:
a second determining module configured to determine quality data of candidate videos according to the method of the first aspect;
And the pushing module is configured to push the candidate video when the quality data of the candidate video is greater than a preset threshold value.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
A processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video quality determination method according to the first aspect or the video push method according to the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the video quality determination method as set forth in the first aspect or the video pushing method as set forth in the second aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising readable program code which, when executed by a processor of an electronic device, enables the electronic device to perform the video quality determination method as set forth in the first aspect or the video pushing method as set forth in the second aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
According to the method, the apparatus, the electronic device, and the storage medium provided above, the video to be determined and the video feature information of the video to be determined are obtained; then, under the condition that it is detected that the video to be determined contains a first type collocation element, element reference information corresponding to the first type collocation element is obtained, wherein the element reference information represents a plurality of historical operation data corresponding to videos related to the first type collocation element in a preset historical time window; and the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element are processed through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of a user operating the video to be determined. Because the quality data of the video is obtained by adopting the target video quality determination model and combining the video feature information of the video with the element reference information corresponding to the first type collocation element contained in the video, the accuracy of the obtained quality data can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flowchart illustrating a method of video quality determination, according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a target video quality determination model training method, according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of training a target video quality determination model according to an exemplary embodiment;
FIG. 4 is a block diagram of a video quality determination apparatus according to an exemplary embodiment;
Fig. 5 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Fig. 1 is a flowchart illustrating a video quality determining method according to an exemplary embodiment. As shown in Fig. 1, the method may be applied to a server and includes the following steps.
In step S11, a video to be determined and video feature information of the video to be determined are acquired.
The server may obtain a video to be determined, that is, a video whose quality is to be determined. The video to be determined has no consumption data, or too little consumption data, so its quality cannot be predicted from consumption data. The video to be determined contains collocation elements, which may be template elements used when the video is shot. For example, a short-video APP may offer multiple video shooting play modes, such as magic-expression play modes or music play modes, and a collocation element may be a specific magic expression or piece of music. When shooting a video, a user may select an existing collocation element. Each video containing a collocation element corresponds to the identifier of that element, and videos with the same collocation element share the same identifier; that is, each type of element corresponds to one identifier. After the user finishes shooting the video to be determined, the server can obtain it.
The video characteristic information of the video to be determined is characteristic information inherent to the video to be determined, and specifically, the method for acquiring the video characteristic information of the video to be determined may be:
Acquiring duration information and/or picture size information of the video to be determined, and determining video attribute information of the video to be determined according to the duration information and/or the picture size information;
Acquiring picture resolution information and/or picture collocation information of the video to be determined, and determining video picture quality information of the video to be determined according to the picture resolution information and/or the picture collocation information;
and determining the video characteristic information of the video to be determined according to the video attribute information and the video picture quality information.
The video characteristic information of the video to be determined may be determined from video attribute information and video picture quality information. The video attribute information may be determined from duration information and/or picture size information: from the duration information alone, from the picture size information alone, or from both. The duration information is the duration of the video, and the picture size information may be the picture aspect ratio of the video. The video picture quality information may be determined from picture resolution information and/or picture collocation information of the video to be determined: from the resolution information alone, from the collocation information alone, or from both. The picture collocation information may be determined according to the aesthetic degree of the video picture. Specifically, the aesthetic degree may be determined according to preset features contained in the video picture: the more preset features, the higher the corresponding aesthetic value. The preset features may include, for example, rich color and the absence of bloody scenes.
In step S12, under the condition that the video to be determined includes a first type collocation element, element reference information corresponding to the first type collocation element is obtained; the element reference information represents a plurality of historical operation data corresponding to the videos related to the first type collocation element in a preset historical time window.
The collocation elements contained in the video to be determined are detected first. Because each type of collocation element corresponds to an identifier, the corresponding element reference information can be obtained for each type of collocation element in the video to be determined. When it is detected that the video to be determined contains the first type collocation element, the element reference information corresponding to the first type collocation element is obtained. The element reference information may be data within a preset historical time window, and the preset historical time window may be, for example, the most recent week. The element reference information corresponding to the first type collocation element is the historical operation data corresponding to all videos related to that type of collocation element. The video database stores the historical operation data generated by all user operations on the collocation element, associated with the identifier of that type of collocation element, so the server can obtain, from the video database according to the identifier, the historical operation data corresponding to all videos with that identifier. The first type element may contain a plurality of elements.
Specifically, the element reference information corresponding to the first type collocation element may include at least one of the following:
For example, the element reference information may include a shooting count, that is, the number of times videos have been shot using the collocation element, where the shooting count includes cases in which a user shoots a video using the collocation element but does not upload or publish it.
The element reference information may further include an exposure count, that is, the number of exposures of other videos with the collocation element, where the exposure count is the total number of times other users see the corresponding videos after the platform pushes the videos with the collocation element to them.
The element reference information may further include a play count, that is, the server may obtain the number of times other videos with the collocation element have been played, where a play is counted when, after the platform pushes a video with the collocation element to other users, a user watches it for at least a preset duration; the preset duration may be, for example, 3 seconds.
The element reference information may further include an operation count, that is, the server may obtain the number of operation behaviors generated by users on other videos with the collocation element, where specific operation behaviors may be likes, comments, forwarding, and the like.
The element reference information may also include the number of works, that is, the number of other videos in the video database that have the collocation element and that users have shot, uploaded, and published.
The element reference information may further include the number of shooters, that is, the server may obtain the number of users who use the collocation element, where a user who shoots multiple videos with the same collocation element is counted only once.
In step S13, the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element are processed through a target video quality determination model, so as to obtain quality data of the video to be determined, where the quality data represents probability of operating the video to be determined by a user.
After the corresponding element reference information is obtained according to the first type collocation element contained in the video to be determined, the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element are processed to obtain the quality data of the video to be determined. The quality data represents the probability that a user will operate on the video to be determined and may be a value between 0 and 1: the higher the quality data, the greater the potential for users to operate on the video, that is, the better and the more popular the video to be determined is. The target video quality determination model is obtained by training a video quality determination model on a plurality of sample videos with a plurality of different types of sample collocation elements and the video feature information of each sample video. The video quality determination model may be a classical machine learning model such as xgboost. Each sample video with a sample collocation element carries a label indicating whether the sample video is a high-quality video, so that when the video quality determination model is trained, the loss function is calculated based on the output of the model and the video label in order to adjust the parameters of the model.
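Step S13 scores a combined feature vector with the trained model to obtain a probability in (0, 1). The description names xgboost as one possible model; to keep this sketch dependency-free, a fixed logistic scorer stands in for it, with illustrative (not learned) weights.

```python
import math

def quality_data(feature_vector, weights, bias=0.0):
    # Stand-in for the target video quality determination model: a logistic
    # score over the concatenated video feature information and element
    # reference information. The weights here are illustrative, not learned;
    # the description names xgboost as one real choice of model.
    z = bias + sum(w * x for w, x in zip(weights, feature_vector))
    return 1.0 / (1.0 + math.exp(-z))  # quality data in (0, 1)
```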
According to the video quality determination method, the video to be determined and its video feature information are first obtained. Then, when it is detected that the video to be determined contains a first type collocation element, element reference information corresponding to the first type collocation element is obtained, where the element reference information represents historical operation data corresponding to a plurality of videos related to the first type collocation element within a preset historical time window. The video feature information of the video to be determined and the element reference information corresponding to the first type collocation element are then processed through a target video quality determination model to obtain quality data of the video to be determined, where the quality data represents the probability that a user operates on the video to be determined. Because the quality data is obtained by the target video quality determination model from both the video feature information of the video and the element reference information corresponding to the first type collocation element contained in the video, the accuracy of the obtained quality data can be improved.
The video quality determination method is applied to cold-start scenarios, that is, scenarios in which the video to be determined has no consumption data, or too little consumption data for its quality to be predicted from that data. However, short-term historical consumption data for a collocation element represents well the degree to which that type of collocation element will be consumed over the next few days, that is, the probability that other users will perform operation behaviors on the video to be determined. The method therefore obtains the corresponding element reference information according to the first type collocation element contained in the video to be determined, so that all consumption data corresponding to that type of collocation element can be used to predict the quality of the video to be determined. This avoids the inaccurate quality determination that results when the video to be determined has no or too little consumption data, improves the accuracy of video quality determination and the recommendation effect, increases the probability that users perform operation behaviors on recommended videos, and stimulates more users to create videos with that type of collocation element, greatly increasing exposure by driving the supply of collocation elements. In particular, element reference information within one week can be used, so that content with recently popular collocation elements is more easily seen by the vast majority of users.
On the basis of the above technical solution, after the video to be determined having the collocation element is obtained, it is necessary to determine whether the video to be determined is a video lacking consumption data, which includes: querying whether the video to be determined has user behavior history data; and when the video to be determined has no user behavior history data, or the amount of its user behavior history data is smaller than a preset number, acquiring the element reference information corresponding to the first type collocation element according to the first type collocation element contained in the video to be determined.
Whether the video to be determined has user behavior history data is queried. When the video to be determined has no user behavior history data, or the amount of its user behavior history data is smaller than a preset number, the element reference information is acquired according to the collocation elements of the video to be determined; otherwise, the video quality is predicted directly from the user behavior history data of the video to be determined. The user behavior history data is the historical operation data of a plurality of users on the video to be determined. When the video to be determined has no user behavior history data, it can be determined to be a video newly released by the user, whose quality cannot be determined from its consumption data. When the amount of user behavior history data of the video to be determined is smaller than the preset number, quality prediction from its consumption data is inaccurate because the data is too sparse. In both cases, the video quality determination method can acquire the element reference information according to the first type collocation element contained in the video to be determined, thereby improving the accuracy of video quality determination.
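The cold-start gate described above amounts to a simple check. The sketch below is illustrative; the threshold value and names are assumptions (the disclosure only says "a preset number").

```python
# Illustrative sketch of the cold-start check: fall back to element
# reference information when the video has no user behavior history data
# (newly released) or fewer records than a preset number (too sparse).

MIN_RECORDS = 100  # the "preset number"; value assumed for illustration

def should_use_element_reference(behavior_history):
    """behavior_history: list of historical operation records for the video,
    or None if the video has no user behavior history data at all."""
    return behavior_history is None or len(behavior_history) < MIN_RECORDS

assert should_use_element_reference(None)         # newly released video
assert should_use_element_reference([1] * 5)      # too little consumption data
assert not should_use_element_reference([1] * 500)  # enough data: predict directly
```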
Fig. 2 is a flowchart illustrating a quality prediction model training method according to an exemplary embodiment. As shown in fig. 2, on the basis of the above technical solution, the training method of the target video quality determination model may include the following steps:
In step S21, a plurality of sample videos having a plurality of different types of sample collocation elements and the video feature information of each sample video are obtained, each sample video carrying a tag indicating whether it is a premium video.
First, training samples need to be obtained. A plurality of sample videos having a plurality of different types of sample collocation elements can be obtained directly from a database, each containing one or more types of sample collocation elements. The plurality of sample videos should contain as many different types of sample collocation elements as possible, so that the trained target video quality determination model can determine the quality of videos with various types of collocation elements. Each sample video carries a tag indicating whether it is a premium video; the tags are marked by manual review and can be divided into premium tags and inferior tags: a premium tag indicates a greater potential for users to perform operation behaviors on the video, and an inferior tag indicates a smaller potential. Specifically, a plurality of sample videos having a plurality of different types of sample collocation elements within a preset time can be obtained, and the preset time may be one week or one month.
In step S22, element reference information corresponding to each type of sample collocation element is obtained, where the element reference information represents historical operation data corresponding to a plurality of sample videos related to that type of sample collocation element within a preset historical time window.
The sample collocation elements also have identifiers, and the element reference information corresponding to each type of sample collocation element can be obtained according to its identifier. The element reference information represents historical operation data corresponding to a plurality of sample videos related to that type of sample collocation element within a preset historical time window, which may be one week.
In step S23, the element reference information corresponding to each type of sample collocation element and the video feature information of the sample videos having that type of sample collocation element are processed through the video quality determination model, and the model parameters of the video quality determination model are updated in combination with the tags carried by the plurality of sample videos having that type of sample collocation element, until the video quality determination model meets a convergence condition and training ends.
The element reference information corresponding to each type of sample collocation element and the video feature information of the sample videos having that type of sample collocation element are processed through the video quality determination model to obtain the model's quality determination result for those sample videos. The true quality result of each sample video is obtained from the tag it carries, and the gap between the quality determination result and the true quality result is calculated to obtain a loss function value. A parameter adjustment value for the video quality determination model is then obtained according to a gradient descent algorithm, and the model parameters are adjusted accordingly. The video quality determination model is trained over multiple rounds on the plurality of sample videos having different types of sample collocation elements and the video feature information of each sample video, with its parameters adjusted continuously until it meets a convergence condition, at which point training ends. The convergence condition may be that the loss function value obtained from the quality determination result and the true quality result no longer changes, or that its fluctuation range is smaller than a preset threshold, which may be 0.1.
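The convergence condition above can be sketched as a check on the recent loss history. This is an illustrative interpretation under stated assumptions: "fluctuation range" is read here as the difference between the two most recent loss values, and 0.1 is the preset threshold named in the text.

```python
# Sketch of the convergence test: training stops when the loss function
# value stops changing, i.e. its fluctuation falls below a preset
# threshold (0.1 in the text). The loss values used below are made up.

CONVERGENCE_THRESHOLD = 0.1

def has_converged(loss_history, threshold=CONVERGENCE_THRESHOLD):
    """Converged when the two most recent loss values differ by less
    than the threshold (an assumed reading of 'fluctuation range')."""
    if len(loss_history) < 2:
        return False  # not enough rounds to measure fluctuation
    return abs(loss_history[-1] - loss_history[-2]) < threshold

assert not has_converged([0.9, 0.6])        # still improving: fluctuation 0.3
assert has_converged([0.9, 0.6, 0.58])      # fluctuation 0.02 < 0.1: stop training
```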
In step S24, a video quality determination model at the end of training is determined as the target video quality determination model.
After training ends, the prediction quality of the video quality determination model meets the requirement, and at this time the video quality determination model can be determined as the target video quality determination model for determining video quality. With the target video quality determination model obtained by the above training, when the quality of the video to be determined is predicted, the closer the element reference information and video feature information corresponding to the video to be determined are to those corresponding to premium videos, the higher the quality data of the video to be determined.
By introducing consumption information (element reference information) to train the model, the consumption potential of a video can be estimated directly. The higher the potential, the more easily the video is liked by users, which stimulates users' willingness to create and drives video creation.
Fig. 3 is a flowchart of a training method of a target video quality determination model according to an exemplary embodiment. As shown in fig. 3, taking a magic expression as an example of a collocation element, the consumption information (user behavior history data) corresponding to the magic expression used by the video to be determined is obtained first; the consumption information may specifically include the number of shots, the exposure count, the play count, the number of works, and the number of shooting users. The video attribute information and video picture quality information of the video to be determined are then obtained. The consumption information, video attribute information, and video picture quality information are input into the video quality determination model (here, the classical machine learning model xgboost), which outputs a premium-video score and an inferior-video score for the video to be determined. A loss function value is calculated from these scores and the tag carried by the video, and the model parameters are then modified accordingly.
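Before such a model can score a video, the three information sources named above must be concatenated into a single feature vector. The sketch below illustrates only that assembly step; every field name and value is a hypothetical placeholder, and the downstream xgboost model is not shown.

```python
# Hypothetical feature assembly for the Fig. 3 pipeline: consumption
# information of the magic expression, video attribute information, and
# video picture quality information are concatenated into one vector
# before being fed to the model. All field names are illustrative.

def build_feature_vector(consumption, attributes, picture_quality):
    consumption_feats = [
        consumption["shoot_count"],     # number of shots
        consumption["exposure_count"],  # exposure count
        consumption["play_count"],      # play count
        consumption["work_count"],      # number of works
        consumption["shooter_count"],   # number of shooting users
    ]
    attribute_feats = [attributes["duration_s"], attributes["width"], attributes["height"]]
    quality_feats = [picture_quality["resolution_score"]]
    return consumption_feats + attribute_feats + quality_feats

vec = build_feature_vector(
    {"shoot_count": 1200, "exposure_count": 50000, "play_count": 30000,
     "work_count": 800, "shooter_count": 950},
    {"duration_s": 15, "width": 720, "height": 1280},
    {"resolution_score": 0.9},
)
print(len(vec))  # 9
```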
Based on the above technical solution, a video may further include multiple types of collocation elements, where when the video includes multiple types of collocation elements, the following method may be used to determine the quality of the video:
Before the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element are processed through the target video quality determination model, the method further comprises:
Acquiring element reference information corresponding to a second type collocation element under the condition that the video to be determined contains the second type collocation element;
the processing, by the target video quality determining model, the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element to obtain quality data of the video to be determined includes:
Processing the video characteristic information of the video to be determined and element reference information corresponding to the first type collocation element through a target video quality determination model to obtain a first quality score of the video to be determined;
processing, through the target video quality determination model, the video feature information of the video to be determined and the element reference information corresponding to the second type collocation element to obtain a second quality score of the video to be determined;
And determining the quality data of the video to be determined according to the first quality score and the second quality score.
After it is detected that the video to be determined contains the first type collocation element, detection may continue. When it is detected that the video to be determined contains a second type collocation element, the element reference information corresponding to the second type collocation element may be obtained. A first quality score of the video to be determined is then determined according to the video feature information and the first type collocation element, and a second quality score is determined according to the video feature information and the second type collocation element. Finally, the quality data of the video to be determined is determined from the first quality score and the second quality score. Specifically, the first quality score and the second quality score may be added directly to obtain the quality data, or different weights may be set for the first type collocation element and the second type collocation element, with each quality score multiplied by its corresponding weight before the results are added. The first type collocation element and the second type collocation element may each include a plurality of elements.
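Both combination strategies described above, direct addition and the weighted sum, can be sketched in a few lines. The weight values below are assumptions for illustration; the disclosure does not fix them.

```python
# Sketch of the two score-combination strategies: direct addition, or a
# weighted sum with one weight per collocation-element type. Weights here
# are illustrative; the text leaves them unspecified.

def combine_scores(scores, weights=None):
    """scores: per-element-type quality scores; weights: optional matching weights."""
    if weights is None:
        return sum(scores)                              # direct addition
    return sum(s * w for s, w in zip(scores, weights))  # weighted sum

# First and second quality scores 0.6 and 0.3, assumed weights 0.7 / 0.3:
assert abs(combine_scores([0.6, 0.3]) - 0.9) < 1e-9
assert abs(combine_scores([0.6, 0.3], weights=[0.7, 0.3]) - 0.51) < 1e-9
```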
Since the video may further include multiple types of collocation elements, that is, may further include a third type collocation element other than the first type collocation element and the second type collocation element, at this time, all types of collocation elements included in the video to be determined may be detected, then, quality scores corresponding to each type of collocation element may be obtained by the above method, and then, quality data of the video to be determined may be determined by using the obtained multiple quality scores.
On the basis of the above technical solution, after obtaining the quality data of the video to be determined by the target video quality determining model, the method may further include: and pushing the candidate video when the quality data of the candidate video is larger than a preset threshold value.
According to the quality data determined by the target video quality determination model, the video to be determined can be judged to be a premium video or an inferior video. Specifically, when the quality data of the video to be determined is greater than a preset threshold, the video to be determined is determined to be a premium video, that is, the potential for users to perform operation behaviors on it is greater; at this time, the video to be determined can be pushed to video playback terminals so that it receives more exposure. When the quality data is smaller than the preset threshold, the video to be determined is determined to be an inferior video and is not recommended. The output quality data can take a value between 0 and 1, and the preset threshold may be 0.5; the specific value can be determined according to actual conditions and is not particularly limited.
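The push decision above reduces to a threshold comparison. A minimal sketch, assuming the 0.5 threshold the text gives as an example:

```python
# Illustrative push decision: quality data lies in [0, 1]; a video is
# pushed only when its quality data exceeds the preset threshold
# (0.5 in the text, adjustable according to actual conditions).

PUSH_THRESHOLD = 0.5

def should_push(quality_data, threshold=PUSH_THRESHOLD):
    return quality_data > threshold

assert should_push(0.8)       # premium video: push to playback terminals
assert not should_push(0.3)   # inferior video: do not recommend
```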
Fig. 4 is a block diagram illustrating a video quality determination apparatus according to an exemplary embodiment. Referring to fig. 4, the apparatus includes a first acquisition module 41, a second acquisition module 42, and a processing module 43.
The first acquisition module is configured to acquire a video to be determined and video characteristic information of the video to be determined;
The second acquisition module is configured to acquire element reference information corresponding to a first type collocation element under the condition that the video to be determined contains the first type collocation element; the element reference information represents a plurality of historical operation data corresponding to the video related to the first type collocation element in a preset historical time window;
The processing module is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of a user operating the video to be determined.
Optionally, the first acquisition module includes:
The first acquisition sub-module is configured to acquire duration information and/or picture size information of the video to be determined, and determine video attribute information of the video to be determined according to the duration information and/or the picture size information;
The second acquisition submodule is configured to acquire picture resolution information and/or picture collocation information of the video to be determined, and to determine video picture quality information of the video to be determined according to the picture resolution information and/or the picture collocation information;
and the first determination submodule is configured to determine video characteristic information of the video to be determined according to the video attribute information and the video picture quality information.
Optionally, the apparatus further comprises:
A first obtaining module configured to obtain a plurality of sample videos having a plurality of different types of sample collocations and video feature information of each of the sample videos, each sample video carrying a tag indicating whether the video is a premium video;
The second obtaining module is configured to obtain element reference information corresponding to each type of sample collocation element, wherein the element reference information represents a plurality of historical operation data corresponding to sample videos related to the type of sample collocation element in a preset historical time window;
The training module is configured to process element reference information corresponding to each type of sample collocation element and video characteristic information of the sample video with the type of sample collocation element through the video quality determination model, and update model parameters of the video quality determination model by combining labels carried by a plurality of sample videos with the type of sample collocation element until the video quality determination model meets convergence conditions, and finish training;
A first determination module configured to determine the video quality determination model at the end of training as the target video quality determination model.
Optionally, after the first acquisition module, the apparatus further includes:
the query module is configured to query whether the video to be determined has user behavior history data or not;
the second acquisition module includes:
The third obtaining sub-module is configured to obtain element reference information corresponding to the first type collocation element according to the first type collocation element when the video to be determined does not have user behavior history data or the number of the user behavior history data of the video to be determined is smaller than a preset number.
Optionally, the second acquisition module includes at least one of:
A fourth obtaining sub-module configured to obtain the number of times of video shooting using the first type collocation element;
A fifth obtaining sub-module configured to obtain the exposure times and/or play times of other videos with the first type collocation element;
A sixth obtaining sub-module configured to obtain a number of times that a plurality of users have operational behaviors on other videos having the first type of collocation element;
A seventh obtaining sub-module configured to obtain a number of other videos having the first type collocation element in a video database;
an eighth acquisition sub-module is configured to acquire the number of users using the first type collocation element.
Optionally, before the processing module, the apparatus further comprises:
the third acquisition module is configured to acquire element reference information corresponding to a second type collocation element under the condition that the video to be determined comprises the second type collocation element;
The processing module comprises:
The first processing submodule is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain a first quality score of the video to be determined;
the second processing submodule is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the second type collocation element through the target video quality determination model to obtain a second quality score of the video to be determined;
and the second determination submodule is configured to determine the quality data of the video to be determined according to the first quality score and the second quality score.
According to the video quality determination apparatus provided by this embodiment, the video to be determined and its video feature information are obtained; then, when the video to be determined contains the first type collocation element, the element reference information corresponding to the first type collocation element is obtained, where the element reference information represents historical operation data corresponding to a plurality of videos related to the first type collocation element within a preset historical time window; then, the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element are processed through the target video quality determination model to obtain the quality data of the video to be determined, where the quality data represents the probability that a user operates on the video to be determined. Because the quality data is obtained by the target video quality determination model from both the video feature information of the video and the element reference information corresponding to the first type collocation element contained in the video, the accuracy of the obtained quality data can be improved.
An embodiment further provides a video pushing apparatus, which includes a second determining module and a pushing module:
the second determining module is configured to determine quality data of the candidate video according to a video quality determining method;
The pushing module is configured to push the candidate video when the quality data of the candidate video is greater than a preset threshold.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described in detail here.
Fig. 5 is a block diagram of an electronic device, according to an example embodiment. For example, electronic device 500 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 5, an electronic device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the electronic device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the electronic device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. When the electronic device 500 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessment of various aspects of the electronic device 500. For example, the sensor assembly 514 may detect an on/off state of the electronic device 500, a relative positioning of components such as a display and keypad of the electronic device 500, a change in position of the electronic device 500 or a component of the electronic device 500, the presence or absence of a user's contact with the electronic device 500, an orientation or acceleration/deceleration of the electronic device 500, and a change in temperature of the electronic device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the video quality determination method or the video pushing method described above.
In an exemplary embodiment, a storage medium is also provided, such as a memory 504 including instructions executable by the processor 520 of the electronic device 500 to perform the video quality determination method or video pushing method described above. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising readable program code executable by the processor 520 of the electronic device 500 to perform the above-described video quality determination method or video push method. Alternatively, the program code may be stored in a storage medium of the electronic device 500, which may be a non-transitory computer-readable storage medium, such as a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (16)
1. A method for determining video quality, comprising:
Acquiring a video to be determined and video characteristic information of the video to be determined; the video to be determined is provided with collocation elements, each collocation element corresponds to an identifier, and identifiers corresponding to videos with the same collocation elements are the same;
acquiring element reference information corresponding to a first type collocation element under the condition that the video to be determined contains the first type collocation element; the element reference information represents a plurality of historical operation data corresponding to the video related to the first type collocation element in a preset historical time window;
processing the video characteristic information of the video to be determined and element reference information corresponding to the first type collocation element through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of a user operating the video to be determined;
The element reference information corresponding to the first type of collocation element is historical operation data corresponding to all videos related to the type of collocation element, and the historical operation data is associated with the identification of the corresponding type of collocation element.
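The scoring step in claim 1 can be sketched with a tiny stand-in model: the feature vector of a video to be determined is concatenated with the aggregate historical statistics of its collocation element, and a trained function maps the result to the probability that a user will operate on the video. The linear weights, feature names, and statistic values below are illustrative assumptions only, not the patented target model.

```python
import math

def sigmoid(x):
    # Squash a linear score into a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def quality_probability(video_features, element_reference, weights, bias=0.0):
    """Stand-in for the target video quality determination model:
    combine the video's own features with the historical-operation
    statistics of its collocation element, then score with a
    hypothetical linear model."""
    inputs = list(video_features) + list(element_reference)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Features of a new video plus (assumed) statistics of its collocation element.
features = [0.8, 0.5]       # e.g. normalized duration, resolution score
element_stats = [0.6, 0.3]  # e.g. normalized play count, like rate
prob = quality_probability(features, element_stats, weights=[1.0, 0.5, 0.8, 0.4])
```

Because the element statistics come from all historical videos sharing the same collocation-element identifier, a brand-new video with no behavior history of its own can still be scored.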
2. The method of claim 1, wherein the obtaining video characteristic information of the video to be determined comprises:
Acquiring duration information and/or picture size information of the video to be determined, and determining video attribute information of the video to be determined according to the duration information and/or the picture size information;
Acquiring picture resolution information and/or picture collocation information of the video to be determined, and determining video picture quality information of the video to be determined according to the picture resolution information and/or the picture collocation information;
and determining the video characteristic information of the video to be determined according to the video attribute information and the video picture quality information.
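The feature assembly of claim 2 amounts to deriving attribute information from duration and picture size, deriving picture quality information from resolution and picture collocation, and merging both. The normalizations and key names below are hypothetical choices for illustration.

```python
def video_attribute_info(duration_s=None, frame_size=None):
    # Attribute info from duration and/or picture size (normalizations assumed).
    info = {}
    if duration_s is not None:
        info["duration_norm"] = min(duration_s / 60.0, 1.0)
    if frame_size is not None:
        w, h = frame_size
        info["aspect_ratio"] = w / h
    return info

def video_picture_quality_info(resolution=None, collocation_count=None):
    # Picture quality info from resolution and/or picture collocation info.
    info = {}
    if resolution is not None:
        w, h = resolution
        info["pixel_norm"] = min((w * h) / (1920 * 1080), 1.0)
    if collocation_count is not None:
        info["collocation_count"] = collocation_count
    return info

def video_feature_info(duration_s=None, frame_size=None,
                       resolution=None, collocation_count=None):
    # Video characteristic info = attribute info merged with picture quality info.
    features = {}
    features.update(video_attribute_info(duration_s, frame_size))
    features.update(video_picture_quality_info(resolution, collocation_count))
    return features

feat = video_feature_info(duration_s=30, frame_size=(720, 1280),
                          resolution=(1080, 1920))
```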
3. The method of claim 1, wherein the target video quality determination model is trained by:
obtaining a plurality of sample videos having a plurality of different types of sample collocation elements and video characteristic information of each sample video, wherein each sample video carries a label representing whether or not the video is a quality video;
obtaining element reference information corresponding to each type of sample collocation element, wherein the element reference information represents a plurality of historical operation data corresponding to sample videos related to the type of sample collocation element in a preset historical time window;
respectively processing element reference information corresponding to each type of sample collocation element and video characteristic information of sample videos with the type of sample collocation element through a video quality determination model, and updating model parameters of the video quality determination model by combining labels carried by a plurality of sample videos with the type of sample collocation element until the video quality determination model meets convergence conditions, and ending training;
And determining a video quality determination model at the end of training as the target video quality determination model.
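The training loop of claim 3 can be illustrated with a logistic-regression stand-in: each sample vector already concatenates a sample video's features with the reference statistics of its sample collocation element, and parameters are updated against the quality/non-quality labels until convergence. The toy data, learning rate, and epoch count are assumptions, not the patented training procedure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_quality_model(samples, labels, lr=0.1, epochs=200):
    """Hypothetical logistic-regression stand-in for the video quality
    determination model, trained by stochastic gradient descent on
    labeled sample videos."""
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy data: quality videos (label 1) have larger feature values.
samples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
w, b = train_quality_model(samples, labels)
p_good = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.9, 0.9])) + b)
p_bad = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.1, 0.1])) + b)
```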
4. A method according to any one of claims 1-3, wherein, before processing the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element by a target video quality determination model, the method further comprises:
Acquiring element reference information corresponding to a second type collocation element under the condition that the video to be determined contains the second type collocation element;
the processing, by the target video quality determining model, the video feature information of the video to be determined and the element reference information corresponding to the first type collocation element to obtain quality data of the video to be determined includes:
Processing the video characteristic information of the video to be determined and element reference information corresponding to the first type collocation element through a target video quality determination model to obtain a first quality score of the video to be determined;
processing the video characteristic information of the video to be determined and element reference information corresponding to the second type collocation element through the target video quality determination model to obtain a second quality score of the video to be determined;
And determining the quality data of the video to be determined according to the first quality score and the second quality score.
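Claim 4 leaves open how the per-element scores are fused into one quality datum; a weighted mean is one simple choice, sketched below with an assumed equal weighting.

```python
def combine_quality_scores(first_score, second_score, weight_first=0.5):
    """Hypothetical fusion of the first and second quality scores;
    the weighting is an illustrative assumption."""
    return weight_first * first_score + (1.0 - weight_first) * second_score

# First score from the first type collocation element, second from the second.
quality = combine_quality_scores(0.8, 0.6)
```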
5. A method according to any one of claims 1-3, wherein after acquiring the video to be determined, the method further comprises:
Inquiring whether the video to be determined has user behavior history data or not;
Obtaining element reference information corresponding to the first type collocation element comprises the following steps:
And when the video to be determined does not have the user behavior history data or the number of the user behavior history data of the video to be determined is smaller than a preset number, acquiring element reference information corresponding to the first type collocation element according to the first type collocation element.
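The cold-start check of claim 5 reduces to a simple gate: if the video to be determined has no user-behavior history, or fewer records than a preset number, fall back to the collocation element's reference information. The threshold and record schema below are illustrative.

```python
def should_use_element_reference(history_records, min_records=10):
    """Return True when the video's own user-behavior history is absent
    or too sparse, so element reference information should be acquired
    instead (threshold is a hypothetical preset number)."""
    return history_records is None or len(history_records) < min_records

use_for_new = should_use_element_reference(None)             # no history at all
use_for_sparse = should_use_element_reference([{"play": 1}]) # below threshold
use_for_popular = should_use_element_reference([{"play": 1}] * 100)
```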
6. A method according to any one of claims 1-3, wherein obtaining element reference information corresponding to the first type of collocation element comprises at least one of:
acquiring the number of times videos are shot using the first type collocation element;
acquiring the exposure times and/or playing times of other videos with the first type collocation elements;
acquiring the number of times a plurality of users have performed operation behaviors on other videos having the first type collocation element;
acquiring the number of other videos with the first type collocation element in a video database;
The number of users using the first type collocation element is obtained.
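The statistics enumerated in claim 6 can all be aggregated from a video database keyed by collocation-element identifiers. The in-memory records and field names below are a toy schema assumed for illustration.

```python
def element_reference_info(videos, element_id):
    """Aggregate, for one collocation element, the reference statistics
    listed in claim 6: related video count, exposures, plays, and the
    number of distinct users of the element (schema is hypothetical)."""
    related = [v for v in videos if element_id in v["elements"]]
    return {
        "video_count": len(related),
        "exposures": sum(v["exposures"] for v in related),
        "plays": sum(v["plays"] for v in related),
        "users": len({v["author"] for v in related}),
    }

videos = [
    {"elements": {"sticker_a"}, "exposures": 100, "plays": 40, "author": "u1"},
    {"elements": {"sticker_a", "filter_b"}, "exposures": 50, "plays": 10, "author": "u2"},
    {"elements": {"filter_b"}, "exposures": 30, "plays": 5, "author": "u1"},
]
info = element_reference_info(videos, "sticker_a")
```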
7. A video pushing method, comprising:
determining quality data of a candidate video according to the method of any one of claims 1-6;
and pushing the candidate video when the quality data of the candidate video is larger than a preset threshold value.
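The push rule of claim 7 is a threshold filter over candidate quality data; the scores and threshold below are made-up examples.

```python
def select_videos_to_push(candidates, threshold=0.5):
    """Keep only candidate videos whose quality data exceeds a preset
    threshold (the threshold value here is an illustrative assumption)."""
    return [vid for vid, score in candidates if score > threshold]

pushed = select_videos_to_push([("v1", 0.9), ("v2", 0.3), ("v3", 0.6)])
```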
8. A video quality determining apparatus, comprising:
The first acquisition module is configured to acquire a video to be determined and video characteristic information of the video to be determined; the video to be determined is provided with collocation elements, each collocation element corresponds to an identifier, and identifiers corresponding to videos with the same collocation elements are the same;
The second acquisition module is configured to acquire element reference information corresponding to a first type collocation element under the condition that the video to be determined comprises the first type collocation element; the element reference information represents a plurality of historical operation data corresponding to the video related to the first type collocation element in a preset historical time window;
The processing module is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain quality data of the video to be determined, wherein the quality data represents the probability of a user operating the video to be determined;
The element reference information corresponding to the first type of collocation element is historical operation data corresponding to all videos related to the type of collocation element, and the historical operation data is associated with the identification of the corresponding type of collocation element.
9. The apparatus of claim 8, wherein the first acquisition module comprises:
The first acquisition sub-module is configured to acquire duration information and/or picture size information of the video to be determined, and determine video attribute information of the video to be determined according to the duration information and/or the picture size information;
The second acquisition submodule is configured to acquire picture resolution information and/or picture collocation information of the video to be determined, and to determine video picture quality information of the video to be determined according to the picture resolution information and/or the picture collocation information;
and the first determination submodule is configured to determine video characteristic information of the video to be determined according to the video attribute information and the video picture quality information.
10. The apparatus of claim 8, wherein the apparatus further comprises:
A first obtaining module configured to obtain a plurality of sample videos having a plurality of different types of sample collocation elements and video feature information of each of the sample videos, each sample video carrying a label indicating whether or not the video is a quality video;
The second obtaining module is configured to obtain element reference information corresponding to each type of sample collocation element, wherein the element reference information represents a plurality of historical operation data corresponding to sample videos related to the type of sample collocation element in a preset historical time window;
The training module is configured to process element reference information corresponding to each type of sample collocation element and video characteristic information of the sample video with the type of sample collocation element through the video quality determination model, and update model parameters of the video quality determination model by combining labels carried by a plurality of sample videos with the type of sample collocation element until the video quality determination model meets convergence conditions, and finish training;
a first determination module configured to determine a video quality determination model at the end of training as the target video quality determination model.
11. The apparatus according to any one of claims 8-10, wherein after the first acquisition module, the apparatus further comprises:
the query module is configured to query whether the video to be determined has user behavior history data or not;
the second acquisition module includes:
The third obtaining sub-module is configured to obtain element reference information corresponding to the first type collocation element according to the first type collocation element when the video to be determined does not have user behavior history data or the number of the user behavior history data of the video to be determined is smaller than a preset number.
12. The apparatus of any of claims 8-10, wherein the second acquisition module comprises at least one of:
A fourth obtaining sub-module configured to obtain the number of times of video shooting using the first type collocation element;
A fifth obtaining sub-module configured to obtain the exposure times and/or play times of other videos with the first type collocation element;
A sixth obtaining sub-module configured to obtain a number of times that a plurality of users have operational behaviors on other videos having the first type of collocation element;
A seventh obtaining sub-module configured to obtain a number of other videos having the first type collocation element in a video database;
an eighth acquisition sub-module is configured to acquire the number of users using the first type collocation element.
13. The apparatus according to any one of claims 8-10, wherein prior to the processing module, the apparatus further comprises:
the third acquisition module is configured to acquire element reference information corresponding to a second type collocation element under the condition that the video to be determined comprises the second type collocation element;
The processing module comprises:
The first processing submodule is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the first type collocation element through a target video quality determination model to obtain a first quality score of the video to be determined;
the second processing submodule is configured to process the video characteristic information of the video to be determined and the element reference information corresponding to the second type collocation element through the target video quality determination model to obtain a second quality score of the video to be determined;
And a second determination submodule configured to determine quality data of the video to be determined according to the first quality score and the second quality score.
14. A video pushing device, comprising:
A second determination module configured to determine quality data of candidate videos according to the method of any one of claims 1-6;
And the pushing module is configured to push the candidate video when the quality data of the candidate video is greater than a preset threshold value.
15. An electronic device, comprising:
A processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video quality determination method of any one of claims 1 to 6 or the video push method of claim 7.
16. A storage medium having stored therein instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the video quality determination method of any one of claims 1 to 6 or the video push method of claim 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111537652.5A CN114268815B (en) | 2021-12-15 | 2021-12-15 | Video quality determining method, device, electronic equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111537652.5A CN114268815B (en) | 2021-12-15 | 2021-12-15 | Video quality determining method, device, electronic equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114268815A CN114268815A (en) | 2022-04-01 |
| CN114268815B true CN114268815B (en) | 2024-08-13 |
Family
ID=80827396
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111537652.5A Active CN114268815B (en) | 2021-12-15 | 2021-12-15 | Video quality determining method, device, electronic equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114268815B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114915807B (en) * | 2022-07-14 | 2022-12-13 | 飞狐信息技术(天津)有限公司 | Information processing method and device |
| CN115269919A (en) * | 2022-08-01 | 2022-11-01 | 中译语通科技股份有限公司 | A method, device, electronic device and storage medium for determining the quality of a short video |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109547814A (en) * | 2018-12-13 | 2019-03-29 | 北京达佳互联信息技术有限公司 | Video recommendation method, device, server and storage medium |
| CN111741330A (en) * | 2020-07-17 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Video content evaluation method and device, storage medium and computer equipment |
| CN112040339A (en) * | 2020-08-31 | 2020-12-04 | 广州市百果园信息技术有限公司 | Method and device for making video data, computer equipment and storage medium |
| CN113259727A (en) * | 2021-04-30 | 2021-08-13 | 广州虎牙科技有限公司 | Video recommendation method, video recommendation device and computer-readable storage medium |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3834424A4 (en) * | 2018-08-10 | 2022-03-23 | Microsoft Technology Licensing, LLC | PROVIDING A VIDEO RECOMMENDATION |
| CN109286825B (en) * | 2018-12-14 | 2021-04-30 | 北京百度网讯科技有限公司 | Method and apparatus for processing video |
| CN109800325B (en) * | 2018-12-26 | 2021-10-26 | 北京达佳互联信息技术有限公司 | Video recommendation method and device and computer-readable storage medium |
| CN110730369B (en) * | 2019-10-15 | 2022-01-04 | 青岛聚看云科技有限公司 | Video recommendation method and server |
| CN112905839A (en) * | 2021-02-10 | 2021-06-04 | 北京有竹居网络技术有限公司 | Model training method, model using device, storage medium and equipment |
- 2021-12-15: Application CN202111537652.5A filed; patent CN114268815B, status Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109547814A (en) * | 2018-12-13 | 2019-03-29 | 北京达佳互联信息技术有限公司 | Video recommendation method, device, server and storage medium |
| CN111741330A (en) * | 2020-07-17 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Video content evaluation method and device, storage medium and computer equipment |
| CN112040339A (en) * | 2020-08-31 | 2020-12-04 | 广州市百果园信息技术有限公司 | Method and device for making video data, computer equipment and storage medium |
| CN113259727A (en) * | 2021-04-30 | 2021-08-13 | 广州虎牙科技有限公司 | Video recommendation method, video recommendation device and computer-readable storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114268815A (en) | 2022-04-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107105314B (en) | Video playback method and device | |
| CN109168062B (en) | Video playing display method and device, terminal equipment and storage medium | |
| CN114722238B (en) | Video recommendation method and device, electronic equipment, storage medium and program product | |
| CN110941727B (en) | Resource recommendation method and device, electronic equipment and storage medium | |
| CN109360197B (en) | Image processing method and device, electronic equipment and storage medium | |
| US20220277204A1 (en) | Model training method and apparatus for information recommendation, electronic device and medium | |
| CN109819288A (en) | Determination method, apparatus, electronic equipment and the storage medium of advertisement dispensing video | |
| CN112685641B9 (en) | Information processing method and device | |
| CN113868467A (en) | Information processing method, information processing device, electronic equipment and storage medium | |
| CN114268815B (en) | Video quality determining method, device, electronic equipment and storage medium | |
| CN110502648A (en) | Recommended models acquisition methods and device for multimedia messages | |
| CN110019897B (en) | Method and device for displaying picture | |
| CN112784151B (en) | Method and related device for determining recommended information | |
| CN105163188A (en) | Video content processing method, device and apparatus | |
| CN114422854B (en) | Data processing method, device, electronic device and storage medium | |
| CN111835739A (en) | Video playing method and device and computer readable storage medium | |
| CN112685599B (en) | Video recommendation method and device | |
| CN113347484A (en) | Comment recommendation method and device and electronic equipment | |
| CN114143566A (en) | Information pushing method, device, equipment and storage medium | |
| CN115484471B (en) | Method and device for recommending anchor | |
| CN114359788B (en) | Media information processing methods, devices, electronic devices, storage media and products | |
| CN112699910A (en) | Method and device for generating training data, electronic equipment and storage medium | |
| CN111898019B (en) | Information push method and device | |
| CN111291268B (en) | Information processing method, information processing apparatus, and storage medium | |
| CN112711643B (en) | Training sample set acquisition method and device, electronic equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |