Disclosure of Invention
(I) Technical problem to be solved
In view of the above-mentioned drawbacks and shortcomings of the prior art, the present invention provides a method for automatically generating a bid evaluation video.
(II) Technical solution
In order to achieve the above purpose, the main technical solution adopted by the present invention comprises the following steps:
S101, determining a video recording device according to project information;
S102, determining the start time of automatic generation of the bid evaluation video according to the project information, and determining the end time of automatic generation of the bid evaluation video according to the video collected by the video recording device;
S103, obtaining the bid evaluation video from the video collected by the video recording device based on the recording start time and the recording end time;
and S104, associating the bid evaluation video with the project according to the project information, and storing the association relationship and the bid evaluation video.
Optionally, before S102, the method further includes:
S201, determining the bid evaluation start time according to the project information;
S202, starting the video recording device to pre-acquire video 30 minutes before the bid evaluation start time;
S203, performing person recognition on each pre-acquired frame to determine whether a person appears;
S204, if no person has appeared in any pre-acquired frame when the bid evaluation start time is reached, controlling the video recording device to collect video from the bid evaluation start time;
and S205, if a person appears in a pre-acquired frame before the bid evaluation start time is reached, controlling the video recording device to collect video from the time when the person first appears.
Optionally, determining the start time of automatic generation of the bid evaluation video according to the project information includes:
determining the bid evaluation experts according to the project information;
and determining the start time of automatic generation of the bid evaluation video according to the time when all bid evaluation experts complete sign-in.
Optionally, determining the start time of automatic generation of the bid evaluation video according to the time when all bid evaluation experts complete sign-in includes:
determining the time when the last bid evaluation expert completes sign-in as the start time of automatic generation of the bid evaluation video.
Optionally, determining the start time of automatic generation of the bid evaluation video according to the time when all bid evaluation experts complete sign-in includes:
acquiring the time when each bid evaluation expert first appears;
determining the time when each bid evaluation expert completes sign-in;
calculating each bid evaluation expert's sign-in duration = time of completing sign-in − time of first appearance;
acquiring a target video collected by the video recording device, wherein the start time of the target video is the time when a bid evaluation expert first appears, and the end time of the target video is the time when all bid evaluation experts have completed sign-in;
determining an anomaly coefficient according to the target video;
and determining the start time of automatic generation of the bid evaluation video according to each bid evaluation expert's sign-in duration and the anomaly coefficient.
Optionally, determining the anomaly coefficient according to the target video includes:
performing person recognition on each frame of the target video and determining whether a target image frame exists, a target image frame being a frame in which a person other than a bid evaluation expert is present;
if no target image frame exists, determining the anomaly coefficient to be 0;
if a target image frame exists, determining a continuity coefficient according to the target image frames, and determining the anomaly coefficient to be the continuity coefficient.
Optionally, determining the continuity coefficient according to the target image frames includes:
dividing target image frames with consecutive frame numbers into groups, each run of consecutive frame numbers forming one group, to obtain the target image frame groups;
determining the number of target image frames in each group;
determining the average number of target image frames = sum of the numbers of target image frames of all groups / number of groups;
determining the group size difference = SQRT(SUM(square of the difference between each group's number of target image frames and the average number of target image frames) / number of groups), wherein SQRT() is the square root function and SUM() is the summation function;
and determining the continuity coefficient = number of groups × group size difference.
Optionally, determining the start time of automatic generation of the bid evaluation video according to each bid evaluation expert's sign-in duration and the anomaly coefficient comprises:
determining the average sign-in duration = (maximum of all experts' sign-in durations + minimum of all experts' sign-in durations + 4 × (sum of all experts' sign-in durations / number of experts)) / 6;
calculating the sign-in duration anomaly value = average sign-in duration × (1 + anomaly coefficient) / preset standard sign-in duration;
if the sign-in duration anomaly value is greater than 1, determining the start time of automatic generation of the bid evaluation video as the time when the first bid evaluation expert first appears;
and if the sign-in duration anomaly value is not greater than 1, determining the time when the last bid evaluation expert completes sign-in as the start time of automatic generation of the bid evaluation video.
Optionally, determining the end time of automatic generation of the bid evaluation video according to the video collected by the video recording device includes:
determining the earliest of the following three times as the end time of automatic generation of the bid evaluation video:
the time when the bid evaluation result is uploaded, the time when the end-of-evaluation button is triggered, and a preset time after the bid evaluation end time determined according to the project information.
Optionally, the preset time is 30 minutes, or the time when an end frame first appears;
wherein no person is present in the end frame.
(III) Beneficial effects
The present invention determines the video recording device according to the project information; determines the start time of automatic generation of the bid evaluation video according to the project information; determines the end time of automatic generation of the bid evaluation video according to the video collected by the video recording device; obtains the bid evaluation video from the video collected by the video recording device based on the recording start time and the recording end time; and associates the bid evaluation video with the project according to the project information, storing the association relationship and the bid evaluation video. Because the start time and end time of automatic generation are determined from the project information and the collected video, and the bid evaluation video is then extracted from the collected video between these two times, the bid evaluation video is obtained automatically and the uncertainties introduced by manual operation are avoided.
Detailed Description
The invention will be better explained by the following detailed description of the embodiments with reference to the drawings.
In the prior art, the bid evaluation video is completely separated from the business system: the bid evaluation room is recorded 24 hours a day, the video must be cut manually, and the cut video must be manually associated with the bid evaluation business. The workload is large, and manual operation introduces uncertainties.
Based on this, the present invention provides a method for automatically generating a bid evaluation video, comprising: determining a video recording device according to project information; determining the start time of automatic generation of the bid evaluation video according to the project information; determining the end time of automatic generation of the bid evaluation video according to the video collected by the video recording device; obtaining the bid evaluation video from the video collected by the video recording device based on the recording start time and the recording end time; and associating the bid evaluation video with the project according to the project information, and storing the association relationship and the bid evaluation video. Because the start time and end time are determined from the project information and the collected video, and the bid evaluation video is extracted from the collected video between these two times, the bid evaluation video is obtained automatically and the uncertainties introduced by manual operation are avoided.
Referring to fig. 1, the method for automatically generating a bid evaluation video provided in this embodiment is implemented as follows:
S101, determining the video recording device according to the project information.
The project information comprises the project name, the identifier of the preset bid evaluation room, the bid evaluation time, bid evaluation expert information, and the like.
In this step, the video recording device installed in the bid evaluation room is determined based on the bid evaluation room identifier in the project information; the device records video of the bid evaluation room, from which the collected video is obtained.
In the prior art, however, the video recording device records 24 hours a day, so the video collected outside the bid evaluation time is useless: it wastes both the device's capture resources and the storage consumed by the collected video. The present invention therefore does not record continuously for 24 hours, but collects video effectively through the following steps.
That is, before step S102 is executed:
S201, determining the bid evaluation start time according to the project information.
Since the bid evaluation time is included in the project information, the earliest time within the bid evaluation time in the project information is determined as the bid evaluation start time.
S202, starting the video recording device to pre-acquire video 30 minutes before the bid evaluation start time.
That is, the video recording device remains off from the end of the previous bid evaluation until 30 minutes before the current bid evaluation starts, saving the resources consumed by recording and by storing invalid video.
Step S202 only starts pre-acquisition: the pre-acquired video is buffered, and once the actual collection start is determined in step S204 or S205, the unneeded pre-acquired video is discarded, ensuring that the retained video is valuable.
S203, performing person recognition on each pre-acquired frame to determine whether a person appears.
This step uses existing object recognition in images to determine whether an object in each frame is a person; if so, a person is considered to have appeared, otherwise not.
This step only judges whether a person is present; it does not judge the person's identity.
S204, if no person has appeared in any pre-acquired frame when the bid evaluation start time is reached, controlling the video recording device to collect video from the bid evaluation start time.
If no person has appeared in any pre-acquired frame by the bid evaluation start time, the preset bid evaluation time has arrived but the bid evaluation experts have not; video is still collected from that moment to ensure the video covers the whole bid evaluation period.
S205, if a person appears in a pre-acquired frame before the bid evaluation start time is reached, controlling the video recording device to collect video from the time when the person first appears.
If a person appears in a pre-acquired frame before the bid evaluation start time, someone has entered the bid evaluation room before the evaluation begins, possibly a bid evaluation expert arriving early or an unauthorized person. The person's activity must be recorded so that the validity of the bid evaluation can later be confirmed, so video collection starts at the time the person first appears.
Steps S201 to S205 ensure that the collection time of the video recording device is determined dynamically according to the state of the bid evaluation room, avoiding the waste of 24-hour recording while ensuring that all content related to the bid evaluation is captured, so the generated bid evaluation video is complete.
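Steps S201 to S205 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: `decide_recording_start` is a hypothetical helper, timestamps are assumed to be `datetime` objects, and the per-frame person flags are assumed to come from an external person recognizer (step S203).

```python
from datetime import datetime, timedelta

def decide_recording_start(eval_start, pre_frames):
    """Steps S201-S205: choose the moment real video collection begins.

    eval_start -- scheduled bid evaluation start time (datetime)
    pre_frames -- list of (timestamp, has_person) pairs for the frames
                  pre-acquired in the 30 minutes before eval_start
    """
    pre_start = eval_start - timedelta(minutes=30)  # S202: pre-acquisition window
    for ts, has_person in pre_frames:
        # S203: has_person is the result of person recognition on the frame
        if pre_start <= ts < eval_start and has_person:
            # S205: a person appeared early -> record from first appearance
            return ts
    # S204: nobody appeared before the start time -> record from eval_start
    return eval_start
```

If the person-recognition flags are all false, the function falls back to the scheduled start time, mirroring S204.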
S102, determining the start time of automatic generation of the bid evaluation video according to the project information, and determining the end time of automatic generation of the bid evaluation video according to the video collected by the video recording device.
1. The start time of automatic generation of the bid evaluation video is determined from the project information as follows:
1) Determining the bid evaluation experts according to the project information.
Because the project information includes bid evaluation expert information, the bid evaluation experts are determined from it; at the same time, each expert's identity features, such as a face image, an ID, or a username and password for identity verification, can be obtained.
2) Determining the start time of automatic generation of the bid evaluation video according to the time when all bid evaluation experts complete sign-in.
This step can be implemented in various ways; two are given here as examples.
The first implementation: the time when the last bid evaluation expert completes sign-in is determined as the start time of automatic generation of the bid evaluation video.
Because the experts' identity verification information is obtained when the bid evaluation experts are determined, whether each expert has completed sign-in can be determined from that information; if so, the time when the last expert completes sign-in is taken as the start time of automatic generation of the bid evaluation video.
For example, if the identity verification information is a username and password, the username and password entered by a person entering the bid evaluation room are obtained. If they match the stored identity verification information, a bid evaluation expert is considered to have entered the room. The device on which the username and password were entered is then monitored; if the sign-in button on that device is triggered, the expert is considered to have completed sign-in, and the time at which the button is triggered is recorded as the expert's sign-in completion time.
For another example, if the identity verification information is a face, a bid evaluation expert faces the video recording device after entering the bid evaluation room, so the device captures the person's face. The captured face is compared with the face in the identity verification information; if they match, a bid evaluation expert is considered to have entered the room. The expert's position in each frame is then tracked with an existing object tracking scheme. Because the devices in the room are fixed, the position of each device in a test image is labeled before video collection begins and does not change; the device closest to the tracked expert is identified as the device the expert is operating. If the sign-in button on that device is triggered, the expert is considered to have completed sign-in, and the time at which the button is triggered is recorded as the expert's sign-in completion time.
The second implementation:
(1) Acquiring the time when each bid evaluation expert first appears.
After entering the bid evaluation room, a bid evaluation expert faces the video recording device, so the device captures the person's face. Each frame is recognized with an existing object recognition scheme; if a face is recognized, it is compared with the face in the identity verification information, and if they match, the person is considered to have entered the bid evaluation room. The time when the person's face is first captured is determined as that bid evaluation expert's first appearance time.
(2) Determining the time when each bid evaluation expert completes sign-in.
The sign-in completion time is determined in the same way as in the first implementation and is not repeated here.
(3) Calculating each bid evaluation expert's sign-in duration = time of completing sign-in − time of first appearance.
The sign-in duration characterizes the length of time between the expert entering the bid evaluation room and completing sign-in.
(4) Acquiring the target video collected by the video recording device.
The start time of the target video is the time when a bid evaluation expert first appears, and its end time is the time when all bid evaluation experts have completed sign-in.
The target video is in fact a sub-video of the video collected by the video recording device, and it spans the total sign-in period of all experts.
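Steps (1) to (4) above can be sketched as follows, assuming the first-appearance and sign-in-completion times have already been obtained per expert; `sign_in_stats` is a hypothetical helper, not part of the disclosed system.

```python
from datetime import datetime

def sign_in_stats(first_seen, sign_in_done):
    """Steps (1)-(4): per-expert sign-in durations and the target video window.

    first_seen   -- {expert: time of first appearance}
    sign_in_done -- {expert: time of sign-in completion}
    Returns (durations_in_seconds, window_start, window_end).
    """
    # (3) sign-in duration = time of completing sign-in - time of first appearance
    durations = {e: (sign_in_done[e] - first_seen[e]).total_seconds()
                 for e in first_seen}
    # (4) target video window: first expert appears ... last expert signs in
    window_start = min(first_seen.values())
    window_end = max(sign_in_done.values())
    return durations, window_start, window_end
```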
(5) Determining the anomaly coefficient according to the target video.
The anomaly coefficient characterizes whether the experts' sign-in process as a whole is normal; the larger the anomaly coefficient, the more abnormal the sign-in process.
The anomaly coefficient is determined as follows:
S301, performing person recognition on each frame of the target video and determining whether a target image frame exists.
A target image frame is a frame in which a person other than a bid evaluation expert is present.
That is, step S301 performs person recognition on every frame of the target video and determines whether any frame contains a person other than the bid evaluation experts, i.e. whether another person has entered the bid evaluation room. If so, the sign-in process is abnormal; if not, it is normal.
S302, if no target image frame exists, determining the anomaly coefficient to be 0.
S302 covers the case where no other person has entered the bid evaluation room, which is considered normal, so the anomaly coefficient is 0.
S303, if a target image frame exists, determining a continuity coefficient according to the target image frames, and determining the anomaly coefficient = continuity coefficient × (total number of target image frames / total number of frames of the target video).
S303 covers the case where another person has entered the bid evaluation room, which is considered abnormal, so:
S303-1, determining the continuity coefficient according to the target image frames.
The continuity coefficient characterizes the abnormality of entries into the bid evaluation room in two respects: the more times other persons enter the room, the more abnormal (count); and the more random those entries are, the more abnormal (randomness).
It is determined as follows:
A. Dividing target image frames with consecutive frame numbers into groups to obtain the target image frame groups.
Each time another person enters the bid evaluation room, the captured frames have consecutive frame numbers; a gap in the frame numbers indicates two separate entries. Grouping target image frames by runs of consecutive frame numbers therefore yields the number of times other persons entered the room, which equals the number of groups, and the frames within one group are the images of a single entry.
B. Determining the number of target image frames in each group.
The larger the number of target image frames in a group, the longer the other person stayed in the bid evaluation room during that entry.
C. Determining the average number of target image frames = sum of the numbers of target image frames of all groups / number of groups.
D. Determining the group size difference = SQRT(SUM(square of the difference between each group's number of target image frames and the average number of target image frames) / number of groups).
Here SQRT() is the square root function and SUM() is the summation function.
The group size difference describes the dispersion among the durations of the entries: the larger the value, the more the durations of the individual entries differ; the smaller the value, the more alike they are.
E. Determining the continuity coefficient = number of groups × group size difference.
The larger the number of groups, the more times other persons entered the bid evaluation room and the larger the continuity coefficient, so the number of groups is proportional to the continuity coefficient. The group size difference describes the dispersion among the entry durations: the larger it is, the more random the entries, the greater the abnormality, and therefore the larger the continuity coefficient.
S303-2, determining the anomaly coefficient = continuity coefficient × (total number of target image frames / total number of frames of the target video).
The continuity coefficient characterizes the abnormality of the entries: the larger it is, the more abnormal, so it is proportional to the anomaly coefficient. The ratio of the total number of target image frames to the total number of frames of the target video represents the proportion of the sign-in period occupied by other persons; the larger the proportion, the more the sign-in process was disturbed and the more abnormal it is, so this ratio is also proportional to the anomaly coefficient.
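Steps S301 to S303 can be sketched as follows, assuming the frame numbers containing a non-expert have already been identified by person recognition; `anomaly_coefficient` is a hypothetical helper written for illustration.

```python
import math

def anomaly_coefficient(target_frames, total_frames):
    """S301-S303: anomaly coefficient of the sign-in process.

    target_frames -- sorted frame numbers in which a non-expert appears
    total_frames  -- total number of frames in the target video
    """
    if not target_frames:
        return 0.0  # S302: no outsider appeared, sign-in is normal
    # A: split consecutive frame numbers into groups (one group per entry)
    groups = [[target_frames[0]]]
    for f in target_frames[1:]:
        if f == groups[-1][-1] + 1:
            groups[-1].append(f)
        else:
            groups.append([f])
    sizes = [len(g) for g in groups]            # B: frames per entry
    mean = sum(sizes) / len(sizes)              # C: average group size
    # D: group size difference (population standard deviation of sizes)
    diff = math.sqrt(sum((s - mean) ** 2 for s in sizes) / len(sizes))
    continuity = len(sizes) * diff              # E: continuity coefficient
    # S303-2: scale by the share of disturbed frames
    return continuity * len(target_frames) / total_frames
```

For example, target frames [1, 2, 3, 10, 11] form two groups of sizes 3 and 2, giving a group size difference of 0.5, a continuity coefficient of 1.0, and, over a 100-frame target video, an anomaly coefficient of 0.05.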
(6) Determining the start time of automatic generation of the bid evaluation video according to each bid evaluation expert's sign-in duration and the anomaly coefficient.
A. Determining the average sign-in duration = (maximum of all experts' sign-in durations + minimum of all experts' sign-in durations + 4 × (sum of all experts' sign-in durations / number of experts)) / 6.
The average sign-in duration characterizes the typical time a bid evaluation expert needs to sign in for this bid evaluation.
B. Calculating the sign-in duration anomaly value = average sign-in duration × (1 + anomaly coefficient) / preset standard sign-in duration.
The preset standard sign-in duration is an empirical value set in advance by the user. Because the time from entering the bid evaluation room to completing sign-in follows a certain pattern when nothing abnormal occurs, a sign-in duration interval can be obtained from statistics over large amounts of data. The standard sign-in duration in this step is the maximum of that interval plus the maximum additional duration acceptable after interference from abnormal situations; if this value is exceeded, the sign-in is considered abnormal.
The maximum acceptable additional duration after interference is likewise obtained by analyzing large numbers of abnormal sign-in durations under different interference conditions.
This embodiment does not limit how the standard sign-in duration is determined.
C. If the sign-in duration anomaly value is greater than 1, determining the start time of automatic generation of the bid evaluation video as the time when the first bid evaluation expert first appears.
The average sign-in duration × (1 + anomaly coefficient) characterizes the sign-in duration under the current (possibly abnormal) conditions, while the preset standard sign-in duration characterizes the maximum duration acceptable across various conditions. If the sign-in duration anomaly value is greater than 1, the sign-in took longer than is acceptable, so the sign-in is considered abnormal. The bid evaluation video is then generated from the first appearance time of the first bid evaluation expert, preserving the most complete sign-in footage and helping later reviewers assess the validity of the bid evaluation. Therefore, when the sign-in duration anomaly value is greater than 1, the start time of automatic generation of the bid evaluation video is the time when the first bid evaluation expert first appears.
D. If the sign-in duration anomaly value is not greater than 1, determining the time when the last bid evaluation expert completes sign-in as the start time of automatic generation of the bid evaluation video.
If the sign-in duration anomaly value is not greater than 1, the sign-in is normal, so the bid evaluation video is generated from the time the last expert completes sign-in, and only the footage after sign-in is retained.
In this way, the start time of automatic generation of the bid evaluation video is determined flexibly according to the sign-in process, ensuring the completeness of the bid evaluation video while reducing resource consumption.
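Step (6) can be sketched as follows, under the reading of step A reconstructed above (a PERT-style average: (max + min + 4 × mean) / 6); `start_time` is a hypothetical helper and the timestamps are plain numbers for simplicity.

```python
def start_time(durations, anomaly_coef, standard_duration,
               first_seen, sign_in_done):
    """Step (6): pick the start time of bid evaluation video generation.

    durations         -- sign-in durations (seconds) of all experts
    anomaly_coef      -- output of the anomaly-coefficient step
    standard_duration -- preset standard sign-in duration (seconds)
    first_seen        -- {expert: first appearance time}
    sign_in_done      -- {expert: sign-in completion time}
    """
    # A: average sign-in duration = (max + min + 4 * mean) / 6
    avg = (max(durations) + min(durations)
           + 4 * sum(durations) / len(durations)) / 6
    # B: sign-in duration anomaly value
    anomaly_value = avg * (1 + anomaly_coef) / standard_duration
    if anomaly_value > 1:
        # C: abnormal sign-in -> keep the fullest record
        return min(first_seen.values())
    # D: normal sign-in -> start when the last expert finished signing in
    return max(sign_in_done.values())
```

With durations of 60, 120 and 90 seconds the average is (120 + 60 + 4 × 90) / 6 = 90; against a standard duration of 100 and a zero anomaly coefficient the anomaly value is 0.9, so the normal branch (D) is taken.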
2. The end time of automatic generation of the bid evaluation video is determined from the video collected by the video recording device as follows:
The earliest of the following three times is determined as the end time of automatic generation of the bid evaluation video:
the time when the bid evaluation result is uploaded, the time when the end-of-evaluation button is triggered, and a preset time after the bid evaluation end time determined according to the project information.
The preset time is 30 minutes, or the time when an end frame first appears; no person is present in an end frame.
That is, an end frame is a frame in which no person is in the bid evaluation room.
For example, automatic generation of the bid evaluation video proceeds from the determined start time; if the upload of the bid evaluation result by a bid evaluation expert is detected first, generation ends at that moment and the final bid evaluation video is obtained.
Likewise, if a bid evaluation expert is detected pressing the end-of-evaluation button first, generation ends at that moment and the final bid evaluation video is obtained.
If, when the latest time in the bid evaluation time of the project information (i.e. the bid evaluation end time) is reached, neither an uploaded bid evaluation result nor a press of the end-of-evaluation button has been detected, automatic generation ends 30 minutes after the bid evaluation end time. Alternatively, after the bid evaluation end time, each frame is checked for persons, and generation ends at the first frame in which the bid evaluation room is empty. The two can also be combined: within the 30 minutes after the bid evaluation end time, generation ends as soon as the room is found empty, and if persons remain in the room throughout those 30 minutes, generation ends 30 minutes after the bid evaluation end time. In each case the final bid evaluation video is obtained.
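The end-time rule above can be sketched as follows. This is an illustrative helper under stated assumptions: optional events are passed as `None` when they did not occur, and `first_empty_frame` is the first time after the scheduled end at which the room was observed empty.

```python
from datetime import datetime, timedelta

def end_time(scheduled_end, result_uploaded=None, end_button=None,
             first_empty_frame=None):
    """Earliest of the three candidate end times.

    scheduled_end     -- bid evaluation end time from the project information
    result_uploaded   -- time the bid evaluation result was uploaded, if any
    end_button        -- time the end-of-evaluation button was pressed, if any
    first_empty_frame -- first time after scheduled_end at which the room
                         was seen empty, if any
    """
    # preset time: 30 minutes, or the first empty frame if it comes sooner
    fallback = scheduled_end + timedelta(minutes=30)
    if first_empty_frame is not None:
        fallback = min(fallback, first_empty_frame)
    candidates = [t for t in (result_uploaded, end_button, fallback)
                  if t is not None]
    return min(candidates)
```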
S103, obtaining the bid evaluation video from the video collected by the video recording device based on the recording start time and the recording end time.
After the recording start time and the recording end time are determined, the video collected by the video recording device within that period is cut out, and the cut-out video serves as the bid evaluation video.
After the cut, the video recording device can be shut down to reduce its resource consumption.
S104, associating the bid evaluation video with the project according to the project information, and storing the association relationship and the bid evaluation video.
After the bid evaluation video is obtained, it is associated with the project (for example, an association between the bid evaluation video identifier and the project identifier is formed based on the project name in the project information), and the association relationship and the bid evaluation video are stored. This makes it convenient to query the bid evaluation video by project later; the video can also be queried and downloaded by time and by video recording device.
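Step S104 can be sketched as follows. The storage layout is entirely hypothetical (a dict standing in for whatever database the system uses), and the key names are assumptions made for illustration, not part of the disclosure.

```python
def store_bid_evaluation_video(store, project_info, video_id, video_path):
    """S104: associate the bid evaluation video with its project, store both.

    store        -- any dict-like persistence layer (stand-in for a database)
    project_info -- assumed to carry a project identifier under "project_id"
    video_path   -- stand-in for the stored bid evaluation video itself
    """
    project_id = project_info["project_id"]
    # association relationship: video identifier -> project identifier
    store.setdefault("associations", {})[video_id] = project_id
    # the bid evaluation video itself (here represented by its file path)
    store.setdefault("videos", {})[video_id] = video_path
    # reverse index so the video can later be queried by project
    store.setdefault("by_project", {}).setdefault(project_id, []).append(video_id)
    return store
```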
The method of this embodiment determines the video recording device according to the project information; determines the start time of automatic generation of the bid evaluation video according to the project information; determines the end time of automatic generation of the bid evaluation video according to the video collected by the video recording device; obtains the bid evaluation video from the collected video based on the recording start time and the recording end time; and associates the bid evaluation video with the project according to the project information, storing the association relationship and the bid evaluation video. Automatic acquisition of the bid evaluation video is thus realized, and the uncertainties introduced by manual operation are avoided.
In order that the above-described aspects may be better understood, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer.
Furthermore, it should be noted that in the description of the present specification, the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., refer to a specific feature, structure, material, or characteristic described in connection with the embodiment or example being included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art upon learning the basic inventive concepts. Therefore, the appended claims should be construed to include preferred embodiments and all such variations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, the present invention should also include such modifications and variations provided that they come within the scope of the following claims and their equivalents.