Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another element. For example, a first element may be referred to as a second element, and similarly, a second element may be referred to as a first element, without departing from the scope of this disclosure.
As shown in fig. 1, in one embodiment, a video material marking method is provided. The video material marking method may specifically include the following steps:
Step S202, obtaining target materials and target information corresponding to the target materials.
In the embodiment of the application, the target material is a picture or a video clip used to form a video. The target information corresponding to the target material may include, but is not limited to, the designer of the target material, the type of the target material, the size of the target material, the generation time of the target material, and the like. In the video production process, after the designer finishes designing a target material of the video, the target material and its corresponding target information can be uploaded to the database of a computer device, so that the computer device can acquire the target material and the corresponding target information according to user input. It should be noted that this embodiment does not limit the specific classification of target material types; for example, target materials are generally classified into a picture type and a video type, where pictures include large pictures, screen flashes, graphics flashes, etc., and videos include titles, talking-head segments, scenarios, etc.
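As a non-limiting illustration, the target information might be represented by a simple record such as the following sketch; the field names are assumptions chosen for demonstration and are not part of the claimed method.

```python
# Illustrative record for the target information of one target material.
# Field names are assumptions for demonstration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TargetMaterialInfo:
    designer: str            # designer of the target material
    material_type: str       # e.g. "picture" or "video"
    size: str                # e.g. "1920x1080"
    generation_time: datetime

info = TargetMaterialInfo("Alice", "video", "1920x1080", datetime.now())
```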
Step S204, determining a marking character string according to the target information.
In the embodiment of the application, the marking character string is a character string added to the target material. The specific method for determining the marking string according to the target information is not limited in this embodiment; for example, as shown in fig. 2, step S204 may include the following steps:
Step S302, generating the Long type ID according to the target information.
In the embodiment of the application, a unique Long type ID is generated from the target information corresponding to the target material. The specific method for generating the Long type ID from the target information is not limited in this embodiment; for example, after receiving a piece of target information, the database of the computer device may automatically generate an auto-incrementing Long type ID, that is, a primary key ID. For example, the generated Long type ID may be 15817. Each target material corresponds to a unique Long type ID.
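As a non-limiting illustration, an auto-incrementing primary key in a relational database can serve as such a Long type ID; the sqlite3 table and column names in the sketch below are assumptions for demonstration only.

```python
# Illustrative sketch: an auto-incrementing integer primary key acts as the
# unique Long type ID for each piece of target information.
import sqlite3

conn = sqlite3.connect("materials.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS material ("
    "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  designer TEXT, material_type TEXT, size TEXT, generation_time TEXT)"
)
cur = conn.execute(
    "INSERT INTO material (designer, material_type, size, generation_time) "
    "VALUES (?, ?, ?, ?)",
    ("Alice", "video", "1920x1080", "2023-01-01 10:00:00"),
)
conn.commit()
long_id = cur.lastrowid  # the generated primary key ID, e.g. 15817 once the table has grown
print(long_id)
```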
Step S304, converting the Long type ID into a character string with a preset length to obtain the marking character string.
In the embodiment of the present application, the length of the marking string is not limited; for example, the string with the preset length may be a string of 4 characters. The marking string may be formed by concatenating 4 characters taken from a preset alphabet such as "AaBbCDdEeFfGgHhiJjKLlMmNnOPpQqRrSTtUVWXYyZ2346789", and the marking string corresponding to each target material is unique. The specific method for converting the Long type ID into a string of the preset length is not limited in this embodiment; for example, when the preset length is 4 characters, the Long type ID may be converted into a 4-character string using a base-10 to base-52 conversion algorithm. For example, the Long type ID 15817 is converted into the 4-character string AdJK through the base-10 to base-52 algorithm.
In the embodiment of the application, when the generated Long type ID is 1 and the converted string is shorter than 4 characters, the character "A" can be used to pad the front to 4 characters, so that the Long type ID 1 yields the string "AAAa" under the base-10 to base-52 algorithm. Padding strings shorter than 4 characters to a fixed length of 4 keeps the algorithm uniform and facilitates subsequent unified recognition.
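For illustration, a minimal sketch of the base-10 to base-52 conversion with left-padding to 4 characters is given below. The 52-character alphabet used here, in which "A" maps to 0 and "a" maps to 1 (consistent with the "AAAa" example for ID 1), is an assumption; the embodiment may use a different character set.

```python
# Hedged sketch of the base-10 to base-52 conversion with padding to 4 characters.
# The alphabet below is an assumption; the embodiment's actual 52-character set may differ.
ALPHABET = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz"  # 52 characters
BASE = len(ALPHABET)

def encode_id(long_id: int, length: int = 4) -> str:
    """Convert a non-negative integer ID to a base-52 string left-padded to `length`."""
    if long_id == 0:
        digits = [ALPHABET[0]]
    else:
        digits = []
        n = long_id
        while n > 0:
            n, rem = divmod(n, BASE)
            digits.append(ALPHABET[rem])
        digits.reverse()
    # Pad on the left with the zero character "A" so every marking string has a fixed length.
    return ALPHABET[0] * (length - len(digits)) + "".join(digits)

print(encode_id(1))      # "AAAa"
print(encode_id(15817))  # a 4-character string; with the embodiment's own alphabet this ID is "AdJK"
```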
Step S206, generating the marking character string watermark on the target material.
In the embodiment of the application, when the target material is a picture, the marking character string watermark is added to the picture. When the target material is a video, the marking character string watermark can be added to each video frame picture; in this case, the video frame data of the video can be acquired before the marking character string watermark is generated.
In the embodiment of the present application, the specific method for generating the marking string watermark on the target material is not limited; for example, the third-party tool "FFmpeg" may be used.
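As a non-limiting sketch, FFmpeg's drawtext filter could be invoked through a subprocess call to overlay the marking string on a video; the file paths, font file and placement values below are assumptions, and the drawtext filter requires an FFmpeg build with freetype support.

```python
# Hedged sketch: overlay the marking string on a video with FFmpeg's drawtext filter.
import subprocess

def add_watermark(src: str, dst: str, text: str, x: int = 20, y: int = 20) -> None:
    drawtext = (
        f"drawtext=text='{text}':x={x}:y={y}:"
        "fontsize=24:fontcolor=white:"
        "fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"  # assumed font path
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", drawtext, "-c:a", "copy", dst],
        check=True,
    )

add_watermark("material.mp4", "material_marked.mp4", "AdJK")
```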
In the embodiment of the application, in order to facilitate identification of the marking character string watermark and to avoid the watermark interfering with the content of the target material, the color of the marking character string watermark is generally chosen to contrast clearly with the background of the target material, and the marking character string is added to the background area of the material. Therefore, as shown in fig. 3, step S206 may specifically include the following steps:
Step S402, identifying a background area and a background color of the target material.
In the embodiment of the application, the target in the target material can be identified using an image recognition algorithm, and the area of the picture outside the target is taken as the background area. The background color can then be obtained by acquiring the RGB values of pixel blocks in the background area of the picture. The method of identifying the background area and the background color of the target material is not limited thereto.
And step S404, determining the color of the marking character string according to the background color.
In the embodiment of the application, the colors of the marking character strings corresponding to different background colors can be preset. For example, a dark background may be made to correspond to a light-colored marking string, whereas a light background corresponds to a dark-colored marking string. After the RGB value of the background of the target material is obtained, the color of the marking character string can be determined according to the preset colors of the marking character strings corresponding to different background colors.
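A rough sketch of how a contrasting watermark color might be chosen from a sampled background color is given below; the corner-patch sampling and the luminance threshold of 128 are illustrative assumptions and are not the embodiment's actual recognition algorithm.

```python
# Hedged sketch of steps S402-S404: sample a background color and pick a contrasting
# watermark color. Corner sampling stands in for a real background-detection step.
from PIL import Image

def pick_watermark_color(image_path: str) -> str:
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    # Sample a small corner patch as a crude stand-in for the detected background area.
    patch = [img.getpixel((x, y)) for x in range(min(10, w)) for y in range(min(10, h))]
    r = sum(p[0] for p in patch) / len(patch)
    g = sum(p[1] for p in patch) / len(patch)
    b = sum(p[2] for p in patch) / len(patch)
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    # Dark background -> light watermark; light background -> dark watermark.
    return "white" if luminance < 128 else "black"

print(pick_watermark_color("frame.png"))
```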
Step S406, adding the marking string to the background area.
In the embodiment of the application, the number of marking strings added to the background area is not limited; for example, only one marking string may be added, or marking strings may be added uniformly over the background area at a certain density.
In the embodiment of the application, the target materials carrying the marking character strings can subsequently be composited into a video for release and delivery.
According to the video material marking method provided by the embodiment of the application, the target material and the corresponding target information are obtained, the marking character string is determined from the target information, and the watermark corresponding to the marking character string is generated on the target material. Automatic marking of the target material is thereby realized, manual marking is avoided, marking efficiency is improved, and labor cost is reduced. In addition, the information of a video material can later be recovered from the marking character string watermark on the material in the video.
As shown in fig. 4, in another embodiment of the present application, there is further provided a method for identifying a video material, for identifying a material marked by the above-mentioned video material marking method, the method for identifying a video material comprising:
Step S502, cutting the video material to obtain a picture set with the size of the watermark area of the marking character string.
In the embodiment of the present application, the specific method for cutting the video material is not limited; for example, a region-based segmentation method may be adopted to cut according to the size of the watermark region of the marking string. The video material is a picture or a video. When the video material is a picture, it can be cut directly: each picture is cut into a plurality of pictures of the size of the marking string watermark region, thereby obtaining a picture set in which every picture has the size of the watermark region. When the video material is a video, the video frame images of the video can be obtained first, and the video is cut into a plurality of videos of the size of the marking string watermark region by cutting each video frame image, thereby obtaining a video set; specified frame pictures are then extracted from each preset unit duration of the video set, thereby obtaining the picture set. For example, if the preset unit duration is 1 second and the specified frame pictures are 2 frames, 2 frames are captured from each second of video in the video set, and the captured video frame pictures finally form the picture set.
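A possible sketch of this frame sampling and cutting, using OpenCV, is given below; the tile size, the frame-rate fallback and the choice of 2 frames per second are illustrative assumptions rather than the embodiment's exact segmentation method.

```python
# Hedged sketch of step S502: sample a fixed number of frames per second from the
# video and cut each sampled frame into tiles of the watermark-region size.
import cv2  # OpenCV

def sample_and_cut(video_path: str, tile_w: int = 200, tile_h: int = 60,
                   frames_per_second: int = 2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back to an assumed frame rate
    step = max(int(fps // frames_per_second), 1)
    tiles, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            h, w = frame.shape[:2]
            for y in range(0, h - tile_h + 1, tile_h):
                for x in range(0, w - tile_w + 1, tile_w):
                    tiles.append(frame[y:y + tile_h, x:x + tile_w])
        index += 1
    cap.release()
    return tiles  # the "picture set" of watermark-region-sized crops

tiles = sample_and_cut("material_marked.mp4")
```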
Step S504, identifying pictures in the picture set and determining a character string set.
In the embodiment of the application, the specific method for identifying the pictures in the picture set is not limited; for example, the third-party recognition tool easyocr may be used. The tool provides a recognition method to which only the picture address, coordinates and size need to be passed, and it then automatically recognizes the character strings on the pictures.
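A minimal sketch of recognizing candidate strings on the cropped pictures with easyocr is shown below; the language list, confidence threshold and file names are assumptions, and constructing the Reader may download recognition models on first use.

```python
# Hedged sketch of step S504: recognize candidate strings on the cropped pictures.
import easyocr

reader = easyocr.Reader(["en"])  # model files may be downloaded on first use
candidate_strings = []
for tile_path in ["tile_0.png", "tile_1.png"]:  # saved watermark-region crops
    for _bbox, text, confidence in reader.readtext(tile_path):
        if confidence > 0.4:  # assumed threshold to discard weak detections
            candidate_strings.append(text)
print(candidate_strings)  # e.g. ["AdJa", "adla", "AdJK"]
```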
Step S506, converting the character string set into a Long type id set.
In the embodiment of the application, the character string set can be converted into the Long type id set by using a base-52 to base-10 conversion algorithm. For example, a string set such as AdJa, adla and AdJK is identified from the video, and the string set is converted through the algorithm to obtain the Long type id set 15801, 15951 and 15817.
In the embodiment of the present application, it should be noted that after the string set is obtained, it needs to be converted and filtered: for example, strings whose length is not 4 characters are screened out, and lowercase letters that are easily confused with their uppercase forms, such as "o", "u" and "c", are uniformly converted to uppercase. The converted and filtered string set is then de-duplicated, and finally the de-duplicated string set is converted into a Long type id set.
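The filtering, normalization, de-duplication and base-52 decoding might be sketched as follows; the exact set of confusable characters and the alphabet (taken from the earlier encoding sketch, which includes all lowercase letters as a simplification) are assumptions.

```python
# Hedged sketch of the filtering described above and of step S506: normalize, filter,
# de-duplicate, then decode base-52 strings back into Long type ids.
ALPHABET = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz"  # assumed alphabet
BASE = len(ALPHABET)
CONFUSABLE = {"o": "O", "u": "U", "c": "C", "s": "S", "w": "W", "x": "X", "z": "Z"}  # assumed set

def normalise(s: str) -> str:
    return "".join(CONFUSABLE.get(ch, ch) for ch in s)

def decode_string(s: str) -> int:
    value = 0
    for ch in s:
        value = value * BASE + ALPHABET.index(ch)
    return value

raw = ["AdJa", "adla", "AdJK", "noise", "AdJK"]           # OCR output, with duplicates and junk
filtered = {normalise(s) for s in raw if len(s) == 4}      # length filter + de-duplication
id_set = {decode_string(s) for s in filtered if all(ch in ALPHABET for ch in s)}
print(id_set)
```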
Step S508, identifying the video material according to the Long type id set.
In the embodiment of the application, the information of the video material can be identified by using each Long type id in the Long type id set to look up the corresponding target information in the database of the computer device.
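A minimal lookup sketch, assuming the illustrative sqlite3 table introduced in the earlier sketch, is given below.

```python
# Hedged sketch of step S508: look up the target information for each decoded id.
import sqlite3

id_set = {15801, 15951, 15817}  # ids decoded in the previous sketch
conn = sqlite3.connect("materials.db")
for long_id in sorted(id_set):
    row = conn.execute(
        "SELECT designer, material_type, size, generation_time FROM material WHERE id = ?",
        (long_id,),
    ).fetchone()
    if row is not None:
        print(long_id, row)
```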
According to the video material identification method provided by the embodiment of the application, for a video generated from materials marked by the above video material marking method, the target information corresponding to the video material can be identified by the video material identification method. Video creation can therefore be better protected, and video materials can be prevented from being stolen.
As shown in fig. 5, in one embodiment, a video material marking apparatus is provided, which may be integrated in a computer device, and specifically may include an acquisition module 610, a marking string determining module 620, and a marking module 630.
The obtaining module 610 is configured to obtain a target material and target information corresponding to the target material.
The marking character string determining module 620 is configured to determine a marking character string according to the target information.
A marking module 630, configured to generate the marking string watermark on the target material.
The functions of the acquisition module 610, the marking character string determining module 620 and the marking module 630 provided in the embodiment of the present application correspond one-to-one to step S202, step S204 and step S206 of the above video material marking method. For detailed explanations and optional refinements of the video material marking apparatus, reference is made to the specific embodiments of the above video material marking method, which are not repeated here.
As shown in fig. 6, in one embodiment, a video material identifying apparatus is provided, where the video material identifying apparatus may also be integrated into a computer device, and specifically may include a cutting module 710, a first identifying module 720, a converting module 730, and a second identifying module 740.
The cutting module 710 is configured to cut the video material to obtain a picture set with the size of the watermark region of the marking string.
The first identifying module 720 is configured to identify a picture in the picture set, and determine a character string set.
The conversion module 730 is configured to convert the string set into a Long type id set.
The second identifying module 740 is configured to identify the video material according to the Long type id set.
The functions of the cutting module 710, the first identifying module 720, the converting module 730 and the second identifying module 740 in the video material identifying device provided in the embodiment of the present application correspond one-to-one to step S502, step S504, step S506 and step S508 of the above video material identification method. For detailed explanations and optional refinements of the video material identifying device, reference is made to the specific embodiments of the above video material identification method, which are not repeated here.
FIG. 7 illustrates an internal block diagram of a computer device in one embodiment. The computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by the processor, causes the processor to implement a video material marking method or a video material identification method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the steps of a video material marking method or a video material identification method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, or may be keys, a trackball or a touchpad arranged on the housing of the computer device, or may be an external keyboard, touchpad, mouse or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the video material marking apparatus provided by the present application may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 7. The memory of the computer device may store various program modules that make up the video material marking apparatus, such as the acquisition module 610, the marking string determination module 620, and the marking module 630 shown in fig. 5. The computer program constituted by the respective program modules causes the processor to execute the steps in the video material marking method of the respective embodiments of the present application described in the present specification.
For example, the computer apparatus shown in fig. 7 may perform step S202 through the acquisition module 610 in the video material marking apparatus shown in fig. 5. The computer device may perform step S204 through the marking string determination module 620. The computer device may perform step S206 through the marking module 630.
In one embodiment, the video material recognition apparatus provided by the present application may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 7. The memory of the computer device may store various program modules constituting the video material recognition apparatus, such as the cutting module 710, the first recognition module 720, the conversion module 730, and the second recognition module 740 shown in fig. 6. The computer program constituted by the respective program modules causes the processor to execute the steps in the video material recognition method of the respective embodiments of the present application described in the present specification.
For example, the computer apparatus shown in fig. 7 may perform step S502 by the cutting module 710 in the video material recognition apparatus as shown in fig. 6. The computer device may perform step S504 through the first recognition module 720. The computer apparatus may perform step S506 through the conversion module 730. The computer device may perform step S508 through the second recognition module 740.
In one embodiment, a computer device is presented, the computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
Step S202, acquiring target materials and target information corresponding to the target materials;
step S204, determining a marking character string according to the target information;
and step S206, generating the marking character string watermark on the target material.
Alternatively, when executing the computer program, the processor implements the following steps:
step S502, cutting the video material to obtain a picture set with the size of a watermark region of a marking character string;
Step S504, identifying pictures in the picture set and determining a character string set;
Step S506, converting the character string set into a Long type id set;
and step S508, identifying the video material according to the Long type id set.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which when executed by a processor causes the processor to perform the steps of:
Step S202, acquiring target materials and target information corresponding to the target materials;
step S204, determining a marking character string according to the target information;
and step S206, generating the marking character string watermark on the target material.
Alternatively, the computer program, when executed by the processor, causes the processor to perform the steps of:
step S502, cutting the video material to obtain a picture set with the size of a watermark region of a marking character string;
Step S504, identifying pictures in the picture set and determining a character string set;
Step S506, converting the character string set into a Long type id set;
and step S508, identifying the video material according to the Long type id set.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in order as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above-described embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.