The disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office official records.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is apparent that the described embodiments are only some of the embodiments of the present invention, and not all of them. All other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention fall within the scope of the present invention.
Referring to fig. 1, in an embodiment of the present invention, a three-dimensional watermark adding method is provided to quickly add a three-dimensional watermark to a target video and to dynamically adjust a display state of the three-dimensional watermark. The three-dimensional watermark adding method at least comprises the following steps:
step 101: receiving target watermark information;
step 102: acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
step 103: establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
step 104: and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video.
The target watermark information may include at least one of text information, picture information, animation information, and the like. Accordingly, the three-dimensional stereoscopic watermark may include at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animation watermark. The dynamic shooting parameter information may include at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens view field angle information of the unmanned aerial vehicle. According to the dynamic shooting parameter information, a simulated lens stereo space for the unmanned aerial vehicle to shoot the target video can be established, and then the dynamic relative position relation between the unmanned aerial vehicle and the target object in the target video can be determined according to the simulated lens stereo space.
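For illustration only, the following sketch shows one possible in-memory representation of the target watermark information and the dynamic shooting parameter information; the class and field names are assumptions made for this example and are not prescribed by the embodiments.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TargetWatermarkInfo:
    """Target watermark information: at least one of text, picture, or animation."""
    text: Optional[str] = None            # text watermark content
    picture_path: Optional[str] = None    # picture watermark source
    animation_path: Optional[str] = None  # animation watermark source

@dataclass
class ShootingParameterSample:
    """One timestamped sample of the UAV's dynamic shooting parameters."""
    timestamp: float                      # seconds, aligned with the video stream
    position: Tuple[float, float, float]  # (longitude x, latitude y, altitude z)
    attitude: Tuple[float, float, float]  # (roll, pitch, yaw) in degrees, from the IMU
    speed: float                          # flight speed in m/s
    pan_tilt: Tuple[float, float]         # pan-tilt (pitch, yaw) angles in degrees
    focal_length_mm: float                # lens focal length
    fov_deg: float                        # lens view field angle

@dataclass
class DynamicShootingParameterInfo:
    """Dynamic shooting parameter information associated with one target video."""
    video_id: str
    samples: List[ShootingParameterSample] = field(default_factory=list)
```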
Referring to fig. 2, in an embodiment, the obtaining of the dynamic shooting parameter information corresponding to the target video includes:
step 201: acquiring dynamic shooting parameters of the unmanned aerial vehicle when shooting a target video;
step 202: generating dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
step 203: and storing the dynamic shooting parameter information in association with the target video.
Specifically, in the process of shooting a target video, the unmanned aerial vehicle can acquire its dynamic flight coordinates (x, y, z) through GPS (global positioning system) positioning, Beidou positioning, or the like, where x represents longitude information, y represents latitude information, and z represents flight altitude information. Flight track information and flight speed information are then generated according to the change of the dynamic flight coordinates, and flight attitude information is generated according to output data of a flight attitude sensor built into the unmanned aerial vehicle. Optionally, the attitude sensor comprises an Inertial Measurement Unit (IMU). Meanwhile, pan-tilt angle information is generated according to the angle change of the pan-tilt carried on the unmanned aerial vehicle, and lens focal length information and lens view field angle information are generated according to shooting parameters of a camera lens carried on the unmanned aerial vehicle.
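As a minimal sketch of how flight track and flight speed information might be derived from consecutive dynamic flight coordinates, the snippet below projects (longitude, latitude, altitude) to local metric coordinates with a simple equirectangular approximation; the projection choice and function names are assumptions for illustration, not part of the disclosed method.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def coord_to_local_meters(lon_deg, lat_deg, alt_m, ref_lon_deg, ref_lat_deg):
    """Project dynamic flight coordinates (x, y, z) to local metric coordinates."""
    lat_ref = math.radians(ref_lat_deg)
    east = math.radians(lon_deg - ref_lon_deg) * EARTH_RADIUS_M * math.cos(lat_ref)
    north = math.radians(lat_deg - ref_lat_deg) * EARTH_RADIUS_M
    return east, north, alt_m

def flight_speed_mps(p0, p1, t0, t1):
    """Flight speed between two trajectory points given their timestamps (s)."""
    dx, dy, dz = p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) / max(t1 - t0, 1e-6)

# Example: two samples one second apart, roughly 5 m of horizontal displacement.
p0 = coord_to_local_meters(113.94000, 22.52000, 30.0, 113.94000, 22.52000)
p1 = coord_to_local_meters(113.94005, 22.52000, 30.0, 113.94000, 22.52000)
print(round(flight_speed_mps(p0, p1, 0.0, 1.0), 1))  # ~5.1 m/s
```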
Further, the dynamic shooting parameter information and the target video are stored in an associated manner, so that a mapping relationship between the target video and the corresponding dynamic shooting parameter information is established, and when a three-dimensional watermark needs to be added to the target video, the dynamic shooting parameter information corresponding to the target video can be acquired according to the mapping relationship. For example, a mapping relationship between the target video and the corresponding dynamic shooting parameter information may be established by adding a specific type tag to the target video, and when a three-dimensional watermark needs to be added to the target video, the dynamic shooting parameter information corresponding to the target video may be acquired by reading the specific type tag. In an embodiment, the dynamic shooting parameter information may also be stored in a data stream of the target video, and further, when a three-dimensional watermark needs to be added to the target video, the corresponding dynamic shooting parameter information may be directly read from the data stream of the target video.
It can be understood that, when storing the dynamic shooting parameter information in association with the target video, time stamp information corresponding to different shooting parameter information needs to be recorded in the dynamic shooting parameter information, so that the shooting parameter information and the video data stream are associated with each other in time.
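One possible way to store the dynamic shooting parameter information in association with the target video, and to look a sample up again by timestamp, is sketched below; the sidecar-file layout and helper names are illustrative assumptions (the embodiments equally allow a specific type tag or embedding in the video data stream).

```python
import bisect
import json

def store_with_video(video_path: str, samples: list) -> str:
    """Write timestamped parameter samples to a sidecar file next to the video."""
    sidecar_path = video_path + ".params.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump({"video": video_path, "samples": samples}, f)
    return sidecar_path

def sample_for_frame(samples: list, frame_time: float) -> dict:
    """Return the sample whose timestamp is closest to the frame's time."""
    times = [s["timestamp"] for s in samples]  # samples assumed sorted by time
    i = bisect.bisect_left(times, frame_time)
    if i == 0:
        return samples[0]
    if i == len(samples):
        return samples[-1]
    before, after = samples[i - 1], samples[i]
    closer_to_before = frame_time - before["timestamp"] <= after["timestamp"] - frame_time
    return before if closer_to_before else after

# Example lookup for a frame at t = 0.7 s.
samples = [{"timestamp": 0.0, "yaw_deg": 0.0}, {"timestamp": 1.0, "yaw_deg": 5.0}]
print(sample_for_frame(samples, 0.7)["yaw_deg"])  # 5.0
```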
Referring to fig. 3, after the three-dimensional stereo watermark for the target video is generated, the method further includes:
step 105: determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
step 106: and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
It is understood that after the simulated lens stereo space of the target video shot by the unmanned aerial vehicle is established, the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video can be determined frame by frame according to the simulated lens stereo space. Further, the display state of the three-dimensional watermark is adjusted according to the relative position relationship between the unmanned aerial vehicle corresponding to each frame of image in the target video and the target object.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
It can be understood that, as at least one of the flight track and the lens focal length changes, the scaling of the target object in the video also changes; at this time, the scaling size of the three-dimensional watermark can be dynamically adjusted according to the change of the scaling of the target object, so as to ensure that the size of the watermark is scaled synchronously with the size of the target object. For example, if the proportion of the target object is 1 in a certain frame image of the target video and 0.5 in the next adjacent frame image, the target object has been reduced to half its size between the two adjacent frame images; the three-dimensional stereo watermark can then likewise be reduced to half its size according to the scaling of the target object, so that the scaling size of the three-dimensional stereo watermark is dynamically adjusted and the watermark display effect is optimized.
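The scaling rule above can be illustrated with a small sketch: under a pinhole-camera assumption the apparent size of the target object is proportional to focal length divided by distance, and the watermark is scaled by the same frame-to-frame factor. The helper names and the pinhole approximation are assumptions made for this example.

```python
def object_scale(focal_length_mm: float, distance_m: float) -> float:
    """Relative apparent size of the target object (pinhole approximation)."""
    return focal_length_mm / max(distance_m, 1e-6)

def rescale_watermark(prev_wm_size: float, prev_scale: float, curr_scale: float) -> float:
    """Scale the watermark by the same factor as the target object."""
    return prev_wm_size * (curr_scale / prev_scale)

# Matching the example in the text: the object's proportion drops from 1 to 0.5,
# so a 100-pixel watermark is likewise reduced to half its size.
assert rescale_watermark(prev_wm_size=100.0, prev_scale=1.0, curr_scale=0.5) == 50.0
```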
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
It is understood that the position of the unmanned aerial vehicle relative to the target object may be changed during the process of shooting the target video, so that the unmanned aerial vehicle may have different offset angles relative to the target object in different frame images. In this embodiment, a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video is calculated according to the flight trajectory information and the flight attitude information, and then a rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is adjusted according to the dynamic offset angle, so that the three-dimensional watermark can dynamically rotate along with a change of the offset angle of the unmanned aerial vehicle.
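A minimal sketch of one way the dynamic offset angle could be computed is given below, assuming the flight track has already been projected to local metric coordinates and the yaw component of the flight attitude is available; the sign convention and names are illustrative assumptions.

```python
import math

def offset_angle_deg(uav_xy, target_xy, uav_yaw_deg):
    """Horizontal offset angle of the target relative to the UAV's heading."""
    bearing = math.degrees(math.atan2(target_xy[1] - uav_xy[1],
                                      target_xy[0] - uav_xy[0]))
    return (bearing - uav_yaw_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

def rotate_watermark(prev_rotation_deg, prev_offset_deg, curr_offset_deg):
    """Rotate the watermark by the same amount the offset angle has changed."""
    return prev_rotation_deg + (curr_offset_deg - prev_offset_deg)

# Example: the UAV yaws 10 degrees between frames while holding position,
# so the watermark is rotated by -10 degrees relative to the simulated space.
a0 = offset_angle_deg((0.0, 0.0), (50.0, 0.0), uav_yaw_deg=0.0)
a1 = offset_angle_deg((0.0, 0.0), (50.0, 0.0), uav_yaw_deg=10.0)
print(rotate_watermark(0.0, a0, a1))  # -10.0
```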
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic rotation angle of a pan-tilt carried on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the lateral rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
Specifically, in the process of shooting the target video by the unmanned aerial vehicle, in order to ensure the stability of the shooting lens, the angle of the pan-tilt is dynamically adjusted according to the change of the flight track and the flight attitude of the unmanned aerial vehicle; for example, the pitch angle of the pan-tilt is adjusted according to the change of the flight altitude of the unmanned aerial vehicle, and the yaw angle of the pan-tilt is adjusted according to the change of the flight attitude of the unmanned aerial vehicle. In this embodiment, by obtaining the dynamic pitch angle and the dynamic yaw angle of the pan-tilt during shooting of the target video, the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is adjusted according to the dynamic pitch angle, and the lateral rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is adjusted according to the dynamic yaw angle, so that the three-dimensional watermark and the simulated lens stereo space are better fused, and the watermark display effect is optimized.
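The mapping described in this embodiment can be sketched as follows, with the watermark pose represented as a simple dictionary; the sign convention (counter-rotating the watermark so it stays fused with the simulated lens stereo space) is an assumption for illustration.

```python
def apply_pan_tilt(watermark_pose: dict, pan_tilt_pitch_deg: float,
                   pan_tilt_yaw_deg: float) -> dict:
    """Drive the watermark's pitching and lateral rotation from the pan-tilt angles."""
    pose = dict(watermark_pose)
    # Dynamic pitch angle -> pitching rotation angle of the watermark.
    pose["pitch_deg"] = watermark_pose.get("pitch_deg", 0.0) - pan_tilt_pitch_deg
    # Dynamic yaw angle -> lateral rotation angle of the watermark.
    pose["yaw_deg"] = watermark_pose.get("yaw_deg", 0.0) - pan_tilt_yaw_deg
    return pose

# Example: the pan-tilt pitches down 15 degrees and yaws 5 degrees.
print(apply_pan_tilt({"pitch_deg": 0.0, "yaw_deg": 0.0}, 15.0, 5.0))
# {'pitch_deg': -15.0, 'yaw_deg': -5.0}
```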
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
Specifically, in the process of shooting a target video by the unmanned aerial vehicle, the dynamic height of the unmanned aerial vehicle relative to the target object changes as the flight trajectory changes. In this embodiment, the dynamic height of the unmanned aerial vehicle relative to the target object is obtained according to the flight track information of the unmanned aerial vehicle, and the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space is then adjusted according to the dynamic height, so that the display state of the three-dimensional watermark is adjusted according to the change of the dynamic height of the unmanned aerial vehicle. For example, when the dynamic height is lower than a preset height threshold, the three-dimensional watermark may be in an upright state relative to the reference ground plane of the simulated lens three-dimensional space, and when the dynamic height is equal to or higher than the preset height threshold, the three-dimensional watermark may be dynamically adjusted to a tiled state relative to the reference ground plane of the simulated lens three-dimensional space, so that the three-dimensional watermark is presented more clearly at a high-altitude shooting angle.
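The height-dependent behaviour in the example above can be sketched as a simple threshold rule; the concrete threshold value is an assumption, since the embodiments only refer to a preset height threshold.

```python
HEIGHT_THRESHOLD_M = 50.0  # assumed preset height threshold

def watermark_ground_state(dynamic_height_m: float,
                           threshold_m: float = HEIGHT_THRESHOLD_M) -> str:
    """Display state relative to the reference ground plane of the simulated space."""
    # Below the threshold the watermark stands upright; at or above it, it is tiled.
    return "upright" if dynamic_height_m < threshold_m else "tiled"

print(watermark_ground_state(30.0))   # upright
print(watermark_ground_state(120.0))  # tiled
```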
In one embodiment, the method further comprises:
step 107: and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information, and the lens view field angle information.
It can be understood that, because the unmanned aerial vehicle is in a flying state when shooting the target video, it is inevitably influenced by environmental factors that cause instability of the flight attitude. For example, a change of wind speed in the shooting environment may cause short-time jitter or a short-time change of flight speed, thereby affecting the pan-tilt angle and the lens view field angle; the display state of the three-dimensional watermark may also change due to such short-time disturbance, which affects the watermark display effect. In this embodiment, the display state of the three-dimensional watermark is corrected according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information, and the lens view field angle information; for example, the rotation angle of the three-dimensional watermark is adjusted according to the flight attitude information, so that the influence of short-time changes of the flight attitude of the unmanned aerial vehicle on the display state of the three-dimensional watermark is reduced, and the watermark display effect is further optimized.
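One way the correction of step 107 could be realised is to damp short-time disturbances before they drive the watermark's display state, for example with an exponential moving average; the filter and its coefficient are assumptions made for this sketch, not the claimed correction method.

```python
def smooth(prev_value: float, new_sample: float, alpha: float = 0.2) -> float:
    """Exponential moving average; a small alpha suppresses short-time jitter."""
    return (1.0 - alpha) * prev_value + alpha * new_sample

# Example: a one-frame attitude spike of 10 degrees only moves the corrected
# value used for the watermark's rotation by 2 degrees.
corrected_yaw = smooth(prev_value=0.0, new_sample=10.0)
print(corrected_yaw)  # 2.0
```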
In one embodiment, before receiving the target watermark information, the method further includes:
reading the target video from the unmanned aerial vehicle and playing the target video offline;
and generating a watermark editing identifier on the offline playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
Specifically, when the unmanned aerial vehicle shoots the target video, the dynamic shooting parameter information can be recorded and stored in association with the target video. When a three-dimensional watermark needs to be added to the target video, a user can establish a communication connection with the unmanned aerial vehicle through an intelligent terminal such as a mobile phone, download from the unmanned aerial vehicle the target video and the dynamic shooting parameter information stored in association with it, and then play and edit the target video offline through video editing software on the intelligent terminal to add the three-dimensional watermark.
Referring to fig. 4A, 400 is an intelligent terminal, 410 is an offline playing interface of a target video, and 430 is a target object in the target video. When the target video is played offline through the video editing software on the intelligent terminal 400, the watermark editing identifier 411 may be generated on the offline playing interface 410, and then a three-dimensional watermark adding instruction for the target video may be received through the watermark editing identifier 411.
Referring to fig. 4B, after the watermark editing identifier 411 receives a three-dimensional watermark adding instruction for the target video, a watermark information input interface 413 may be generated on the offline playing interface 410 for inputting target watermark information. For example, the watermark information input interface may be a virtual keyboard, and further may receive text watermark information input by a user through the virtual keyboard; or, the watermark information input interface may also be a file selection window, and the corresponding image watermark information or animation watermark information may be selected through the file selection window.
It can be understood that in the shooting process of the target video, the target video can be obtained from the unmanned aerial vehicle in real time through the intelligent terminal and synchronously played on line, and dynamic shooting parameter information corresponding to the target video is obtained; and further generating a watermark editing identifier on the online playing interface of the target video so as to receive a three-dimensional watermark adding instruction aiming at the target video through the watermark editing identifier.
It can be understood that when the watermark information is input through the watermark information input interface, the video editing software may establish a simulated lens stereo space in which the unmanned aerial vehicle shoots the target video according to the dynamic shooting parameter information, and generate a corresponding three-dimensional stereo watermark, such as a text watermark "HELLOW" shown in fig. 4B, on the target video in real time according to the simulated lens stereo space.
Referring to fig. 4C, after generating the three-dimensional stereoscopic watermark for the target video, the method further includes:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
The editing instruction may be a touch operation performed directly on the three-dimensional watermark "HELLOW", for example, a drag-and-drop, stretch, shrink, or rotation operation, so as to manually adjust the display state of the three-dimensional watermark.
It is understood that, after the three-dimensional stereo watermark for the target video is generated, a hiding instruction for the three-dimensional watermark may also be received through the watermark editing identifier 411, and the three-dimensional watermark in the target video may be triggered to switch from a display state to a hidden state according to the hiding instruction. It is to be understood that the hiding instruction for the three-dimensional stereoscopic watermark may also be a specific touch gesture performed directly on the playing interface of the target video.
Referring to fig. 4D, after the three-dimensional watermark for the target video is generated, as the target video is played, the display state of the three-dimensional watermark "HELLOW" is dynamically adjusted according to the change of the position relationship of the unmanned aerial vehicle relative to the target object 430, for example, dynamic scaling is performed according to the distance of the lens relative to the target object 430 or the change of the focal length of the lens, so as to finally realize the fusion of the three-dimensional watermark and the simulated lens stereo space.
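Putting the playback-time behaviour together, the self-contained sketch below chains a per-frame distance-based scaling and offset-angle rotation, assuming local metric coordinates, a list of timestamp-ordered samples, and a caller-supplied draw callback; it only illustrates the flow, not the claimed implementation.

```python
import math

def adjust_watermark_per_frame(samples, target_xyz, base_wm_size, draw):
    """samples: dicts with 'position' (x, y, z in metres), 'yaw_deg' and
    'focal_length_mm'; draw(size, rotation_deg) renders the watermark once."""
    ref = samples[0]
    ref_scale = ref["focal_length_mm"] / math.dist(ref["position"], target_xyz)
    for s in samples:
        distance = math.dist(s["position"], target_xyz)
        scale = s["focal_length_mm"] / distance        # pinhole approximation
        size = base_wm_size * scale / ref_scale        # synchronous scaling
        bearing = math.degrees(math.atan2(target_xyz[1] - s["position"][1],
                                          target_xyz[0] - s["position"][0]))
        rotation = bearing - s["yaw_deg"]              # dynamic offset angle
        draw(size, rotation)

# Usage with a stand-in draw callback:
adjust_watermark_per_frame(
    samples=[{"position": (0.0, 0.0, 30.0), "yaw_deg": 0.0, "focal_length_mm": 24.0},
             {"position": (10.0, 0.0, 30.0), "yaw_deg": 5.0, "focal_length_mm": 24.0}],
    target_xyz=(50.0, 0.0, 0.0),
    base_wm_size=100.0,
    draw=lambda size, rot: print(f"size={size:.1f}px rotation={rot:.1f}deg"))
```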
It is understood that all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Referring to fig. 5, in an embodiment of the present invention, a three-dimensional watermark adding apparatus 500 is provided, including:
a watermark input unit 501, configured to receive target watermark information;
a parameter obtaining unit 502, configured to obtain dynamic shooting parameter information corresponding to a target video, where the dynamic shooting parameter information is used to record a dynamic shooting parameter of an unmanned aerial vehicle when shooting the target video;
a space simulation unit 503, configured to establish a simulated lens stereo space in which the unmanned aerial vehicle shoots the target video according to the dynamic shooting parameter information;
a watermark generating unit 504, configured to fuse the target watermark information with the simulated lens stereo space, and generate a three-dimensional stereo watermark for the target video.
In an embodiment, the parameter obtaining unit 502 is specifically configured to:
the method comprises the steps that dynamic shooting parameters of an unmanned aerial vehicle are obtained when the unmanned aerial vehicle shoots a target video, and dynamic shooting parameter information corresponding to the target video is generated;
and storing the dynamic shooting parameter information in association with the target video.
In one embodiment, the dynamic shooting parameter information includes at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens view angle information of the unmanned aerial vehicle.
In an embodiment, the spatial simulation unit 503 is specifically configured to:
establishing a simulated lens three-dimensional space of the unmanned aerial vehicle according to at least one of the flight track information, the flight attitude information, the flight speed information, the holder angle information, the lens focal length information and the lens view field angle information;
wherein the simulated lens stereo space is used for determining a dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video.
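For the space simulation unit 503, one plausible but purely illustrative reading of "establishing a simulated lens stereo space" is building a per-frame virtual pinhole camera whose intrinsics come from the lens view field angle and whose orientation combines the flight attitude with the pan-tilt angles; the functions below are assumptions sketching that idea, not the patented construction.

```python
import math

def intrinsic_matrix(image_w: int, image_h: int, fov_deg: float):
    """Pinhole intrinsics, with focal length in pixels derived from the view field angle."""
    f_px = (image_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    return [[f_px, 0.0, image_w / 2.0],
            [0.0, f_px, image_h / 2.0],
            [0.0, 0.0, 1.0]]

def camera_orientation(uav_pitch_deg: float, uav_yaw_deg: float,
                       pan_tilt_pitch_deg: float, pan_tilt_yaw_deg: float):
    """Combined camera orientation: flight attitude plus pan-tilt rotation."""
    return uav_pitch_deg + pan_tilt_pitch_deg, uav_yaw_deg + pan_tilt_yaw_deg

# Example: a 1920x1080 frame with an 84-degree view field angle.
print(round(intrinsic_matrix(1920, 1080, 84.0)[0][0]))  # focal length in pixels, ~1066
```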
Referring to fig. 6, in an embodiment, the three-dimensional stereo watermarking apparatus 500 further includes a watermark adjusting unit 505 for:
determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating the dynamic rotation angle of a pan-tilt carried on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the lateral rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
In an embodiment, the watermark adjusting unit 505 is further configured to:
and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information, and the lens view field angle information.
Referring to fig. 7, in an embodiment, the three-dimensional stereo watermarking apparatus 500 further includes:
a video obtaining unit 506, configured to read the target video from the unmanned aerial vehicle and play the target video offline;
and an identifier generating unit 507, configured to generate a watermark editing identifier on the offline playing interface of the target video, where the watermark editing identifier is used to receive a three-dimensional watermark adding instruction for the target video.
In one embodiment, the video obtaining unit 506 is further configured to obtain and synchronously play the target video online in real time from the unmanned aerial vehicle during the shooting process of the target video;
the identifier generating unit 507 is further configured to generate a watermark editing identifier on the online playing interface of the target video, where the watermark editing identifier is used to receive a three-dimensional watermark adding instruction for the target video.
Referring to fig. 7, in an embodiment, the three-dimensional stereo watermarking apparatus 500 further includes a watermark editing unit 508, configured to:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
Referring to fig. 7, in an embodiment, the three-dimensional stereo watermarking apparatus 500 further includes a watermark hiding unit 509, configured to:
receiving a hiding instruction for the three-dimensional watermark;
and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction.
In one embodiment, the three-dimensional stereoscopic watermark includes at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animation watermark.
It can be understood that the functions and specific implementations of the units in the three-dimensional watermark adding apparatus 500 may also refer to the related descriptions in the method embodiments shown in fig. 1 to fig. 4, and are not described herein again.
Referring to fig. 8, in an embodiment of the present invention, a terminal 800 is provided, which includes a processor 801 and a memory 803, where the processor 801 is electrically connected to the memory 803, the memory 803 is used for storing executable program instructions, and the processor 801 is used for reading the executable program instructions in the memory 803 and performing the following operations:
receiving target watermark information;
acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video.
In one embodiment, the obtaining of the dynamic shooting parameter information corresponding to the target video includes:
acquiring dynamic shooting parameters of the unmanned aerial vehicle when shooting a target video;
generating dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
and storing the dynamic shooting parameter information in association with the target video.
In one embodiment, the dynamic shooting parameter information includes at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens view angle information of the unmanned aerial vehicle.
In one embodiment, the establishing a simulated lens stereo space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information includes:
establishing a simulated lens three-dimensional space of the unmanned aerial vehicle according to at least one of the flight track information, the flight attitude information, the flight speed information, the holder angle information, the lens focal length information and the lens view field angle information;
wherein the simulated lens stereo space is used for determining a dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video.
In one embodiment, the operations further comprise:
determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic rotation angle of a pan-tilt carried on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the lateral rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
In one embodiment, the operations further comprise:
and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information, and the lens view field angle information.
In one embodiment, before receiving the target watermark information, the operations further include:
reading the target video from the unmanned aerial vehicle and playing the target video offline;
and generating a watermark editing identifier on the offline playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
In one embodiment, before receiving the target watermark information, the operations further include:
in the shooting process of a target video, acquiring the target video from an unmanned aerial vehicle in real time and synchronously playing the target video on line;
and generating a watermark editing identifier on the online playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
In one embodiment, after generating the three-dimensional stereoscopic watermark for the target video, the operations further include:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
In one embodiment, after generating the three-dimensional stereoscopic watermark for the target video, the operations further include:
receiving a hiding instruction for the three-dimensional watermark;
and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction.
In one embodiment, the three-dimensional stereoscopic watermark includes at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animation watermark.
It is understood that the specific steps of the operations executed by the processor 801 and the specific implementation thereof may also refer to the description in the method embodiments shown in fig. 1 to fig. 4, and are not described herein again.
According to the three-dimensional watermark adding method, apparatus, and terminal described above, the dynamic shooting parameter information of the unmanned aerial vehicle is acquired while the target video is shot, so that when a watermark needs to be added to the target video, the simulated lens stereo space in which the unmanned aerial vehicle shoots the target video can be established according to the dynamic shooting parameter information. By fusing the target watermark information with the simulated lens stereo space, a three-dimensional stereo watermark for the target video can be generated quickly, which helps reduce the generation time of the three-dimensional watermark. Meanwhile, the display state of the three-dimensional watermark can be dynamically adjusted according to the dynamic shooting parameters, thereby optimizing the watermark display effect.
It should be understood that the above-described embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent modification or variation made by a person skilled in the art on the basis of all or part of the above-described embodiments still falls within the scope of the present invention as claimed.