CN119293764B - Dark watermark generation system and generation method thereof - Google Patents
Dark watermark generation system and generation method thereof
- Publication number
- CN119293764B CN119293764B CN202411832841.9A CN202411832841A CN119293764B CN 119293764 B CN119293764 B CN 119293764B CN 202411832841 A CN202411832841 A CN 202411832841A CN 119293764 B CN119293764 B CN 119293764B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
- G06F21/16—Program or content traceability, e.g. by watermarking
Abstract
The invention relates to the technical field of digital copyright protection and information security, and in particular to a dark watermark generation system and a generation method thereof. The system comprises an input data module, a robust feature extraction module, an adaptive dark watermark generation module, a deep-learning embedding strategy module and a dark watermark embedding module. The input data module is used for receiving the digital content to be protected; the robust feature extraction module is used for generating the feature parameters for dark watermark embedding and determining the robust feature positions; the adaptive dark watermark generation module is used for mapping the dark watermark information into the feature parameter space to generate adaptive dark watermark data; the embedding strategy module is used for generating a dark watermark embedding strategy; and the dark watermark embedding module is used for embedding the dark watermark data at the robust feature positions of the preprocessed digital content. By combining robust feature extraction, adaptive watermark generation and a deep-learning embedding strategy, the invention achieves high-precision, interference-resistant watermark embedding.
Description
Technical Field
The invention relates to the technical fields of digital copyright protection and information security, in particular to a dark watermark generation system and a generation method thereof.
Background
In the field of digital rights protection, as digital content spreads widely across the internet and multimedia platforms, ensuring content security and effective rights protection has become increasingly important. Digital watermarking technology has been widely applied to data in various formats, such as images, audio and video: by embedding invisible or imperceptible marks into the content itself, it enables content tracking and rights identification. However, under common signal-processing operations (such as compression, noise addition and format conversion) and malicious tampering, conventional watermarking often fails to maintain both the robustness and the invisibility of the embedded watermark. In particular, under heavy compression or complex signal-processing operations, the embedded watermark information is easily damaged, causing watermark detection and recognition to fail.
The prior art struggles to meet the anti-interference and accuracy requirements of embedded watermarks in complex environments; in particular, an embedded watermark is easily weakened or even lost under frequent signal processing (such as image compression and format conversion). How to balance embedding strength against concealment is a further difficulty. Conventional methods lack an adaptive watermark generation mechanism and an intelligent embedding strategy, so embedding parameters cannot be flexibly adjusted to the characteristics of different digital content, and watermark robustness remains insufficient. Accordingly, the present invention proposes a dark watermark generation system and a generation method thereof to solve the above problems.
Disclosure of Invention
Based on the above object, the present invention provides a dark watermark generation system and a generation method thereof.
A dark watermark generation system comprises an input data module, a robust feature extraction module, an adaptive dark watermark generation module, a deep learning embedding strategy module and a dark watermark embedding module, wherein:
the input data module is used for receiving the digital content to be protected, and preprocessing the digital content to remove noise and redundant information, so as to obtain preprocessed digital content;
the robust feature extraction module is connected with the input data module, processes the preprocessed digital content using multi-scale analysis and frequency-domain transformation, generates the feature parameters for dark watermark embedding, and determines the robust feature positions;
the adaptive dark watermark generation module is connected with the robust feature extraction module and is used for generating adaptive dark watermark data based on the feature parameters and preset dark watermark information, by using a mixed-domain dark watermark generation algorithm to map the dark watermark information into the feature parameter space;
The embedding strategy module is connected with the self-adaptive dark watermark generation module and is used for calculating the embedding position and the embedding parameter of the dark watermark according to the self-adaptive dark watermark data and the robust characteristic and generating a dark watermark embedding strategy;
And the dark watermark embedding module is connected with the embedding strategy module and is used for embedding the dark watermark data into the robust characteristic position of the preprocessed digital content according to the dark watermark embedding strategy to finish watermark embedding operation and obtain the digital content embedded with the dark watermark.
Optionally, the input data module includes a data receiving unit, a format identifying unit, a noise removing unit and a redundant information removing unit, wherein:
the data receiving unit is used for receiving digital content data to be protected, wherein the digital content data comprises image, audio and video formats, and the received data is transmitted to the format recognition unit;
the format recognition unit is used for carrying out format recognition on the received digital content data and confirming that the content format is image, audio or video;
the system comprises a format identification unit, a noise removal unit, a video format data processing unit and a data processing unit, wherein the format identification unit is connected with the format identification unit and is used for executing a corresponding noise removal algorithm according to a content format so as to reduce irrelevant noise components in data, wherein the image format data adopts the noise removal algorithm based on frequency domain filtering;
The redundant information removing unit is connected with the noise removing unit and used for identifying and removing redundant parts in the data, the image format data removes redundant pixel information, the audio format data removes silent sections and the video format data removes redundant frames, and finally the preprocessed digital content is obtained.
Optionally, the noise removing unit specifically includes:
The image denoising subunit is used for removing noise based on frequency domain filtering for the image format data;
The audio denoising subunit is used for performing time domain filtering processing on the audio format data;
And the video denoising subunit is used for applying the joint denoising processing of the space and the time domain to the video format data.
Optionally, the robust feature extraction module includes a multi-scale analysis unit, a frequency domain transformation unit, a feature parameter generation unit, and a robust feature position determination unit, where:
The multi-scale analysis unit is used for carrying out multi-scale decomposition processing on the preprocessed digital content to generate representations with different resolutions and scales;
The frequency domain transformation unit is connected with the multi-scale analysis unit and is used for performing discrete Fourier transformation on each scale component, converting the scale component of the space domain into a frequency domain representation and obtaining a corresponding frequency component;
The characteristic parameter generating unit is connected with the frequency domain transforming unit and is used for extracting characteristic parameters from frequency domain data of each scale, particularly selecting frequency components with frequency amplitude values larger than a given threshold value from frequency domain representation of each scale, and calculating average amplitude values and relative positions of the corresponding frequency components to form the characteristic parameters;
The robust feature position determining unit is connected with the feature parameter generating unit and used for determining robust feature positions suitable for embedding the dark watermark according to the generated feature parameters, the robust feature positions are selected as positions of a plurality of frequency components with highest frequency amplitude values, and the optimal coordinate area for embedding the dark watermark is obtained by calculating the space coordinate offset of the positions.
Optionally, the robust feature location determining unit:
the frequency component ordering subunit is used for arranging the frequency-domain components provided by the characteristic parameter generating unit in descending order of amplitude;
the position selection subunit is used for selecting, from the sorted frequency components, the $K$ components with the highest amplitudes to form an initial robust feature position set $P$;
the coordinate offset calculation subunit is connected with the position selection subunit and is used for calculating the spatial coordinate offset of each position in the initial robust feature position set $P$, thereby determining the optimal coordinate area for embedding the dark watermark.
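The three subunits above can be sketched as follows with NumPy's FFT; the function name, the choice of `k`, and the exclusion of the DC term are illustrative assumptions, since the patent leaves the exact implementation open:

```python
import numpy as np

def top_k_robust_positions(image, k=4):
    """Select the k frequency components with the largest magnitude as
    candidate robust embedding positions, plus their offsets from the
    spectrum centre. Names and the DC exclusion are assumptions."""
    F = np.fft.fft2(image)
    mag = np.abs(np.fft.fftshift(F))      # centred magnitude spectrum
    h, w = mag.shape
    mag[h // 2, w // 2] = 0.0             # exclude the DC component
    # descending sort of all magnitudes, keep the first k coordinates
    flat = np.argsort(mag, axis=None)[::-1][:k]
    coords = np.column_stack(np.unravel_index(flat, mag.shape))
    # spatial coordinate offset of each position from the centre
    offsets = coords - np.array([h // 2, w // 2])
    return coords, offsets
```

The returned offsets delimit the candidate coordinate area in which the dark watermark would be embedded.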
Optionally, the adaptive dark watermark generation module comprises a watermark information preprocessing unit, a characteristic parameter mapping unit and an adaptive watermark data generation unit, wherein:
the watermark information preprocessing unit is used for preprocessing preset dark watermark information, specifically, representing the dark watermark information into a binary sequence, wherein each bit represents one element of the watermark information, and then encrypting the binary sequence by using an encryption algorithm to generate an encrypted watermark sequence;
the characteristic parameter mapping unit is connected with the watermark information preprocessing unit and is used for mapping the encrypted watermark sequence into a characteristic parameter space;
the specific steps of mapping include:
Step 1, representing characteristic parameters as vectors, wherein each element represents one characteristic parameter;
step 2, determining a mapping proportion coefficient according to the length of the encrypted watermark sequence and the length of the characteristic parameter vector;
step 3, calculating the mapping position index of each bit of the encrypted watermark information in the characteristic parameter space;
Step 4, mapping the watermark information after each bit encryption to the corresponding characteristic parameters to form a mapping pair;
the adaptive watermark data generation unit is connected with the characteristic parameter mapping unit and is used for generating adaptive dark watermark data from the mapping pairs; specifically, for each mapping pair, the amplitude of the corresponding characteristic parameter is adjusted according to the value of the watermark bit: if the watermark bit is 1, the amplitude of the characteristic parameter is increased, and if the watermark bit is 0, the amplitude is reduced; all adjusted characteristic parameters are combined to form the adaptive dark watermark data.
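Steps 1–4 and the amplitude adjustment can be sketched as follows; all names and the modulation depth `delta` are illustrative, since the patent does not specify by how much the amplitude is increased or decreased:

```python
import numpy as np

def adaptive_watermark(bits, features, delta=0.05):
    """Map each (encrypted) watermark bit to a feature parameter and
    nudge that parameter's amplitude: +delta fraction for a 1 bit,
    -delta fraction for a 0 bit. delta is an assumed depth."""
    bits = np.asarray(bits)
    feats = np.asarray(features, dtype=float).copy()
    # Step 2: mapping scale coefficient between the two lengths
    scale = len(feats) / len(bits)
    # Step 3: mapping position index of each bit in the parameter space
    idx = (np.arange(len(bits)) * scale).astype(int)
    # Step 4: form (bit, feature) pairs and adjust the amplitudes
    feats[idx] += np.where(bits == 1, delta, -delta) * np.abs(feats[idx])
    return feats, idx
```

For example, four bits mapped into eight parameters land at indices 0, 2, 4 and 6, with 1-bits raising and 0-bits lowering the corresponding amplitudes.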
Optionally, the embedding strategy module comprises a feature input unit, a model training unit, an embedding position calculating unit and an embedding parameter optimizing unit, wherein:
The characteristic input unit is used for receiving the self-adaptive dark watermark data and the robust characteristic, and combining the self-adaptive dark watermark data and the robust characteristic to form a characteristic vector for deep learning processing;
The model training unit is used for training the feature vector by utilizing the deep neural network model based on the preprocessed digital content and learning the mapping relation between the digital content features and the watermark embedding strategy;
The embedded position calculating unit is used for calculating the optimal embedded position of the dark watermark according to the input feature vector by using the trained deep learning model and determining the embedded coordinate area of the dark watermark;
and the embedded parameter optimizing unit optimizes watermark embedding parameters including embedding strength and embedding mode according to the calculated embedded position and the self-adaptive dark watermark data to generate a dark watermark embedding strategy.
Optionally, the embedded parameter optimization unit includes:
the embedded strength calculating subunit is used for determining the optimal strength of the embedded of the dark watermark according to the robust characteristic value of the embedded position, and comprises the following specific steps:
characteristic value analysis: first calculate the average amplitude $\mu$ and variance $\sigma^{2}$ of the characteristic values at the embedding position, with formulas: $\mu = \frac{1}{N}\sum_{i=1}^{N} A_{i}$ and $\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N} (A_{i} - \mu)^{2}$; wherein $N$ represents the total number of feature points at the embedding position, and $A_{i}$ represents the amplitude of the $i$-th feature point;
determination of embedding strength: the embedding strength $S$ is determined from the mean amplitude and variance of the feature points so as to balance robustness and invisibility; the embedding strength calculation formula is $S = \alpha\,\mu + \beta\,\sigma^{2}$; wherein $\alpha$ is an empirical coefficient and $\beta$ represents the influence coefficient of the amplitude variance on the embedding strength;
The embedding mode optimizing subunit is used for determining an optimal embedding mode according to the embedding strength and the dark watermark data, and comprises the following specific steps:
selection of the embedding mode: the mode is chosen according to the frequency attribute of the embedding-position feature; if the embedding position lies in a low-frequency feature region, amplitude-modulation embedding is selected, and if it lies in an intermediate-frequency feature region, phase-modulation embedding is selected; the decision expression for the embedding mode is:

$$\text{mode} = \begin{cases} \text{amplitude modulation}, & f \in [f_{L}^{\min}, f_{L}^{\max}] \\ \text{phase modulation}, & f \in [f_{M}^{\min}, f_{M}^{\max}] \end{cases}$$

wherein $f$ represents the frequency component of the current feature position, and $[f_{L}^{\min}, f_{L}^{\max}]$ and $[f_{M}^{\min}, f_{M}^{\max}]$ represent the low-frequency and intermediate-frequency ranges respectively;
adjustment of embedding parameters: the parameters under the different embedding modes are optimised according to the adaptive dark watermark data; amplitude-modulation embedding uses the embedding strength $S$, while for phase-modulation embedding the embedded phase offset $\Delta\phi$ is adjusted according to the value of the watermark data bit, with calculation formula $\Delta\phi = \lambda\,(2b - 1)$; wherein $\lambda$ is the phase-modulation factor and $b$ is the watermark data bit, so that $\Delta\phi = +\lambda$ if the watermark bit is 1 and $\Delta\phi = -\lambda$ if the watermark bit is 0.
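A minimal sketch of the two subunits follows. It assumes a linear combination of mean and variance for the strength (the patent names the coefficients but the exact combining formula did not survive in the text) and the offset Δφ = λ(2b − 1) for phase modulation; all coefficient values are placeholders:

```python
import numpy as np

def embedding_strength(amplitudes, alpha=0.1, beta=0.01):
    """S = alpha * mu + beta * sigma^2 over the embedded-position
    amplitudes (assumed linear form; alpha is the empirical
    coefficient, beta the variance-influence coefficient)."""
    a = np.asarray(amplitudes, dtype=float)
    return alpha * a.mean() + beta * a.var()

def phase_offset(bit, lam=0.3):
    """delta_phi = lam * (2b - 1): +lam for a 1 bit, -lam for a 0 bit."""
    return lam * (2 * bit - 1)
```

With uniform amplitudes the variance term vanishes and the strength reduces to α times the mean amplitude.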
Optionally, the dark watermark embedding module comprises an embedding decision unit and a watermark data embedding unit, wherein:
the embedding decision unit is used for receiving the embedding strategy from the embedding parameter optimization unit, and comprises embedding strength and embedding mode;
the watermark data embedding unit is used for executing specific dark watermark data embedding operation, and embedding the dark watermark data into the robust characteristic position of the preprocessed digital content according to an embedding strategy;
The method comprises the following specific steps:
Positioning the embedded position, namely positioning specific embedded coordinates in the preprocessed digital content according to the robust feature position information;
The watermark data processing comprises the steps of encoding watermark information according to an embedding strategy, adjusting the embedding depth and mode of each data bit, and particularly adjusting the amplitude or phase of the data according to the embedding mode to ensure the correct embedding of each data bit;
The data embedding is carried out, namely, the watermark data is embedded according to the encoded watermark data at the determined robust characteristic position;
and after the embedding is completed, verifying the robustness of the watermark by simulating attacks including compression or noise addition, and ensuring that the watermark can still be correctly detected and recovered under different attack conditions.
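The embed-then-verify loop above can be illustrated with a toy amplitude-modulation scheme in the DFT domain. This is a non-blind check against the unwatermarked original under a simple additive-noise attack, not the patent's full verification procedure; the modulation depth and all names are assumptions:

```python
import numpy as np

def embed_bits(image, positions, bits, alpha=0.5):
    """Scale selected DFT coefficients by (1 +/- alpha) per watermark
    bit (toy amplitude modulation; alpha is an assumed depth)."""
    F = np.fft.fft2(image)
    h, w = image.shape
    for (u, v), b in zip(positions, bits):
        g = 1 + alpha * (2 * b - 1)
        F[u, v] *= g
        F[(-u) % h, (-v) % w] *= g   # mirror coefficient keeps image real
    return np.fft.ifft2(F).real

def detect_bits(original, attacked, positions):
    """Non-blind detection: a bit is 1 if the attacked magnitude still
    exceeds the unwatermarked original's magnitude at that position."""
    Fo = np.abs(np.fft.fft2(original))
    Fa = np.abs(np.fft.fft2(attacked))
    return [int(Fa[u, v] > Fo[u, v]) for (u, v) in positions]
```

Simulating an attack then amounts to perturbing the watermarked image (for example with additive noise) and checking that `detect_bits` still recovers the embedded sequence.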
A method for generating a dark watermark is realized by the dark watermark generation system, and comprises the following steps:
S1, receiving digital content to be protected, and removing noise and redundant parts in the digital content through a noise removing and redundant information removing step to obtain preprocessed digital content;
S2, analyzing the preprocessed digital content through multi-scale analysis and frequency domain transformation to generate characteristic parameters embedded by the dark watermark, and determining robust characteristic positions;
S3, based on the dark watermark information and the feature parameters, mapping the dark watermark information into the feature parameter space through a mixed-domain dark watermark generation algorithm to generate adaptive dark watermark data;
S4, calculating the embedding position and the embedding parameter of the dark watermark based on the self-adaptive dark watermark data and the robust characteristic, and generating a dark watermark embedding strategy;
S5, embedding the dark watermark data into robust feature positions of the preprocessed digital content according to a dark watermark embedding strategy to finish watermark embedding operation;
And S6, after the embedding is completed, verifying the watermark embedding effect through simulating an attack signal, and ensuring that the embedded dark watermark has robustness under various attack conditions.
The invention has the beneficial effects that:
By combining robust feature extraction, adaptive dark watermark generation and a deep-learning embedding strategy, the method achieves precise control of the watermark embedding position and parameters. Robust features suited to dark watermark embedding are extracted through multi-scale analysis and frequency-domain transformation, so the method can effectively cope with a variety of signal-processing operations, and the embedded watermark retains strong anti-interference performance in complex environments such as compression, noise addition and format conversion. Meanwhile, the adaptive dark watermark generation module dynamically adjusts the watermark data according to the characteristic parameters of the digital content, improving the concealment and invisibility of the watermark: the watermark is not destroyed, while the influence on the original content is minimised.
According to the invention, through the combined analysis of the self-adaptive watermark data and the robust features, the optimal embedding position and the optimal embedding strength are generated, so that the watermark embedding operation has higher flexibility and intelligent level, and finally the embedded dark watermark can keep stable detection effect under various complex attack conditions, thereby greatly improving the copyright protection capability of the digital content in the propagation process.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only embodiments of the invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a dark watermark generation system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a dark watermark generation method according to an embodiment of the invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and to specific embodiments. Although the invention is described in detail here to make the embodiments clearer, the following embodiments are preferred ones, and the invention can also be embodied in other forms well known to those skilled in the art; the accompanying drawings serve only to describe the embodiments more specifically and are not intended to limit the invention to the specific forms disclosed herein.
It should be noted that references in the specification to "one embodiment," "an example embodiment," "some embodiments," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Generally, the terminology may be understood, at least in part, from the use of context. For example, the term "one or more" as used herein may be used to describe any feature, structure, or characteristic in a singular sense, or may be used to describe a combination of features, structures, or characteristics in a plural sense, depending at least in part on the context. In addition, the term "based on" may be understood as not necessarily intended to convey an exclusive set of factors, but may instead, depending at least in part on the context, allow for other factors that are not necessarily explicitly described.
As shown in FIG. 1, the dark watermark generation system comprises an input data module, a robust feature extraction module, an adaptive dark watermark generation module, a deep learning embedding strategy module and a dark watermark embedding module, wherein:
The input data module is used for receiving the digital content to be protected, and preprocessing the digital content to remove noise and redundant information, so as to obtain preprocessed digital content suitable for embedding the dark watermark;
the robust feature extraction module is connected with the input data module, processes the preprocessed digital content using multi-scale analysis and frequency-domain transformation, generates the feature parameters for dark watermark embedding, and determines the robust feature positions;
the adaptive dark watermark generation module is connected with the robust feature extraction module and is used for generating adaptive dark watermark data based on the feature parameters and preset dark watermark information, by using a mixed-domain dark watermark generation algorithm to map the dark watermark information into the feature parameter space;
The embedding strategy module is connected with the self-adaptive dark watermark generation module and is used for calculating the embedding position and the embedding parameter of the dark watermark according to the self-adaptive dark watermark data and the robust characteristic and generating a dark watermark embedding strategy;
And the dark watermark embedding module is connected with the embedding strategy module and is used for embedding the dark watermark data into the robust characteristic position of the preprocessed digital content according to the dark watermark embedding strategy to finish watermark embedding operation and obtain the digital content embedded with the dark watermark.
The input data module comprises a data receiving unit, a format identifying unit, a noise removing unit and a redundant information clearing unit, wherein:
The data receiving unit is used for receiving digital content data to be protected, wherein the digital content data comprises image, audio and video formats, and the received data is transmitted to the format recognition unit;
The format recognition unit is used for carrying out format recognition on the received digital content data, confirming that the content format is image, audio or video, and transmitting the data to the corresponding noise removal unit and redundant information removal unit according to the format characteristics;
the noise removing unit is connected with the format recognition unit and is used for executing a corresponding noise removal algorithm according to the content format so as to reduce irrelevant noise components in the data, wherein image format data adopts a noise removal algorithm based on frequency-domain filtering, audio format data adopts time-domain filtering, and video format data adopts joint spatial and temporal denoising;
The redundant information removing unit is connected with the noise removing unit and used for identifying and removing redundant parts in data, the image format data is used for removing redundant pixel information, the audio format data is used for removing silence segments, the video format data is used for removing redundant frames, and finally the preprocessed digital content suitable for embedding the dark watermark is obtained.
The noise removing unit specifically includes:
The image denoising subunit is used for removing noise based on frequency domain filtering for the image format data;
the frequency domain filtering steps are as follows:
First, a Discrete Fourier Transform (DFT) is performed on the image data to convert the spatial-domain image into a frequency-domain representation, with calculation formula: $F(u,v) = \sum_{x=0}^{W-1}\sum_{y=0}^{H-1} f(x,y)\, e^{-j 2\pi \left( \frac{ux}{W} + \frac{vy}{H} \right)}$; wherein $f(x,y)$ represents the pixel values of the spatial-domain image, $F(u,v)$ represents the frequency-domain image values, $W$ is the image width, $H$ is the image height, $u$ and $v$ are the frequency coordinates, and $j$ represents the imaginary unit;
Then, a band-pass filter is applied to remove the high-frequency and low-frequency components in the frequency domain, retaining only the intermediate-frequency components. The transfer function of the band-pass filter is as follows:

$$H(D) = \begin{cases} 1, & D_{L} \le D \le D_{H} \\ 0, & D < D_{L} \text{ or } D > D_{H} \end{cases}$$

wherein $H(D)$ is the band-pass filter function, $D$ represents the frequency distance from the spectrum centre, and $D_{L}$ and $D_{H}$ respectively represent the low-frequency and high-frequency cut-off frequencies of the filter. Specifically, when the frequency distance $D$ lies between the set low-frequency cut-off $D_{L}$ and the high-frequency cut-off $D_{H}$, the band-pass filter outputs 1 and the intermediate-frequency component of the signal is preserved; when $D$ lies outside this range (i.e. $D < D_{L}$ or $D > D_{H}$), the output is 0 and the corresponding high-frequency and low-frequency components are removed;
And finally, performing inverse discrete Fourier transform on the filtered image data to restore to a spatial domain, and obtaining denoised image data.
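The three frequency-domain filtering steps correspond to the following sketch, using an ideal band-pass with assumed cut-off distances measured from the centred spectrum:

```python
import numpy as np

def bandpass_denoise(image, d_low, d_high):
    """Frequency-domain denoising: keep components whose distance from
    the spectrum centre lies in [d_low, d_high], zero the rest, then
    inverse-transform back to the spatial domain."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    d = np.hypot(yy - h / 2, xx - w / 2)          # frequency distance D
    H = ((d >= d_low) & (d <= d_high)).astype(float)
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real
```

With a wide-open passband the filter is the identity; narrowing `[d_low, d_high]` suppresses the low- and high-frequency components as described above.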
The audio denoising subunit is used for performing time domain filtering processing on the audio format data;
The time domain filtering processing steps are as follows:
first, the audio signal is sampled to obtain a discrete signal $x[n]$, wherein $x[n]$ represents the audio value at the $n$-th sample point and $n$ is the discrete time index;
then, the audio signal is smoothed by a moving-average filter, with filter formula: $y[n] = \frac{1}{2M+1} \sum_{k=-M}^{M} x[n+k]$; wherein $y[n]$ represents the denoised audio value at the $n$-th sample point, $M$ represents half the size of the sliding window, and $k$ indicates the relative position within the sliding window;
and finally, smoothing the audio signal through a moving average filter to obtain denoised audio data.
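The moving-average formula can be sketched as follows; the edge handling (averaging over only the part of the window that exists) is an assumption, since the patent does not say how boundaries are treated:

```python
import numpy as np

def moving_average(x, m=2):
    """y[n] = (1/(2m+1)) * sum_{k=-m}^{m} x[n+k]; near the edges the
    average is taken over the available samples only."""
    kernel = np.ones(2 * m + 1)
    num = np.convolve(x, kernel, mode='same')
    # count of valid samples under the window at each position
    den = np.convolve(np.ones_like(x, dtype=float), kernel, mode='same')
    return num / den
```

A constant signal passes through unchanged, and interior samples are the plain mean of their window.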
The video denoising subunit is used for applying joint denoising processing of space and time domain to the video format data;
the joint denoising processing steps of the space domain and the time domain are as follows:
First, a Gaussian filter is applied to each frame of the video for spatial denoising, with calculation formula: $G(x,y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{(x - x_{0})^{2} + (y - y_{0})^{2}}{2\sigma^{2}}}$; wherein $G(x,y)$ represents the value of the Gaussian filter kernel, $x$ and $y$ represent the spatial coordinates, $x_{0}$ and $y_{0}$ are the filter centre coordinates, and $\sigma$ is the standard deviation of the filter;
Then, temporal denoising is performed on the pixels between adjacent frames: the pixel difference between frames is calculated, pixels with small differences are retained, and rapidly changing noise is removed. The temporal denoising formula is:

$$P_{t}'(x,y) = \begin{cases} P_{t}(x,y), & \left| P_{t}(x,y) - P_{t-1}(x,y) \right| < T \\ P_{t-1}(x,y), & \text{otherwise} \end{cases}$$

wherein $P_{t}(x,y)$ represents the pixel value at coordinates $(x,y)$ in the current frame $t$, $P_{t-1}(x,y)$ represents the pixel value at the same position in the previous frame, and $T$ is the pixel-difference threshold. The working principle is to decide whether to retain the pixel of the current frame by comparing the pixel values of the current frame (frame $t$) and the previous frame (frame $t-1$) at the same pixel position $(x,y)$: when the difference is less than the threshold $T$, the current frame's pixel value $P_{t}(x,y)$ is preserved; when the difference is greater than $T$, the previous frame's pixel value $P_{t-1}(x,y)$ is used instead. This removes rapidly changing noise while retaining inter-frame information with high continuity, thereby effectively reducing temporal noise in the video data;
And finally, obtaining the denoised video data through the joint denoising processing of the space and the time domain.
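The spatial-plus-temporal pipeline can be sketched as below; the 5×5 kernel size and edge padding are assumptions, and the temporal rule keeps the current pixel only when it differs from the previous output frame by less than the threshold:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalised 2-D Gaussian kernel (kernel size is an assumption)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(frame, k):
    """Direct 2-D convolution with edge padding (sketch, not optimised)."""
    r = k.shape[0] // 2
    padded = np.pad(frame, r, mode='edge')
    out = np.empty_like(frame, dtype=float)
    h, w = frame.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * k)
    return out

def denoise_video(frames, sigma=1.0, t_thresh=10.0):
    """Spatial Gaussian smoothing per frame, then the temporal rule:
    keep the current pixel if |cur - prev| < t_thresh, else reuse the
    previous frame's value."""
    k = gaussian_kernel(5, sigma)
    smoothed = [gaussian_blur(f, k) for f in frames]
    out = [smoothed[0]]
    for cur in smoothed[1:]:
        out.append(np.where(np.abs(cur - out[-1]) < t_thresh, cur, out[-1]))
    return out
```

A static scene passes through unchanged, since both the spatial kernel (which sums to 1) and the temporal rule preserve constant pixel values.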
Through the processing steps of the noise removing unit, noise in data of different formats can be effectively removed in the preprocessing stage, ensuring the efficiency of dark watermark embedding and the clarity of the data.
The robust feature extraction module comprises a multi-scale analysis unit, a frequency domain transformation unit, a feature parameter generation unit and a robust feature position determination unit, wherein:
The multi-scale analysis unit is used for performing multi-scale decomposition processing, such as wavelet transformation, on the preprocessed digital content to generate representations at different resolutions and scales;
the frequency domain transformation unit is connected with the multi-scale analysis unit and is used for carrying out Discrete Fourier Transform (DFT) on each scale component, converting the scale component of the space domain into a frequency domain representation and obtaining a corresponding frequency component;
The characteristic parameter generating unit is connected with the frequency domain transforming unit and is used for extracting characteristic parameters from frequency domain data of each scale, particularly selecting frequency components with frequency amplitude values larger than a given threshold value from frequency domain representation of each scale, and calculating average amplitude values and relative positions of the corresponding frequency components to form the characteristic parameters;
The specific calculation steps for extracting the characteristic parameters are as follows:
In the frequency domain representation of each scale, all frequency components F(u, v) are traversed, where u and v represent the frequency coordinates;
For each frequency component F(u, v), its amplitude |F(u, v)| is examined; when |F(u, v)| is greater than a preset threshold T_A, the component is incorporated into the feature set S, namely: S = {(u, v) : |F(u, v)| > T_A}, where T_A is the amplitude threshold set after noise removal, used to select significant frequency components;
For the frequency components in the feature set S, the average amplitude A_avg is calculated as: A_avg = (1/N) Σ_{(u,v)∈S} |F(u, v)|, where N is the total number of components in the feature set S;
The relative positions of all frequency components in the feature set S are computed to obtain the average frequency coordinates: u_avg = (1/N) Σ_{(u,v)∈S} u and v_avg = (1/N) Σ_{(u,v)∈S} v, where u_avg and v_avg represent the average position coordinates of the frequency components in the feature set;
The obtained average amplitude A_avg and average position (u_avg, v_avg) are used as the characteristic parameters, ensuring that the characteristic parameters remain stable and robust under compression and signal processing conditions.
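A minimal sketch of this feature-parameter extraction, assuming a 2-D DFT per scale component and a hypothetical amplitude threshold:

```python
import numpy as np

def extract_feature_params(channel, amp_threshold):
    """Build the feature set S = {(u, v) : |F(u, v)| > T_A} and return
    the average amplitude and average frequency coordinates."""
    F = np.fft.fft2(np.asarray(channel, dtype=float))
    mag = np.abs(F)
    u, v = np.nonzero(mag > amp_threshold)   # members of the feature set S
    if len(u) == 0:
        return None                          # no significant components
    A_avg = mag[u, v].mean()                 # average amplitude A_avg
    u_avg, v_avg = u.mean(), v.mean()        # average position (u_avg, v_avg)
    return A_avg, (u_avg, v_avg), list(zip(u, v))
```

In the patent's pipeline this would be applied to each scale component produced by the multi-scale analysis unit, not to the raw image.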
The robust feature position determining unit is connected with the characteristic parameter generating unit and is used for determining, according to the generated characteristic parameters, robust feature positions suitable for embedding the dark watermark. The robust feature positions are selected as the positions of the several frequency components with the highest amplitudes, and the optimal coordinate area for embedding the dark watermark is obtained by calculating the spatial coordinate offsets of these positions, so as to ensure that the dark watermark remains significant and stable after the content is compressed or signal-processed. Through the processing steps of the units of the robust feature extraction module, the characteristic parameters and robust feature positions suitable for embedding the dark watermark are obtained, which enhances the robustness and anti-interference performance of the dark watermark after compression and signal processing operations and ensures the effective preservation of the watermark information.
Robust feature position determination unit:
The frequency component ordering subunit is used for arranging the frequency domain components provided by the characteristic parameter generating unit in descending order of amplitude. Let the feature set be S = {(u_i, v_i, A_i)}, where u_i and v_i represent the frequency coordinates of the i-th frequency component and A_i represents the amplitude of the i-th frequency component; the ordering result satisfies A_1 ≥ A_2 ≥ … ≥ A_N, where N is the total number of frequency components in the feature set;
The position selection subunit is used for selecting, from the sorted frequency components, the K components with the highest amplitudes to form the initial robust feature position set P, namely: P = {(u_i, v_i) : i = 1, …, K}, where K represents the number of selected frequency components. This concentrates the robust feature positions on the components with larger amplitudes, ensuring the stability of the feature positions after compression and signal processing;
The coordinate offset calculation subunit is connected to the position selection subunit and is used for calculating the spatial coordinate offset of each position in the initial robust feature position set P, determining the optimal coordinate area for embedding the dark watermark.
The specific calculation steps are as follows:
First, a spatial transformation is applied to all frequency component position coordinates (u_i, v_i) in the set P to obtain the corresponding spatial coordinate offset values (x_i, y_i); the transformation formula is: x_i = u_i · W / M, y_i = v_i · H / M, where x_i and y_i represent the abscissa and ordinate of the i-th frequency component in the spatial domain, W is the width of the digital content, H is the height of the digital content, and M is the total number of frequency sampling points;
Then, the average coordinates (x_c, y_c) of all spatial coordinate offset values in the set P are calculated as the central location for dark watermark embedding: x_c = (1/K) Σ_{i=1}^{K} x_i and y_c = (1/K) Σ_{i=1}^{K} y_i, where x_c and y_c are respectively the average abscissa and average ordinate of the spatial coordinates of all selected frequency components;
Finally, with (x_c, y_c) as the center and combined with the distribution radius R of the selected frequency components, the optimal coordinate area is determined; the calculation formula of the radius R is: R = (1/K) Σ_{i=1}^{K} √((x_i − x_c)² + (y_i − y_c)²), where R represents the average distance from the central position (x_c, y_c) to each feature position and is used to define the range of the optimal coordinate area.
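The steps above (amplitude sorting, top-K selection, and center/radius computation) can be sketched as follows; the linear u·W/M, v·H/M mapping is an assumed form of the unspecified spatial transformation:

```python
import numpy as np

def robust_embed_region(components, K, width, height, M):
    """components: list of (u, v, amplitude) triples from the feature set.
    Sort by amplitude, keep the top K, map frequency coordinates to the
    spatial domain, and return the centre and radius of the embedding area."""
    top = sorted(components, key=lambda c: c[2], reverse=True)[:K]
    xs = np.array([u * width / M for u, v, a in top], dtype=float)
    ys = np.array([v * height / M for u, v, a in top], dtype=float)
    xc, yc = xs.mean(), ys.mean()            # centre of dark-watermark embedding
    r = np.mean(np.hypot(xs - xc, ys - yc))  # average distance = radius R
    return (xc, yc), r
```

The returned centre and radius delimit the optimal coordinate area the text describes.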
The self-adaptive dark watermark generation module comprises a watermark information preprocessing unit, a characteristic parameter mapping unit and a self-adaptive watermark data generation unit, wherein:
The watermark information preprocessing unit is used for preprocessing preset dark watermark information, specifically, representing the dark watermark information into a binary sequence, wherein each bit represents one element of the watermark information, and then encrypting the binary sequence by using an encryption algorithm to generate an encrypted watermark sequence so as to enhance the security of the watermark;
the characteristic parameter mapping unit is connected with the watermark information preprocessing unit and is used for mapping the encrypted watermark sequence into a characteristic parameter space;
the specific steps of mapping include:
Step 1, representing characteristic parameters as vectors, wherein each element represents one characteristic parameter;
step 2, determining a mapping proportion coefficient according to the length of the encrypted watermark sequence and the length of the characteristic parameter vector;
step 3, calculating the mapping position index of each bit of the encrypted watermark information in the characteristic parameter space;
Step 4, mapping the watermark information after each bit encryption to the corresponding characteristic parameters to form a mapping pair;
The adaptive watermark data generation unit is connected with the characteristic parameter mapping unit and is used for generating adaptive dark watermark data according to the mapping pairs. Specifically, for each mapping pair, the amplitude of the corresponding characteristic parameter is adjusted according to the value of the watermark bit: if the watermark bit is 1, the amplitude of the characteristic parameter is increased; if the watermark bit is 0, the amplitude is decreased. All adjusted characteristic parameters are combined to form the adaptive dark watermark data. The adaptive dark watermark generation module thus maps the preset dark watermark information into the characteristic parameter space to generate the adaptive dark watermark data, providing a foundation for the subsequent watermark embedding process.
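A sketch of the mapping and amplitude-adjustment logic of steps 1 to 4 and the generation unit; the mapping-index rule and the adjustment step `delta` are illustrative assumptions, as the source does not give concrete values:

```python
import numpy as np

def generate_adaptive_watermark(bits, feature_params, delta=0.1):
    """Map each (encrypted) watermark bit onto a characteristic parameter
    and adjust that parameter's amplitude: +delta for a 1 bit, -delta for
    a 0 bit. Returns the adjusted parameters and the mapping pairs."""
    params = np.asarray(feature_params, dtype=float)
    scale = len(params) / len(bits)        # mapping proportion coefficient
    adjusted = params.copy()
    pairs = []
    for i, b in enumerate(bits):
        j = int(i * scale)                 # mapping position index in the space
        pairs.append((j, b))
        adjusted[j] += delta if b == 1 else -delta
    return adjusted, pairs
```

Encryption of the bit sequence (the preprocessing unit's job) is omitted here; any stream cipher over the binary sequence would fit the description.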
The embedding strategy module comprises a characteristic input unit, a model training unit, an embedding position calculating unit and an embedding parameter optimizing unit, wherein:
The characteristic input unit is used for receiving the self-adaptive dark watermark data and the robust characteristic, and combining the self-adaptive dark watermark data and the robust characteristic to form a characteristic vector for deep learning processing;
The model training unit is used for training the feature vector by utilizing the deep neural network model based on the preprocessed digital content and learning the mapping relation between the digital content features and the watermark embedding strategy;
The embedded position calculating unit is used for calculating the optimal embedded position of the dark watermark according to the input feature vector by using the trained deep learning model and determining the embedded coordinate area of the dark watermark;
And the embedded parameter optimizing unit optimizes watermark embedded parameters including embedded strength and embedded mode according to the calculated embedded position and the self-adaptive dark watermark data to generate a dark watermark embedded strategy so as to ensure the invisibility and robustness of the watermark.
The embedded parameter optimization unit includes:
The embedding strength calculation subunit is used for determining the optimal embedding strength of the dark watermark according to the robust characteristic values at the embedding position; the specific steps are as follows:
Characteristic value analysis: first, the average amplitude μ and variance σ² of the characteristic values at the embedding position are calculated; the formulas are: μ = (1/n) Σ_{i=1}^{n} A_i and σ² = (1/n) Σ_{i=1}^{n} (A_i − μ)², where n represents the total number of feature points at the embedding position and A_i represents the amplitude of the i-th feature point. Through characteristic value analysis, the intensity distribution characteristics of the embedding position are obtained;
Determination of embedding strength: based on the average amplitude and variance of the feature points, the embedding strength S is determined to balance robustness and invisibility; the embedding strength calculation formula is: S = α · μ + β · σ², where α is an empirical coefficient used to adjust the baseline of the embedding strength, and β represents the influence coefficient of the amplitude variance on the embedding strength, regulating the variation of the embedding strength under different feature distributions;
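Assuming the strength formula combines the mean and variance linearly (the exact form is not reproduced in the source), the computation can be sketched as:

```python
def embedding_strength(amplitudes, alpha=0.5, beta=0.1):
    """S = alpha * mu + beta * sigma^2 over the feature-point amplitudes
    at the embedding position. alpha and beta are the empirical
    coefficients the text mentions; their values here are placeholders."""
    n = len(amplitudes)
    mu = sum(amplitudes) / n                       # average amplitude
    var = sum((a - mu) ** 2 for a in amplitudes) / n  # variance
    return alpha * mu + beta * var, mu, var
```

Larger average amplitude or wider spread both raise the strength under this rule, matching the stated goal of adapting strength to the feature distribution.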
The embedding mode optimizing subunit is used for determining an optimal embedding mode according to the embedding strength and the dark watermark data, and comprises the following specific steps:
Selection of the embedding mode: the embedding mode is selected according to the frequency attribute of the features at the embedding position. If the embedding position belongs to a low-frequency feature region, amplitude modulation embedding is selected; if it belongs to an intermediate-frequency feature region, phase modulation embedding is selected. The decision expression of the embedding mode is:
mode = amplitude modulation if f < f_L; mode = phase modulation if f_L ≤ f < f_M,
where f represents the frequency component of the current feature position, and f_L and f_M represent the frequency ranges of the low frequency and the intermediate frequency, respectively;
Adjustment of the embedding parameters: the parameters under the different embedding modes are optimized according to the adaptive dark watermark data. For amplitude modulation embedding, the embedding strength S is increased; for phase modulation embedding, the embedding phase offset Δφ is adjusted according to the value (0 or 1) of the watermark data bit, with the calculation formula: Δφ = +φ_m if the watermark bit w is 1, and Δφ = −φ_m if the watermark bit w is 0, where φ_m is the phase modulation factor and w is the watermark data bit. Through the synergy of the embedding strength calculation subunit and the embedding mode optimization subunit, the embedding parameter optimization unit generates an optimal dark watermark embedding strategy according to the embedding position and the adaptive dark watermark data, ensuring the embedding accuracy, invisibility and robustness of the dark watermark.
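The mode decision and phase-offset rules above can be sketched as (the band edges and modulation factor are placeholders):

```python
def choose_embedding_mode(f, f_low, f_mid):
    """Decision rule from the text: amplitude modulation for f < f_low,
    phase modulation for f_low <= f < f_mid."""
    if f < f_low:
        return "amplitude"
    if f < f_mid:
        return "phase"
    return "none"   # outside both bands; behaviour here is an assumption

def phase_offset(bit, phi_m):
    """Embedding phase offset: +phi_m for a 1 bit, -phi_m for a 0 bit,
    where phi_m is the phase modulation factor."""
    return phi_m if bit == 1 else -phi_m
```

High-frequency positions fall outside both bands; the source does not say how they are handled, hence the explicit "none" branch.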
The dark watermark embedding module comprises an embedding decision unit and a watermark data embedding unit, wherein:
the embedding decision unit is used for receiving the embedding strategy from the embedding parameter optimization unit, and comprises embedding strength and embedding mode;
the watermark data embedding unit is used for executing specific dark watermark data embedding operation, and embedding the dark watermark data into the robust characteristic position of the preprocessed digital content according to an embedding strategy;
The method comprises the following specific steps:
Positioning the embedded position, namely positioning specific embedded coordinates in the preprocessed digital content according to the robust feature position information;
The watermark data processing comprises the steps of encoding watermark information according to an embedding strategy, adjusting the embedding depth and mode of each data bit, and particularly adjusting the amplitude or phase of the data according to the embedding mode to ensure the correct embedding of each data bit;
Data embedding: at the determined robust feature positions, the encoded watermark data is embedded. The watermark data is accurately embedded into the digital content using a frequency domain or time domain method; the specific method depends on the embedding mode (amplitude modulation or phase modulation) decided previously: for amplitude modulation, the amplitude of the frequency component is adjusted; for phase modulation, the phase of the frequency component is adjusted;
Embedding effect verification: after the embedding is completed, the robustness of the watermark is verified through simulated attacks, including compression or noise addition, ensuring that the watermark can still be correctly detected and recovered under different attack conditions. Through the above units and steps, the dark watermark embedding module effectively embeds the dark watermark data into the robust feature positions of the preprocessed digital content, completing the watermark embedding operation and ensuring the invisibility, robustness and security of the dark watermark.
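A minimal sketch of amplitude-modulation embedding in the frequency domain; scaling the conjugate-symmetric partner keeps the inverse transform real, a detail the source does not spell out:

```python
import numpy as np

def embed_amplitude(channel, positions, strength, bits):
    """Scale the magnitude of each selected frequency component up
    (bit 1) or down (bit 0) by the embedding strength, then inverse
    transform back to the spatial domain."""
    F = np.fft.fft2(channel.astype(float))
    H, W = F.shape
    for (u, v), b in zip(positions, bits):
        g = (1 + strength) if b == 1 else (1 - strength)
        F[u, v] *= g
        cu, cv = -u % H, -v % W
        if (cu, cv) != (u, v):
            F[cu, cv] *= g   # keep the spectrum conjugate-symmetric
    return np.real(np.fft.ifft2(F))
```

Phase-modulation embedding would instead multiply the selected components by exp(j·Δφ), with the same symmetry handling.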
As shown in fig. 2, a method for generating a dark watermark is implemented by the above-mentioned system for generating a dark watermark, and includes the following steps:
s1, receiving digital content to be protected, and removing noise and redundant parts in the digital content through a noise removing and redundant information removing step to obtain preprocessed digital content so as to ensure that the processed content has the basic condition of embedding a dark watermark;
S2, analyzing the preprocessed digital content through multi-scale analysis and frequency domain transformation to generate characteristic parameters embedded by the dark watermark, and determining robust characteristic positions;
S3, mapping the dark watermark information to a robust feature parameter space by using the dark watermark information and the feature parameters through a mixed domain dark watermark generation algorithm to generate self-adaptive dark watermark data;
S4, calculating the embedding position and the embedding parameter of the dark watermark based on the self-adaptive dark watermark data and the robust characteristic, and generating a dark watermark embedding strategy;
S5, embedding the dark watermark data into robust feature positions of the preprocessed digital content according to a dark watermark embedding strategy to finish watermark embedding operation;
and S6, after the embedding is finished, verifying the watermark embedding effect through simulating attack signals (such as compression, noise adding and the like), and ensuring that the embedded dark watermark has robustness under various attack conditions.
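Step S6's simulated-attack verification can be sketched as follows; the Gaussian-noise attack and the magnitude-comparison detection rule are illustrative assumptions:

```python
import numpy as np

def verify_watermark(embedded, positions, reference_mag, noise_sigma=1.0, seed=0):
    """Add Gaussian noise as a simulated attack, then check whether the
    boosted frequency components are still larger than the original
    (pre-embedding) reference magnitudes at the same positions."""
    rng = np.random.default_rng(seed)
    attacked = embedded + rng.normal(0.0, noise_sigma, embedded.shape)
    mag = np.abs(np.fft.fft2(attacked))
    return all(mag[u, v] > reference_mag[(u, v)] for u, v in positions)
```

A fuller verification would also run compression (e.g. JPEG) attacks and attempt full bit recovery rather than a per-position magnitude check.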
The invention is intended to cover any alternatives, modifications, equivalents, and variations that fall within the spirit and scope of the invention. In the description of the preferred embodiments of the invention, specific details are set forth in order to provide a thorough understanding of the invention; however, the invention can be fully understood by those skilled in the art without some of these details. In other instances, well-known methods, procedures, flows, components, circuits, and the like have not been described in detail so as not to unnecessarily obscure aspects of the present invention.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the scope of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411832841.9A CN119293764B (en) | 2024-12-13 | 2024-12-13 | Dark watermark generation system and generation method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119293764A CN119293764A (en) | 2025-01-10 |
CN119293764B true CN119293764B (en) | 2025-04-01 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344763A (en) * | 2021-08-09 | 2021-09-03 | 江苏羽驰区块链科技研究院有限公司 | Robust watermarking algorithm based on JND and oriented to screen shooting |
CN119049487A (en) * | 2024-07-23 | 2024-11-29 | 西安电子科技大学 | Wavelet packet domain self-adaptive quantization digital audio watermark embedding and extracting method based on fibonacci sequence and particle swarm optimization |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7840005B2 (en) * | 2002-01-22 | 2010-11-23 | Digimarc Corporation | Synchronization of media signals |
CN103974144B (en) * | 2014-05-23 | 2017-07-18 | 华中师范大学 | A kind of video digital watermark method of feature based change of scale invariant point and micro- scene detection |
CN115861017A (en) * | 2022-12-12 | 2023-03-28 | 浙江工商大学 | High-resolution image watermarking method based on multi-scale cross fusion residual error network and two-dimensional code |
CN117745509B (en) * | 2024-02-20 | 2024-04-26 | 四川数盾科技有限公司 | Digital watermark embedding method, system, equipment and medium based on Fourier transformation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||