
CN113987539B - Federated learning model security protection method and system based on secure shuffling and differential privacy - Google Patents

Federated learning model security protection method and system based on secure shuffling and differential privacy

Info

Publication number
CN113987539B
CN113987539B (application CN202111270844.4A)
Authority
CN
China
Prior art keywords
scrambling
federated learning
learning model
matrix
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111270844.4A
Other languages
Chinese (zh)
Other versions
CN113987539A (en)
Inventor
粟勇
刘圣龙
江伊雯
刘文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA REALTIME DATABASE CO LTD
Nanjing Nanrui Ruizhong Data Co ltd
State Grid Jiangsu Electric Power Co Ltd
State Grid Electric Power Research Institute
Big Data Center of State Grid Corp of China
Original Assignee
CHINA REALTIME DATABASE CO LTD
Nanjing Nanrui Ruizhong Data Co ltd
State Grid Jiangsu Electric Power Co Ltd
State Grid Electric Power Research Institute
Big Data Center of State Grid Corp of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHINA REALTIME DATABASE CO LTD, Nanjing Nanrui Ruizhong Data Co ltd, State Grid Jiangsu Electric Power Co Ltd, State Grid Electric Power Research Institute, Big Data Center of State Grid Corp of China filed Critical CHINA REALTIME DATABASE CO LTD
Priority to CN202111270844.4A priority Critical patent/CN113987539B/en
Publication of CN113987539A publication Critical patent/CN113987539A/en
Application granted granted Critical
Publication of CN113987539B publication Critical patent/CN113987539B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/602: Providing cryptographic facilities or services
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to data via a platform, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Storage Device Security (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a federated learning model security protection method and system based on secure shuffling and differential privacy. The federated model owner first uses differential privacy to add noise to the federated learning model parameters, generating noisy model parameters; the owner then encrypts the model parameters with a user authorization key and a secure shuffling algorithm, and sends the encrypted federated learning model parameters to the user. When the user uses the federated learning model locally, the user first decrypts the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federated learning model, and then feeds his own data into the model to obtain the desired output result. The invention protects both the privacy and the security of the original model while ensuring that the user obtains usable model results.

Description

Federated learning model security protection method and system based on secure shuffling and differential privacy
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a federated learning model security protection method and system based on secure shuffling and differential privacy.
Background
Federated learning is a widely studied and deployed artificial intelligence technology. Its goal is to carry out efficient machine learning among multiple participants or computing nodes while guaranteeing information security during big-data exchange, protecting the privacy of terminal and personal data, and ensuring legal compliance. Federated learning thus allows machine learning tasks to proceed without data leaving its local environment, protecting the privacy of each cooperating participant's training samples and alleviating the data-silo problem. However, although federated learning addresses the data privacy of each participant's training samples, existing differential-privacy protection techniques focus on sample privacy and parameter privacy during model training, and do not address the security and privacy of a released federated learning model that is used locally. The federated learning model is the joint result of multiple parties' cooperation; the original federated learning model is a data asset of the model owner, and protecting the privacy of its release and use remains an open problem. Therefore, how to release the original federated learning model securely while ensuring its usability to users is an important technical difficulty.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a federated learning model security protection method and system based on secure shuffling and differential privacy, which protect the privacy of federated learning model owners while ensuring the usability of the federated model obtained by users.
The federated learning model security protection method based on secure shuffling and differential privacy comprises the following steps:
(1) Add noise to the federated learning model parameters based on the differential-privacy Gaussian mechanism, generating noisy model parameters;
(2) Encrypt the differential-privacy-noised model parameters with a user authorization key and a secure shuffling algorithm, and transmit the encrypted federated learning model parameters to the user;
(3) Decrypt the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federated learning model;
(4) Feed the user's own data into the noisy federated learning model to obtain the desired output result.
Further, step (1) is implemented as follows:
The parameters of the federated learning model Π form an m×n matrix A_Π. Each element of A_Π is noised with the Gaussian mechanism of differential privacy, yielding the parameter matrix A′_Π of the noisy federated learning model Π′:
A′_Π(i, j) = A_Π(i, j) + α  (1)
wherein the Gaussian mechanism provides relaxed (ε, δ)-differential privacy; the noise scale satisfies σ ≥ cΔs/ε for a constant c, with ε ∈ (0, 1); the sensitivity Δs = max_{D,D′} ‖s(D) − s(D′)‖ denotes the largest difference in the output of the query function s over neighboring datasets; and the Gaussian noise distribution α ~ N(0, σ²) satisfies (ε, δ)-differential privacy, α being the noise value added to each entry of the matrix.
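As a concrete illustration, the per-entry noising of equation (1) can be sketched in NumPy. The choice c = sqrt(2·ln(1.25/δ)) is the standard constant for the Gaussian mechanism and is an assumption here; the text only requires σ ≥ cΔs/ε.

```python
import numpy as np

def gaussian_mechanism(A, epsilon, delta, sensitivity):
    """Add (epsilon, delta)-DP Gaussian noise to every entry of matrix A.

    Assumption: c = sqrt(2*ln(1.25/delta)), the standard constant; the
    patent only requires the noise scale sigma >= c * Delta_s / epsilon.
    """
    c = np.sqrt(2.0 * np.log(1.25 / delta))
    sigma = c * sensitivity / epsilon              # noise scale sigma
    alpha = np.random.normal(0.0, sigma, A.shape)  # alpha ~ N(0, sigma^2)
    return A + alpha, sigma                        # A'_Pi = A_Pi + alpha

A = np.zeros((3, 4))  # toy 3x4 parameter matrix A_Pi
A_noisy, sigma = gaussian_mechanism(A, epsilon=0.5, delta=1e-5, sensitivity=1.0)
```

With ε = 0.5 and δ = 1e-5, the resulting σ is roughly 9.7, illustrating how a tighter privacy budget forces larger noise.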
Further, step (2) comprises the following steps:
(21) Read in the m×n noisy federated learning model parameter matrix A′_Π;
(22) Initialize the logistic-map control parameters s, d, f and g, where s is the chaotic control parameter, d and f are the logistic-map control parameters for the sequences x_n and y_n respectively, and g is the coupling term; set the initial iteration discard count iter = 200; and supply the key key = {x, y}, where x and y are the two initial values of the chaotic map;
(23) Using the key as the initial value, iterate the map m×n + iter times and discard the first iter pairs, obtaining m×n pairs of chaotic sequence values, which are stored in the one-dimensional arrays P and Q of size m×n respectively;
(24) Run the secure shuffling algorithm (Algorithm 1) on the elements of P and Q to obtain two integer-valued one-dimensional arrays P′ and Q′;
(25) Sort P′ and Q′ to generate two one-dimensional pseudorandom sequences P″ and Q″ of length m×n, whose element values are distinct integers in [0, m×n−1];
(26) Apply the following transformation to each element P″(k) and Q″(k) of the one-dimensional random sequences P″ and Q″, mapping them into two-dimensional scrambling matrices X and Y of size m×n,
wherein x(i, j) and y(i, j) are the elements of the two-dimensional scrambling matrices X and Y respectively;
(27) First scramble the matrix A′_Π with the scrambling matrix X to obtain a temporary model-parameter scrambling intermediate result, then position-scramble that intermediate result with Y to obtain the final noisy federated learning model parameter scrambling ciphertext.
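Steps (23) through (27) can be sketched minimally in NumPy. Two caveats: the keyed chaotic map is replaced by a seeded generator for brevity, and the index-mapping convention of the position scramble (output entry k takes input entry perm[k]) is an assumption, since the patent's transformation formula is not reproduced in this text.

```python
import numpy as np

m, n = 4, 5
rng = np.random.default_rng(2021)      # stand-in for the keyed chaotic map

# steps (23)-(25): chaotic reals P, Q -> rank permutations P'', Q''
P = rng.random(m * n)
Q = rng.random(m * n)
P2 = np.argsort(P)                     # distinct integers in [0, m*n - 1]
Q2 = np.argsort(Q)

# step (26): reshape the 1-D permutations into m x n scrambling matrices
X = P2.reshape(m, n)
Y = Q2.reshape(m, n)

# step (27): scramble A'_Pi first by X, then position-scramble by Y
# (assumed convention: output entry k takes input entry perm[k])
A_noisy = rng.normal(size=(m, n))                  # toy noisy parameters
tmp = A_noisy.flatten()[X.flatten()].reshape(m, n)
cipher = tmp.flatten()[Y.flatten()].reshape(m, n)
```

The ciphertext is a pure rearrangement of the noisy parameters, so an adversary without the key sees the right value distribution but in key-dependent positions.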
Further, step (3) is implemented as follows:
(31) Given the same key key = {x, y} as used in the encryption process, generate the scrambling matrices X and Y from x and y;
(32) First apply the inverse of the Y scrambling to the noisy federated learning model parameter scrambling ciphertext to obtain a temporary model-parameter scrambling intermediate result, then apply the inverse of the X scrambling to that intermediate result to obtain the noisy parameter model.
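Decryption inverts the two permutations in reverse order, Y first and then X. A hedged round-trip sketch, assuming the forward convention that output entry k takes input entry perm[k]:

```python
import numpy as np

def scramble(M, perm):
    """Assumed forward convention: output entry k takes input entry perm[k]."""
    return M.flatten()[perm].reshape(M.shape)

def unscramble(C, perm):
    """Inverse: return entry k of the ciphertext to position perm[k]."""
    out = np.empty(C.size)
    out[perm] = C.flatten()
    return out.reshape(C.shape)

m, n = 3, 4
rng = np.random.default_rng(7)         # stand-in for the keyed chaotic map
px = np.argsort(rng.random(m * n))     # permutation behind matrix X
py = np.argsort(rng.random(m * n))     # permutation behind matrix Y

A_noisy = rng.normal(size=(m, n))               # noisy model parameters
cipher = scramble(scramble(A_noisy, px), py)    # encrypt: X, then Y
plain = unscramble(unscramble(cipher, py), px)  # decrypt: Y, then X
```

Because an authorized user regenerates the same px and py from the shared key, the round trip recovers the noisy parameter matrix exactly.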
Based on the same inventive concept, the invention further provides a federated learning model security protection system based on secure shuffling and differential privacy, comprising a parameter processing module, an encryption module and a decryption module. The parameter processing module adds noise to the federated learning model parameters based on the differential-privacy Gaussian mechanism to generate noisy model parameters; the encryption module encrypts the differential-privacy-noised model parameters with a user authorization key and a secure shuffling algorithm and sends the encrypted federated learning model parameters to the user; and the decryption module decrypts the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federated learning model.
Further, the parameter processing module works as follows:
The parameters of the federated learning model Π form an m×n matrix A_Π. Each element of A_Π is noised with the Gaussian mechanism of differential privacy, yielding the parameter matrix A′_Π of the noisy federated learning model Π′:
A′_Π(i, j) = A_Π(i, j) + α  (1)
wherein the Gaussian mechanism provides relaxed (ε, δ)-differential privacy; the noise scale satisfies σ ≥ cΔs/ε for a constant c, with ε ∈ (0, 1); the sensitivity Δs = max_{D,D′} ‖s(D) − s(D′)‖ denotes the largest difference in the output of the query function s over neighboring datasets; and the Gaussian noise distribution α ~ N(0, σ²) satisfies (ε, δ)-differential privacy, α being the noise value added to each entry of the matrix.
Further, the encryption module works as follows:
(S1) Read in the m×n noisy federated learning model parameter matrix A′_Π;
(S2) Initialize the logistic-map control parameters s, d, f and g, where s is the chaotic control parameter, d and f are the logistic-map control parameters for the sequences x_n and y_n respectively, and g is the coupling term; set the initial iteration discard count iter = 200; and supply the key key = {x, y}, where x and y are the two initial values of the chaotic map;
(S3) Using the key as the initial value, iterate the map m×n + iter times and discard the first iter pairs, obtaining m×n pairs of chaotic sequence values, which are stored in the one-dimensional arrays P and Q respectively;
(S4) Run the secure shuffling algorithm (Algorithm 1) on the elements of P and Q to obtain two integer-valued one-dimensional arrays P′ and Q′;
(S5) Sort P′ and Q′ to generate two one-dimensional pseudorandom sequences P″ and Q″ of length m×n, whose element values are distinct integers in [0, m×n−1];
(S6) Apply the following transformation to each element P″(k) and Q″(k) of the one-dimensional random sequences P″ and Q″, mapping them into two-dimensional scrambling matrices X and Y of size m×n,
wherein x(i, j) and y(i, j) are the elements of the two-dimensional scrambling matrices X and Y respectively;
(S7) First scramble the matrix A′_Π with the scrambling matrix X to obtain a temporary model-parameter scrambling intermediate result, then position-scramble that intermediate result with the scrambling matrix Y to obtain the final noisy federated learning model parameter scrambling ciphertext.
Further, the decryption module works as follows:
(H1) Given the same key key = {x, y} as used in the encryption process, generate the scrambling matrices X and Y from x and y;
(H2) First apply the inverse of the Y scrambling to the noisy federated learning model parameter scrambling ciphertext to obtain a temporary model-parameter scrambling intermediate result, then apply the inverse of the X scrambling to that intermediate result to obtain the noisy parameter model.
Compared with the prior art, the invention has the following beneficial effects: the model owner uses differential privacy to protect the privacy of the real federated learning model; because the noise is added to the already-trained model parameters, it does not interfere with neural network training, so the usability of the noisy federated learning model is preserved; and the combination of secure shuffling with an authorization key ensures that only authorized users can obtain the noisy federated learning model, thereby achieving secure release of the federated learning model for local use.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of forward shuffling of the federal learning model;
Fig. 3 is a flow chart of reverse shuffling of the federal learning model.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
The invention provides a security protection method, based on secure shuffling and differential privacy, for releasing federated learning models for local use; it protects the model assets of model owners while preserving the usability of the federated learning model obtained by users. As shown in fig. 1, the method comprises the following steps:
Step 1: the federated model owner adds differential-privacy noise to the federated learning model parameters, generating noisy model parameters. The parameters are described in Table 1:
Table 1 parameter description
Assume the parameters of the federated learning model Π form an m×n matrix A_Π. Each element of A_Π is noised with the Gaussian mechanism of differential privacy, yielding the parameter matrix A′_Π of the noisy federated learning model Π′, according to the formula:
A′_Π(i, j) = A_Π(i, j) + α  (1)
The Gaussian mechanism provides relaxed (ε, δ)-differential privacy. To ensure that the added Gaussian noise distribution α ~ N(0, σ²) satisfies (ε, δ)-differential privacy, where α is the noise value added to each entry of the matrix, the noise scale is set to σ ≥ cΔs/ε, with constant c, ε ∈ (0, 1), and sensitivity Δs = max_{D,D′} ‖s(D) − s(D′)‖, the largest difference in the output of the query function s over neighboring datasets.
Step 2: encrypt the model parameters with the user authorization key and the secure shuffling algorithm, and send the encrypted federated learning model parameters to the user, as shown in fig. 2.
1) Read in the m×n noisy federated learning model parameter matrix A′_Π.
2) Initialize the logistic-map control parameters s = 4, d = 0.9, f = 0.9, g = 0.1 and the initial iteration discard count iter = 200, and supply the key key = {x, y}, where x and y are the two initial values of the chaotic map.
3) Using the key as the initial value, iterate the map m×n + iter times and discard the first iter pairs of chaotic sequence values, obtaining m×n pairs that are stored in the one-dimensional arrays P and Q of size m×n respectively.
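The coupled-map formula for step 3) is not reproduced in this text (it appeared as an image), so the coupled logistic map below is only an assumed stand-in, chosen to be consistent with the named parameters s, d, f, g and the iter = 200 discard:

```python
import numpy as np

def chaotic_sequences(x, y, s=4.0, d=0.9, f=0.9, g=0.1, length=20, iters=200):
    """Generate the arrays P, Q of step 3) from key = {x, y}.

    Assumed map (the patent's exact formula is not in this text):
        x_{n+1} = s*d*x_n*(1 - x_n) + g*y_n
        y_{n+1} = s*f*y_n*(1 - y_n) + g*x_n
    The first `iters` pairs are generated and discarded.
    """
    P, Q = np.empty(length), np.empty(length)
    for i in range(length + iters):
        # update both components simultaneously from the previous pair
        x, y = s * d * x * (1 - x) + g * y, s * f * y * (1 - y) + g * x
        if i >= iters:
            P[i - iters], Q[i - iters] = x, y
    return P, Q

P, Q = chaotic_sequences(0.31, 0.67, length=12)
```

With these parameter values the assumed map keeps both sequences inside [0, 1], since s·d·x(1−x) ≤ 0.9 and the coupling term adds at most 0.1.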
4) Run the secure shuffling algorithm on the elements of P and Q to obtain two integer-valued one-dimensional arrays P′ and Q′, as shown in Table 2:
Table 2 Secure shuffling algorithm
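The algorithm of Table 2 is not reproduced in this text, so the integerization below (scaling each chaotic value by 10^14 and truncating) is only an assumed stand-in; any order-preserving map from distinct reals to integers yields the same rank sequence in step 5):

```python
import numpy as np

def shuffle_to_permutation(P):
    """Steps 4)-5): chaotic reals -> integer array P' -> permutation P''.

    The integer step (scale by 1e14 and truncate) is an assumption standing
    in for the unreproduced Table 2 algorithm.
    """
    P1 = np.floor(P * 1e14).astype(np.int64)   # integer-valued P'
    P2 = np.argsort(P1, kind="stable")         # P'': permutation of [0, N-1]
    return P1, P2

P = np.random.default_rng(3).random(8)         # toy chaotic sequence
P1, P2 = shuffle_to_permutation(P)
```

Whatever the exact Table 2 construction, the essential property used downstream is that P″ is a key-dependent permutation of [0, m×n−1].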
5) Sort P′ and Q′ to generate two one-dimensional pseudorandom sequences P″ and Q″ of length m×n, whose element values are distinct integers in [0, m×n−1].
6) Apply the following transformation to each element P″(k) and Q″(k) of the one-dimensional random sequences P″ and Q″, mapping them into two-dimensional scrambling matrices X and Y of size m×n:
where x(i, j) and y(i, j) are the elements of the two-dimensional scrambling matrices X and Y respectively.
7) First scramble the matrix A′_Π with the scrambling matrix X to obtain a temporary model-parameter scrambling intermediate result, then position-scramble that intermediate result with the scrambling matrix Y to obtain the final noisy federated learning model parameter scrambling ciphertext.
Step 3: when using the federated learning model locally, the user decrypts the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federated learning model; the user then feeds his own data into the noisy federated learning model to obtain the desired output result, as shown in fig. 3.
After receiving the noisy federated learning model parameter scrambling ciphertext and the key key = {x, y}, the user performs the inverse of the federated learning model owner's operations to recover the noisy parameter model Π′, and uses that model to process his own data. The specific process is as follows:
1) Given the same key key = {x, y} as used in the federated learning model owner's encryption process, generate the scrambling matrices X and Y from x and y;
2) First apply the inverse of the Y scrambling to the noisy federated learning model parameter scrambling ciphertext to obtain a temporary model-parameter scrambling intermediate result, then apply the inverse of the X scrambling to that intermediate result to obtain a usable noisy parameter model.
Based on the same inventive concept, the invention further provides a federated learning model security protection system based on secure shuffling and differential privacy, comprising a parameter processing module, an encryption module and a decryption module. The parameter processing module adds noise to the federated learning model parameters based on the differential-privacy Gaussian mechanism to generate noisy model parameters; the encryption module encrypts the differential-privacy-noised model parameters with a user authorization key and a secure shuffling algorithm and sends the encrypted federated learning model parameters to the user; and the decryption module decrypts the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federated learning model.
The parameter processing module works as follows:
The parameters of the federated learning model Π form an m×n matrix A_Π. Each element of A_Π is noised with the Gaussian mechanism of differential privacy, yielding the parameter matrix A′_Π of the noisy federated learning model Π′:
A′_Π(i, j) = A_Π(i, j) + α  (1)
wherein the Gaussian mechanism provides relaxed (ε, δ)-differential privacy; the noise scale satisfies σ ≥ cΔs/ε for a constant c, with ε ∈ (0, 1); the sensitivity Δs = max_{D,D′} ‖s(D) − s(D′)‖ denotes the largest difference in the output of the query function s over neighboring datasets; and the Gaussian noise distribution α ~ N(0, σ²) satisfies (ε, δ)-differential privacy, α being the noise value added to each entry of the matrix.
The encryption module works as follows:
(S1) Read in the m×n noisy federated learning model parameter matrix A′_Π;
(S2) Initialize the logistic-map control parameters s, d, f and g, where s is the chaotic control parameter, d and f are the logistic-map control parameters for the sequences x_n and y_n respectively, and g is the coupling term; set the initial iteration discard count iter = 200; and supply the key key = {x, y}, where x and y are the two initial values of the chaotic map;
(S3) Using the key as the initial value, iterate the map m×n + iter times and discard the first iter pairs, obtaining m×n pairs of chaotic sequence values, which are stored in the one-dimensional arrays P and Q respectively;
(S4) Run the secure shuffling algorithm (Algorithm 1) on the elements of P and Q to obtain two integer-valued one-dimensional arrays P′ and Q′;
(S5) Sort P′ and Q′ to generate two one-dimensional pseudorandom sequences P″ and Q″ of length m×n, whose element values are distinct integers in [0, m×n−1];
(S6) Apply the following transformation to each element P″(k) and Q″(k) of the one-dimensional random sequences P″ and Q″, mapping them into two-dimensional scrambling matrices X and Y of size m×n,
wherein x(i, j) and y(i, j) are the elements of the two-dimensional scrambling matrices X and Y respectively;
(S7) First scramble the matrix A′_Π with the scrambling matrix X to obtain a temporary model-parameter scrambling intermediate result, then position-scramble that intermediate result with the scrambling matrix Y to obtain the final noisy federated learning model parameter scrambling ciphertext.
The decryption module works as follows:
(H1) Given the same key key = {x, y} as used in the encryption process, generate the scrambling matrices X and Y from x and y;
(H2) First apply the inverse of the Y scrambling to the noisy federated learning model parameter scrambling ciphertext to obtain a temporary model-parameter scrambling intermediate result, then apply the inverse of the X scrambling to that intermediate result to obtain the noisy parameter model.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the specific embodiments of the present invention without departing from the spirit and scope of the present invention, and any modifications and equivalents are intended to be included in the scope of the claims of the present invention.

Claims (7)

1. A federated learning model security protection method based on secure shuffling and differential privacy, characterized by comprising the following steps:
(1) adding noise to the federated learning model parameters based on the differential-privacy Gaussian mechanism, generating noisy model parameters;
(2) encrypting the differential-privacy-noised model parameters with a user authorization key and a secure shuffling algorithm, and transmitting the encrypted federated learning model parameters to a user;
(3) decrypting the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federated learning model;
(4) feeding the user's own data into the noisy federated learning model to obtain the desired output result;
wherein step (2) comprises the following steps:
(21) reading in the m×n noisy federated learning model parameter matrix A′_Π;
(22) initializing logistic-map control parameters s, d, f and g, wherein s is the chaotic control parameter, d and f are the logistic-map control parameters mapping the chaotic sequence values x_n and y_n respectively, and g is a coupling term; setting the initial iteration discard count iter = 200; and supplying a key key = {x, y}, wherein x and y are the two initial values of the chaotic map;
(23) using the key as the initial value, iterating the map m×n + iter times and discarding the first iter pairs, obtaining m×n chaotic sequence values x_n, y_n that are stored in one-dimensional arrays P and Q of size m×n respectively;
(24) running the secure shuffling algorithm on the elements of P and Q to obtain two integer-valued one-dimensional arrays P′ and Q′;
(25) sorting P′ and Q′ to generate two one-dimensional pseudorandom sequences P″ and Q″ of length m×n, whose element values are distinct integers in [0, m×n−1];
(26) applying the following transformation to each element P″(k) and Q″(k) of the one-dimensional random sequences P″ and Q″, mapping them into two-dimensional scrambling matrices X and Y of size m×n,
wherein x(i, j) and y(i, j) are the elements of the two-dimensional scrambling matrices X and Y respectively;
(27) first scrambling the noisy parameter matrix with the scrambling matrix X to obtain a temporary model-parameter scrambling intermediate result, and then position-scrambling the intermediate result with Y to obtain the final noisy federated learning model parameter scrambling ciphertext.
2. The federated learning model security protection method based on secure shuffling and differential privacy according to claim 1, characterized in that step (1) is implemented as follows:
The parameters of the federated learning model Π form an m×n matrix A_Π. Each element of A_Π is noised with the Gaussian mechanism of differential privacy, yielding the parameter matrix A′_Π of the noisy federated learning model Π′:
A′_Π(i, j) = A_Π(i, j) + α  (1)
wherein the Gaussian mechanism provides relaxed (ε, δ)-differential privacy; the noise scale satisfies σ ≥ cΔs/ε for a constant c, with ε ∈ (0, 1); the sensitivity Δs = max_{D,D′} ‖s(D) − s(D′)‖, where D is the dataset and D′ a neighboring dataset differing from D in only one record, denotes the largest difference in the output of the query function s over neighboring datasets; and the Gaussian noise distribution α ~ N(0, σ²) satisfies (ε, δ)-differential privacy, α being the noise value added to each entry of the matrix.
3. The federal learning model security protection method based on secure shuffling and differential privacy according to claim 1, wherein the implementation procedure of step (3) is as follows:
(31) Given the same key = {x, y} as used by the owner of the federal learning model during encryption, generating the scrambling matrices X and Y from the key in the same way as in the encryption process;
(32) Firstly performing position scrambling on the noisy federal learning model parameter scrambling ciphertext with the scrambling matrix Y to obtain a temporary model parameter scrambling intermediate result, and then performing position scrambling on this intermediate result with X to obtain the noisy parameter model.
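Since the scrambling is pure position permutation, decryption is exact: undo Y first, then X, exploiting the fact that for a permutation p of 0..N−1, argsort(p) is its inverse. A hedged sketch (NumPy assumed; the claim does not spell out whether the decryptor applies the matrices directly or their inverses, so this version uses explicit inverse permutations to make the round trip exact):

```python
import numpy as np

def scramble(a, x_mat, y_mat):
    # Encryption order: permute flat positions with X, then with Y.
    m, n = a.shape
    tmp = a.flatten()[x_mat.flatten()]
    return tmp[y_mat.flatten()].reshape(m, n)

def unscramble(c, x_mat, y_mat):
    # Decryption (steps (31)-(32)): undo Y first, then X;
    # argsort of a permutation is its inverse permutation.
    flat = c.flatten()[np.argsort(y_mat.flatten())]
    return flat[np.argsort(x_mat.flatten())].reshape(c.shape)
```

The round trip unscramble(scramble(a, X, Y), X, Y) returns a unchanged, since indexing never alters the values themselves.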
4. A federal learning model security protection system based on secure shuffling and differential privacy, adopting the method of any one of claims 1 to 3 and comprising a parameter processing module, an encryption module and a decryption module, wherein the parameter processing module is used for noising the federal learning model parameters based on the differential privacy Gaussian mechanism to generate noisy model parameters; the encryption module is used for encrypting the differentially-private noised model parameters with a user authorization key and a secure shuffling algorithm and sending the encrypted federal learning model parameters to the user; and the decryption module is used for decrypting the model parameter ciphertext with the user authorization key and the secure shuffling algorithm to obtain the noisy federal learning model.
5. The federal learning model security protection system based on secure shuffling and differential privacy of claim 4, wherein the parameter processing module operates as follows:
The parameters of the federal learning model Π form an m×n matrix AΠ, and each element of AΠ is noised using the Gaussian mechanism of differential privacy to obtain the parameter matrix A′Π of the noisy federal learning model Π′:
A′Π(i,j) = AΠ(i,j) + α (1)
wherein the Gaussian mechanism provides relaxed (ε, δ)-differential privacy; the noise scale satisfies σ ≥ cΔs/ε with the constant c ≥ √(2 ln(1.25/δ)); the sensitivity is Δs = max_{D,D′} ‖s(D) − s(D′)‖₂; the Gaussian noise distribution α ~ N(0, σ²) then satisfies (ε, δ)-differential privacy; α is the noise value added to each datum of the matrix; D is the dataset; D′ denotes a neighbor dataset differing from D in only one record; and the sensitivity is the maximum difference in the output of the query function s over neighbor datasets.
6. The federal learning model security protection system based on secure shuffling and differential privacy of claim 4, wherein the cryptographic module operates as follows:
(S1) Reading the m×n noisy federal learning model parameter matrix AΠ;
(S2) Initializing the logistic-map control parameters S, d, f and g, wherein S is a chaotic control parameter, d and f are the control parameters of the logistic maps for x_n and y_n respectively, g is the coupling term, the number of discarded initial iterations is iter = 200, and the key = {x, y}, wherein x and y are the two initial values of the chaotic map;
(S3) Taking the key as the initial value, iterating the chaotic map to produce m×n + iter pairs of chaotic sequence values and discarding the first iter pairs, thereby obtaining m×n chaotic sequence values x_n, y_n, which are stored in one-dimensional arrays P and Q of size m×n respectively:
(S4) Running a secure shuffling algorithm on the elements of P and Q, and calculating two integer-valued one-dimensional matrices P′ and Q′;
(S5) Sorting by P′ and Q′ to generate two one-dimensional pseudorandom sequence matrices P″ and Q″ of length m×n, wherein the element values of P″ and Q″ are distinct integers in [0, m×n−1];
(S6) Performing the following transformation on each element P″(k) and Q″(k) of the one-dimensional random sequences P″ and Q″, mapping them into two-dimensional scrambling matrices X and Y of size m×n;
wherein x(i, j) and y(i, j) are the elements of the two-dimensional scrambling matrices X and Y, respectively;
(S7) Firstly performing position scrambling on the matrix AΠ with the scrambling matrix X to obtain a temporary model parameter scrambling intermediate result, and then performing position scrambling on this intermediate result with the scrambling matrix Y to obtain the final noisy federal learning model parameter scrambling ciphertext.
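Steps (S1)–(S7) can be sketched end-to-end as follows. The patent's exact coupled-map equations are not reproduced in this excerpt, so the coupling form below (logistic maps with a small cross-coupling g, kept inside [0, 1) by a fractional part) is an illustrative assumption; likewise, the intermediate integer matrices P′ and Q′ of step (S4) are collapsed into a single ranking, and all names are illustrative:

```python
import numpy as np

def coupled_logistic_sequences(x0, y0, m, n, d=3.99, f=3.98, g=0.001, n_discard=200):
    # (S2)-(S3): iterate a coupled logistic map m*n + n_discard times and
    # discard the first n_discard transient pairs (iter = 200 in the claim).
    x, y = x0, y0
    P = np.empty(m * n)
    Q = np.empty(m * n)
    for k in range(m * n + n_discard):
        x, y = (d * x * (1 - x) + g * y) % 1.0, (f * y * (1 - y) + g * x) % 1.0
        if k >= n_discard:
            P[k - n_discard] = x
            Q[k - n_discard] = y
    return P, Q

def scrambling_matrices(P, Q, m, n):
    # (S4)-(S6): rank the chaotic values into two pseudorandom permutations
    # of 0..m*n-1 and reshape them into m x n scrambling matrices.
    return np.argsort(P).reshape(m, n), np.argsort(Q).reshape(m, n)

def encrypt(a, key):
    # (S7): position-scramble the flat parameters with X first, then with Y.
    m, n = a.shape
    P, Q = coupled_logistic_sequences(key[0], key[1], m, n)
    X, Y = scrambling_matrices(P, Q, m, n)
    tmp = a.flatten()[X.flatten()]
    return tmp[Y.flatten()].reshape(m, n)
```

Because the cipher only permutes positions, the ciphertext contains exactly the same multiset of (noisy) parameter values as the input matrix; confidentiality rests on the key-derived permutations and on the differential-privacy noise added beforehand.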
7. The federal learning model security protection system based on secure shuffling and differential privacy of claim 4, wherein the decryption module operates as follows:
(H1) Given the same key = {x, y} as in the encryption process, generating the scrambling matrices X and Y from the key;
(H2) Firstly performing position scrambling on the noisy federal learning model parameter scrambling ciphertext with the scrambling matrix Y to obtain a temporary model parameter scrambling intermediate result, and then performing position scrambling on this intermediate result with X to obtain the noisy parameter model.
CN202111270844.4A 2021-10-29 2021-10-29 Federal learning model safety protection method and system based on safety shuffling and differential privacy Active CN113987539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111270844.4A CN113987539B (en) 2021-10-29 2021-10-29 Federal learning model safety protection method and system based on safety shuffling and differential privacy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111270844.4A CN113987539B (en) 2021-10-29 2021-10-29 Federal learning model safety protection method and system based on safety shuffling and differential privacy

Publications (2)

Publication Number Publication Date
CN113987539A CN113987539A (en) 2022-01-28
CN113987539B true CN113987539B (en) 2025-07-22

Family

ID=79744275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111270844.4A Active CN113987539B (en) 2021-10-29 2021-10-29 Federal learning model safety protection method and system based on safety shuffling and differential privacy

Country Status (1)

Country Link
CN (1) CN113987539B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114817985A (en) * 2022-04-22 2022-07-29 广东电网有限责任公司 Privacy protection method, device, equipment and storage medium for electricity consumption data
CN116074085A (en) * 2023-01-15 2023-05-05 浙江工业大学 A data security protection method for an intelligent networked car machine
CN119341747A (en) * 2024-09-24 2025-01-21 武汉大学 Polynomial operation acceleration method, device, equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110166784A (en) * 2018-01-17 2019-08-23 重庆邮电大学 A kind of adapting to image texture area steganographic algorithm based on block of pixels
CN112966298A (en) * 2021-03-01 2021-06-15 广州大学 Composite privacy protection method, system, computer equipment and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN104376267A (en) * 2014-11-20 2015-02-25 内江师范学院 Image shuffling encrypting method based on fractional order chaotic mapping
CN110572253B (en) * 2019-09-16 2023-03-24 济南大学 Method and system for enhancing privacy of federated learning training data


Also Published As

Publication number Publication date
CN113987539A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN113987539B (en) Federal learning model safety protection method and system based on safety shuffling and differential privacy
CN113538203B (en) Image encryption method and device based on novel two-dimensional composite chaotic mapping and SHA-256
CN116582246B (en) Vector geospatial data exchange cipher watermarking method based on chaos and zero watermarking
CN114679250B (en) Image encryption algorithm based on mixed chaos and Arnold transformation
Zhou A quantum image encryption method based on DNACNot
CN117440101A (en) Image encryption algorithm and electronic equipment based on multi-chaotic system and circular DNA operation
CN112887075B (en) Encryption method of similar full-connection network image based on plaintext correlation
CN108199828A (en) A kind of color image Encryption Algorithm and device
CN104851071A (en) Digital image encryption method based on three-dimensional chaotic system
CN116305211A (en) Image encryption processing method and device
CN115580687A (en) Multi-image encryption method based on variable parameter hyperchaotic system and S-shaped diffusion
Wang et al. Chaotic image encryption algorithm based on dynamic spiral scrambling transform and deoxyribonucleic acid encoding operation
Wang et al. Quantum cryptosystem and circuit design for color image based on novel 3D Julia-fractal chaos system
CN112805704A (en) Method and system for protecting data
Hao et al. A novel color image encryption algorithm based on the fractional order laser chaotic system and the DNA mutation principle
Huang et al. Image encryption based on a novel memristive chaotic system, Grain-128a algorithm and dynamic pixel masking
Rodríguez-Muñoz et al. Chaos-based authentication of encrypted images under MQTT for IoT protocol
CN114143413A (en) Image data PUF (physical unclonable function) security encryption system and encryption method
CN105005961B (en) Suitable for the information disguising and restoring method of TIN digital elevation model
CN118018659A (en) Image encryption and decryption method and system based on SM2 and DNA
CN116232562B (en) Model reasoning method and device
Mukherjee et al. Fibonacci based text hiding using image cryptography
CN109559269A (en) A kind of method and terminal of image encryption
CN106683030B (en) Quantum multi-image encryption algorithm based on quantum multi-image model and three-dimensional transformation
CN109948353A (en) Asymmetric multi-image encryption method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 211106 No. 19 Chengxin Avenue, Jiangning Economic and Technological Development Zone, Nanjing City, Jiangsu Province

Applicant after: STATE GRID ELECTRIC POWER RESEARCH INSTITUTE Co.,Ltd.

Applicant after: Nanjing Nanrui Ruizhong Data Co.,Ltd.

Applicant after: STATE GRID JIANGSU ELECTRIC POWER Co.,Ltd.

Applicant after: Big data center of State Grid Corporation of China

Address before: 211106 No. 19 Chengxin Avenue, Jiangning Economic and Technological Development Zone, Nanjing City, Jiangsu Province

Applicant before: STATE GRID ELECTRIC POWER RESEARCH INSTITUTE Co.,Ltd.

Country or region before: China

Applicant before: CHINA REALTIME DATABASE Co.,Ltd.

Applicant before: STATE GRID JIANGSU ELECTRIC POWER Co.,Ltd.

Applicant before: Big data center of State Grid Corporation of China

GR01 Patent grant