
TWI846601B - Operating system and method for a fully homomorphic encryption neural network model - Google Patents

Operating system and method for a fully homomorphic encryption neural network model

Info

Publication number
TWI846601B
Authority
TW
Taiwan
Prior art keywords
vector
plaintext
generate
neural network
network model
Prior art date
Application number
TW112135707A
Other languages
Chinese (zh)
Other versions
TW202514437A (en)
Inventor
劉姿利
顧昱得
何明倩
許之凡
陳維超
劉峰豪
張明清
洪士灝
Original Assignee
英業達股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 英業達股份有限公司
Priority to TW112135707A
Application granted
Publication of TWI846601B
Publication of TW202514437A


Abstract

An operating method for a fully homomorphic encrypted neural network model is provided, wherein the fully homomorphic encrypted neural network model includes a plurality of layers, and the method performed by a processor includes: for one of the plurality of layers, encrypting a plaintext input with a first encryption algorithm to generate a ciphertext vector, performing a convolution operation according to the ciphertext vector to generate a result vector, transforming the result vector into a plurality of result ciphertexts adopting a second encryption algorithm, inputting the plurality of result ciphertexts into an activation function to generate a plurality of encrypted activation values, and repacking the plurality of encrypted activation values to generate an output vector adopting the first encryption algorithm.

Description

Operating System and Method for a Fully Homomorphic Encrypted Neural Network Model

The present invention relates to fully homomorphic encryption and neural networks, and in particular to an operating system and method for a fully homomorphic encrypted neural network model.

Machine Learning as a Service (MLaaS) allows users to upload data to a neural network model running on a cloud platform for inference. Although MLaaS is highly convenient, users' private data is submitted to or stored in an external environment, so maintaining data privacy has become an important issue.

Fully Homomorphic Encryption (FHE) allows computation to be performed directly on encrypted data. In other words, the neural network model never needs the original data during computation, which protects the privacy of the user's data. Currently, two FHE schemes are commonly used for encrypted neural network inference: CKKS and FHEW/TFHE. The CKKS scheme supports floating-point arithmetic and can therefore execute linear operations efficiently; however, it does not support non-polynomial operations. FHEW/TFHE supports lightweight bit or integer operations and includes a functional bootstrapping technique that enables non-polynomial operations. Combining the two schemes, using CKKS for its efficient linear operations and FHEW/TFHE for its functional bootstrapping, therefore seems applicable to encrypted neural network models. However, existing experimental data show that the inference accuracy of a neural network model built by naively combining these two encryption schemes drops significantly.

In view of this, the present invention proposes an operating system and method for a fully homomorphic encrypted neural network model, thereby solving the problem of decreased inference accuracy caused by naively combining the CKKS and FHEW/TFHE schemes.

According to an embodiment of the present invention, a method for operating a fully homomorphic encrypted neural network model, wherein the model includes multiple layers, comprises executing with a processor, for one of the layers: encrypting a plaintext input with a first encryption algorithm to generate a ciphertext vector; performing a convolution operation on the ciphertext vector to generate a result vector; converting the result vector into multiple result ciphertexts under a second encryption algorithm; inputting the result ciphertexts into an activation function to generate multiple encrypted activation values; and repacking the encrypted activation values to generate an output vector under the first encryption algorithm.

According to an embodiment of the present invention, an operating system for a fully homomorphic encrypted neural network model includes a memory and a processor. The memory stores multiple instructions. The processor is electrically connected to the memory to execute the instructions, which perform the following operations on one of the layers of the fully homomorphic encrypted neural network model: encrypting a plaintext input with a first encryption algorithm to generate a ciphertext vector; performing a convolution operation on the ciphertext vector to generate a result vector; converting the result vector into multiple result ciphertexts under a second encryption algorithm; inputting the result ciphertexts into an activation function to generate multiple encrypted activation values; and repacking the encrypted activation values to generate an output vector under the first encryption algorithm.

The above description of the present disclosure and the following description of the embodiments demonstrate and explain the spirit and principles of the present invention, and provide further explanation of the scope of the claims.

The detailed features and characteristics of the present invention are described in the embodiments below in sufficient detail to enable any person skilled in the relevant art to understand and implement the technical content of the present invention. Based on the content disclosed in this specification, the claims, and the drawings, any person skilled in the relevant art can easily understand the related concepts and features of the present invention. The following embodiments further illustrate the aspects of the present invention but do not limit its scope in any way.

Figure 1 is a block diagram of an operating system for a fully homomorphic encrypted neural network model according to an embodiment of the present invention. As shown in Figure 1, the operating system 10 includes a memory 1 and a processor 3 electrically connected to the memory 1.

The memory 1 is used to store multiple instructions. In one embodiment, the memory 1 may be implemented with one or more of the following: dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), flash memory, and a hard disk. The present invention does not limit the type or quantity of the memory 1.

The processor 3 is used to execute the instructions stored in the memory 1. In one embodiment, the processor 3 may be implemented with one or more of the following: a personal computer, a network server, a microcontroller (MCU), an application processor (AP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system-on-a-chip (SoC), and a deep learning accelerator. The present invention does not limit the type or quantity of the processor 3.

The fully homomorphic encrypted neural network model has multiple layers, and the instructions perform the process shown in Figure 2 on one of those layers. Figure 2 is a flowchart of the operating method of the fully homomorphic encrypted neural network model according to an embodiment of the present invention, including steps S1 to S5. Figure 3 is a schematic diagram of the operating method, showing the input, output, and corresponding operation of each step in Figure 2.

Step S1: encrypt the plaintext input with the first encryption algorithm to generate a ciphertext vector.

In one embodiment, the first encryption algorithm is the CKKS (Cheon-Kim-Kim-Song) algorithm; see "Cheon, J.H., Kim, A., Kim, M., Song, Y.: Homomorphic encryption for arithmetic of approximate numbers. In: ASIACRYPT, pp. 409–437. Springer (2017)". CKKS supports arithmetic operations on vectors of floating-point numbers. During encryption, the plaintext input (a floating-point vector) is first encoded into a plaintext polynomial and then encrypted into a ciphertext vector. The operation of step S1 can therefore be expressed as CKKS(m) = Enc(Ecd(m)), where m ∈ R^n is the plaintext input, Ecd(·) is the encoding operation, and Enc(·) is the encryption operation.
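The encode-then-encrypt structure can be sketched with a minimal fixed-point encoding in Python. This is a toy stand-in for Ecd(·) and its inverse, not the actual CKKS polynomial encoding; the scaling factor DELTA is an illustrative assumption:

```python
# Toy sketch of the CKKS-style encode step: scale floats by a scaling
# factor and round to integers. Real CKKS embeds the vector into a
# polynomial ring; this only illustrates the fixed-point idea.
DELTA = 2 ** 20  # scaling factor (illustrative choice)

def ecd(m):
    """Encode a float vector into scaled integers."""
    return [round(x * DELTA) for x in m]

def dcd(p):
    """Decode scaled integers back to floats."""
    return [x / DELTA for x in p]

m = [0.5, -1.25, 3.0]
assert all(abs(a - b) < 1e-5 for a, b in zip(dcd(ecd(m)), m))
```

The round-trip error is bounded by 1/DELTA per slot, which mirrors how the CKKS scaling factor trades precision against ciphertext modulus budget.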

Note that step S1 applies to the first convolution layer executed in the neural network model. For a convolution layer in the middle of the model, the result from the previous layer is already a ciphertext, so step S1 can be omitted and the process can start directly from step S2.

Step S2: perform a convolution operation on the ciphertext vector to generate a result vector.

In one embodiment, the convolution operation can be expressed as f_conv'(x) = W'x + b', where W' is the convolution weight and b' is the bias. The operation of step S2 can be expressed as CKKS.eval(f_conv', {ctx_0, ctx_1, …}) = {ctx'_0, ctx'_1, …}, where {ctx_0, ctx_1, …} is the ciphertext vector and {ctx'_0, ctx'_1, …} is the result vector.

The evaluation function CKKS.eval(f, ·) supported by CKKS takes as input a plaintext function f followed by the arguments of f, such as a plaintext input m or a ciphertext ctx ∈ CKKS(m). It executes the function under CKKS with these arguments and returns ctx' ∈ CKKS(f(·)). For example, CKKS.eval(+, ctx, v) returns ctx' ∈ CKKS(m + v).
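The contract of CKKS.eval(+, ctx, v) can be illustrated with a deliberately simple additively homomorphic toy scheme (a one-time pad over integers mod N). This is not CKKS and is insecure in practice; it only shows that adding to the ciphertext adds to the underlying plaintext without any decryption:

```python
# Toy additively homomorphic scheme: c = (m + k) mod N.
# Adding v to the ciphertext adds v to the underlying plaintext,
# mirroring CKKS.eval(+, ctx, v) returning an encryption of m + v.
N = 2 ** 31
KEY = 123456789  # secret key (illustrative value)

def enc(m):
    return (m + KEY) % N

def dec(c):
    return (c - KEY) % N

def eval_add(ctx, v):
    return (ctx + v) % N  # homomorphic addition; no decryption needed

m, v = 42, 100
assert dec(eval_add(enc(m), v)) == m + v
```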

Step S3: convert the result vector into multiple result ciphertexts under a second encryption algorithm.

In one embodiment, the second encryption algorithm is based on Learning with Errors (LWE). The LWE encryption operation can be expressed as LWE⌈m⌋, where ⌈·⌋ denotes the rounding and encoding operation and m ∈ R is a scalar.

There are two implementations of LWE: the first is FHEW; see Ducas, L., Micciancio, D.: FHEW: bootstrapping homomorphic encryption in less than a second. In: EUROCRYPT, pp. 617–640. Springer (2015). The second is TFHE; see Chillotti, I., Gama, N., Georgieva, M., Izabachène, M.: TFHE: fast fully homomorphic encryption over the torus. Journal of Cryptology 33(1), 34–91 (2020).

The conversion mentioned in step S3 uses PEGASUS, a framework that converts between CKKS ciphertexts and LWE ciphertexts without decryption; see Lu, W., Huang, Z., Hong, C., Ma, Y., Qu, H.: PEGASUS: Bridging polynomial and non-polynomial evaluations in homomorphic encryption. In: 2021 IEEE Symposium on Security and Privacy, pp. 1057–1073. IEEE Computer Society Press (May 2021).

Therefore, step S3 can be expressed as pegasus.extract(ctx) = {ctx'_i}, where ctx ∈ CKKS(m) is the result vector with m ∈ R^n, and {ctx'_i} ∈ LWE(⌈m_i⌋) are the result ciphertexts, with m_i ∈ m and 0 ≤ i < n.

Step S4: input the result ciphertexts into the activation function to generate multiple encrypted activation values.

In one embodiment, the activation function is the Rectified Linear Unit (ReLU). PEGASUS evaluates non-polynomial functions such as ReLU with a fine-grained look-up table (LUT) approximation. Step S4 can be expressed as pegasus.eval(f_ACT, ctx_i) = ctx'_i, where f_ACT is the activation function, ctx_i ∈ LWE(⌈m_i⌋) are the result ciphertexts, and ctx'_i ∈ LWE(⌈f_ACT'(m_i)⌋) are the encrypted activation values.
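The LUT approximation of a non-polynomial function can be sketched in plain Python. The domain [-4, 4) and table size below are illustrative assumptions, and real functional bootstrapping operates on encrypted indices rather than cleartext floats:

```python
# Sketch of a LUT approximation of ReLU over a fixed input domain,
# mimicking how functional bootstrapping evaluates a non-polynomial
# function by table lookup. Domain and table size are illustrative.
LO, HI, ENTRIES = -4.0, 4.0, 1024

# Precompute the table: entry j holds ReLU of the j-th bin's midpoint.
STEP = (HI - LO) / ENTRIES
TABLE = [max(0.0, LO + (j + 0.5) * STEP) for j in range(ENTRIES)]

def lut_relu(x):
    """Approximate ReLU(x) by quantizing x to a table index."""
    j = min(ENTRIES - 1, max(0, int((x - LO) / STEP)))
    return TABLE[j]

assert abs(lut_relu(1.0) - 1.0) < STEP
assert lut_relu(-2.0) == 0.0
```

The approximation error per input is bounded by the bin width STEP, which is why aligning the input range with the table domain (discussed below as the source of the accuracy drop) matters.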

Step S5: convert the encrypted activation values into an output vector under the first encryption algorithm.

In one embodiment, step S5 can be expressed as pegasus.repack({ctx'_i}) = ctx, where ctx'_i ∈ LWE(⌈m_i⌋) are the encrypted activation values and ctx ∈ CKKS(m) is the output vector.

Note that if the fully homomorphic encrypted neural network model is implemented naively with the hybrid CKKS-FHEW/TFHE encrypted inference framework and PEGASUS, its accuracy drops severely.

The accuracy drop is caused by numerical errors accumulated during encrypted neural network inference. These errors are produced by the scale-down procedure in the PEGASUS framework, which limits numerical precision when applying functional bootstrapping to large-domain ciphertexts.

FHEW supports functional bootstrapping, a type of bootstrapping used in homomorphic encryption to reduce error accumulation. It builds a look-up table for a specific function so that the result can be computed efficiently. Although it requires fewer bootstrapping iterations and less computation, it is effective only for a small message domain; supporting a large message domain incurs a large computational cost.

Conventional functional bootstrapping can only be applied to LWE ciphertexts whose message domain is limited to the size of the look-up table, typically about 2^10. However, LWE ciphertexts converted from CKKS ciphertexts usually require a larger message domain. To address this, PEGASUS introduces a technique for converting between CKKS and FHEW that scales the input ciphertext down to a smaller message domain during the CKKS-to-FHEW conversion, extending conventional functional bootstrapping to support a larger message domain. However, this extension comes at the cost of reduced precision.

If the range of input values to LWE functional bootstrapping is not aligned with its predetermined input domain, either the input domain cannot cover the range of input values, or only a small number of look-up table entries are used. Please refer to Figures 4(a) and 4(b). Figure 4(a) shows an example in which the predetermined input domain [-4, 4) of LWE functional bootstrapping is larger than the input value range [-0.3, 1.7), so only three look-up table entries are used to map the input value x to the output y. When functional bootstrapping is applied, this range mismatch produces large numerical errors that ultimately affect the accuracy of the neural network model.
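The effect of this mismatch can be reproduced numerically. The sketch below uses an intentionally tiny 8-entry table so the error is visible, and compares the oversized domain [-4, 4) with a domain aligned to the input range [-0.3, 1.7):

```python
# Demonstration of the range-mismatch error: approximating ReLU on
# inputs in [-0.3, 1.7) with a LUT whose domain is the oversized
# [-4, 4) versus one aligned to the inputs. 8 entries for visibility.
ENTRIES = 8

def make_lut_relu(lo, hi):
    step = (hi - lo) / ENTRIES
    table = [max(0.0, lo + (j + 0.5) * step) for j in range(ENTRIES)]
    def f(x):
        j = min(ENTRIES - 1, max(0, int((x - lo) / step)))
        return table[j]
    return f

mismatched = make_lut_relu(-4.0, 4.0)   # domain much wider than inputs
aligned = make_lut_relu(-0.3, 1.7)      # domain matches the input range

xs = [i / 100 * 2.0 - 0.3 for i in range(100)]  # samples in [-0.3, 1.7)
err = lambda f: max(abs(f(x) - max(0.0, x)) for x in xs)
assert err(aligned) < err(mismatched)   # alignment reduces LUT error
```

With the mismatched domain only a couple of the eight entries cover the actual inputs, so the worst-case error is roughly half a bin of the wide domain; the aligned table spreads all eight entries over the inputs.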

Therefore, the present invention proposes a look-up-table-aware (LUT-aware) model fine-tuning method that aligns the range of input values with the message domain of the ciphertext. By aligning the input values with the look-up table entries, as shown in Figure 4(b), the mismatch can be alleviated. Specifically, after the fully homomorphic encrypted neural network model is built on the training dataset, its model weights and activation functions need to be adjusted. Please refer to Figure 5, which is a flowchart of the LUT-aware model fine-tuning method according to an embodiment of the present invention. The process of Figure 5 is executed at least before step S2, "perform a convolution operation on the ciphertext vector to generate a result vector".

Step P1: for each layer of the model, execute a training procedure multiple times to generate multiple plaintext activation values. Figure 6 is a flowchart of the training procedure, which includes step P11, performing the convolution operation on the plaintext input to generate a plaintext vector, and step P12, inputting the plaintext vector into the activation function to generate one of the plaintext activation values.

Step P2: set a linear mapping range according to the ranges of the plaintext activation values.

When the fully homomorphic encrypted neural network model is deployed, the input of the activation function (i.e., the result ciphertexts) is encrypted, so the range of that input cannot be determined in advance; it varies with the values fed into the neural network. The present invention assumes that the training data and the test data have similar distributions. Therefore, to estimate these input ranges, step P1 runs inference with the trained model on the plaintext inputs of the training dataset. During this process, the minimum and maximum values of the activation function output at each layer are observed and recorded, denoted a_i and b_i respectively. If [a_i, b_i) is the output range of the activation function of the i-th layer, the linear mapping range is set to [-B, +B), where B = max(|a_i|, |b_i|) over i = 1, 2, …, n. In other words, +B is greater than the maximum output of the activation function of every layer, and -B is less than the minimum output.
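A minimal sketch of step P2, assuming made-up per-layer activation ranges:

```python
# Sketch of step P2: given the observed per-layer activation ranges
# [a_i, b_i), set the shared linear mapping range [-B, +B). The ranges
# below are illustrative values, not measurements.
ranges = [(-0.3, 1.7), (2.0, 102.0), (-5.0, 40.0)]  # [a_i, b_i) per layer

B = max(max(abs(a), abs(b)) for a, b in ranges)

assert B == 102.0
assert all(-B <= a and b <= B for a, b in ranges)  # [-B, +B) covers all
```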

Step P3: for each layer, determine a linear mapping function from the range of the plaintext activation values and the linear mapping range. A simple example of determining the linear mapping function follows; the values in the example are not limiting. Assume a_i = 2, b_i = 102, and B = 1000. The linear mapping function is f(x) = (x - z) * s', where z is the zero point and s' is the scaling factor. The parameters of the linear mapping function are computed as follows:

Output range: s = b - a = 100

Zero point: z = (a + b) / 2 = 52

Linear mapping range: S = B - (-B) = 2000

Scaling factor: s' = S / s = 20

A simple check of the resulting linear mapping function f(x) = (x - 52) * 20:

f(a) = f(2) = (2 - 52) * 20 = -1000 = -B

f(b) = f(102) = (102 - 52) * 20 = 1000 = B
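The worked example above can be checked in code (values a = 2, b = 102, B = 1000 as in the example):

```python
# The worked example (a_i = 2, b_i = 102, B = 1000), checked in code.
a, b, B = 2.0, 102.0, 1000.0

s = b - a                 # output range: 100
z = (a + b) / 2           # zero point: 52
S = B - (-B)              # linear mapping range: 2000
s_prime = S / s           # scaling factor: 20

f = lambda x: (x - z) * s_prime  # linear mapping function

assert f(a) == -B  # f(2)   = -1000
assert f(b) == B   # f(102) =  1000
```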

Step P4: update the weights of the convolution operation according to the linear mapping function.

Linearly mapping the input ensures that its interval is aligned with the message domain. Folding the linear mapping function into the convolution weights and bias avoids additional computational cost and reduces memory usage during computation. In one embodiment, step P4 can be expressed as f_conv' = f_LM ∘ f_conv, where f_conv' is the weight-updated convolution operation used in step S2, f_LM is the linear mapping function that linearly maps an input in the range [a_i, b_i) to the range [-B, +B), and f_conv is the original convolution operation of the model built on the training dataset. A simple example of updating the weights follows:

Assume the original convolution operation is y = Wx + b and the linear mapping function is f(x) = (x - z) * s'. Then f(y) = (Wx + b) * s' - z * s' = Wx * s' + b * s' - z * s' = (W * s') * x + (b * s' - z * s') = W'x + b', where the updated weight is W' = W * s' and the updated bias is b' = b * s' - z * s'.
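This weight-folding identity can be verified with illustrative scalars standing in for the weight and bias tensors:

```python
# Scalar check of weight folding: applying the linear map after the
# convolution equals a convolution with folded weight and bias.
# W, b, x are illustrative scalars standing in for the tensors.
W, b, x = 3.0, 1.0, 4.0
z, s_prime = 52.0, 20.0

f_lm = lambda y: (y - z) * s_prime   # linear mapping function

W_new = W * s_prime                  # W' = W * s'
b_new = b * s_prime - z * s_prime    # b' = b * s' - z * s'

assert f_lm(W * x + b) == W_new * x + b_new
```

Because the map is folded into W' and b', the encrypted inference in step S2 pays no extra homomorphic operations for the alignment.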

Step P5: update the activation function according to the inverse of the linear mapping function.

Step P5 can be expressed as f_ACT' = f_ACT ∘ f_LM⁻¹, where f_ACT' is the updated activation function, f_ACT is the original activation function built on the training dataset, and f_LM⁻¹ is the inverse of the linear mapping function. Since the input of the activation function has already been adjusted by the linear mapping function, the activation function itself must be adjusted correspondingly so that the output remains the same, i.e., f_ACT'(f_LM(x)) = f_ACT(x). In this way, errors caused by the limited precision of functional bootstrapping can be greatly reduced.
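This identity can be checked with ReLU and the illustrative mapping parameters from the worked example:

```python
# Check of step P5 with ReLU: composing the updated activation with the
# linear map reproduces the original activation. z and s' reuse the
# illustrative values (52, 20) from the worked example.
z, s_prime = 52.0, 20.0

f_lm = lambda x: (x - z) * s_prime
f_lm_inv = lambda y: y / s_prime + z      # inverse of the linear map
f_act = lambda x: max(0.0, x)             # original ReLU
f_act_new = lambda y: f_act(f_lm_inv(y))  # f_ACT' = f_ACT o f_LM^-1

for x in (-10.0, 0.0, 2.0, 75.0, 102.0):
    assert abs(f_act_new(f_lm(x)) - f_act(x)) < 1e-9
```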

Table 1, where D denotes the network depth and L denotes the number of fully connected layers.

    Model                                                   D5L1     D8L1     D11L1    D7L3
    Plaintext DNN accuracy                                  60%      72.5%    77.5%    80%
    Encrypted DNN accuracy, without LUT-aware               47.5%    55%      62.5%    72.5%
    Encrypted DNN accuracy, with LUT-aware (the invention)  57.5%    70%      75%      80%
    Total time                                              881 s    2092 s   9458 s   2558 s

Please refer to Table 1. The inventors conducted a comprehensive experiment on CIFAR-10, a color dataset more complex than MNIST, to verify the effectiveness of the proposed LUT-aware model fine-tuning method. The experimental results show that, compared with a neural network model that does not use the proposed method, applying the present invention improves accuracy by 7.5% (e.g., D7L3) to 15% (e.g., D8L1). Moreover, the accuracy of the fine-tuned neural network model can at best reach that of the original plaintext model (e.g., D7L3).

In summary, the operating system and method for a fully homomorphic encrypted neural network model proposed by the present invention inherit the advantages of linear operations in CKKS and of functional bootstrapping in FHEW/TFHE: the CKKS scheme is used for the convolution operations, and the FHEW/TFHE functional bootstrapping scheme is used for the non-polynomial activation functions and for bootstrapping. The present invention applies PEGASUS to convert between CKKS ciphertexts and LWE ciphertexts, and proposes a look-up-table-aware fine-tuning method that adjusts the model weights and activation functions, thereby improving the accuracy of the fully homomorphic encrypted neural network model.

Although the present invention is disclosed in the foregoing embodiments, they are not intended to limit the present invention. Changes and refinements made without departing from the spirit and scope of the present invention fall within the scope of its patent protection. For the scope of protection defined by the present invention, please refer to the attached claims.

10: system; 1: memory; 3: processor; S1-S5, P1-P5, P11-P12: steps

Figure 1 is a block diagram of an operating system for a fully homomorphic encrypted neural network model according to an embodiment of the present invention; Figure 2 is a flowchart of the operating method of the fully homomorphic encrypted neural network model according to an embodiment of the present invention; Figure 3 is a schematic diagram of the operating method of the fully homomorphic encrypted neural network model according to an embodiment of the present invention; Figures 4(a) and 4(b) are schematic diagrams of range alignment examples; Figure 5 is a flowchart of a look-up-table-aware model fine-tuning method according to an embodiment of the present invention; and Figure 6 is a detailed flowchart of the training procedure in Figure 5.

S1-S5: Steps

Claims (8)

1. A method for operating a fully homomorphic encryption neural network model, wherein the fully homomorphic encryption neural network model includes a plurality of layers, the method comprising executing, with a processor: for one of the layers, encrypting a plaintext input with a first encryption algorithm to generate a ciphertext vector; performing a convolution operation according to the ciphertext vector to generate a result vector; converting the result vector into a plurality of result ciphertexts adopting a second encryption algorithm; inputting the result ciphertexts into an activation function to generate a plurality of encrypted activation values; and converting the encrypted activation values into an output vector adopting the first encryption algorithm.

2. The method for operating a fully homomorphic encryption neural network model of claim 1, further comprising: before performing the convolution operation according to the ciphertext vector to generate the result vector, for each of the layers, executing a training procedure a plurality of times to generate a plurality of plaintext activation values, wherein the training procedure includes: performing the convolution operation with the plaintext input to generate a plaintext vector; and inputting the plaintext vector into the activation function to generate one of the plaintext activation values; setting a linear mapping range according to a range of the plaintext activation values; for each of the layers, determining a linear mapping function according to the range of the plaintext activation values and the linear mapping range; updating weights of the convolution operation according to the linear mapping function; and updating the activation function according to an inverse function of the linear mapping function.

3. The method for operating a fully homomorphic encryption neural network model of claim 1, wherein the first encryption algorithm is the CKKS (Cheon-Kim-Kim-Song) algorithm, and the second encryption algorithm is associated with Learning with Errors (LWE).

4. The method for operating a fully homomorphic encryption neural network model of claim 1, wherein the activation function is a rectified linear unit.

5. An operating system for a fully homomorphic encryption neural network model, comprising: a memory storing a plurality of instructions; and a processor electrically connected to the memory to execute the instructions, the instructions causing the processor to perform the following operations on one of a plurality of layers of the fully homomorphic encryption neural network model: encrypting a plaintext input with a first encryption algorithm to generate a ciphertext vector; performing a convolution operation according to the ciphertext vector to generate a result vector; converting the result vector into a plurality of result ciphertexts adopting a second encryption algorithm; inputting the result ciphertexts into an activation function to generate a plurality of encrypted activation values; and converting the encrypted activation values into an output vector adopting the first encryption algorithm.

6. The operating system for a fully homomorphic encryption neural network model of claim 5, wherein the instructions further cause the processor to: before performing the convolution operation according to the ciphertext vector to generate the result vector, for each of the layers, execute a training procedure a plurality of times to generate a plurality of plaintext activation values, wherein the training procedure includes: performing the convolution operation with the plaintext input to generate a plaintext vector; and inputting the plaintext vector into the activation function to generate one of the plaintext activation values; set a linear mapping range according to a range of the plaintext activation values; for each of the layers, determine a linear mapping function according to the range of the plaintext activation values and the linear mapping range; update weights of the convolution operation according to the linear mapping function; and update the activation function according to an inverse function of the linear mapping function.

7. The operating system for a fully homomorphic encryption neural network model of claim 5, wherein the first encryption algorithm is the CKKS (Cheon-Kim-Kim-Song) algorithm, and the second encryption algorithm is associated with Learning with Errors (LWE).

8. The operating system for a fully homomorphic encryption neural network model of claim 5, wherein the activation function is a rectified linear unit.
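The per-layer dataflow in claim 1 (CKKS-encrypted vector → homomorphic convolution → per-element scheme switch to LWE-style scalar ciphertexts → activation → repack into a CKKS vector) can be sketched as a plaintext simulation. This is a structural sketch only: the "ciphertexts" below are ordinary Python lists standing in for CKKS and LWE ciphertexts, all function names are illustrative, and a real implementation would use an FHE library (e.g. OpenFHE or SEAL, which the patent does not name).

```python
def ckks_encrypt(plaintext_vector):
    # Stand-in for CKKS encryption: a CKKS ciphertext packs a whole
    # vector of real numbers into one ciphertext (SIMD slots).
    return list(plaintext_vector)

def convolution(ct_vector, weights):
    # Homomorphic 1-D convolution: additions and plaintext-ciphertext
    # multiplications are operations CKKS supports natively.
    n, k = len(ct_vector), len(weights)
    return [sum(weights[j] * ct_vector[i + j] for j in range(k))
            for i in range(n - k + 1)]

def extract_to_lwe(ct_vector):
    # Scheme switch (second encryption algorithm, LWE-based): each CKKS
    # slot becomes one scalar ciphertext so a non-polynomial activation
    # can be evaluated element by element.
    return [slot for slot in ct_vector]

def relu(lwe_ct):
    # Activation evaluated on one scalar ciphertext (here, on the
    # plain stand-in value).
    return max(0.0, lwe_ct)

def repack_to_ckks(lwe_cts):
    # Repack the scalar results into one CKKS vector ciphertext so the
    # next layer can run vectorised again.
    return list(lwe_cts)

def fhe_layer(plaintext_input, weights):
    ct = ckks_encrypt(plaintext_input)        # ciphertext vector
    conv = convolution(ct, weights)           # result vector
    lwe_cts = extract_to_lwe(conv)            # result ciphertexts
    activated = [relu(c) for c in lwe_cts]    # encrypted activation values
    return repack_to_ckks(activated)          # output vector

print(fhe_layer([1.0, -2.0, 3.0, -4.0], [1.0, 1.0]))  # → [0.0, 1.0, 0.0]
```

The split between the two schemes mirrors the claim's motivation: CKKS handles the linear (convolution) part efficiently on packed vectors, while the LWE-side representation handles the non-polynomial activation per element.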
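The training-time calibration in claims 2 and 6 can likewise be sketched: run the plaintext layer several times, record the range of plaintext activation values, fit a linear map from that range into a fixed target interval, fold the map into the convolution weights (and a bias), and compose the map's inverse into the activation function. The target interval (-1, 1), the sample data, and the exact way the map is folded into the weights are assumptions for illustration; the claims only state that the weights are updated from the linear mapping function and the activation from its inverse.

```python
def observe_activation_range(samples, weights, activation):
    # Execute the plaintext training procedure once per sample and
    # record the plaintext activation value each run produces.
    values = []
    for x in samples:
        pre = sum(w * v for w, v in zip(weights, x))  # plaintext "conv"
        values.append(activation(pre))
    return min(values), max(values)

def make_linear_map(src_lo, src_hi, dst_lo, dst_hi):
    # Linear mapping function from [src_lo, src_hi] onto the chosen
    # linear mapping range [dst_lo, dst_hi], plus its inverse.
    scale = (dst_hi - dst_lo) / (src_hi - src_lo)
    fwd = lambda x: (x - src_lo) * scale + dst_lo
    inv = lambda y: (y - dst_lo) / scale + src_lo
    return fwd, inv, scale

relu = lambda x: max(0.0, x)
samples = [[1.0, 2.0], [3.0, 1.0], [0.5, 0.5]]   # toy plaintext inputs
weights = [0.5, -0.25]                           # toy conv weights

lo, hi = observe_activation_range(samples, weights, relu)
fwd, inv, scale = make_linear_map(lo, hi, -1.0, 1.0)

# Fold the affine map into the layer: scale the weights and absorb the
# offset into a bias, so pre-activation values land in the target range.
scaled_weights = [w * scale for w in weights]
bias = -lo * scale + (-1.0)  # -1.0 is dst_lo of the target interval

# Compose the inverse map into the activation so the end-to-end layer
# output is unchanged: relu(inv(fwd(pre))) == relu(pre).
adjusted_activation = lambda y: relu(inv(y))
```

Keeping intermediate values inside a known, narrow interval is what makes the subsequent encrypted evaluation of the activation tractable, since the ciphertext-domain activation only has to be accurate over that interval.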
TW112135707A 2023-09-19 2023-09-19 Operating system and method for a fully homomorphic encryption neural network model TWI846601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112135707A TWI846601B (en) 2023-09-19 2023-09-19 Operating system and method for a fully homomorphic encryption neural network model

Publications (2)

Publication Number Publication Date
TWI846601B true TWI846601B (en) 2024-06-21
TW202514437A TW202514437A (en) 2025-04-01

Family

ID=92541874

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112135707A TWI846601B (en) 2023-09-19 2023-09-19 Operating system and method for a fully homomorphic encryption neural network model

Country Status (1)

Country Link
TW (1) TWI846601B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761557A (en) * 2021-09-02 2021-12-07 积至(广州)信息技术有限公司 Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm
CN115860094A (en) * 2022-11-03 2023-03-28 南京大学 Binary Convolutional Neural Network Implementation Method and System Based on Homomorphic Encryption
US20230216657A1 (en) * 2022-01-06 2023-07-06 International Business Machines Corporation Analysis and debugging of fully-homomorphic encryption
CN116527824A (en) * 2023-07-03 2023-08-01 北京数牍科技有限公司 Method, device and equipment for training graph convolution neural network
CN116547941A (en) * 2020-11-20 2023-08-04 国际商业机器公司 Secure re-encryption of homomorphic encrypted data
CN116633520A (en) * 2022-02-18 2023-08-22 三星电子株式会社 Homomorphic Encryption Operation Accelerator and Operation Method of Homomorphic Encryption Operation Accelerator
CN116667996A (en) * 2023-05-30 2023-08-29 华东师范大学 A Verifiable Federated Learning Method Based on Hybrid Homomorphic Encryption

Also Published As

Publication number Publication date
TW202514437A (en) 2025-04-01

Similar Documents

Publication Publication Date Title
JP7729938B2 (en) Homomorphic encryption methods applied to private information retrieval
Alexandru et al. Cloud-based MPC with encrypted data
Orsini et al. Overdrive2k: Efficient secure MPC over from somewhat homomorphic encryption
CN112671802B (en) Data sharing method and system based on oblivious transmission protocol
Belaïd et al. Tight private circuits: Achieving probing security with the least refreshing
CN112543091B (en) Multi-key Fully Homomorphic Encryption with Fixed Ciphertext Length
JP7774024B2 (en) How to perform non-polynomial operations on homomorphic ciphertexts.
CN119623538A (en) Operation system and method of fully homomorphic encryption neural network model
Cheon et al. MHz2k: MPC from HE over Z 2 k with new packing, simpler reshare, and better ZKP
Alexandru et al. Secure multi-party computation for cloud-based control
Escudero et al. More efficient dishonest majority secure computation over Z 2 k via galois rings
Case et al. Fully homomorphic encryption with k-bit arithmetic operations
CN106788963A (en) A kind of full homomorphic cryptography method of identity-based on improved lattice
Hövelmanns et al. A note on failing gracefully: Completing the picture for explicitly rejecting fujisaki-okamoto transforms using worst-case correctness
Ge et al. Tighter qcca-secure key encapsulation mechanism with explicit rejection in the quantum random oracle model
TWI846601B (en) Operating system and method for a fully homomorphic encryption neural network model
CN117938345A (en) Privacy protection outsourcing calculation method for cloud computing
Shang et al. Two-round quantum homomorphic encryption scheme based on matrix decomposition
Chen et al. Two-server verifiable homomorphic secret sharing for high-degree polynomials
Ogura et al. An improvement of key generation algorithm for Gentry’s homomorphic encryption scheme
Escudero et al. Dishonest majority multi-verifier zero-knowledge proofs for any constant fraction of corrupted verifiers
Chabanne et al. Embedded proofs for verifiable neural networks
Klemsa Setting up efficient TFHE parameters for multivalue plaintexts and multiple additions
Zhang et al. Improving the leakage rate of ciphertext-policy attribute-based encryption for cloud computing
CN107425974B (en) A Hardware Implementation Method of KP Operation on FourQ Elliptic Curve