
TWI856602B - Universal memory for in-memory computing and operation method thereof - Google Patents


Info

Publication number: TWI856602B
Application number: TW112113062A
Authority: TW (Taiwan)
Prior art keywords: transistor, read, write, memory, write transistor
Other languages: Chinese (zh)
Other versions: TW202431262A (en)
Inventors: 李峯旻, 曾柏皓, 林昱佑, 李明修
Original Assignee: 旺宏電子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 旺宏電子股份有限公司
Publication of TW202431262A
Application granted
Publication of TWI856602B

Landscapes

  • Semiconductor Memories (AREA)
  • Static Random-Access Memory (AREA)

Abstract

A universal memory for in-memory computing and an operation method thereof are provided. The universal memory includes at least one write word line, at least one unit cell and at least one read word line. The unit cell includes a write transistor and a read transistor. The gate of the write transistor is connected to the write word line. The write transistor is a transistor with adjustable threshold voltage. The gate of the read transistor is connected to the drain or the source of the write transistor. The read word line is connected to the drain or the source of the read transistor. The universal memory is used for a training mode and an inference mode. In the training mode and the inference mode, a weight is stored at different locations of the unit cell.

Description

Universal memory for in-memory computing and operation method thereof

The present disclosure relates to a memory and an operation method thereof, and more particularly to a universal memory suitable for in-memory computing and an operation method thereof.

In the computation of artificial intelligence models, large amounts of data must be moved between the memory and the processor, creating the Von Neumann bottleneck. To improve computational efficiency, an in-memory computing architecture has been proposed.

The computation of artificial intelligence models includes a training mode and an inference mode. In the training mode, the memory must be repeatedly programmed and erased to change the weight values, so a memory with high endurance is required. In the inference mode, the weight values must be retained for inference computation, so a memory with high retention is required.

However, high-endurance memory and high-retention memory are usually different types of memory. With traditional memory technologies, it is difficult to find a memory that offers both high endurance and high retention, and no single memory can serve both the training mode and the inference mode of artificial intelligence computing.

The present disclosure relates to a universal memory suitable for in-memory computing and an operation method thereof. Using a 2T structure, the universal memory can serve both the training mode and the inference mode of artificial intelligence. In the training mode and the inference mode, the weight value is stored at different locations of the unit memory cell. When operating in the training mode, the universal memory provides high endurance like a dynamic random access memory (DRAM) to support the large number of weight value updates; when operating in the inference mode, it provides the non-volatility and high retention of a non-volatile memory, so that the weight values are well preserved at low power consumption.

According to one aspect of the present disclosure, a universal memory suitable for in-memory computing (IMC) is provided. The universal memory includes at least one write word line, at least one unit memory cell and at least one read word line. The unit memory cell includes a write transistor and a read transistor. The gate of the write transistor is connected to the write word line. The write transistor is a transistor with an adjustable threshold voltage. The gate of the read transistor is connected to the drain or the source of the write transistor. The read word line is connected to the drain or the source of the read transistor. In a training mode, a storage potential of a storage node between the write transistor and the read transistor represents a weight value of the unit memory cell. In an inference mode, a threshold voltage of the write transistor represents the weight value of the unit memory cell.

According to another aspect of the present disclosure, an operation method of a universal memory suitable for in-memory computing is provided. The universal memory includes at least one unit memory cell, and the unit memory cell includes a write transistor and a read transistor. The gate of the read transistor is connected to the drain or the source of the write transistor. The operation method includes the following steps. A weight change procedure of a training mode is performed, in which a storage node between the write transistor and the read transistor is charged or discharged to a storage potential; the storage potential of the storage node represents a weight value of the unit memory cell. A weight setting procedure of an inference mode is performed, in which the write transistor is programmed or erased to change a threshold voltage of the write transistor; the threshold voltage of the write transistor represents the weight value of the unit memory cell.

According to a further aspect of the present disclosure, a universal memory suitable for in-memory computing is provided. The universal memory includes at least one write word line, at least one unit memory cell and at least one read word line. The unit memory cell includes a write transistor and a read transistor. The gate of the write transistor is connected to the write word line. The write transistor is a transistor with an adjustable threshold voltage. The gate of the read transistor is connected to the drain or the source of the write transistor. The read word line is connected to the drain or the source of the read transistor. The universal memory is used for both a training mode and an inference mode. In the training mode and the inference mode, a weight value is stored at different locations of the unit memory cell.

For a better understanding of the above and other aspects of the present disclosure, embodiments are described in detail below with reference to the accompanying drawings:

200,300,400: memory
210,220,230,240,310ij,410ij,510: unit memory cell
211,221,231,241: resistor
311ij: variable resistor
411ij: fixed resistor
500: universal memory
511: write transistor
512: read transistor
a: output value
b: offset
BL2: bit line
CV110,CV111: characteristic curve
CVi: curve
f: activation function
FG: charge storage layer
G1,G2,G3,G4,Gij: conductance
I1,I2,I3,I4,Ii: read current
I: total current
M1: training mode
M2: inference mode
ND: node
P11: weight change procedure
P12,P22: weight hold procedure
P13,P23: read operation procedure
P20: pre-discharge procedure
P21: weight setting procedure
RBL: read bit line
RWL: read word line
SN: storage node
V1,V2,V3,V4: voltage
VSN0,VSN1,VSN1i: storage potential
VtR,VtW,VtW0,VtW1,VtW1i: threshold voltage
Vpass: pass (turn-on) bias
VWWL,VWWL0,VWWL1,VWBL0,VWBL1: bias voltage
WBL: write bit line
W1,Wi,Wij,WN: weight value
WL2: word line
WWL: write word line
X1,Xi,XN: input signal
z: operation value

FIG. 1 is a schematic diagram of a node of an artificial intelligence model according to an embodiment.

FIG. 2 illustrates a memory for performing the multiply-accumulate operation.

FIG. 3 illustrates a memory for executing the training mode according to an embodiment.

FIG. 4 illustrates a memory for executing the inference mode according to an embodiment.

FIG. 5 illustrates a unit memory cell suitable for in-memory computing (IMC) according to an embodiment.

FIG. 6 illustrates a universal memory according to an embodiment.

FIG. 7 is a flow chart of the operation method of the universal memory.

FIG. 8 is a characteristic curve diagram of the write transistor in the training mode according to an embodiment.

FIG. 9 is a characteristic curve diagram of the read transistor in the training mode according to an embodiment.

FIG. 10A illustrates writing a weight value of "0" into the unit memory cell in the weight change procedure of the training mode.

FIG. 10B illustrates the weight hold procedure of the unit memory cell in the training mode.

FIG. 10C illustrates the read operation procedure of the unit memory cell in the training mode.

FIG. 11A illustrates writing a weight value of "1" into the unit memory cell in the weight change procedure of the training mode.

FIG. 11B illustrates the weight hold procedure of the unit memory cell in the training mode.

FIG. 11C illustrates the read operation procedure of the unit memory cell in the training mode.

FIG. 12 is a characteristic curve diagram of the write transistor in the inference mode according to an embodiment.

FIG. 13 is a characteristic curve diagram of the read transistor in the inference mode according to an embodiment.

FIG. 14A illustrates writing a weight value of "0" into the unit memory cell in the weight setting procedure of the inference mode.

FIG. 14B illustrates the weight hold procedure of the unit memory cell in the inference mode.

FIG. 14C illustrates the read operation procedure of the unit memory cell in the inference mode.

FIG. 15A illustrates writing a weight value of "1" into the unit memory cell in the weight setting procedure of the inference mode.

FIG. 15B illustrates the weight hold procedure of the unit memory cell in the inference mode.

FIG. 15C illustrates the read operation procedure of the unit memory cell in the inference mode.

FIG. 16 is a current-voltage diagram of the read bit line.

FIG. 17 illustrates the storage potentials.

FIG. 18 illustrates the threshold voltages of the write transistor.

FIG. 19 is a voltage diagram of the pre-discharge procedure and the read operation procedure in the inference mode.

FIG. 20 is a characteristic curve diagram of the write transistor in the inference mode according to an embodiment.

FIG. 21 illustrates the pre-discharge procedure.

Please refer to FIG. 1, which is a schematic diagram of a node ND of an artificial intelligence model according to an embodiment. After the node ND receives a number of input signals Xi, these input signals Xi are combined with the weight values Wi in a multiply-accumulate (MAC) operation and an offset b is added, yielding an operation value z. The operation value z is passed through an activation function f to obtain the output value a. The output value a is then fed to the nodes of the next layer.
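The node ND computation above can be sketched directly in code. This is a minimal illustration only; the function and variable names, and the choice of ReLU as the activation function f, are assumptions of this sketch and are not specified by the disclosure:

```python
# Minimal sketch of the node ND computation: z = sum(Wi * Xi) + b, a = f(z).
# ReLU is used as an example activation function f; the disclosure does not fix f.

def node_output(inputs, weights, bias):
    # Multiply-accumulate (MAC): z = sum(Wi * Xi) + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Activation function f (here: ReLU, as an example)
    return max(0.0, z)

# Example: 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, and ReLU(0.1) = 0.1
print(node_output([1.0, 2.0], [0.5, -0.25], 0.1))
```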

As FIG. 1 shows, the multiply-accumulate operation is a central step in artificial intelligence computation. Please refer to FIG. 2, which illustrates a memory 200 for performing the multiply-accumulate operation. The memory 200 includes, for example, several unit memory cells 210, 220, 230, 240. Each unit memory cell 210, 220, 230, 240 includes, for example, a resistor 211, 221, 231, 241. The resistors 211, 221, 231, 241 have conductances G1, G2, G3, G4, respectively. When the voltages V1, V2, V3, V4 are respectively applied to the bit lines BL2, read currents I1, I2, I3, I4 are respectively formed on the word line WL2. The read current I1 corresponds to the product of the voltage V1 and the conductance G1; the read current I2 corresponds to the product of the voltage V2 and the conductance G2; the read current I3 corresponds to the product of the voltage V3 and the conductance G3; the read current I4 corresponds to the product of the voltage V4 and the conductance G4. The total current I then corresponds to the sum of the products of the voltages V1, V2, V3, V4 and the conductances G1, G2, G3, G4. If the voltages V1, V2, V3, V4 represent the input signals Xi and the conductances G1, G2, G3, G4 represent the weight values Wi, then, as expressed in equation (1) below, the total current I represents the sum of the products of the input signals Xi and the weight values Wi. The multiply-accumulate operation of artificial intelligence computation can thus be realized with the memory 200 of FIG. 2.

I = Σi (Wi × Xi) = Σi (Gi × Vi) .............................. (1)
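Equation (1) can be mirrored directly: each conductance Gi acts as a weight, each bit-line voltage Vi as an input, and the word line sums the per-cell currents into one MAC result. A small sketch; the example conductance and voltage values below are arbitrary:

```python
# Sketch of equation (1): the word line sums the per-cell read currents
# Ii = Gi * Vi (Ohm's law) into the total current I = sum(Gi * Vi).

def wordline_current(conductances, voltages):
    currents = [g * v for g, v in zip(conductances, voltages)]  # Ii = Gi * Vi
    return sum(currents)                                        # I = sum(Ii)

G = [1e-6, 2e-6, 3e-6, 4e-6]   # example conductances G1..G4 (siemens)
V = [0.8, 0.0, 0.8, 0.8]       # example bit-line voltages V1..V4 (volts)
print(wordline_current(G, V))   # total word-line current I in amperes
```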

Please refer to FIG. 3, which illustrates a memory 300 for executing the training mode according to an embodiment. The memory 300 includes, for example, unit memory cells 310ij arranged in a matrix. Each unit memory cell 310ij has, for example, a variable resistor 311ij. The variable resistor 311ij has a conductance Gij. These conductances Gij represent the weight values Wij. During the training mode, the weight values Wij must be updated continuously, so the memory 300 using the variable resistors 311ij can execute the training mode smoothly.

Please refer to FIG. 4, which illustrates a memory 400 for executing the inference mode according to an embodiment. The memory 400 includes, for example, unit memory cells 410ij arranged in a matrix. Each unit memory cell 410ij has, for example, a fixed resistor 411ij. The fixed resistor 411ij has a conductance Gij. These conductances Gij represent the weight values Wij. During the inference mode, the weight values Wij have already been set and should not be changed arbitrarily, so the memory 400 using the fixed resistors 411ij can execute the inference mode smoothly.

The training mode and the inference mode have different requirements. For example, a memory executing the training mode needs high endurance to support the large number of weight value Wi updates, whereas a memory executing the inference mode needs non-volatility and high retention so that the weight values Wi are well preserved at low power consumption. In general, these two types of memory are completely different; for example, the memory 300 of FIG. 3 and the memory 400 of FIG. 4 use entirely different variable resistors 311ij and fixed resistors 411ij.

Please refer to FIG. 5, which illustrates a unit memory cell 510 suitable for in-memory computing (IMC) according to an embodiment. In-memory computing is also called computing in-memory, processing in-memory (PIM), or in-memory processing. The unit memory cell 510 includes a write transistor 511 and a read transistor 512. Since the unit memory cell 510 is composed of two transistors, it is also called a 2T structure. The write transistor 511 is a transistor with an adjustable threshold voltage. The gate of the write transistor 511 has a charge storage layer FG. The gate of the read transistor 512 is connected to the drain or the source of the write transistor 511.
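The read path of the 2T unit cell can be modeled as a switch controlled by the storage-node potential. The following is a simplified sketch; the idealized on/off behavior, the class and attribute names, and the numeric values are assumptions of this illustration, not part of the disclosure:

```python
# Idealized model of the 2T unit cell 510: the write transistor gates access
# to the storage node SN, and the read transistor conducts only when the
# potential on SN exceeds its threshold voltage VtR.

class UnitCell2T:
    def __init__(self, vt_read=0.5):
        self.vt_read = vt_read   # VtR of the read transistor (assumed value)
        self.v_sn = 0.0          # potential on the storage node SN

    def write(self, wwl_on, wbl_voltage):
        # The write transistor passes WBL onto SN only while WWL turns it on.
        if wwl_on:
            self.v_sn = wbl_voltage

    def read_current(self, rbl_voltage, g_on=1e-5):
        # Read transistor: conducts (conductance g_on) iff V(SN) > VtR.
        return g_on * rbl_voltage if self.v_sn > self.vt_read else 0.0

cell = UnitCell2T()
cell.write(wwl_on=True, wbl_voltage=1.0)   # store weight "1" on SN
print(cell.read_current(0.8))              # nonzero read current
```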

The write transistor 511 needs a low off-current to ensure good data retention; the material of its channel layer is, for example, indium gallium zinc oxide (IGZO), indium oxide (In2O3), silicon (Si), germanium (Ge) or a III-V group material. The read transistor 512 needs a high on-current to ensure read accuracy; the material of its channel layer is, for example, indium gallium zinc oxide (IGZO), indium oxide (In2O3), silicon (Si), germanium (Ge) or a III-V group material.

Please refer to FIG. 6, which illustrates a universal memory 500 according to an embodiment. The universal memory 500 includes unit memory cells 510 arranged in a matrix. The universal memory 500 includes one or more write word lines WWL, one or more read word lines RWL, one or more write bit lines WBL, one or more read bit lines RBL and several unit memory cells 510. The gate of the write transistor 511 is connected to the write word line WWL, one of the drain and the source of the write transistor 511 is connected to the write bit line WBL, and the other of the drain and the source of the write transistor 511 is connected to the gate of the read transistor 512. The gate of the read transistor 512 is connected to the drain or the source of the write transistor 511, one of the drain and the source of the read transistor 512 is connected to the read bit line RBL, and the other of the drain and the source of the read transistor 512 is connected to the read word line RWL.

In this embodiment, the universal memory 500 serves both the training mode and the inference mode of artificial intelligence. That is, when operating in the training mode, the universal memory 500 provides high endurance like a dynamic random access memory (DRAM) to support the large number of weight value Wi updates; when operating in the inference mode, it provides the non-volatility and high retention of a non-volatile memory, so that the weight values Wi are well preserved at low power consumption. The operation of the universal memory 500 in the training mode and in the inference mode is described below.

Please refer to FIG. 7, which is a flow chart of the operation method of the universal memory 500. The universal memory 500 serves both the training mode M1 and the inference mode M2 of artificial intelligence. The training mode M1 includes a weight change procedure P11, a weight hold procedure P12 and a read operation procedure P13. The weight change procedure P11 changes the weight value Wi; the weight hold procedure P12 temporarily holds the weight value Wi; the read operation procedure P13 reads out the weight value Wi while performing the multiplication. In the training mode M1, the weight change procedure P11, the weight hold procedure P12 and the read operation procedure P13 are executed repeatedly, so that the artificial intelligence model is optimized by continuously adjusting the weight values Wi.

The inference mode M2 includes a weight setting procedure P21, a weight hold procedure P22 and a read operation procedure P23. The weight setting procedure P21 sets the weight value Wi; the weight hold procedure P22 holds the weight value Wi; the read operation procedure P23 reads out the weight value Wi while performing the multiplication. In the inference mode M2, the weight value Wi is not changed frequently.

The operation of the training mode M1 is described first. Please refer to FIG. 8, which is a characteristic curve diagram of the write transistor 511 in the training mode M1 according to an embodiment. In the training mode M1, the charge storage layer FG of the write transistor 511 is not altered, so the characteristic curve does not shift. When the higher bias voltage VWWL1 is applied to the gate of the write transistor 511, the write transistor 511 is turned on; when the lower bias voltage VWWL0 is applied to the gate of the write transistor 511, the write transistor 511 is turned off.

Please refer to FIG. 9, which is a characteristic curve diagram of the read transistor 512 in the training mode M1 according to an embodiment. In the training mode M1, when the gate of the read transistor 512 is at the higher storage potential VSN1, the read transistor 512 is turned on; when the gate of the read transistor 512 is at the lower storage potential VSN0, the read transistor 512 is turned off.

Please refer to FIG. 10A, which illustrates writing a weight value Wi of "0" into the unit memory cell 510 in the weight change procedure P11 of the training mode M1. In the training mode M1, the weight value Wi is stored at the storage node SN between the write transistor 511 and the read transistor 512.

When a weight value Wi of "0" is to be written into the unit memory cell 510 in the weight change procedure P11 of the training mode M1, the higher bias voltage VWWL1 (for example, 3 V) is applied to the write word line WWL to turn on the write transistor 511, and the lower bias voltage VWBL0 (for example, 0 V) is applied to the write bit line WBL.

Since the write transistor 511 is turned on, the bias voltage VWBL0 applied on the write bit line WBL passes to the storage node SN, so that the storage node SN is at the storage potential VSN0 (for example, 0 V), which is lower than the threshold voltage VtR of the read transistor 512. The storage potential VSN0 of the storage node SN represents the weight value Wi of "0" of the unit memory cell 510.

Please refer to FIG. 10B, which illustrates the weight hold procedure P12 of the unit memory cell 510 in the training mode M1. When the unit memory cell 510 is to temporarily hold the weight value Wi in the training mode M1, the lower bias voltage VWWL0 (for example, 0 V) is applied to the write word line WWL to turn off the write transistor 511.

Since the write transistor 511 is turned off, the storage potential VSN0 of the storage node SN does not change.

Please refer to FIG. 10C, which illustrates the read operation procedure P13 of the unit memory cell 510 in the training mode M1. When the weight value Wi of the unit memory cell 510 is to be read out while the multiplication is performed, the lower bias voltage VWWL0 (for example, 0 V) is applied to the write word line WWL to turn off the write transistor 511, and the input signal Xi (for example, 0.8 V) is applied to the read bit line RBL.

Since the storage potential VSN0 is lower than the threshold voltage VtR of the read transistor 512, the read transistor 512 is turned off and no read current Ii is generated on the read bit line RBL. A read current Ii of 0 corresponds to the product of the input signal Xi and the weight value Wi of "0".

Please refer to FIG. 11A, which illustrates writing a weight value Wi of "1" into the unit memory cell 510 in the weight change procedure P11 of the training mode M1. In the training mode M1, the weight value Wi is stored at the storage node SN between the write transistor 511 and the read transistor 512.

When a weight value Wi of "1" is to be written into the unit memory cell 510 in the weight change procedure P11 of the training mode M1, the higher bias voltage VWWL1 (for example, 3 V) is applied to the write word line WWL to turn on the write transistor 511, and the higher bias voltage VWBL1 (for example, 1 V) is applied to the write bit line WBL.

Since the write transistor 511 is turned on, the bias voltage VWBL1 applied on the write bit line WBL passes to the storage node SN, so that the storage node SN is at the storage potential VSN1 (for example, 1 V), which is higher than the threshold voltage VtR of the read transistor 512. The storage potential VSN1 of the storage node SN represents the weight value Wi of "1" of the unit memory cell 510. As described above, in the weight change procedure P11 of the training mode M1, the threshold voltage VtW of the write transistor 511 remains fixed while the weight value Wi changes.

Please refer to FIG. 11B, which illustrates the weight hold procedure P12 of the unit memory cell 510 in the training mode M1. When the unit memory cell 510 is to temporarily hold the weight value Wi in the training mode M1, the lower bias voltage VWWL0 (for example, 0 V) is applied to the write word line WWL to turn off the write transistor 511.

Since the write transistor 511 is turned off, the storage potential VSN1 of the storage node SN does not leak away.

Please refer to FIG. 11C, which illustrates the read operation procedure P13 of the unit memory cell 510 in the training mode M1. In the read operation procedure P13 of the training mode M1, the weight value Wi remains fixed. When the weight value Wi of the unit memory cell 510 is to be read out while the multiplication is performed, the lower bias voltage VWWL0 (for example, 0 V) is applied to the write word line WWL to turn off the write transistor 511, and the input signal Xi (for example, 0.8 V) is applied to the read bit line RBL.

Since the storage potential VSN1 is higher than the threshold voltage VtR of the read transistor 512, the read transistor 512 is turned on, and a read current Ii is generated on the read bit line RBL. The read current Ii corresponds to the product of the input signal Xi and the weight value Wi of "1".
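The write/hold/read sequences of FIGS. 10A to 11C can be stepped through with the same idealized on/off picture. The sketch below uses the example biases from the text (3 V on WWL to write, 0 V to hold and read, 0.8 V input on RBL); the threshold value VtR, the on-conductance, and the ideal switch behavior are assumptions of this model:

```python
# Training-mode sequence sketch: write a weight onto SN (P11), hold it (P12),
# then read it out as a current on RBL (P13). Bias values follow the examples
# in the text; the ideal switch behavior is an assumption of this model.

VT_R = 0.5          # read transistor threshold VtR (assumed example value)

def train_sequence(weight_bit, xi=0.8, g_on=1e-5):
    # P11 (write): WWL = 3 V turns the write transistor on, so WBL reaches SN.
    v_sn = 1.0 if weight_bit else 0.0       # VWBL1 = 1 V or VWBL0 = 0 V
    # P12 (hold): WWL = 0 V turns the write transistor off; SN is unchanged.
    # P13 (read): WWL stays at 0 V; input Xi = 0.8 V is applied on RBL.
    return g_on * xi if v_sn > VT_R else 0.0   # Ii corresponds to Xi * Wi

print(train_sequence(0))   # weight "0": no read current
print(train_sequence(1))   # weight "1": nonzero read current
```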

上述第10A~11C圖之操作示例可以整理如下表一,但表一僅為示例數值,並非用以侷限本發明。 The operation examples of Figures 10A to 11C above can be summarized in Table 1 below; the values in Table 1 are examples only and are not intended to limit the present invention.

Figure 112113062-A0305-02-0015-1
Figure 112113062-A0305-02-0016-2

以下繼續說明推論模式M2。請參照第12圖,其繪示根據一實施例之寫入電晶體511在推論模式M2下的特性曲線圖。在推論模式M2下,寫入電晶體511之電荷儲存層FG可以設定為兩種電荷量,故寫入電晶體511具有兩種特性曲線CV110、CV111。寫入電晶體511之閘極被施加預定的偏壓VWWL時,若寫入電晶體511具有較高的臨界電壓VtW0(如特性曲線CV110所示),則寫入電晶體511會被關閉;若寫入電晶體511具有較低的臨界電壓VtW1(如特性曲線CV111所示),則寫入電晶體511會被開啟。 The inference mode M2 is described below. Please refer to Figure 12, which shows a characteristic curve diagram of the write transistor 511 in the inference mode M2 according to an embodiment. In the inference mode M2, the charge storage layer FG of the write transistor 511 can be set to one of two charge amounts, so the write transistor 511 has two characteristic curves CV110 and CV111. When a predetermined bias voltage VWWL is applied to the gate of the write transistor 511, the write transistor 511 is turned off if it has the higher threshold voltage VtW0 (as shown by the characteristic curve CV110), and is turned on if it has the lower threshold voltage VtW1 (as shown by the characteristic curve CV111).

請參照第13圖,其繪示根據一實施例之讀取電晶體512在推論模式M2下的特性曲線圖。在推論模式M2下,讀取電晶體512之閘極具有較高的儲存電位VSN1時,讀取電晶體512可以被導通;讀取電晶體512之閘極具有較低的儲存電位VSN0時,讀取電晶體512可以被關閉。 Please refer to FIG. 13, which shows a characteristic curve diagram of the read transistor 512 in the inference mode M2 according to an embodiment. In the inference mode M2, when the gate of the read transistor 512 has a higher storage potential VSN1, the read transistor 512 can be turned on; when the gate of the read transistor 512 has a lower storage potential VSN0, the read transistor 512 can be turned off.

請參照第14A圖,其示例說明單位記憶胞510於推論模式M2之權重設定程序P21中寫入「0」之權重值Wi。寫入電晶體511係為一臨界電壓可調電晶體。寫入電晶體511之閘極具有電荷儲存層FG。在推論模式M2中,係以寫入電晶體511之臨界電壓VtW0代表「0」之權重值Wi。通常可使用Fowler-Nordheim隧穿和熱載流子注入機制(+FN/-FN)來修改存儲在電荷儲存層中的電荷量,以使寫入電晶體511具有較高之臨界電壓VtW0或較低之臨界電壓VtW1(繪示於第15A~15C圖)。 Please refer to Figure 14A, which illustrates writing the weight value Wi of "0" into the unit memory cell 510 in the weight setting procedure P21 of the inference mode M2. The write transistor 511 is a transistor with an adjustable threshold voltage. The gate of the write transistor 511 has a charge storage layer FG. In the inference mode M2, the threshold voltage VtW0 of the write transistor 511 represents the weight value Wi of "0". Fowler-Nordheim tunneling and hot-carrier injection mechanisms (+FN/-FN) can be used to modify the amount of charge stored in the charge storage layer, so that the write transistor 511 has the higher threshold voltage VtW0 or the lower threshold voltage VtW1 (shown in Figures 15A to 15C).

當單位記憶胞510於推論模式M2之權重設定程序P21中欲寫入「0」之權重值Wi時,透過寫入字元線WWL執行-FN機制,以使寫入電晶體511具有較高之臨界電壓VtW0。 When the weight value Wi of "0" is to be written into the unit memory cell 510 in the weight setting procedure P21 of the inference mode M2, the -FN mechanism is performed through the write word line WWL, so that the write transistor 511 has the higher threshold voltage VtW0.

請參照第14B圖,其示例說明單位記憶胞510於推論模式M2之權重保持程序P22。當單位記憶胞510於推論模式M2欲維持權重值Wi時,寫入字元線WWL被施加較低的偏壓VWWL0(例如是0V),以關閉寫入電晶體511。 Please refer to Figure 14B, which illustrates the weight retention procedure P22 of the unit memory cell 510 in the inference mode M2. When the unit memory cell 510 is to retain the weight value Wi in the inference mode M2, a lower bias voltage VWWL0 (e.g., 0V) is applied to the write word line WWL to turn off the write transistor 511.

請參照第14C圖,其示例說明單位記憶胞510於推論模式M2之讀取運算程序P23。當欲讀取單位記憶胞510之權重值Wi,並同時進行乘積運算時,寫入字元線WWL被施加預定之偏壓VWWL(介於臨界電壓VtW1與臨界電壓VtW0之間);寫入位元線WBL被施加較高的偏壓VWBL1(高於讀取電晶體512的臨界電壓VtR)。偏壓VWWL低於臨界電壓VtW0,而無法使寫入電晶體511導通。因此,偏壓VWBL1無法達到讀取電晶體512,故讀取電晶體512會被關閉,而不會在讀取位元線RBL產生讀取電流Ii。讀取電流Ii為0相當於輸入訊號Xi與「0」之權重值Wi的乘積。 Please refer to Figure 14C, which illustrates the read operation procedure P23 of the unit memory cell 510 in the inference mode M2. When the weight value Wi of the unit memory cell 510 is to be read while the multiplication is performed, a predetermined bias voltage VWWL (between the threshold voltage VtW1 and the threshold voltage VtW0) is applied to the write word line WWL, and a higher bias voltage VWBL1 (higher than the threshold voltage VtR of the read transistor 512) is applied to the write bit line WBL. Since the bias voltage VWWL is lower than the threshold voltage VtW0, the write transistor 511 cannot be turned on. Therefore, the bias voltage VWBL1 cannot reach the read transistor 512, the read transistor 512 remains off, and no read current Ii is generated on the read bit line RBL. The read current Ii of 0 corresponds to the product of the input signal Xi and the weight value Wi of "0".

請參照第15A圖,其示例說明單位記憶胞510於推論模式M2之權重設定程序P21中寫入「1」之權重值Wi。當單位記憶胞510於推論模式M2之權重設定程序P21中欲寫入「1」之權重值Wi時,透過寫入字元線WWL執行+FN機制,以使寫入電晶體511具有較低之臨界電壓VtW1。 Please refer to Figure 15A, which illustrates writing the weight value Wi of "1" into the unit memory cell 510 in the weight setting procedure P21 of the inference mode M2. When the weight value Wi of "1" is to be written into the unit memory cell 510 in the weight setting procedure P21 of the inference mode M2, the +FN mechanism is performed through the write word line WWL, so that the write transistor 511 has the lower threshold voltage VtW1.

請參照第15B圖,其示例說明單位記憶胞510於推論模式M2之權重保持程序P22。當單位記憶胞510於推論模式M2欲維持權重值Wi時,寫入字元線WWL被施加較低的偏壓VWWL0(例如是0V),以關閉寫入電晶體511。 Please refer to Figure 15B, which illustrates the weight retention procedure P22 of the unit memory cell 510 in the inference mode M2. When the unit memory cell 510 is to retain the weight value Wi in the inference mode M2, a lower bias voltage VWWL0 (e.g., 0V) is applied to the write word line WWL to turn off the write transistor 511.

請參照第15C圖,其示例說明單位記憶胞510於推論模式M2之讀取運算程序P23。在推論模式M2之讀取運算程序P23中,權重值Wi固定不變。當欲讀取單位記憶胞510之權重值Wi,並同時進行乘積運算時,寫入字元線WWL被施加預定之偏壓VWWL(介於臨界電壓VtW1與臨界電壓VtW0之間);寫入位元線WBL被施加較高之偏壓VWBL1(高於讀取電晶體512之臨界電壓VtR)。偏壓VWWL高於臨界電壓VtW1,而使寫入電晶體511導通。因此,偏壓VWBL1可以達到讀取電晶體512,故讀取電晶體512會被開啟,而在讀取位元線RBL產生讀取電流Ii。讀取電流Ii相當於輸入訊號Xi與「1」之權重值Wi的乘積。 Please refer to Figure 15C, which illustrates the read operation procedure P23 of the unit memory cell 510 in the inference mode M2. In the read operation procedure P23 of the inference mode M2, the weight value Wi remains fixed. When the weight value Wi of the unit memory cell 510 is to be read while the multiplication is performed, a predetermined bias voltage VWWL (between the threshold voltage VtW1 and the threshold voltage VtW0) is applied to the write word line WWL, and a higher bias voltage VWBL1 (higher than the threshold voltage VtR of the read transistor 512) is applied to the write bit line WBL. Since the bias voltage VWWL is higher than the threshold voltage VtW1, the write transistor 511 is turned on. Therefore, the bias voltage VWBL1 can reach the read transistor 512, the read transistor 512 is turned on, and a read current Ii is generated on the read bit line RBL. The read current Ii corresponds to the product of the input signal Xi and the weight value Wi of "1".
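The two inference-mode read cases (weight "0" in Figure 14C, weight "1" in Figure 15C) differ only in which threshold voltage the write transistor holds. A minimal behavioral sketch follows; the voltage values VtW0, VtW1, and VWWL are illustrative assumptions satisfying VtW1 < VWWL < VtW0, and are not values recited in the text.

```python
# Hypothetical model of the inference-mode read (procedure P23).
# All voltage values are illustrative assumptions.

VT_W0 = 2.0   # assumed higher threshold voltage VtW0 (encodes weight "0")
VT_W1 = 0.5   # assumed lower threshold voltage VtW1 (encodes weight "1")
V_WWL = 1.2   # predetermined word-line bias, between VtW1 and VtW0

def inference_read(vt_w: float, x_i: float) -> float:
    """Return Ii = Xi * Wi, with Wi encoded in the write-transistor threshold.

    If VWWL exceeds the stored threshold, the write transistor conducts
    and passes VWBL1 to the read-transistor gate, turning the cell on.
    """
    w_i = 1 if V_WWL > vt_w else 0
    return x_i * w_i

print(inference_read(VT_W1, 0.8))  # low threshold  -> weight "1"
print(inference_read(VT_W0, 0.8))  # high threshold -> weight "0"
```

The same word-line bias thus selects between the two stored states without any charge on the storage node, which is what makes the inference-mode weight non-volatile.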

上述第14A~15C圖之操作可以整理如下表二,但表二僅為示例數值,並非用以侷限本發明。 The operations of Figures 14A to 15C above can be summarized in Table 2 below; the values in Table 2 are examples only and are not intended to limit the present invention.

Figure 112113062-A0305-02-0018-3
Figure 112113062-A0305-02-0019-4

上述之權重值Wi係以「0」與「1」之二位元數值為例作說明。在另一實施例中,權重值Wi也可以是具有小數的類比數值。請參照第16~17圖,第16圖繪示讀取位元線RBL的電流與電壓關係圖,第17圖示例說明儲存電位VSN1i。如第16圖所示,曲線CVi對應於不同的過驅動電壓。如第17圖所示,過驅動電壓係為儲存電位VSN1i與讀取電晶體512之臨界電壓VtR的差值。對應於不同的儲存電位VSN1i,將形成不同程度的過驅動電壓。第16圖中上方的曲線CVi對應於較高的過驅動電壓。讀取位元線RBL形成的電流與過驅動電壓形成正相關。亦即,讀取位元線RBL形成的電流與儲存電位VSN1i形成正相關。因此,於儲存節點SN可以儲存各種不同程度的儲存電位VSN1i,以使權重值Wi具有不同程度的類比數值。 The above weight value Wi has been described by taking the binary values "0" and "1" as an example. In another embodiment, the weight value Wi may also be an analog value with a fractional part. Please refer to Figures 16 and 17; Figure 16 shows the current-voltage relationship of the read bit line RBL, and Figure 17 illustrates the storage potential VSN1i. As shown in Figure 16, the curves CVi correspond to different overdrive voltages. As shown in Figure 17, the overdrive voltage is the difference between the storage potential VSN1i and the threshold voltage VtR of the read transistor 512. Different storage potentials VSN1i produce different overdrive voltages. The upper curve CVi in Figure 16 corresponds to a higher overdrive voltage. The current formed on the read bit line RBL is positively correlated with the overdrive voltage; that is, it is positively correlated with the storage potential VSN1i. Therefore, storage potentials VSN1i of various levels can be stored at the storage node SN, so that the weight value Wi takes analog values of different levels.

也就是說,如第17圖所示,在訓練模式M1之讀取運算程序P13中,儲存節點SN之不同的儲存電位VSN1i可以使讀取電晶體512具有不同的導通程度,以於讀取電晶體512形成對應不同權重值Wi之不同的讀取電流Ii。 That is, as shown in Figure 17, in the read operation procedure P13 of the training mode M1, different storage potentials VSN1i of the storage node SN give the read transistor 512 different conduction levels, so that the read transistor 512 produces different read currents Ii corresponding to different weight values Wi.

此外,請參照第18圖,其示例說明寫入電晶體511之臨界電壓VtW1i。在推論模式M2之讀取運算程序P23中,寫入電晶體511之不同的臨界電壓VtW1i可以使寫入電晶體511具有不同的導通程度,以於儲存節點SN形成不同的儲存電位VSN1i。儲存節點SN之不同的儲存電位VSN1i可以使讀取電晶體512具有不同的導通程度,以於讀取電晶體512形成對應不同權重值Wi之不同的讀取電流Ii。 In addition, please refer to Figure 18, which illustrates the threshold voltage VtW1i of the write transistor 511. In the read operation procedure P23 of the inference mode M2, different threshold voltages VtW1i give the write transistor 511 different conduction levels, so that different storage potentials VSN1i are formed at the storage node SN. The different storage potentials VSN1i of the storage node SN in turn give the read transistor 512 different conduction levels, so that the read transistor 512 produces different read currents Ii corresponding to different weight values Wi.
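The analog behavior above — a read current rising monotonically with the overdrive voltage VSN1i − VtR — can be sketched with a textbook square-law saturation model. The square-law form and all numeric parameters are assumptions for illustration; the description only states the positive correlation.

```python
# Hedged sketch: analog weights through the read-transistor overdrive.
# The square-law model I = K * (VSN - VtR)^2 is a standard long-channel
# approximation assumed here purely for illustration.

VT_R = 0.5   # assumed read-transistor threshold voltage VtR (V)
K = 1e-4     # assumed device constant (A/V^2)

def read_current(v_sn: float) -> float:
    overdrive = max(v_sn - VT_R, 0.0)  # zero when the transistor is off
    return K * overdrive * overdrive   # monotonically increasing in VSN

# A larger stored potential encodes a larger analog weight:
print(read_current(0.4))                      # below VtR -> no current
print(read_current(0.8) < read_current(1.2))  # higher VSN -> higher Ii
```

Any monotonic current-versus-overdrive relation would serve the same purpose: it maps a continuum of stored potentials onto a continuum of read currents, giving the fractional weight values the paragraph describes.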

此外,請參照第19~21圖,第19圖繪示推論模式M2之一預放電程序P20及讀取運算程序P23的電壓曲線圖,第20圖繪示根據一實施例之寫入電晶體511在推論模式M2下的特性曲線圖,第21圖示例說明預放電程序P20。如第19圖所示,在執行讀取運算程序P23之前,會先執行預放電程序P20。在預放電程序P20中,寫入字元線WWL被施加一導通偏壓Vpass,以導通寫入電晶體511。如第20圖所示,導通偏壓Vpass高於寫入電晶體511之較高的臨界電壓VtW0,以使寫入電晶體511確實被導通。如第21圖所示,寫入電晶體511被導通後,儲存節點SN被放電,使得留存於儲存節點SN的寄生電荷可以被清除,以避免影響讀取運算程序P23的結果。 In addition, please refer to Figures 19 to 21. Figure 19 shows the voltage waveforms of a pre-discharge procedure P20 and the read operation procedure P23 of the inference mode M2, Figure 20 shows a characteristic curve diagram of the write transistor 511 in the inference mode M2 according to an embodiment, and Figure 21 illustrates the pre-discharge procedure P20. As shown in Figure 19, the pre-discharge procedure P20 is performed before the read operation procedure P23. In the pre-discharge procedure P20, a pass bias voltage Vpass is applied to the write word line WWL to turn on the write transistor 511. As shown in Figure 20, the pass bias voltage Vpass is higher than the higher threshold voltage VtW0 of the write transistor 511, so that the write transistor 511 is reliably turned on. As shown in Figure 21, after the write transistor 511 is turned on, the storage node SN is discharged, so that the parasitic charge remaining at the storage node SN is cleared and does not affect the result of the read operation procedure P23.
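The pre-discharge sequencing can likewise be sketched as a toy two-step model; the dictionary-based cell state and the voltage values are assumptions for illustration, not part of the embodiment.

```python
# Illustrative sequencing of the pre-discharge procedure P20 before the
# inference-mode read P23. Voltage values are assumed, not from the text.

VT_W0 = 2.0    # assumed higher threshold voltage VtW0
V_PASS = 2.5   # pass bias Vpass, chosen above VtW0 so conduction is certain

def pre_discharge(cell: dict) -> None:
    """Turn the write transistor fully on and drain the storage node SN."""
    assert V_PASS > VT_W0          # Vpass must exceed the higher threshold
    cell["v_sn"] = 0.0             # parasitic charge on SN is cleared

cell = {"v_sn": 0.37}              # leftover parasitic charge (arbitrary)
pre_discharge(cell)
print(cell["v_sn"])                # storage node discharged before the read
```

Because Vpass exceeds even the higher threshold VtW0, the discharge works regardless of which weight state the write transistor holds, so the subsequent read starts from a clean storage node.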

根據上述實施例,採用2T結構之通用記憶體500可以適用於人工智慧之訓練模式M1與推論模式M2。在訓練模式M1及推論模式M2,權重值Wi儲存於單位記憶胞510之不同處。通用記憶體500執行於訓練模式M1時,可以提供如同動態隨機存取記憶體(Dynamic Random Access Memory,DRAM)的高可靠度,以滿足大量的權重值Wi更新動作;通用記憶體500執行於推論模式M2時,可以提供如同非揮發性記憶體之非揮發性與高保持力,以使權重值Wi能夠在低耗電的情況下保持良好。 According to the above embodiments, the universal memory 500 with the 2T structure is applicable to both the training mode M1 and the inference mode M2 of artificial intelligence. In the training mode M1 and the inference mode M2, the weight value Wi is stored at different locations of the unit memory cell 510. When operating in the training mode M1, the universal memory 500 provides high reliability like a dynamic random access memory (DRAM) to support the large number of updates of the weight values Wi; when operating in the inference mode M2, it provides the non-volatility and high retention of a non-volatile memory, so that the weight values Wi are retained well at low power consumption.

綜上所述,雖然本揭露已以實施例揭露如上,然其並非用以限定本揭露。本揭露所屬技術領域中具有通常知識者,在不脫離本揭露之精神和範圍內,當可作各種之更動與潤飾。因此,本揭露之保護範圍當視後附之申請專利範圍所界定者為準。 In summary, although the present disclosure has been disclosed as above by the embodiments, it is not intended to limit the present disclosure. Those with ordinary knowledge in the technical field to which the present disclosure belongs can make various changes and modifications without departing from the spirit and scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the scope defined by the attached patent application.

510:單位記憶胞 510: Unit memory cell

511:寫入電晶體 511: Write transistor

512:讀取電晶體 512: Read transistors

FG:電荷儲存層 FG: Charge storage layer

RBL:讀取位元線 RBL: Read Bit Line

RWL:讀取字元線 RWL: Read character line

WBL:寫入位元線 WBL: Write Bit Line

WWL:寫入字元線 WWL: Write Character Line

Claims (10)

一種適用於記憶體內運算(In-Memory Computing,IMC)之通用記憶體,包括:至少一寫入字元線;至少一單位記憶胞,包括:一寫入電晶體,該寫入電晶體之閘極連接於該寫入字元線,該寫入電晶體係為一臨界電壓可調電晶體;及一讀取電晶體,該讀取電晶體之閘極連接於該寫入電晶體之汲極或源極;以及至少一讀取字元線,連接於該讀取電晶體之汲極或源極;其中在一訓練模式中,介於該寫入電晶體與該讀取電晶體之間之一儲存節點的一儲存電位代表該單位記憶胞之一權重值(weight);在一推論模式之一權重設定程序中,該寫入電晶體之一臨界電壓代表該單位記憶胞之該權重值。 A universal memory for in-memory computing (IMC), comprising: at least one write word line; at least one unit memory cell, comprising: a write transistor, the gate of the write transistor being connected to the write word line, the write transistor being a transistor with an adjustable threshold voltage; and a read transistor, the gate of the read transistor being connected to the drain or the source of the write transistor; and at least one read word line, connected to the drain or the source of the read transistor; wherein, in a training mode, a storage potential of a storage node between the write transistor and the read transistor represents a weight value of the unit memory cell, and in a weight setting procedure of an inference mode, a threshold voltage of the write transistor represents the weight value of the unit memory cell.

如請求項1所述之適用於記憶體內運算之通用記憶體,其中該寫入電晶體之該閘極具有一電荷儲存層。 The universal memory for in-memory computing according to claim 1, wherein the gate of the write transistor has a charge storage layer.
一種適用於記憶體內運算(In-Memory Computing,IMC)之通用記憶體的操作方法,其中該通用記憶體包括至少一單位記憶胞,該單位記憶胞包括一寫入電晶體及一讀取電晶體,該讀取電晶體之閘極連接於該寫入電晶體之汲極或源極,該操作方法包括:進行一訓練模式之一權重變更程序,在該訓練模式之該權重變更程序中,對介於該寫入電晶體與該讀取電晶體之間的一儲存節點充電或放電至一儲存電位,該儲存節點之該儲存電位代表該單位記憶胞之一權重值(weight);以及進行一推論模式之一權重設定程序,在該推論模式之該權重設定程序中,對該寫入電晶體進行載子注入機制,以改變該寫入電晶體之一臨界電壓,該寫入電晶體之該臨界電壓代表該單位記憶胞之該權重值。 An operation method of a universal memory for in-memory computing (IMC), wherein the universal memory includes at least one unit memory cell, the unit memory cell includes a write transistor and a read transistor, and the gate of the read transistor is connected to the drain or the source of the write transistor, the operation method comprising: performing a weight change procedure of a training mode, in which a storage node between the write transistor and the read transistor is charged or discharged to a storage potential, the storage potential of the storage node representing a weight value of the unit memory cell; and performing a weight setting procedure of an inference mode, in which a carrier injection mechanism is performed on the write transistor to change a threshold voltage of the write transistor, the threshold voltage of the write transistor representing the weight value of the unit memory cell.

如請求項3所述之適用於記憶體內運算之通用記憶體的操作方法,其中在該訓練模式之該權重變更程序中,當該權重值改變,該寫入電晶體之該臨界電壓固定不變。 The operation method of the universal memory for in-memory computing according to claim 3, wherein in the weight change procedure of the training mode, the threshold voltage of the write transistor remains fixed while the weight value changes.
如請求項3所述之適用於記憶體內運算之通用記憶體的操作方法,更包括:進行該訓練模式之一讀取運算程序,在該訓練模式之該讀取運算程序中,該儲存節點之該儲存電位改變該讀取電晶體的導通程度,以於該讀取電晶體形成對應該權重值之一讀取電流。 The operation method of the universal memory for in-memory computing according to claim 3, further comprising: performing a read operation procedure of the training mode, in which the storage potential of the storage node changes the conduction level of the read transistor, so that a read current corresponding to the weight value is formed in the read transistor.

如請求項3所述之適用於記憶體內運算之通用記憶體的操作方法,更包括:進行該推論模式之一讀取運算程序,在該推論模式之該讀取運算程序中,該權重值固定不變。 The operation method of the universal memory for in-memory computing according to claim 3, further comprising: performing a read operation procedure of the inference mode, in which the weight value remains fixed.

如請求項3所述之適用於記憶體內運算之通用記憶體的操作方法,更包括:進行該推論模式之一讀取運算程序,在該推論模式之該讀取運算程序中,該寫入電晶體之該臨界電壓控制該寫入電晶體的導通程度,以改變該儲存節點的該儲存電位,並於該讀取電晶體形成對應該權重值之一讀取電流。 The operation method of the universal memory for in-memory computing according to claim 3, further comprising: performing a read operation procedure of the inference mode, in which the threshold voltage of the write transistor controls the conduction level of the write transistor to change the storage potential of the storage node, so that a read current corresponding to the weight value is formed in the read transistor.

如請求項3所述之適用於記憶體內運算之通用記憶體的操作方法,其中該寫入電晶體連接一寫入位元線,在該推論模式之一讀取運算程序中,該寫入位元線之偏壓大於該讀取電晶體之一臨界電壓。 The operation method of the universal memory for in-memory computing according to claim 3, wherein the write transistor is connected to a write bit line, and in a read operation procedure of the inference mode, the bias voltage on the write bit line is greater than a threshold voltage of the read transistor.
如請求項3所述之適用於記憶體內運算之通用記憶體的操作方法,更包括:進行該推論模式之一預放電程序,該推論模式之該預放電程序執行於該推論模式之該讀取運算程序之前,在該推論模式之該預放電程序中,該儲存節點被放電。 The operation method of the universal memory for in-memory computing according to claim 3, further comprising: performing a pre-discharge procedure of the inference mode, the pre-discharge procedure of the inference mode being performed before the read operation procedure of the inference mode, wherein the storage node is discharged in the pre-discharge procedure of the inference mode.

一種適用於記憶體內運算(In-Memory Computing,IMC)之通用記憶體,包括:至少一寫入字元線;至少一單位記憶胞,包括:一寫入電晶體,該寫入電晶體之閘極連接於該寫入字元線,該寫入電晶體係為一臨界電壓可調電晶體;及一讀取電晶體,該讀取電晶體之閘極連接於該寫入電晶體之汲極或源極;以及至少一讀取字元線,連接於該讀取電晶體之汲極或源極;其中,該通用記憶體通用於一訓練模式及一推論模式,在該訓練模式及該推論模式,一權重值儲存於該單位記憶胞之不同處。 A universal memory for in-memory computing (IMC), comprising: at least one write word line; at least one unit memory cell, comprising: a write transistor, the gate of the write transistor being connected to the write word line, the write transistor being a transistor with an adjustable threshold voltage; and a read transistor, the gate of the read transistor being connected to the drain or the source of the write transistor; and at least one read word line, connected to the drain or the source of the read transistor; wherein the universal memory is used for both a training mode and an inference mode, and in the training mode and the inference mode, a weight value is stored at different locations of the unit memory cell.
TW112113062A 2023-01-16 2023-04-07 Universal memory for in-memory computing and operation method thereof TWI856602B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363439157P 2023-01-16 2023-01-16
US63/439,157 2023-01-16

Publications (2)

Publication Number Publication Date
TW202431262A TW202431262A (en) 2024-08-01
TWI856602B true TWI856602B (en) 2024-09-21

Family

ID=93260234

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112113062A TWI856602B (en) 2023-01-16 2023-04-07 Universal memory for in-memory computing and operation method thereof

Country Status (1)

Country Link
TW (1) TWI856602B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220108742A1 (en) * 2020-10-02 2022-04-07 Qualcomm Incorporated Differential charge sharing for compute-in-memory (cim) cell
TW202230164A (en) * 2021-01-28 2022-08-01 旺宏電子股份有限公司 Multiplication and addition operation device and control method for multiplication and addition operation thereof
TW202230367A (en) * 2021-01-28 2022-08-01 旺宏電子股份有限公司 In-memory computation device and in-memory computation method


Also Published As

Publication number Publication date
TW202431262A (en) 2024-08-01

Similar Documents

Publication Publication Date Title
US11151439B2 (en) Computing in-memory system and method based on skyrmion racetrack memory
CN113467751B (en) Analog domain memory internal computing array structure based on magnetic random access memory
TWI699711B (en) Memory devices and manufacturing method thereof
TWI698884B (en) Memory devices and methods for operating the same
CN110543937A (en) Neural network, operation method and neural network information processing system
CN110880501A (en) Transposition feedback field effect electronic device and its arrangement circuit
CN116913335A (en) A non-op amp clamped in-memory computing circuit based on semiconductor memory device 2T0C
Soliman et al. Efficient FeFET crossbar accelerator for binary neural networks
TWI856602B (en) Universal memory for in-memory computing and operation method thereof
CN116741235A (en) Read-write reconfigurable memristor memory integrated system
US12347481B2 (en) Universal memory for in-memory computing and operation method thereof
US20250149102A1 (en) Storage array
CN118351912A (en) Universal memory suitable for in-memory operation and operation method thereof
TWI860617B (en) Memory device for computing in-memory
CN102842340B (en) SRAM Circuit Based on PNPN Structure and Its Reading and Writing Method
CN117558312A (en) A ferroelectric random access memory array and its control method
CN112017701B (en) Threshold voltage adjusting device and threshold voltage adjusting method
JPS62170097A (en) Semiconductor storage device
CN111061926B (en) Method for realizing data search in NAND type memory array
US12094564B2 (en) Memory device and computing method thereof
KR102797123B1 (en) Synapse circuit, operation method thereof and neuromorphic device including synapse circuit
TWI889367B (en) Memory array circuit, ternary content addressable memory and operation method thereof
JP7669629B2 (en) Universal Memory for In-Memory Computing
US20250006271A1 (en) Memory array circuit, ternary content addressable memory and operation method thereof
CN111243648A (en) Flash cells, flash modules, and flash chips