A low-cost training method of ReRAM inference accelerator chips for binarized neural networks to recover accuracy degradation due to statistical variabilities
Chen et al., 2022
- Document ID
- 6782159589530219906
- Author
- Chen Z
- Ohsawa T
- Publication year
- 2022
- Publication venue
- IEICE Transactions on Electronics
Snippet
A new software-based in-situ training (SBIST) method to achieve high accuracies is proposed for binarized neural network inference accelerator chips, in which measured offsets in sense amplifiers (activation binarizers) are transformed into biases in the training …
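The snippet's core transformation is: offsets measured at the chip's sense amplifiers are carried as fixed bias terms in the software training model, so the retrained weights compensate for each device's variability. Below is a minimal NumPy sketch of that idea, not the paper's implementation; the names `chip_layer`, `training_layer`, and `measured_offsets`, and the offset statistics, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_binarize(x):
    # Binarize to {-1, +1}; zero maps to +1, i.e. a threshold-at-zero binarizer.
    return np.where(x >= 0, 1.0, -1.0)

n_in, n_out = 64, 10

# Hypothetical per-column offsets measured from the chip's sense amplifiers
# (made-up statistics; in the paper these come from the fabricated device).
measured_offsets = rng.normal(0.0, 0.5, size=n_out)

# Binary weights as programmed into the ReRAM crossbar.
W = sign_binarize(rng.normal(size=(n_in, n_out)))

def chip_layer(x):
    # What the hardware computes: each sense amplifier's offset effectively
    # shifts that column's binarization threshold.
    return sign_binarize(x @ W + measured_offsets)

def training_layer(x, bias):
    # Software training model: the measured offset is folded in as a fixed
    # bias, so the training-time pre-activation matches the chip's.
    return sign_binarize(x @ W + bias)

# With the offsets folded into the bias, the two models agree exactly,
# which is what lets software retraining absorb the hardware variability.
x = sign_binarize(rng.normal(size=(5, n_in)))
assert np.array_equal(chip_layer(x), training_layer(x, measured_offsets))
```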
Classifications
- G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
- G06N3/04—Architectures, e.g. interconnection topology
- G06N3/082—Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
- G11C11/56—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
- G11C11/02—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using magnetic elements
- G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
- G06F17/50—Computer-aided design
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C15/04—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
Similar Documents
| Publication | Title |
|---|---|
| Jung et al. | A crossbar array of magnetoresistive memory devices for in-memory computing |
| Yu et al. | RRAM for compute-in-memory: From inference to training |
| Sebastian et al. | Memory devices and applications for in-memory computing |
| Yu et al. | Compute-in-memory chips for deep learning: Recent trends and prospects |
| Jaiswal et al. | 8T SRAM cell as a multibit dot-product engine for beyond von Neumann computing |
| Kaiser et al. | Hardware-aware in situ learning based on stochastic magnetic tunnel junctions |
| Daniels et al. | Energy-efficient stochastic computing with superparamagnetic tunnel junctions |
| Yu | Neuro-inspired computing with emerging nonvolatile memorys |
| Cheng et al. | TIME: A training-in-memory architecture for RRAM-based deep neural networks |
| Marinella et al. | Multiscale co-design analysis of energy, latency, area, and accuracy of a ReRAM analog neural training accelerator |
| Wei et al. | Trends and challenges in the circuit and macro of RRAM-based computing-in-memory systems |
| Giacomin et al. | A robust digital RRAM-based convolutional block for low-power image processing and learning applications |
| Gebregiorgis et al. | Tutorial on memristor-based computing for smart edge applications |
| Wu et al. | Bulk-switching memristor-based compute-in-memory module for deep neural network training |
| Li et al. | An ADC-less RRAM-based computing-in-memory macro with binary CNN for efficient edge AI |
| Lee et al. | Operation scheme of multi-layer neural networks using NAND flash memory as high-density synaptic devices |
| Xiang et al. | Efficient and robust spike-driven deep convolutional neural networks based on NOR flash computing array |
| Liu et al. | Bayesian neural networks using magnetic tunnel junction-based probabilistic in-memory computing |
| Pedretti et al. | Differentiable content addressable memory with memristors |
| Sengupta et al. | Spin-transfer torque magnetic neuron for low power neuromorphic computing |
| Mackin et al. | Weight programming in DNN analog hardware accelerators in the presence of NVM variability |
| Shreya et al. | Energy-efficient all-spin BNN using voltage-controlled spin-orbit torque device for digit recognition |
| Jing et al. | VSDCA: A voltage sensing differential column architecture based on 1T2R RRAM array for computing-in-memory accelerators |
| Lee et al. | Ferroelectric field-effect transistors for binary neural network with 3-D NAND architecture |
| Yi et al. | Improved Hopfield network optimization using manufacturable three-terminal electronic synapses |