
Chen et al., 2022 - Google Patents

A low-cost training method of ReRAM inference accelerator chips for binarized neural networks to recover accuracy degradation due to statistical variabilities


Document ID
6782159589530219906
Authors
Chen Z
Ohsawa T
Publication year
2022
Publication venue
IEICE Transactions on Electronics

Snippet

A new software-based in-situ training (SBIST) method to achieve high accuracy is proposed for binarized neural network inference accelerator chips, in which measured offsets in the sense amplifiers (activation binarizers) are transformed into biases in the training …

Classifications

    • G06N3/0635: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using analogue electronic means
    • G06N3/04: Architectures, e.g. interconnection topology
    • G06N3/082: Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
    • G11C11/56: Digital stores using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C11/02: Digital stores using magnetic elements
    • G06N3/126: Genetic algorithms, i.e. information processing using digital simulations of the genetic system
    • G06N99/005: Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
    • G06F17/50: Computer-aided design
    • G11C11/21: Digital stores using electric elements
    • G11C15/04: Associative or content-addressed stores using semiconductor elements
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled

Similar Documents

Jung et al. A crossbar array of magnetoresistive memory devices for in-memory computing
Yu et al. RRAM for compute-in-memory: From inference to training
Sebastian et al. Memory devices and applications for in-memory computing
Yu et al. Compute-in-memory chips for deep learning: Recent trends and prospects
Jaiswal et al. 8T SRAM cell as a multibit dot-product engine for beyond von Neumann computing
Kaiser et al. Hardware-aware in situ learning based on stochastic magnetic tunnel junctions
Daniels et al. Energy-efficient stochastic computing with superparamagnetic tunnel junctions
Yu Neuro-inspired computing with emerging nonvolatile memorys
Cheng et al. TIME: A training-in-memory architecture for RRAM-based deep neural networks
Marinella et al. Multiscale co-design analysis of energy, latency, area, and accuracy of a ReRAM analog neural training accelerator
Wei et al. Trends and challenges in the circuit and macro of RRAM-based computing-in-memory systems
Giacomin et al. A robust digital RRAM-based convolutional block for low-power image processing and learning applications
Gebregiorgis et al. Tutorial on memristor-based computing for smart edge applications
Wu et al. Bulk‐switching memristor‐based compute‐in‐memory module for deep neural network training
Li et al. An ADC-less RRAM-based computing-in-memory macro with binary CNN for efficient edge AI
Lee et al. Operation scheme of multi-layer neural networks using NAND flash memory as high-density synaptic devices
Xiang et al. Efficient and robust spike-driven deep convolutional neural networks based on NOR flash computing array
Liu et al. Bayesian neural networks using magnetic tunnel junction-based probabilistic in-memory computing
Pedretti et al. Differentiable content addressable memory with memristors
Sengupta et al. Spin-transfer torque magnetic neuron for low power neuromorphic computing
Mackin et al. Weight programming in DNN analog hardware accelerators in the presence of NVM variability
Shreya et al. Energy-efficient all-spin BNN using voltage-controlled spin-orbit torque device for digit recognition
Jing et al. VSDCA: A voltage sensing differential column architecture based on 1T2R RRAM array for computing-in-memory accelerators
Lee et al. Ferroelectric field-effect transistors for binary neural network with 3-D NAND architecture
Yi et al. Improved Hopfield network optimization using manufacturable three-terminal electronic synapses