mixup-cifar10 is the official PyTorch implementation of "mixup: Beyond Empirical Risk Minimization" (Zhang et al., ICLR 2018), the paper that introduced mixup, a simple yet powerful data augmentation technique for training deep neural networks. mixup generates synthetic training examples as convex combinations of pairs of inputs and their labels: given examples (x_i, y_i) and (x_j, y_j), the model is trained on x̃ = λ x_i + (1 − λ) x_j with label ỹ = λ y_i + (1 − λ) y_j, where λ is drawn from a Beta(α, α) distribution.

By interpolating both inputs and labels, mixup acts as a regularizer that encourages linear behavior in between training examples. This yields smoother decision boundaries, reduces overfitting, and improves robustness to label noise and adversarial examples. This repository applies mixup to the CIFAR-10 dataset, showcasing its effectiveness in improving the generalization, stability, and calibration of neural networks.
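The core operation is small enough to show in full. Below is a minimal sketch of mixing a batch with a shuffled copy of itself; the helper name `mixup_data` and its exact signature are illustrative and may differ from the repository's code:

```python
import numpy as np
import torch

def mixup_data(x, y, alpha=1.0):
    """Return convex combinations of a batch with a shuffled copy of itself.

    lam ~ Beta(alpha, alpha); alpha=1.0 draws lam uniformly from [0, 1],
    while alpha -> 0 concentrates lam near 0 or 1 (i.e., little mixing).
    """
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)  # random pairing within the batch
    mixed_x = lam * x + (1 - lam) * x[index]            # interpolate the inputs
    y_a, y_b = y, y[index]                              # keep both label sets for the loss
    return mixed_x, y_a, y_b, lam
```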
Features
- Implementation of mixup data augmentation for CIFAR-10 classification
- Trains neural networks on convex combinations of pairs of inputs and labels (see the training-loop sketch below)
- Reproduces the CIFAR-10 results of the original ICLR 2018 publication
- Demonstrates improved generalization and robustness over standard empirical risk minimization
- Compatible with PyTorch and GPU-accelerated training
- Simple, easily extensible codebase for research and experimentation
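Because the mixed label ỹ is a convex combination of two one-hot targets, and cross-entropy is linear in the target, the loss on a mixed example equals the same convex combination of the losses on the two original labels. A sketch of how this plugs into an ordinary training loop, assuming the `mixup_data` helper above (`mixup_criterion` is likewise an illustrative name):

```python
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def mixup_criterion(criterion, pred, y_a, y_b, lam):
    # Convex combination of the losses on the two original label sets,
    # equivalent to cross-entropy against the interpolated soft label.
    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)

# Inside a standard training loop it would be used roughly as follows:
# for inputs, targets in train_loader:
#     inputs, y_a, y_b, lam = mixup_data(inputs, targets, alpha=1.0)
#     outputs = model(inputs)
#     loss = mixup_criterion(criterion, outputs, y_a, y_b, lam)
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
```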