Open Source Computer Vision Libraries - Page 2

  • 1
    ArrayFire

    ArrayFire, a general-purpose GPU library

    ArrayFire is a general-purpose tensor library that simplifies software development for the parallel architectures found in CPUs, GPUs, and other hardware acceleration devices, serving users in every technical computing market. Data structures in ArrayFire are smartly managed to avoid costly memory transfers and to take advantage of each performance feature provided by the underlying hardware. The library offers rigorous benchmarks and tests ensuring top performance and numerical accuracy, cross-platform compatibility with support for CUDA, OpenCL, and native CPU on Windows, Mac, and Linux, and built-in visualization functions through Forge. The community of ArrayFire developers invites you to build with us if you're interested in writing top-performing tensor functions; together we can fulfill The ArrayFire Mission under a Code of Conduct that promotes a respectful and friendly building experience.
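
    ArrayFire is C++-first but also ships official Python bindings. As a flavor of the array-style API, here is a minimal sketch, assuming the `arrayfire` Python package is installed with a working backend:

    ```python
    # Minimal sketch, assuming the `arrayfire` Python bindings and a
    # CPU, CUDA, or OpenCL backend are available.
    import arrayfire as af

    af.set_backend("cpu")         # or "cuda" / "opencl" if built with them

    a = af.randu(512, 512)        # random matrices allocated on the device
    b = af.randu(512, 512)

    # Operations stay on the device; ArrayFire's JIT can fuse elementwise
    # work and only copies data back to the host when explicitly requested.
    c = af.matmul(a, b) + 0.5 * a
    print(af.sum(c))              # the reduction forces evaluation
    ```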
    Downloads: 4 This Week
  • 2
    Hello AI World

    Guide to deploying deep-learning inference networks

    Hello AI World is a great way to start using Jetson and experiencing the power of AI. In just a couple of hours, you can have a set of deep learning inference demos up and running for real-time image classification and object detection on your Jetson Developer Kit with JetPack SDK and NVIDIA TensorRT. The tutorial focuses on networks related to computer vision and includes the use of live cameras. You’ll also get to code your own easy-to-follow recognition program in Python or C++, and train your own DNN models onboard Jetson with PyTorch. Ready to dive into deep learning? It only takes two days. We’ll provide you with all the tools you need, including easy-to-follow guides, software samples such as TensorRT code, and pre-trained network models, including ImageNet and DetectNet examples. Follow these directions to integrate deep learning into your platform of choice and quickly develop a proof-of-concept design.
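
    The Python API follows the pattern below; a minimal sketch based on the project's object-detection examples, assuming the jetson-inference packages are installed on a Jetson (model name and stream URIs are illustrative):

    ```python
    # Sketch of the Hello AI World Python API; assumes jetson-inference and
    # jetson-utils are installed on a Jetson device.
    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.videoSource("csi://0")       # MIPI CSI camera
    display = jetson.utils.videoOutput("display://0")

    while display.IsStreaming():
        img = camera.Capture()
        detections = net.Detect(img)                   # TensorRT-accelerated
        display.Render(img)
        display.SetStatus(f"detected {len(detections)} objects")
    ```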
    Downloads: 3 This Week
  • 3
    Machine Learning PyTorch Scikit-Learn

    Code Repository for Machine Learning with PyTorch and Scikit-Learn

    Initially, this project started as the 4th edition of Python Machine Learning. However, after putting so much passion and hard work into the changes and new topics, we thought it deserved a new title. So, what’s new? There is plenty of new content, including the switch from TensorFlow to PyTorch, new chapters on graph neural networks and transformers, a new section on gradient boosting, and much more that I will detail in a separate blog post. For those interested in what this book covers in general, I’d describe it as a comprehensive resource on the fundamental concepts of machine learning and deep learning. The first half of the book introduces readers to machine learning using scikit-learn, the de facto approach for working with tabular datasets. The second half focuses on deep learning, including applications to natural language processing and computer vision.
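
    As a flavor of the scikit-learn workflow the first half teaches, here is a generic sketch (illustrative, not code from the repository):

    ```python
    # Generic scikit-learn workflow sketch (not taken from the book's repo).
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=1
    )

    # Pipeline = scaling + classifier, the pattern the book builds on.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
    clf.fit(X_train, y_train)
    print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
    ```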
    Downloads: 3 This Week
  • 4
    NetVLAD

    NetVLAD: CNN architecture for weakly supervised place recognition

    NetVLAD is a deep learning-based image descriptor framework developed by Relja Arandjelović for place recognition and image retrieval. It extends standard CNNs with a trainable VLAD (Vector of Locally Aggregated Descriptors) layer to create compact, robust global descriptors from image features. This implementation includes training code and pretrained models using the Pittsburgh and Tokyo datasets.
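
    The reference implementation is MATLAB, but the trainable VLAD layer is compact enough to sketch; the following is an illustrative PyTorch re-implementation of the aggregation idea, not the repository's code:

    ```python
    # Illustrative PyTorch sketch of NetVLAD-style aggregation
    # (the reference implementation is MATLAB/MatConvNet).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NetVLAD(nn.Module):
        def __init__(self, num_clusters=64, dim=512):
            super().__init__()
            self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
            self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)

        def forward(self, x):                      # x: (B, D, H, W) CNN features
            soft = F.softmax(self.assign(x), dim=1).flatten(2)   # (B, K, N)
            x = x.flatten(2)                                     # (B, D, N)
            # Residuals of each descriptor to each centroid, weighted by
            # the soft assignment, summed over spatial locations.
            resid = x.unsqueeze(1) - self.centroids[None, :, :, None]
            vlad = (soft.unsqueeze(2) * resid).sum(-1)           # (B, K, D)
            vlad = F.normalize(vlad, dim=2)                      # intra-normalization
            return F.normalize(vlad.flatten(1), dim=1)           # global descriptor

    desc = NetVLAD()(torch.randn(2, 512, 7, 7))
    print(desc.shape)                              # torch.Size([2, 32768])
    ```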
    Downloads: 3 This Week
  • 5
    Phi-3-MLX

    Phi-3.5 for Mac: Locally-run Vision and Language Models

    Phi-3-Vision-MLX is an Apple MLX (machine learning on Apple silicon) implementation of Phi-3 Vision, a lightweight multi-modal model designed for vision and language tasks. It focuses on running vision-language AI efficiently on Apple hardware like M1 and M2 chips.
    Downloads: 3 This Week
  • 6
    Mobile Robot Programming Toolkit (MRPT)

    **MOVED TO GITHUB** ==> https://github.com/MRPT/mrpt

    The Mobile Robot Programming Toolkit (MRPT) is an extensive, cross-platform, open source C++ library that helps robotics researchers design and implement algorithms for localization, SLAM, navigation, and computer vision. http://www.mrpt.org/
    Downloads: 22 This Week
  • 7
    BoofCV

    BoofCV is an open source Java library for real-time computer vision.

    BoofCV is an open source Java library for real-time computer vision and robotics applications. Written from scratch for ease of use and high performance, it provides both basic and advanced features needed for creating a computer vision system. Functionality ranges from optimized low-level image processing routines (e.g., convolution, interpolation, gradients) to high-level capabilities such as image stabilization. BoofCV is released under an Apache 2.0 license for both academic and commercial use.
    Downloads: 19 This Week
  • 8
    DnCNN

    Beyond a Gaussian Denoiser: Residual Learning of Deep CNN

    This repository implements DnCNN, the denoising convolutional neural network from the paper “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising”. DnCNN is a feedforward convolutional neural network that learns to predict the residual noise (i.e., the noise map) of a noisy input image, which is then subtracted to yield a clean image. This formulation allows efficient denoising, supports blind Gaussian denoising (i.e., unknown noise levels), and in some variants extends to related tasks such as image super-resolution and JPEG deblocking; a single model can handle multiple noise levels. The repository includes training code (using MatConvNet / MATLAB), demo scripts, pretrained models, and evaluation routines.
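
    The reference code is MATLAB, but the residual formulation is easy to sketch; an illustrative PyTorch version of the idea (not the repository's code):

    ```python
    # Illustrative PyTorch sketch of DnCNN's residual learning
    # (the reference implementation is MATLAB/MatConvNet).
    import torch
    import torch.nn as nn

    def make_dncnn(depth=17, channels=64):
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))  # predicts the noise map
        return nn.Sequential(*layers)

    model = make_dncnn()
    noisy = torch.randn(1, 1, 64, 64)    # stand-in for a noisy grayscale image
    denoised = noisy - model(noisy)      # clean image = input minus predicted noise
    ```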
    Downloads: 2 This Week
  • 9
    Kornia

    Open Source Differentiable Computer Vision Library

    Kornia is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend, both for efficiency and to take advantage of reverse-mode auto-differentiation to define and compute the gradient of complex functions. Inspired by existing packages, the library is composed of a set of modules containing operators that can be inserted into neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection directly on tensors. Kornia fills the gap between classical and deep computer vision by implementing standard and advanced vision algorithms for AI, with libraries and initiatives driven by community needs.
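
    A minimal sketch of the tensor-native, differentiable style this enables:

    ```python
    # Kornia operators work directly on (B, C, H, W) tensors and are
    # differentiable, so they can sit inside a training loop.
    import torch
    import kornia

    img = torch.rand(1, 3, 128, 128, requires_grad=True)

    blurred = kornia.filters.gaussian_blur2d(img, kernel_size=(5, 5), sigma=(1.5, 1.5))
    edges = kornia.filters.sobel(blurred)          # edge magnitude map

    edges.mean().backward()                        # gradients flow to the input
    print(img.grad.shape)                          # torch.Size([1, 3, 128, 128])
    ```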
    Downloads: 2 This Week
  • 10
    OpenCE

    Contrast Enhancement Techniques for low-light images

    OpenCE is an open source collection of contrast enhancement techniques for low-light images. Rather than a single monolithic tool, the repository gathers implementations of published low-light enhancement algorithms alongside objective evaluation code, so that methods can be compared under a common setup. It serves researchers and practitioners as a baseline suite for low-light image enhancement and as a starting point for developing and benchmarking new enhancement techniques.
    Downloads: 2 This Week
  • 11
    SAHI

    A lightweight vision library for performing large-scale object detection

    A lightweight vision library for performing large-scale object detection and instance segmentation. Object detection and instance segmentation are among the most important application areas in computer vision; however, detecting small objects and running inference on large images remain major issues in practical usage. SAHI helps developers overcome these real-world problems with a set of vision utilities. Detecting small objects, and objects far away in the scene, is a major challenge in surveillance applications: such objects are represented by a small number of pixels and lack sufficient detail, making them difficult to detect with conventional detectors. SAHI (Slicing Aided Hyper Inference) is an open-source framework that provides a generic slicing-aided inference and fine-tuning pipeline for small object detection.
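
    A minimal sketch of sliced inference with the library's API (the detector type, checkpoint path, and slice sizes are illustrative):

    ```python
    # Sliced inference sketch; assumes an Ultralytics YOLOv8 checkpoint.
    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction

    model = AutoDetectionModel.from_pretrained(
        model_type="yolov8", model_path="yolov8n.pt", confidence_threshold=0.3
    )

    # The image is cut into overlapping slices, each slice is run through the
    # detector, and per-slice detections are merged back into full-image
    # coordinates, which is what makes small objects recoverable.
    result = get_sliced_prediction(
        "large_scene.jpg", model,
        slice_height=512, slice_width=512,
        overlap_height_ratio=0.2, overlap_width_ratio=0.2,
    )
    print(len(result.object_prediction_list), "objects found")
    ```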
    Downloads: 2 This Week
  • 12
    SAM 2

    The repository provides code for running inference with SAM 2

    SAM 2 is a next-generation version of the Segment Anything Model (SAM), designed to improve performance, generalization, and efficiency in promptable segmentation tasks for images and videos. It retains the core promptable interface—accepting points, boxes, or masks—but incorporates architectural and training enhancements to produce higher-fidelity masks, better boundary adherence, and robustness to complex scenes. The updated model is optimized for faster inference and lower memory use, enabling real-time interactivity even on larger images or constrained hardware. SAM 2 ships with pretrained weights and easy-to-use APIs, enabling developers and researchers to integrate promptable segmentation into annotation tools, vision pipelines, or downstream tasks. The project also includes scripts and notebooks to compare SAM 2 against SAM on edge cases, benchmarks showing improvements, and evaluation suites to measure mask-quality metrics like IoU and boundary error.
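
    A sketch of point-prompted image segmentation following the repository's predictor API (config and checkpoint paths are examples):

    ```python
    # Point-prompted segmentation with SAM 2's image predictor; the config
    # and checkpoint paths are examples from the repository's docs.
    import numpy as np
    from sam2.build_sam import build_sam2
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    predictor = SAM2ImagePredictor(
        build_sam2("configs/sam2.1/sam2.1_hiera_l.yaml",
                   "checkpoints/sam2.1_hiera_large.pt")
    )

    image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an RGB image
    predictor.set_image(image)

    # One positive click; box and mask prompts go through the same call.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]),
        point_labels=np.array([1]),
    )
    print(masks.shape, scores)
    ```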
    Downloads: 2 This Week
  • 13
    VGGT

    [CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer

    VGGT is a transformer-based framework aimed at unifying classic visual geometry tasks—such as depth estimation, camera pose recovery, point tracking, and correspondence—under a single model. Rather than training separate networks per task, it shares an encoder and leverages geometric heads/decoders to infer structure and motion from images or short clips. The design emphasizes consistent geometric reasoning: outputs from one head (e.g., correspondences or tracks) reinforce others (e.g., pose or depth), making the system more robust to challenging viewpoints and textures. The repo provides inference pipelines to estimate geometry from monocular inputs, stereo pairs, or brief sequences, together with evaluation harnesses for common geometry benchmarks. Training utilities highlight data curation and augmentations that preserve geometric cues while improving generalization across scenes and cameras.
    Downloads: 2 This Week
  • 14
    Vision Transformer Pytorch

    Implementation of Vision Transformer, a simple way to achieve SOTA

    This repository provides a from-scratch, minimalist implementation of the Vision Transformer (ViT) in PyTorch, focusing on the core architectural pieces needed for image classification. It breaks down the model into patch embedding, positional encoding, multi-head self-attention, feed-forward blocks, and a classification head so you can understand each component in isolation. The code is intentionally compact and modular, which makes it easy to tinker with hyperparameters, depth, width, and attention dimensions. Because it stays close to vanilla PyTorch, you can integrate custom datasets and training loops without framework lock-in. It’s widely used as an educational reference for people learning transformers in vision and as a lightweight baseline for research prototypes. The project encourages experimentation—swap optimizers, change augmentations, or plug the transformer backbone into downstream tasks.
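
    Usage follows the pattern below (a sketch assuming a vit-pytorch-style constructor; the hyperparameters are illustrative):

    ```python
    # Sketch assuming a vit-pytorch-style API; hyperparameters are examples.
    import torch
    from vit_pytorch import ViT

    model = ViT(
        image_size=256, patch_size=32, num_classes=1000,
        dim=1024, depth=6, heads=16, mlp_dim=2048,
    )

    img = torch.randn(1, 3, 256, 256)   # one RGB image
    preds = model(img)                  # (1, 1000) classification logits
    ```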
    Downloads: 2 This Week
  • 15
    OpenNN - Open Neural Networks Library

    Machine learning algorithms for advanced analytics

    OpenNN is a software library written in C++ for advanced analytics. It implements neural networks, the most successful machine learning method. Typical applications of OpenNN include business intelligence (customer segmentation, churn prevention…), health care (early diagnosis, microarray analysis…), and engineering (performance optimization, predictive maintenance…). OpenNN does not deal with computer vision or natural language processing. The main advantage of OpenNN is its high performance: the library stands out in execution speed and memory allocation, and it is constantly optimized and parallelized to maximize efficiency. The documentation is composed of tutorials and examples that offer a complete overview of the library. OpenNN is developed by Artelnics, a company specialized in artificial intelligence.
    Downloads: 9 This Week
  • 16
    Albumentations

    Fast image augmentation library and an easy-to-use wrapper

    Albumentations is a computer vision tool that boosts the performance of deep convolutional neural networks. It is a Python library for fast and flexible image augmentations: it efficiently implements a rich variety of image transform operations optimized for performance, while providing a concise yet powerful augmentation interface. Albumentations supports different computer vision tasks such as classification, semantic segmentation, instance segmentation, object detection, and pose estimation, and works well with data from different domains: photos, medical images, satellite imagery, manufacturing and industrial applications, and Generative Adversarial Networks. It integrates with various deep learning frameworks such as PyTorch and Keras.
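
    A minimal pipeline using the library's Compose interface:

    ```python
    # Minimal Albumentations pipeline; transform names follow the library's API.
    import numpy as np
    import albumentations as A

    transform = A.Compose([
        A.RandomCrop(width=256, height=256),
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.2),
    ])

    image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # HWC uint8
    augmented = transform(image=image)["image"]   # same call handles masks/bboxes
    print(augmented.shape)                        # (256, 256, 3)
    ```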
    Downloads: 1 This Week
  • 17
    BotSharp

    AI Multi-Agent Framework in .NET

    Conversation as a Platform (CaaP) is the future, and BotSharp gives .NET developers a complete toolkit for building one. BotSharp is an open source machine learning framework for building AI bot platforms. It opens up as much learning power as possible for your own bots and gives you precise control over every step of the AI processing pipeline. The project involves natural language understanding, computer vision, and audio processing technologies, and aims to promote the development and application of intelligent robot assistants in information systems. Out-of-the-box machine learning algorithms allow ordinary programmers to develop artificial intelligence applications faster and more easily. BotSharp is written in C# and runs on .NET Core, a fully cross-platform framework; C# is an enterprise-grade programming language widely used to code business logic in information management systems.
    Downloads: 1 This Week
  • 18
    Butteraugli

    Estimates the psychovisual difference between two images

    butteraugli is a perceptual similarity metric designed to estimate how noticeable differences between two images will be to the human eye. Instead of simple pixel math, it models aspects of human vision—color sensitivity, spatial masking, and contrast perception—to highlight differences that viewers actually see. The core tool outputs a single “distance” score along with per-pixel or per-region maps that show where artifacts are most objectionable. These maps make it practical to tune compressor settings and confirm whether bitrate reductions are visually acceptable. The metric has become a common yardstick for objective image quality when comparing codecs or encoder tweaks that target web or mobile delivery. Because it is deterministic and fast, it can be used in automated pipelines to gate releases on visual quality, not just file size.
    Downloads: 1 This Week
  • 19
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    The Transformer architecture has improved the performance of deep learning models in domains such as computer vision and natural language processing, but better performance has come with larger model sizes, which run up against the memory wall of current accelerator hardware such as GPUs. It is rarely practical to train large models such as Vision Transformer, BERT, and GPT on a single GPU or a single machine, so there is an urgent demand to train models in a distributed environment. However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture, and it remains a challenge for AI researchers to implement complex distributed training solutions for their models. Colossal-AI provides a collection of parallel components for you; we aim to let you write your distributed deep learning models just as you would write a model on your laptop.
    Downloads: 1 This Week
  • 20
    DETR

    End-to-end object detection with transformers

    PyTorch training code and pretrained models for DETR (DEtection TRansformer). DETR replaces the complex hand-crafted object detection pipeline with a Transformer and matches Faster R-CNN with a ResNet-50 backbone, obtaining 42 AP on COCO while using half the computation (FLOPs) and the same number of parameters; inference takes about 50 lines of PyTorch. Unlike traditional computer vision techniques, DETR approaches object detection as a direct set prediction problem. It consists of a set-based global loss, which forces unique predictions via bipartite matching, and a Transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. Due to this parallel nature, DETR is very fast and efficient.
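
    The pretrained models are exposed through torch.hub; a minimal sketch of loading one and inspecting its parallel set predictions:

    ```python
    # Load pretrained DETR via torch.hub and inspect its set predictions.
    import torch

    model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
    model.eval()

    img = torch.rand(1, 3, 800, 1066)     # stand-in for a normalized image batch
    with torch.no_grad():
        out = model(img)

    # 100 learned object queries -> 100 parallel predictions per image.
    print(out["pred_logits"].shape)       # (1, 100, 92) class logits
    print(out["pred_boxes"].shape)        # (1, 100, 4) normalized cxcywh boxes
    ```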
    Downloads: 1 This Week
  • 21
    ECO

    Matlab implementation of the ECO tracker

    ECO (Efficient Convolution Operators for Tracking) is a high-performance object tracking algorithm developed by Martin Danelljan and collaborators. It is based on discriminative correlation filters and designed to handle appearance changes, occlusions, and scale variations in visual object tracking tasks. The code provides a MATLAB implementation of the ECO and ECO-HC (high-speed) variants; the tracker was one of the top performers on multiple visual tracking benchmarks.
    Downloads: 1 This Week
  • 22
    HashingBaselineForImageRetrieval

    Various hashing methods for image retrieval, serving as baselines

    This repository provides baseline implementations of deep supervised hashing methods for image retrieval tasks using PyTorch. It includes clean, minimal code for several hashing algorithms designed to map images into compact binary codes while preserving similarity in feature space, enabling fast and scalable retrieval from large image datasets.
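
    The underlying retrieval recipe is compact; an illustrative PyTorch sketch of binarizing features and ranking by Hamming distance (not the repository's code):

    ```python
    # Illustrative hashing retrieval: binarize learned features, then rank
    # database items by Hamming distance. Not the repository's code.
    import torch

    bits = 64
    db = torch.sign(torch.randn(10_000, bits))    # stand-ins for learned codes
    query = torch.sign(torch.randn(1, bits))      # codes live in {-1, +1}

    # With {-1, +1} codes, hamming = (bits - <q, d>) / 2, so a higher dot
    # product means a closer code.
    hamming = (bits - query @ db.T) / 2           # (1, 10000)
    nearest = hamming.topk(5, largest=False)
    print(nearest.indices)                        # indices of the 5 nearest codes
    ```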
    Downloads: 1 This Week
  • 23
    Image Fusion

    Deep Learning-based Image Fusion: A Survey

    This repository is a survey and code collection centered on deep learning–based image fusion methods (e.g., fusing infrared and visible-light images, or other multi-modal inputs). It catalogs many fusion algorithms (e.g., DenseFuse, FusionGAN, NestFuse), links to code implementations, and describes evaluation metrics, including a “General Evaluation Metric” subfolder containing objective fusion metrics. It is not a single monolithic tool, but rather a curated reference that aggregates methods, code, and performance comparisons in the domain of image fusion, with a survey-style description of method taxonomy, architectures, and loss types, and a compilation of many state-of-the-art image fusion methods (infrared + visible, multi-focus, multi-exposure).
    Downloads: 1 This Week
  • 24
    MMF

    A modular framework for vision & language multimodal research

    MMF is a modular framework for vision and language multimodal research from Facebook AI Research. It contains reference implementations of state-of-the-art vision and language models and has powered multiple research projects at Facebook AI Research. MMF is designed from the ground up to let you focus on what matters, your model, by providing boilerplate code for distributed training, common datasets, and state-of-the-art pre-trained baselines out of the box. MMF is built on top of PyTorch, bringing all of its power into your hands. MMF is not strongly opinionated, so you can use all of your PyTorch knowledge here. It is created to be easily extensible and composable: through its modular design, you can use the specific components you care about, and the configuration system allows MMF to easily adapt to your needs.
    Downloads: 1 This Week
  • 25
    MetaCLIP

    ICLR 2024 Spotlight: curation/training code, metadata, distribution

    MetaCLIP is the research codebase behind the ICLR 2024 spotlight paper Demystifying CLIP Data. Rather than treating CLIP's training set as a black box, the project makes the data curation pipeline explicit: it releases curation and training code, the metadata entries used to match image-text pairs, and the balancing algorithm that controls the distribution of pairs over those entries. Starting from a raw pool of web-crawled image-text pairs, candidates are kept when their text matches a metadata entry, and the number of pairs per entry is capped so that frequent head concepts do not dominate the training distribution. Models trained on the curated data match or outperform the original CLIP models, making the repository a reproducible recipe for CLIP-scale data curation as well as a source of pretrained weights.
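
    The balancing idea is simple enough to sketch; an illustrative version with a hypothetical cap parameter (not the repository's code):

    ```python
    # Illustrative sketch of MetaCLIP-style balancing: cap how many image-text
    # pairs any single metadata entry may contribute. `cap` is hypothetical.
    import random
    from collections import defaultdict

    def balance(pairs, cap=20, seed=0):
        """pairs: iterable of (text, image_id, matched_entry) tuples."""
        random.seed(seed)
        by_entry = defaultdict(list)
        for pair in pairs:
            by_entry[pair[2]].append(pair)

        kept = []
        for group in by_entry.values():
            if len(group) <= cap:
                kept.extend(group)                      # tail entries: keep all
            else:
                kept.extend(random.sample(group, cap))  # head entries: subsample
        return kept

    pairs = [(f"photo of thing {i % 7}", i, f"entry-{i % 7}") for i in range(200)]
    print(len(balance(pairs)))                          # at most 7 * 20 = 140
    ```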
    Downloads: 1 This Week