
Search Results for "distributed shared memory"

Showing 53 open source projects for "distributed shared memory"

  • 1
    I hate money

    A simple shared budget manager web application

    I hate money is a web application made to ease shared budget management. It keeps track of who bought what, when, and for whom, and helps to settle the bills. I hate money is written in Python, using the Flask framework. It is developed with ease of use in mind and aims to keep things simple. Hope you (will) like it! The code is distributed under a BSD beerware derivative: if you meet the people in person and you want to pay them a craft beer, you are highly encouraged to do so.
    Downloads: 4 This Week
  • 2
    vLLM

    A high-throughput and memory-efficient inference and serving engine

    vLLM is a fast and easy-to-use library for LLM inference and serving. It provides high-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more; a brief usage sketch follows this entry.
    Downloads: 30 This Week
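A minimal sketch of offline batched generation with vLLM's Python API, as described in the project's documentation; the model name (facebook/opt-125m) and sampling values are placeholders for illustration, not recommendations.

```python
# Minimal vLLM offline-inference sketch (model name and sampling values are illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                           # any supported causal LM
params = SamplingParams(n=2, temperature=0.8, max_tokens=64)   # n > 1 requests parallel sampling
outputs = llm.generate(["Distributed shared memory is"], params)
for request in outputs:
    for candidate in request.outputs:
        print(candidate.text)
```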
  • 3
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    The Transformer architecture has improved the performance of deep learning models in domains such as Computer Vision and Natural Language Processing. Better performance, however, comes with larger model sizes, which run up against the memory wall of current accelerator hardware such as GPUs. Training large models such as Vision Transformer, BERT, and GPT on a single GPU or a single machine is rarely practical, so there is a pressing need to train models in a distributed environment. However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture. ...
    Downloads: 1 This Week
  • 4
    Qwen

    The official repo of Qwen chat & pretrained large language model

    Qwen is a series of large language models developed by Alibaba Cloud, consisting of various pretrained versions like Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B. These models, which range from smaller to larger configurations, are designed for a wide range of natural language processing tasks. They are openly available for research and commercial use, with Qwen's code and model weights shared on GitHub. Qwen's capabilities include text generation, comprehension, and conversation, making it a...
    Downloads: 17 This Week
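A hedged sketch of loading a Qwen chat checkpoint through Hugging Face Transformers. The model id Qwen/Qwen-7B-Chat and the chat() helper follow the pattern shown in the Qwen repository's README, but treat the exact names as assumptions rather than a definitive recipe.

```python
# Sketch of chatting with a Qwen checkpoint via Transformers (model id per the Qwen README).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True
).eval()

# The chat() helper is provided by the model's remote code, as documented by the Qwen repo.
response, history = model.chat(tokenizer, "Give me a short introduction to large language models.", history=None)
print(response)
```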
  • 5
    Claude-Flow

    The leading agent orchestration platform for Claude

    ...It enables developers to coordinate multiple specialized AI agents in real time through a hive-mind architecture, combining swarm intelligence, neural reasoning, and a powerful set of 87 Model Context Protocol (MCP) tools. The platform supports both quick swarm tasks and persistent multi-agent sessions known as hives, facilitating distributed AI collaboration with persistent contextual memory. At its core, Claude-Flow integrates Dynamic Agent Architecture (DAA) for self-organizing agent management, neural pattern recognition accelerated by WebAssembly SIMD, and a SQLite-based memory system for context retention and knowledge persistence across tasks. It automates development workflows via pre- and post-operation hooks, providing seamless coordination, code formatting, validation, and performance optimization.
    Downloads: 1 This Week
  • 6
    OSS-Fuzz

    OSS-Fuzz - continuous fuzzing for open source software

    OSS-Fuzz is a large-scale fuzz testing platform developed by Google to improve the security and reliability of widely used open source software. Fuzz testing is a proven method for uncovering programming errors such as buffer overflows and memory leaks, which can lead to severe security vulnerabilities. By leveraging guided in-process fuzzing, Google has already identified thousands of issues in projects like Chrome, and this initiative extends the same capabilities to the broader open source community. OSS-Fuzz integrates modern fuzzing engines with sanitizers and runs them at scale in a distributed environment, providing automated testing and continuous monitoring. ...
    Downloads: 1 This Week
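OSS-Fuzz harnesses are small entry points handed to a fuzzing engine. Below is a hedged sketch of what a Python harness might look like using Atheris, the engine OSS-Fuzz uses for Python projects; the json target is only an illustrative stand-in for real project code.

```python
# Sketch of a Python fuzz harness in the style OSS-Fuzz runs (target is illustrative).
import sys
import atheris

with atheris.instrument_imports():
    import json   # stand-in for the library under test

def TestOneInput(data: bytes):
    try:
        json.loads(data)          # crashes and uncaught exceptions are reported as bugs
    except ValueError:
        pass                      # expected parse failures are not bugs

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```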
  • 7
    CogVideo

    Text- and image-to-video generation: CogVideoX (2024) and CogVideo

    CogVideo is an open source text-/image-/video-to-video generation project that hosts the CogVideoX family of diffusion-transformer models and end-to-end tooling. The repo includes SAT and Diffusers implementations, turnkey demos, and fine-tuning pipelines (including LoRA) designed to run across a wide range of NVIDIA GPUs, from desktop cards (e.g., RTX 3060) to data-center hardware (A100/H100). Current releases cover CogVideoX-2B, CogVideoX-5B, and the upgraded CogVideoX1.5-5B variants, plus...
    Downloads: 23 This Week
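A hedged sketch of text-to-video generation through the Diffusers implementation mentioned above; the checkpoint name, step count, frame count, and prompt are illustrative choices, not the project's prescribed settings.

```python
# Sketch of CogVideoX text-to-video via Diffusers (checkpoint and settings are illustrative).
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()            # trade some speed for lower GPU memory use

video = pipe(
    prompt="a panda playing an acoustic guitar in a bamboo forest",
    num_inference_steps=50,
    num_frames=49,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```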
  • 8
    Dask

    Parallel computing with task scheduling

    Dask is a Python library for parallel and distributed computing, designed to scale analytics workloads from single machines to large clusters. It integrates with familiar tools like NumPy, Pandas, and scikit-learn while enabling execution across cores or nodes with minimal code changes. Dask excels at handling large datasets that don’t fit into memory and is widely used in data science, machine learning, and big data pipelines.
    Downloads: 0 This Week
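A small sketch of the scale-out pattern the Dask description refers to: the array API mirrors NumPy, work is expressed lazily as a task graph, and .compute() runs it on whatever cluster the Client points at (here, a local one started in-process).

```python
# Sketch: NumPy-like code executed in parallel across Dask workers.
from dask.distributed import Client
import dask.array as da

client = Client()                                   # local worker processes; swap in a cluster address
x = da.random.random((20_000, 20_000), chunks=(1_000, 1_000))
y = (x + x.T).mean(axis=0)                          # lazily builds a task graph; nothing runs yet
print(y[:5].compute())                              # executes the graph across workers
client.close()
```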
  • 9
    ChatGPT Clone

    ChatGPT interface with better UI

    ChatGPT Clone demonstrates a ChatGPT-style conversational interface wired to large-language-model backends, packaged so developers can self-host and extend. The goal is to replicate the core chat UX—message history, streaming tokens, code blocks, and system prompts—while letting you plug in different provider APIs or local models. It showcases a clean separation between the web client and the message orchestration layer so you can experiment with prompts, roles, and memory strategies. The...
    Downloads: 6 This Week
  • 10
    FramePack

    Let's make video diffusion practical

    FramePack explores compact representations for sequences of image frames, targeting tasks where many near-duplicate frames carry redundant information. The idea is to “pack” frames by detecting shared structure and storing differences efficiently, which can accelerate training or inference on video-like data. By reducing I/O and memory bandwidth, datasets become lighter to load while models still see the essential temporal variation. The repository demonstrates both packing and unpacking steps, making it straightforward to integrate into preprocessing pipelines. ...
    Downloads: 14 This Week
  • 11
    FastChat

    Open platform for training, serving, and evaluating language models

    FastChat is an open platform for training, serving, and evaluating large language model-based chatbots. If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the serving commands. This can reduce memory usage by around half with slightly degraded model quality. It is compatible with the CPU, GPU, and Metal backends. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100 (16GB) GPU. In addition to that, you can add --cpu-offloading to...
    Downloads: 0 This Week
  • 12
    Lingvo

    Framework for building neural networks

    Lingvo is a TensorFlow based framework focused on building and training sequence models, especially for language and speech tasks. It was originally developed for internal research and later open sourced to support reproducible experiments and shared model implementations. The framework provides a structured way to define models, input pipelines, and training configurations using a common interface for layers, which encourages reuse across different tasks. It has been used to implement state...
    Downloads: 0 This Week
  • 13
    DeepSeed

    Deep learning optimization library making distributed training easy

    DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU,...
    Downloads: 1 This Week
  • 14
    DGL

    Python package built to ease deep learning on graphs

    Build your models with PyTorch, TensorFlow or Apache MXNet. Fast and memory-efficient message passing primitives for training Graph Neural Networks. Scale to giant graphs via multi-GPU acceleration and distributed training infrastructure. DGL empowers a variety of domain-specific projects including DGL-KE for learning large-scale knowledge graph embeddings, DGL-LifeSci for bioinformatics and cheminformatics, and many others.
    Downloads: 4 This Week
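A minimal sketch of DGL's message-passing layers on a toy graph, using PyTorch as the backend; the graph, feature sizes, and layer width are arbitrary illustration values.

```python
# Sketch: one GraphConv layer over a 4-node toy graph (PyTorch backend, sizes arbitrary).
import torch
import dgl
from dgl.nn import GraphConv

src, dst = torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])
g = dgl.add_self_loop(dgl.graph((src, dst), num_nodes=4))   # self-loops avoid zero-in-degree nodes

feat = torch.randn(4, 8)        # 8-dimensional node features
conv = GraphConv(8, 4)          # one message-passing layer: 8 -> 4 features per node
out = conv(g, feat)
print(out.shape)                # torch.Size([4, 4])
```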
  • 15
    Smallpond

    A lightweight data processing framework built on DuckDB and 3FS

    smallpond is a lightweight distributed data processing framework built by DeepSeek, designed to scale DuckDB workloads over clusters using their 3FS (Fire-Flyer File System) backend. The idea is to preserve DuckDB’s fast analytics engine but lift it from single-node to multi-node settings, giving you the ability to operate on large datasets (e.g. petabyte scale) without moving to a heavyweight system like Spark. Users write Python-like code (via DataFrame APIs or SQL strings) to express...
    Downloads: 0 This Week
  • 16
    LingBot-World

    Advancing Open-source World Models

    LingBot-World is an open-source, high-fidelity world simulator designed to advance the state of world models through video generation. Built on top of Wan2.2, it enables realistic, dynamic environment simulation across diverse styles, including real-world, scientific, and stylized domains. LingBot-World supports long-term temporal consistency, maintaining coherent scenes and interactions over minute-level horizons. With real-time interactivity and sub-second latency at 16 FPS, it is...
    Downloads: 59 This Week
  • 17
    DLRM

    An implementation of a deep learning recommendation model (DLRM)

    DLRM (Deep Learning Recommendation Model) is Meta’s open-source reference implementation for large-scale recommendation systems built to handle extremely high-dimensional sparse features and embedding tables. The architecture combines dense (MLP) and sparse (embedding) branches, then interacts features via dot product or feature interactions before passing through further dense layers to predict click-through, ranking scores, or conversion probabilities. The implementation is optimized for...
    Downloads: 0 This Week
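To make the dense-plus-sparse structure concrete, here is a conceptual PyTorch sketch of the pattern the description outlines: a bottom MLP for dense features, embedding tables for sparse features, pairwise dot-product interactions, and a top MLP. It is not Meta's implementation, and all sizes are toy values.

```python
# Conceptual sketch of the DLRM pattern (not Meta's code); all sizes are toy values.
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    def __init__(self, num_dense=4, cardinalities=(100, 50), dim=8):
        super().__init__()
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())        # dense branch
        self.embs = nn.ModuleList(nn.Embedding(c, dim) for c in cardinalities)   # sparse branch
        n = 1 + len(cardinalities)                                               # vectors that interact
        self.top = nn.Sequential(nn.Linear(dim + n * (n - 1) // 2, 1), nn.Sigmoid())

    def forward(self, dense, sparse):
        vs = [self.bottom(dense)] + [e(sparse[:, i]) for i, e in enumerate(self.embs)]
        z = torch.stack(vs, dim=1)                           # (batch, n, dim)
        dots = torch.bmm(z, z.transpose(1, 2))               # pairwise dot-product interactions
        iu = torch.triu_indices(z.size(1), z.size(1), offset=1)
        return self.top(torch.cat([vs[0], dots[:, iu[0], iu[1]]], dim=1))

model = TinyDLRM()
prob = model(torch.randn(2, 4), torch.randint(0, 50, (2, 2)))
print(prob.shape)                                            # torch.Size([2, 1]) click probabilities
```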
  • 18
    NeuralForecast

    Scalable and user-friendly neural forecasting algorithms.

    NeuralForecast offers a large collection of neural forecasting models focusing on their performance, usability, and robustness. The models range from classic networks like RNNs to the latest transformers: MLP, LSTM, GRU, RNN, TCN, TimesNet, BiTCN, DeepAR, NBEATS, NBEATSx, NHITS, TiDE, DeepNPTS, TSMixer, TSMixerx, MLPMultivariate, DLinear, NLinear, TFT, Informer, AutoFormer, FedFormer, PatchTST, iTransformer, StemGNN, and TimeLLM. There is a shared belief in Neural forecasting methods'...
    Downloads: 3 This Week
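A hedged sketch of the library's fit/predict workflow using one of the listed models (NHITS) and the small AirPassengers demo frame the package ships with; the horizon, input size, step count, and frequency string are illustrative only.

```python
# Sketch of the NeuralForecast fit/predict loop (hyperparameters are illustrative).
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF   # tiny demo dataset with unique_id, ds, y columns

nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=100)], freq="M")
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()        # 12-step-ahead forecasts per series
print(forecasts.head())
```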
  • 19
    OpenTinker

    OpenTinker is an RL-as-a-Service infrastructure for foundation models

    ...Traditional RL setups can be monolithic and difficult to configure, but OpenTinker separates concerns across agent definition, environment interaction, and execution, which lets developers focus on defining the logic of agents and environments separately from how training and inference are run. It introduces a centralized scheduler to manage distributed training jobs and shared compute resources, enabling workloads like reinforcement learning, supervised fine-tuning, and inference to run across multiple settings. The architecture supports a range of single-turn and multi-turn agentic tasks with a design that abstracts away infrastructure complexity while offering flexible Python APIs to define environments and workflows.
    Downloads: 1 This Week
  • 20
    LMCache

    Supercharge Your LLM with the Fastest KV Cache Layer

    LMCache is an extension layer for LLM serving engines that accelerates inference, especially with long contexts, by storing and reusing key-value (KV) attention caches across requests. Instead of rebuilding KV states for repeated or shared text segments, LMCache persists and retrieves them from multiple tiers—GPU memory, CPU DRAM, and local disk—then injects them into subsequent requests to reduce TTFT and increase throughput. Its design supports reuse beyond strict prefix matching and enables sharing across serving instances, improving efficiency under real multi-tenant traffic. ...
    Downloads: 0 This Week
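To illustrate the reuse idea in the description (not the LMCache API itself), here is a conceptual sketch of a tiered KV store keyed by the token chunk that produced each cache block; the class, tier names, and token ids are invented for illustration.

```python
# Conceptual sketch of tiered KV-cache reuse (illustration only, not the LMCache API).
import hashlib

class TieredKVStore:
    def __init__(self):
        self.tiers = {"gpu": {}, "cpu": {}, "disk": {}}        # stand-ins for memory tiers

    @staticmethod
    def _key(chunk_tokens):
        return hashlib.sha256(repr(tuple(chunk_tokens)).encode()).hexdigest()

    def get(self, chunk_tokens):
        k = self._key(chunk_tokens)
        for tier in ("gpu", "cpu", "disk"):                    # fastest tier first
            if k in self.tiers[tier]:
                return self.tiers[tier][k]
        return None                                            # miss: KV must be recomputed

    def put(self, chunk_tokens, kv_block, tier="cpu"):
        self.tiers[tier][self._key(chunk_tokens)] = kv_block

store = TieredKVStore()
shared_prefix = [101, 2023, 2003, 1037, 3231]                  # token ids of a repeated text segment
store.put(shared_prefix, kv_block="<KV tensors for this chunk>")
assert store.get(shared_prefix) is not None                    # later requests reuse it, skipping prefill
```

The real system additionally handles reuse beyond strict prefix matching and sharing across serving instances, which this toy store does not attempt.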
  • 21
    Ludwig AI

    Low-code framework for building custom LLMs, neural networks

    ...Support for multi-task and multi-modality learning. Comprehensive config validation detects invalid parameter combinations and prevents runtime failures. Automatic batch size selection, distributed training (DDP, DeepSpeed), parameter efficient fine-tuning (PEFT), 4-bit quantization (QLoRA), and larger-than-memory datasets. Retain full control of your models down to the activation functions. Support for hyperparameter optimization, explainability, and rich metric visualizations. Experiment with different model architectures, tasks, features, and modalities with just a few parameter changes in the config. ...
    Downloads: 1 This Week
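A hedged sketch of Ludwig's declarative, low-code workflow: a config dictionary names input and output features, and the LudwigModel API trains and predicts. The column names, file paths, and epoch count are hypothetical.

```python
# Sketch of Ludwig's config-driven workflow (columns, paths, and epochs are hypothetical).
from ludwig.api import LudwigModel

config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
        {"name": "age", "type": "number"},
    ],
    "output_features": [{"name": "label", "type": "category"}],
    "trainer": {"epochs": 3},
}

model = LudwigModel(config)
train_stats, _, _ = model.train(dataset="train.csv")     # CSV with the columns named above
predictions, _ = model.predict(dataset="test.csv")
print(predictions.head())
```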
  • 22
    Omnilingual ASR

    Omnilingual ASR: Open-Source Multilingual Speech Recognition

    Omnilingual-ASR is a research codebase exploring automatic speech recognition that generalizes across a very large number of languages using shared modeling and training recipes. It focuses on leveraging self-supervised audio pretraining and scalable fine-tuning so low-resource languages can benefit from high-resource data. The project provides data preparation pipelines, training scripts, decoding utilities, and evaluation tools so researchers can reproduce results and extend to new...
    Downloads: 0 This Week
  • 23
    MobileLLM

    MobileLLM Optimizing Sub-billion Parameter Language Models

    MobileLLM is a lightweight large language model (LLM) framework developed by Facebook Research, optimized for on-device deployment where computational and memory efficiency are critical. Introduced in the ICML 2024 paper “MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases”, it focuses on delivering strong reasoning and generalization capabilities in models under one billion parameters. The framework integrates several architectural innovations—SwiGLU...
    Downloads: 1 This Week
  • 24
    DeepSpeed

    Deep learning optimization library: makes distributed training easy

    DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for Deep Learning Training and Inference. With DeepSpeed you can: 1. Train/Inference dense or sparse models with billions or trillions of parameters 2. Achieve excellent system throughput and efficiently scale to thousands of GPUs 3. Train/Inference on resource constrained GPU systems 4. Achieve unprecedented low latency and high throughput for inference 5. Achieve extreme...
    Downloads: 1 This Week
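A minimal training-step sketch with DeepSpeed's initialize API and a ZeRO stage 2 config; the model, batch size, and learning rate are placeholders. Scripts like this are normally launched with the deepspeed launcher so that each GPU gets its own process.

```python
# Sketch of a DeepSpeed training step with a ZeRO stage-2 config (values are placeholders).
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)        # stand-in for a real network

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "zero_optimization": {"stage": 2},                        # partition optimizer state + gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024).to(engine.device)
loss = engine(x).pow(2).mean()
engine.backward(loss)                       # handles gradient partitioning and all-reduce
engine.step()
```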
  • 25
    FFTW++ is a C++ header class for the FFTW Fast Fourier Transform library that automates memory allocation, alignment, planning, wisdom, and communication on both serial and parallel (OpenMP/MPI) architectures. In 2D and 3D, hybrid dealiasing of convolutions substantially reduces memory usage and computation time. Wrappers for C, Python, and Fortran are included.
    Downloads: 6 This Week