Showing 140 open source projects for "pico-8"

  • 1
    Pyxel

    A retro game engine for Python

    ...Thanks to its simple specifications inspired by retro gaming consoles, such as displaying only 16 colors and playing back only 4 sounds at the same time, you can freely enjoy making pixel-art-style games. The motivation for developing Pyxel is feedback from its users; please give Pyxel a star on GitHub! Pyxel's specifications and APIs are inspired by PICO-8 and TIC-80. Pyxel is open source and free to use. Let's start making a retro game with Pyxel! It runs on Windows, Mac, Linux, and the Web; using the Pyxel Web Launcher or custom HTML elements, you can run Pyxel in a web browser without any installation. Pyxel supports a dedicated application distribution file format (the Pyxel application file) that works across platforms, and provides 8 music tracks that can combine arbitrary sounds.
    Downloads: 6 This Week
    Last Update:
    See Project
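
    As a rough illustration of the PICO-8-style workflow described above, here is a minimal sketch using Pyxel's documented init/run game loop to draw a square that the arrow keys move; the window size, colors, and title are arbitrary choices, not anything prescribed by the project.

        import pyxel

        class App:
            def __init__(self):
                pyxel.init(160, 120, title="hello pyxel")  # small retro-sized screen
                self.x = 76
                pyxel.run(self.update, self.draw)          # start the game loop

            def update(self):
                # Move the square with the arrow keys.
                if pyxel.btn(pyxel.KEY_LEFT):
                    self.x -= 1
                if pyxel.btn(pyxel.KEY_RIGHT):
                    self.x += 1

            def draw(self):
                pyxel.cls(0)                      # clear the screen with color 0
                pyxel.rect(self.x, 56, 8, 8, 9)   # draw an 8x8 square in palette color 9

        App()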
  • 2
    autopep8

    A tool that automatically formats Python code to conform to the PEP 8 style guide

    autopep8 automatically formats Python code to conform to the PEP 8 style guide. It uses the pycodestyle utility to determine which parts of the code need to be formatted, and it is capable of fixing most of the formatting issues that pycodestyle can report. It can also correct deprecated or non-idiomatic Python code (via lib2to3), which is useful for making Python 2.7 code more compatible with Python 3, and it can put a blank line between a class docstring and its first method declaration.
    Downloads: 8 This Week
    Last Update:
    See Project
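
    A minimal sketch of using autopep8 as a library, assuming it is installed; fix_code and the aggressive option come from the project's documentation, and the messy input string is just an example.

        import autopep8

        messy = "import math, sys;\n\ndef area( r ):    return math.pi*r **2"
        # fix_code() returns a PEP 8-formatted copy of the source string.
        print(autopep8.fix_code(messy, options={"aggressive": 1}))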
  • 3
    Black

    The uncompromising Python code formatter

    ...Its formatting eventually becomes transparent, so you can simply forget about it and focus on your task at hand. Black has been successfully used in many projects, and has gained stellar user reviews as an exceptional, uncompromising PEP 8 compliant opinionated formatter.
    Downloads: 2 This Week
    Last Update:
    See Project
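
    For comparison, a small sketch using Black's Python API (format_str with the default Mode); in everyday use the black command-line tool is the usual entry point, so treat this as illustrative only.

        import black

        src = "def f(a,b):  return a+ b"
        # format_str() applies Black's uncompromising style to a source string.
        print(black.format_str(src, mode=black.Mode()))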
  • 4
    FastChat

    Open platform for training, serving, and evaluating language models

    ...This requires 8-bit compression to be enabled and the bitsandbytes package to be installed, which is only available on Linux operating systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    GLM-4.6

    Agentic, Reasoning, and Coding (ARC) foundation models

    GLM-4.6 is the latest iteration of Zhipu AI’s foundation model, delivering significant advancements over GLM-4.5. It introduces an extended 200K token context window, enabling more sophisticated long-context reasoning and agentic workflows. The model achieves superior coding performance, excelling in benchmarks and practical coding assistants such as Claude Code, Cline, Roo Code, and Kilo Code. Its reasoning capabilities have been strengthened, including improved tool usage during inference...
    Downloads: 109 This Week
    Last Update:
    See Project
  • 6
    StreamSpeech

    StreamSpeech is a seamless model for offline and simultaneous speech recognition, translation, and synthesis

    ...Developed as part of an ACL 2024 paper, it targets streaming and low-latency scenarios where intermediate results and final translations or synthetic speech must be produced continuously as audio is being received. The model supports eight tasks: offline ASR, speech-to-text translation, speech-to-speech translation, and TTS, as well as their streaming or simultaneous counterparts, all handled by the same underlying system. During simultaneous translation, StreamSpeech can optionally output intermediate ASR transcripts and text translations, giving users or downstream applications real-time visibility into what the system is hearing and how it is translating.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    AutoGPTQ

    An easy-to-use LLM quantization package with user-friendly APIs

    AutoGPTQ is an implementation of GPTQ, a post-training weight quantization algorithm, that optimizes large language models (LLMs) for faster inference by reducing their computational footprint while maintaining accuracy.
    Downloads: 6 This Week
    Last Update:
    See Project
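
    A hedged sketch of the quantization flow in the style of the project's README examples; the model id, calibration sentence, and output directory are placeholders, and a GPU plus the transformers package are assumed.

        from transformers import AutoTokenizer
        from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

        model_id = "facebook/opt-125m"  # placeholder model
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        examples = [tokenizer("AutoGPTQ reduces LLM weights to low-bit precision.", return_tensors="pt")]

        quantize_config = BaseQuantizeConfig(bits=4, group_size=128)   # 4-bit weights, group size 128
        model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
        model.quantize(examples)                                       # calibrate on the example batch
        model.save_quantized("opt-125m-4bit")                          # placeholder output directory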
  • 8
    LatentSync

    Taming Stable Diffusion for Lip Sync

    ...The system leverages a U-Net diffusion backbone, with cross-attention over audio embeddings (via an audio encoder) and reference video frames to guide generation, and applies a set of loss functions (temporal, perceptual, SyncNet-based) to enforce lip-sync accuracy, visual fidelity, and temporal consistency. Across versions, LatentSync has improved temporal stability and lowered resource requirements, making inference more practical (e.g., 8 GB of VRAM for earlier versions, somewhat more for the latest models).
    Downloads: 3 This Week
    Last Update:
    See Project
  • 9
    Text Generation Web UI

    Oobabooga - The definitive Web UI for local AI, with powerful features

    ...Markdown output for GALACTICA, including LaTeX rendering. Custom chat characters. Advanced chat features (send images, get audio responses with TTS). Very efficient text streaming. Parameter presets, 8-bit mode. Layer splitting across GPU(s), CPU, and disk. CPU mode, FlexGen, DeepSpeed ZeRO-3, API with and without streaming. LLaMA model, including 4-bit GPTQ. RWKV model, LoRA (loading and training), Softprompts, and extensions.
    Downloads: 45 This Week
    Last Update:
    See Project
  • 10
    SetFit

    Efficient few-shot learning with Sentence Transformers

    SetFit is an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers. It achieves high accuracy with little labeled data - for instance, with only 8 labeled examples per class on the Customer Reviews sentiment dataset, SetFit is competitive with fine-tuning RoBERTa Large on the full training set of 3k examples.
    Downloads: 7 This Week
    Last Update:
    See Project
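
    A hedged sketch of the few-shot setup described above, in the style of SetFit's documentation; the SST-2 dataset id, the sentence-transformers checkpoint, and the choice of 8 samples per class are illustrative assumptions.

        from datasets import load_dataset
        from setfit import SetFitModel, Trainer, sample_dataset

        dataset = load_dataset("SetFit/sst2")
        # Keep only 8 labeled examples per class to mimic the few-shot regime.
        train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)

        model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
        trainer = Trainer(model=model, train_dataset=train_ds, eval_dataset=dataset["validation"])
        trainer.train()
        print(trainer.evaluate())   # accuracy on the held-out validation split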
  • 11
    fpdf2

    Simple PDF generation for Python

    fpdf2 is a library for simple & fast PDF document generation in Python. It is a fork and the successor of PyFPDF. Compared with other PDF libraries, fpdf2 is fast, versatile, and easy to learn and to extend. It is also entirely written in Python and has very few dependencies: Pillow, defusedxml, & fontTools.
    Downloads: 10 This Week
    Last Update:
    See Project
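
    A minimal sketch mirroring the library's hello-world tutorial; the font and output filename are arbitrary.

        from fpdf import FPDF

        pdf = FPDF()
        pdf.add_page()
        pdf.set_font("Helvetica", size=16)
        pdf.cell(40, 10, "Hello, fpdf2!")   # a single text cell
        pdf.output("hello.pdf")             # write the PDF to disk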
  • 12
    Curated Transformers

    PyTorch library of curated Transformer models and their components

    ...Supports state-of-the-art transformer models, including LLMs such as Falcon, Llama, and Dolly v2. Implementing a feature or bugfix benefits all models. For example, all models support 4/8-bit inference through the bitsandbytes library and each model can use the PyTorch meta device to avoid unnecessary allocations and initialization.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 13
    LitGPT

    20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale

    LitGPT is a collection of over 20 high-performance large language models (LLMs) accompanied by recipes to pretrain, finetune, and deploy them at scale. It provides implementations without abstractions, making it beginner-friendly while offering advanced features like flash attention and support for various precision levels. LitGPT is designed to run efficiently across multiple GPUs or TPUs, catering to both small-scale and large-scale deployments.
    Downloads: 5 This Week
    Last Update:
    See Project
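
    A hedged sketch of the high-level Python API shown in the project's README; the checkpoint id is a placeholder, and the weights are downloaded on first use.

        from litgpt import LLM

        # Load one of the supported checkpoints and generate a short completion.
        llm = LLM.load("microsoft/phi-2")
        print(llm.generate("What do Llamas eat?"))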
  • 14
    AIMET

    AIMET is a library that provides advanced quantization and compression techniques for trained neural network models

    ...Quantized inference is significantly faster than floating-point inference. For example, models that we’ve run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have resulted in a 5x to 15x speedup. In addition, an 8-bit model has a 4x smaller memory footprint than a 32-bit model. However, when quantizing a machine learning model (e.g., from 32-bit floating point to 8-bit fixed point), model accuracy is often sacrificed.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    openage

    Open source clone of the Age of Empires II engine

    ...Our aim is to make openage a platform for the original Age of Empires games, providing the same look and feel but with more features for modding and multiplayer. openage uses an open API powered by our human-readable configuration language, nyan. We implement a client-server architecture with dedicated servers that support more than 8 players. The overarching system will provide matchmaking, lobbies, server discovery, and other community features. openage is a community project that values every contribution; the only requirement is your enthusiasm. Don't hesitate to get in touch with us if you want to help!
    Downloads: 8 This Week
    Last Update:
    See Project
  • 16
    VisiData

    A terminal spreadsheet multitool for discovering and arranging data

    VisiData is an interactive multitool for tabular data. It combines the clarity of a spreadsheet, the efficiency of the terminal, and the power of Python into a lightweight utility that can handle millions of rows with ease. It provides a terminal interface for exploring and arranging tabular data, and supports TSV, CSV, SQLite, JSON, XLSX (Excel), HDF5, and many other formats. Requires Linux, macOS, or Windows (with WSL). Hundreds of other commands and options are also available; see the...
    Downloads: 7 This Week
    Last Update:
    See Project
  • 17
    MiniCPM-o

    A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming

    ...Capable of running on end-side devices such as smartphones and tablets, it provides powerful features like real-time speech conversation, video understanding, and multimodal live streaming. With 8 billion parameters, MiniCPM-o 2.6 surpasses its predecessors in versatility and efficiency, making it one of the most robust models available. It supports both text and audio inputs to generate outputs in various forms, including voice cloning, emotion control, and interactive role-playing.
    Downloads: 9 This Week
    Last Update:
    See Project
  • 18
    Z80-μLM

    Z80-μLM is a 2-bit quantized language model

    Z80-μLM is a retro-computing AI project that demonstrates a tiny language model (Z80-μLM) engineered to run on an 8-bit Z80 CPU by aggressively quantizing weights down to 2-bit precision. The repository provides a complete workflow where you train or fine-tune conversational models in Python, then export them into a format that can be executed on classic Z80 systems. A key deliverable is producing CP/M-compatible .COM binaries, enabling a genuinely vintage “chat with your computer” experience on real hardware or accurate emulators. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 19
    CogVLM

    A state-of-the-art open visual language model

    CogVLM is an open-source visual–language model suite—and its GUI-oriented sibling CogAgent—aimed at image understanding, grounding, and multi-turn dialogue, with optional agent actions on real UI screenshots. The flagship CogVLM-17B combines ~10B visual parameters with ~7B language parameters and supports 490×490 inputs; CogAgent-18B extends this to 1120×1120 and adds plan/next-action outputs plus grounded operation coordinates for GUI tasks. The repo provides multiple ways to run models...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 20
    ChatGLM3

    ChatGLM3 series: Open Bilingual Chat LLMs | Open Source Bilingual Chat

    ...The family includes base and long-context variants (8K/32K/128K). The repo ships Python APIs, CLI and web demos (Gradio/Streamlit), an OpenAI-format API server, and a compact fine-tuning kit. Quantization (4/8-bit), CPU/MPS support, and accelerator backends (TensorRT-LLM, OpenVINO, chatglm.cpp) enable lightweight local or edge deployment.
    Downloads: 3 This Week
    Last Update:
    See Project
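
    A hedged sketch of the Transformers-based chat flow in the style of the repository's README; the checkpoint id and prompt are illustrative, a CUDA GPU is assumed, and the 4/8-bit or CPU paths mentioned above differ slightly.

        from transformers import AutoTokenizer, AutoModel

        model_id = "THUDM/chatglm3-6b"
        tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
        model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda()
        model = model.eval()

        # chat() returns the reply plus the running conversation history.
        response, history = model.chat(tokenizer, "Hello", history=[])
        print(response)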
  • 21
    AV1 AVIF

    AV1 Image File Format Specification - ISO-BMFF/HEIF derivative

    AV1 AVIF is the official specification and reference design for the AV1 Image File Format (AVIF), defining how AV1-encoded bitstreams are packaged into the HEIF container format (based on ISOBMFF) to produce AVIF files. The project outlines the syntax and semantics required for AVIF compliance, including support for multiple image profiles, color depths, chroma subsampling modes, HDR/WCG, alpha channels, animation/image sequences, and various color-space/bit-depth combinations — making AVIF...
    Downloads: 9 This Week
    Last Update:
    See Project
  • 22
    orjson

    Fast, correct Python JSON library supporting dataclasses, datetimes

    orjson is a fast, correct JSON library for Python. It benchmarks as the fastest Python library for JSON and is more correct than the standard json library or other third-party libraries. It serializes dataclass, datetime, numpy, and UUID instances natively. orjson supports CPython 3.8, 3.9, 3.10, 3.11, and 3.12. It distributes amd64/x86_64, aarch64/armv8, arm7, POWER/ppc64le, and s390x wheels for Linux, amd64 and aarch64 wheels for macOS, and amd64 and i686/x86 wheels for Windows. orjson...
    Downloads: 1 This Week
    Last Update:
    See Project
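
    A small sketch of the native dataclass and datetime handling mentioned above; the Event class and values are made up for illustration, and note that orjson.dumps() returns bytes rather than str.

        import dataclasses
        import datetime
        import orjson

        @dataclasses.dataclass
        class Event:
            name: str
            at: datetime.datetime

        event = Event("release", datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc))
        payload = orjson.dumps(event)   # dataclass and datetime serialized natively, as bytes
        print(orjson.loads(payload))    # {'name': 'release', 'at': '2024-01-01T00:00:00+00:00'}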
  • 23
    VibeVoice ComfyUI

    ComfyUI integration for Microsoft's VibeVoice text-to-speech model

    VibeVoice ComfyUI is a comprehensive wrapper that integrates Microsoft’s VibeVoice text-to-speech models directly into ComfyUI workflows. It exposes VibeVoice as a set of custom nodes so you can build single-speaker and multi-speaker voice generation pipelines visually, combining TTS with other audio or generative components. The integration supports high-quality single-speaker synthesis as well as scripted multi-speaker conversations, with optional voice cloning from audio samples for each...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 24
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code.
    Downloads: 1 This Week
    Last Update:
    See Project
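
    A hedged sketch in the spirit of the project's quickstart: an FP8 forward and backward pass through a te.Linear layer. The layer sizes are arbitrary, and an FP8-capable GPU (Hopper or newer) with PyTorch is assumed.

        import torch
        import transformer_engine.pytorch as te
        from transformer_engine.common import recipe

        # A single Transformer Engine linear layer and a random input batch on the GPU.
        layer = te.Linear(768, 3072, bias=True)
        inp = torch.randn(2048, 768, device="cuda")

        # Run the layer under delayed-scaling FP8 autocasting.
        fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)
        with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
            out = layer(inp)

        out.sum().backward()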
  • 25
    MTEB

    MTEB: Massive Text Embedding Benchmark

    ...This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks.
    Downloads: 1 This Week
    Last Update:
    See Project
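
    A hedged sketch of running the benchmark on a single task with a sentence-transformers model, following the project's README; the model name, task, and output folder are placeholders.

        from mteb import MTEB
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
        evaluation = MTEB(tasks=["Banking77Classification"])   # one task out of the benchmark
        results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
        print(results)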