
Search Results for "ai summarize documents" - Page 5

Showing 116 open source projects for "ai summarize documents"

  • 1
    ANts P2P
ANts P2P implements a third-generation P2P network. It protects your privacy while you are connected and makes you untraceable, hiding your identity (IP address) and encrypting everything you send to and receive from others.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 2
Master's Thesis subject: "Knowledge repositories for effective and secure services executing in agent environment." Goal: developing an optimal method for storing knowledge in distributed agent applications. Java code + LaTeX documents.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    TextMine is for the Perl hacker who is grappling with the problems of managing unstructured text from various sources. You can use these text mining tools to search the Web, index text, extract entities, categorize your e-mail, and summarize documents.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
NuMarkdown-8B-Thinking

    Reasoning-powered OCR VLM for converting complex documents to Markdown

    NuMarkdown-8B-Thinking is the first reasoning OCR vision-language model (VLM) designed to convert documents into clean Markdown optimized for retrieval-augmented generation (RAG). Built on Qwen 2.5-VL-7B and fine-tuned with synthetic Doc → Reasoning → Markdown examples, it generates thinking tokens before producing the final Markdown to better handle complex layouts and tables. It uses a two-phase training process: supervised fine-tuning (SFT) followed by reinforcement learning (GRPO) with a...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
BrainyAI

    Concurrent AI Chat, Search, and Read for free, alternative to Sider

BrainyAI is a free Chrome browser extension offering AI chat aggregation, AI search, AI reading, and enhanced AI web browsing (a free alternative to Sider and Monica). Powered by the latest ChatGPT technology, BrainyAI is a versatile browser extension that fits seamlessly into your workflow. Whether you're searching, chatting, translating, writing, summarizing, or tackling any other task, BrainyAI is there to boost your productivity. We're committed to...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
layoutlm-base-uncased

    Multimodal Transformer for document image understanding and layout

    layoutlm-base-uncased is a multimodal transformer model developed by Microsoft for document image understanding tasks. It incorporates both text and layout (position) features to effectively process structured documents like forms, invoices, and receipts. This base version has 113 million parameters and is pre-trained on 11 million documents from the IIT-CDIP dataset. LayoutLM enables better performance in tasks where the spatial arrangement of text plays a crucial role. The model uses a standard BERT-like architecture but enriches input with 2D positional embeddings. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
Qwen2.5-VL-7B-Instruct

    Multimodal 7B model for image, video, and text understanding tasks

    Qwen2.5-VL-7B-Instruct is a multimodal vision-language model developed by the Qwen team, designed to handle text, images, and long videos with high precision. Fine-tuned from Qwen2.5-VL, this 7-billion-parameter model can interpret visual content such as charts, documents, and user interfaces, as well as recognize common objects. It supports complex tasks like visual question answering, localization with bounding boxes, and structured output generation from documents. The model is also...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
Ministral 3 8B Base 2512

    Versatile 8B-base multimodal LLM, flexible foundation for custom AI

    Ministral 3 8B Base 2512 is a mid-sized, dense model in the Ministral 3 series, designed as a general-purpose foundation for text and image tasks. It pairs an 8.4B-parameter language model with a 0.4B-parameter vision encoder, enabling unified multimodal capabilities out of the box. As a “base” model (i.e., not fine-tuned for instruction or reasoning), it offers a flexible starting point for custom downstream tasks or fine-tuning. The model supports a large 256k token context window, making...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
Ministral 3 3B Base 2512

    Small 3B-base multimodal model ideal for custom AI on edge hardware

    ...It supports dozens of languages, making it practical for multilingual, global, or distributed environments. With a large 256k token context window, it can handle long documents, extended inputs, or multi-step processing workflows even at its small size.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
bart-large-cnn

    Summarization model fine-tuned on CNN/DailyMail articles

    facebook/bart-large-cnn is a large-scale sequence-to-sequence transformer model developed by Meta AI and fine-tuned specifically for abstractive text summarization. It uses the BART architecture, which combines a bidirectional encoder (like BERT) with an autoregressive decoder (like GPT). Pre-trained on corrupted text reconstruction, the model was further trained on the CNN/DailyMail dataset—a collection of news articles paired with human-written summaries. It performs particularly well in...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
Hunyuan-A13B-Instruct

    Efficient 13B MoE language model with long context and reasoning modes

    Hunyuan-A13B-Instruct is a powerful instruction-tuned large language model developed by Tencent using a fine-grained Mixture-of-Experts (MoE) architecture. While the total model includes 80 billion parameters, only 13 billion are active per forward pass, making it highly efficient while maintaining strong performance across benchmarks. It supports up to 256K context tokens, advanced reasoning (CoT) abilities, and agent-based workflows with tool parsing. The model offers both fast and slow...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
translategemma-4b-it

    Lightweight multimodal translation model for 55 languages

    translategemma-4b-it is a lightweight, state-of-the-art open translation model from Google, built on the Gemma 3 family and optimized for high-quality multilingual translation across 55 languages. It supports both text-to-text translation and image-to-text extraction with translation, enabling workflows such as OCR-style translation of signs, documents, and screenshots. With a compact ~5B parameter footprint and BF16 support, the model is designed to run efficiently on laptops, desktops, and...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
Qwen3-Next

    Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens

    Qwen3-Next-80B-A3B-Instruct is the flagship release in the Qwen3-Next series, designed as a next-generation foundation model for ultra-long context and efficient reasoning. With 80B total parameters and 3B activated at a time, it leverages hybrid attention (Gated DeltaNet + Gated Attention) and a high-sparsity Mixture-of-Experts architecture to achieve exceptional efficiency. The model natively supports a context length of 262K tokens and can be extended up to 1 million tokens using RoPE...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
Ministral 3 8B Reasoning 2512

Efficient 8B multimodal model tuned for advanced reasoning tasks

    Ministral 3 8B Reasoning 2512 is a balanced midsize model in the Ministral 3 family, delivering strong multimodal reasoning capabilities within an efficient footprint. It combines an 8.4B-parameter language model with a 0.4B vision encoder, enabling it to process both text and images for advanced reasoning tasks. This version is specifically post-trained for reasoning, making it well-suited for math, coding, and STEM applications requiring multi-step logic and problem-solving. Despite its...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
VaultGemma

    VaultGemma: 1B DP-trained Gemma variant for private NLP tasks

    VaultGemma is a sub-1B parameter variant of Google’s Gemma family that is pre-trained from scratch with Differential Privacy (DP), providing mathematically backed guarantees that its outputs do not reveal information about any single training example. Using DP-SGD with a privacy budget across a large English-language corpus (web documents, code, mathematics), it prioritizes privacy over raw utility. The model follows a Gemma-2–style architecture, outputs text from up to 1,024 input tokens,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
Ministral 3 14B Base 2512

    Powerful 14B-base multimodal model — flexible base for fine-tuning

    Ministral 3 14B Base 2512 is the largest model in the Ministral 3 line, offering state-of-the-art language and vision capabilities in a dense, base-pretrained form. It combines a 13.5B-parameter language model with a 0.4B-parameter vision encoder, enabling both high-quality text understanding/generation and image-aware tasks. As a “base” model (i.e. not fine-tuned for instruction or reasoning), it provides a flexible foundation ideal for custom fine-tuning or downstream specialization. The...
    Downloads: 0 This Week
    Last Update:
    See Project