
Best Large Language Models

Compare the Top Large Language Models as of December 2025

What are Large Language Models?

Large language models are artificial neural networks used to process and understand natural language. Commonly trained on large datasets, they can be used for a variety of tasks such as text generation, text classification, question answering, and machine translation. Over time, these models have continued to improve, allowing for better accuracy and greater performance on a variety of tasks. Compare and read user reviews of the best Large Language Models currently available using the table below. This list is updated regularly.

  • 1
    Vertex AI
    Large Language Models (LLMs) in Vertex AI enable businesses to perform complex natural language processing tasks such as text generation, summarization, and sentiment analysis. These models, powered by massive datasets and cutting-edge techniques, can understand context and generate human-like responses. Vertex AI offers scalable solutions for training, fine-tuning, and deploying LLMs to meet business needs. New customers receive $300 in free credits, allowing them to explore the potential of LLMs in their applications. With these models, businesses can enhance their AI-driven text-based services and improve customer interactions.
    Starting Price: Free ($300 in free credits)
  • 2
    LM-Kit.NET
    LM-Kit.NET lets C# and VB.NET developers integrate large and small language models for natural language understanding, text generation, multi-turn dialogue, and low-latency on-device inference. Its vision language models add image analysis and captioning, its embedding models turn text into vectors for fast semantic search, and its LM-Lit catalog lists every state-of-the-art model with continuous updates. Everything ships in one efficient toolkit that stays inside your codebase without revealing any AI origin to the user.
    Starting Price: Free (Community) or $1000/year
  • 3
    BLACKBOX AI

    BLACKBOX AI is an advanced AI-powered platform designed to accelerate coding, app development, and deep research tasks. It features an AI Coding Agent that supports real-time voice interaction, GPU acceleration, and remote parallel task execution. Users can convert Figma designs into functional code and transform images into web applications with minimal coding effort. The platform enables screen sharing within IDEs like VSCode and offers mobile access to coding agents. BLACKBOX AI also supports integration with GitHub repositories for streamlined remote workflows. Its capabilities extend to website design, app building with PDF context, and image generation and editing.
    Starting Price: Free
  • 4
    Qwen2.5

    Alibaba

    Qwen2.5 is an advanced multimodal AI model designed to provide highly accurate and context-aware responses across a wide range of applications. It builds on the capabilities of its predecessors, integrating cutting-edge natural language understanding with enhanced reasoning, creativity, and multimodal processing. Qwen2.5 can seamlessly analyze and generate text, interpret images, and interact with complex data to deliver precise solutions in real time. Optimized for adaptability, it excels in personalized assistance, data analysis, creative content generation, and academic research, making it a versatile tool for professionals and everyday users alike. Its user-centric design emphasizes transparency, efficiency, and alignment with ethical AI practices.
    Starting Price: Free
  • 5
    Llama 3
    We’ve integrated Llama 3 into Meta AI, our intelligent assistant, expanding the ways people can get things done, create, and connect with Meta AI. You can see the performance of Llama 3 first-hand by using Meta AI for coding tasks and problem solving. Whether you're developing agents or other AI-powered applications, Llama 3 in both 8B and 70B sizes offers the capabilities and flexibility you need to develop your ideas. With the release of Llama 3, we’ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools: Llama Guard 2, optimized to support the newly announced MLCommons taxonomy and its more comprehensive set of safety categories, along with Code Shield and CyberSec Eval 2.
    Starting Price: Free
  • 6
    Arcee-SuperNova
    Our new flagship model is a small language model (SLM) with all the power and performance of leading closed-source LLMs. It excels at generalized tasks, instruction-following, and alignment with human preferences, and is the best 70B model on the market. SuperNova can be utilized for any generalized task, much like OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Cohere’s models. Trained with the most advanced learning and optimization techniques, SuperNova generates highly accurate responses in human-like text. It's the most flexible, secure, and cost-effective language model on the market, saving customers up to 95% on total deployment costs vs. traditional closed-source models. Use SuperNova to integrate AI into apps and products, for general chat purposes, and for diverse use cases. Regularly update your models with the latest open source tech, ensuring you're never locked into any one solution. Protect your data with industry-leading privacy measures.
    Starting Price: Free
  • 7
    Marco-o1

    AIDC-AI

    Marco-o1 is a robust, next-generation AI model tailored for high-performance natural language processing and real-time problem-solving. It is engineered to deliver precise and contextually rich responses, combining deep language comprehension with a streamlined architecture for speed and efficiency. Marco-o1 excels in a variety of applications, including conversational AI, content creation, technical support, and decision-making tasks, adapting seamlessly to diverse user needs. With a focus on intuitive interactions, reliability, and ethical AI principles, Marco-o1 stands out as a cutting-edge solution for individuals and organizations seeking intelligent, adaptive, and scalable AI-driven tools. Under the hood, Monte Carlo Tree Search (MCTS) explores multiple reasoning paths, using confidence scores derived from softmax-applied log probabilities of the top-k alternative tokens to guide the model toward optimal solutions.
    Starting Price: Free
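The confidence scoring described above can be sketched in a few lines. This is a minimal illustration based on the description, not Marco-o1's actual implementation, and the log-probability values are hypothetical.

```python
import math

def confidence_from_logprobs(token_logprob, topk_logprobs):
    """Confidence of a chosen token: softmax of its log probability
    against the top-k alternative log probabilities, as described
    for Marco-o1's MCTS guidance."""
    total = sum(math.exp(lp) for lp in topk_logprobs)
    return math.exp(token_logprob) / total

def path_confidence(step_confidences):
    """Average per-step confidence used to rank a reasoning path."""
    return sum(step_confidences) / len(step_confidences)

# Hypothetical top-5 log probabilities at one decoding step.
topk = [-0.2, -1.9, -2.3, -3.0, -3.4]
c = confidence_from_logprobs(topk[0], topk)
print(round(c, 3))  # confidence of the most likely token
```

A tree search would compute this score at each generated step and prefer branches whose average confidence is highest.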
  • 8
    Octave TTS

    Hume AI

    Hume AI has introduced Octave (Omni-capable Text and Voice Engine), a groundbreaking text-to-speech system that leverages large language model technology to understand and interpret the context of words, enabling it to generate speech with appropriate emotions, rhythm, and cadence. Unlike traditional TTS models that merely read text, Octave acts akin to a human actor, delivering lines with nuanced expression based on the content. Users can create diverse AI voices by providing descriptive prompts, such as "a sarcastic medieval peasant," allowing for tailored voice generation that aligns with specific character traits or scenarios. Additionally, Octave offers the flexibility to modify the emotional delivery and speaking style through natural language instructions, enabling commands like "sound more enthusiastic" or "whisper fearfully" to fine-tune the output.
    Starting Price: $3 per month
  • 9
    NuExtract

    NuExtract is a large language model specialized in extracting structured information from documents of any format, including raw text, scanned images, PDFs, PowerPoints, spreadsheets, and more, supporting over a dozen languages and mixed‑language inputs. It delivers JSON‑formatted output that faithfully follows user‑defined templates, with built‑in verification and null‑value handling to minimize hallucinations. Users define extraction tasks by creating a template, either by describing the desired fields or by importing existing schemas, and can improve accuracy by adding document/output example pairs to the example set. The NuExtract Platform provides an intuitive workspace for designing templates, testing extractions in a playground, managing teaching examples, and fine‑tuning settings such as model temperature and document rasterization DPI. Once validated, projects can be deployed via a RESTful API endpoint that processes documents in real time.
    Starting Price: $5 per 1M tokens
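As a rough illustration of template-driven extraction with null-value handling, the sketch below validates a model's JSON output against a user-defined template. The template, field names, and validation logic are hypothetical stand-ins, not NuExtract's actual schema language or API.

```python
import json

# Hypothetical extraction template: field name -> expected Python type.
TEMPLATE = {"invoice_number": str, "total_amount": float, "due_date": str}

def validate_extraction(raw_json, template):
    """Check model output against a template, keeping nulls for fields
    the model could not ground in the document (a simple stand-in for
    built-in null-value handling)."""
    data = json.loads(raw_json)
    result = {}
    for field, ftype in template.items():
        value = data.get(field)
        # Missing or wrong-typed values become null rather than guesses.
        result[field] = value if isinstance(value, ftype) else None
    return result

raw = '{"invoice_number": "INV-0042", "total_amount": 129.5}'
print(validate_extraction(raw, TEMPLATE))
```

The key idea is that absent fields surface as explicit nulls in the structured output instead of being hallucinated.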
  • 10
    K2 Think

    Institute of Foundation Models

    K2 Think is an open source advanced reasoning model developed collaboratively by the Institute of Foundation Models at MBZUAI and G42. Despite only having 32 billion parameters, it delivers performance comparable to flagship models with many more parameters. It excels in mathematical reasoning, achieving top scores on competitive benchmarks such as AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD. K2 Think is part of a suite of UAE-developed open models, alongside Jais (Arabic), NANDA (Hindi), and SHERKALA (Kazakh), and builds on the foundation laid by K2-65B, the fully reproducible open source foundation model released in 2024. The model is designed to be open, fast, and flexible, offering a web app interface for exploration, and with its parameter efficiency it represents a breakthrough in compact architectures for advanced AI reasoning.
    Starting Price: Free
  • 11
    Qwen3-Max

    Alibaba

    Qwen3-Max is Alibaba’s latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there: mixed thinking and non-thinking modes, a “thinking budget” mechanism, and support for dynamic mode switching based on task complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and exhibits strong performance on coding, multi-step reasoning, and agentic benchmarks (e.g., Tau2-Bench). While its initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions.
    Starting Price: Free
  • 12
    Gemini Enterprise
    Gemini Enterprise is a comprehensive AI platform built by Google Cloud designed to bring the full power of Google’s advanced AI models, agent-creation tools, and enterprise-grade data access into everyday workflows. The solution offers a unified chat interface that lets employees interact with internal documents, applications, data sources, and custom AI agents. At its core, Gemini Enterprise comprises six key components: the Gemini family of large multimodal models, an agent orchestration workbench (formerly Google Agentspace), pre-built starter agents, robust data-integration connectors to business systems, extensive security and governance controls, and a partner ecosystem for tailored integrations. It is engineered to scale across departments and enterprises, enabling users to build no-code or low-code agents that automate tasks, such as research synthesis, customer support response, code assist, contract analysis, and more, while operating within corporate compliance standards.
    Starting Price: $21 per month
  • 13
    Claude Haiku 4.5
    Anthropic has launched Claude Haiku 4.5, its latest small language model designed to deliver near-frontier performance at significantly lower cost. The model provides similar coding and reasoning quality as the company’s mid-tier Sonnet 4, yet it runs at roughly one-third of the cost and more than twice the speed. In benchmarks cited by Anthropic, Haiku 4.5 meets or exceeds Sonnet 4’s performance in key tasks such as code generation and multi-step “computer use” workflows. It is optimized for real-time, low-latency scenarios such as chat assistants, customer service agents, and pair-programming support. Haiku 4.5 is made available via the Claude API under the identifier “claude-haiku-4-5” and supports large-scale deployments where cost, responsiveness, and near-frontier intelligence matter. Claude Haiku 4.5 is available now in Claude Code and Anthropic's apps, and its efficiency means users can accomplish more within their usage limits while maintaining premium model performance.
    Starting Price: $1 per million input tokens
  • 14
    Olmo 3
    Olmo 3 is a fully open model family spanning 7 billion and 32 billion parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also exposure of the entire model flow, including raw training data, intermediate checkpoints, training code, long-context support (a 65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimization, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning.
    Starting Price: Free
  • 15
    Ministral 3

    Mistral AI

    Mistral 3 is the latest generation of open-weight AI models from Mistral AI, offering a full family of models, from small, edge-optimized versions to a flagship, large-scale multimodal model. The lineup includes three compact “Ministral 3” models (3B, 8B, and 14B parameters) designed for efficiency and deployment on constrained hardware (even laptops, drones, or edge devices), plus the powerful “Mistral Large 3,” a sparse mixture-of-experts model with 675 billion total parameters (41 billion active). The models support multimodal and multilingual tasks, not only text, but also image understanding, and have demonstrated best-in-class performance on general prompts, multilingual conversations, and multimodal inputs. The base and instruction-fine-tuned versions are released under the Apache 2.0 license, enabling broad customization and integration in enterprise and open source projects.
    Starting Price: Free
  • 16
    Upstage AI

    Upstage.ai

    Upstage AI builds powerful large language models and document processing engines designed to transform workflows across industries like insurance, healthcare, and finance. Their enterprise-grade AI technology delivers high accuracy and performance, enabling businesses to automate complex tasks such as claims processing, underwriting, and clinical document analysis. Key products include Solar Pro 2, a fast and grounded enterprise language model, Document Parse for converting PDFs and scans into machine-readable text, and Information Extract for precise data extraction from contracts and invoices. Upstage’s AI solutions help companies save time and reduce manual work by providing instant, accurate answers from large document sets. The platform supports flexible deployment options including cloud, on-premises, and hybrid, meeting strict compliance requirements. Trusted by global clients, Upstage continues to advance AI innovation with top conference publications and industry awards.
    Starting Price: $0.5 per 1M tokens
  • 17
    Medical LLM

    John Snow Labs

    John Snow Labs' Medical LLM is an advanced, domain-specific large language model (LLM) designed to revolutionize the way healthcare organizations harness the power of artificial intelligence. This innovative platform is tailored specifically for the healthcare industry, combining cutting-edge natural language processing (NLP) capabilities with a deep understanding of medical terminology, clinical workflows, and regulatory requirements. The result is a powerful tool that enables healthcare providers, researchers, and administrators to unlock new insights, improve patient outcomes, and drive operational efficiency. At the heart of the Medical LLM is its comprehensive training on vast amounts of healthcare data, including clinical notes, research papers, and regulatory documents. This specialized training allows the model to accurately interpret and generate medical text, making it an invaluable asset for tasks such as clinical documentation, automated coding, and medical research.
  • 18
    Orpheus TTS

    Canopy Labs

    Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. These models are built on the Llama-3 architecture and are trained on over 100,000 hours of English speech data, enabling them to produce natural intonation, emotion, and rhythm that surpasses current state-of-the-art closed source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency, with approximately 200ms streaming latency for real-time applications, reducible to around 100ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller models of 1B, 400M, and 150M parameters for use on resource-constrained devices.
  • 19
    LFM2

    Liquid AI

    LFM2 is a next-generation series of on-device foundation models built to deliver the fastest generative-AI experience across a wide range of endpoints. It employs a new hybrid architecture that achieves up to 2x faster decode and prefill performance than comparable models, and up to 3x improvements in training efficiency compared to the previous generation. These models strike an optimal balance of quality, latency, and memory for deployment on embedded systems, allowing real-time, on-device AI across smartphones, laptops, vehicles, wearables, and other endpoints, enabling millisecond inference, device resilience, and full data sovereignty. Available in three dense checkpoints (0.35B, 0.7B, and 1.2B parameters), LFM2 demonstrates benchmark performance that outperforms similarly sized models in tasks such as knowledge recall, mathematics, multilingual instruction-following, and conversational dialogue evaluations.
  • 20
    NVIDIA NeMo
    NVIDIA NeMo LLM is a service that provides a fast path to customizing and using large language models trained on several frameworks. Developers can deploy enterprise AI applications using NeMo LLM on private and public clouds, and can access Megatron 530B, one of the largest language models, through the cloud API or the NeMo LLM service. Customize your choice of various NVIDIA or community-developed models that work best for your AI applications. Within minutes to hours, get better responses by providing context for specific use cases using prompt learning techniques. Take advantage of models for drug discovery, available in the cloud API and the NVIDIA BioNeMo framework.
  • 21
    Gopher

    DeepMind

    Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans. As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text – have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose.
  • 22
    Eternity AI

    Eternity AI is building HTLM-7B, a machine learning model that knows what the internet is and how to access it to generate responses. Humans don't make decisions based on two-year-old data; for a model to think like a human, it needs access to real-time knowledge and everything about how humans behave. Members of our team have previously published white papers and articles on topics related to on-chain vulnerability coordination, GPT database retrieval, and decentralized dispute resolution.
  • 23
    DataGemma
    DataGemma represents a pioneering effort by Google to enhance the accuracy and reliability of large language models (LLMs) when dealing with statistical and numerical data. Launched as a set of open models, DataGemma leverages Google's Data Commons, a vast repository of public statistical data, to ground its responses in real-world facts. This initiative employs two innovative approaches: Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG). The RIG method integrates real-time data checks during the generation process to ensure factual accuracy, while RAG retrieves relevant information before generating responses, thereby reducing the likelihood of AI hallucinations. By doing so, DataGemma aims to provide users with more trustworthy and factually grounded answers, marking a significant step towards mitigating the issue of misinformation in AI-generated content.
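The retrieve-then-generate ordering that RAG relies on can be illustrated with a toy example. The in-memory fact table below is a stand-in for Data Commons, and the template-based "generation" stands in for an LLM; all names and data here are illustrative, not part of DataGemma.

```python
# Toy stand-in for a statistical store like Data Commons.
FACTS = {
    "population of france": "France's population was about 68 million (2023).",
    "gdp of japan": "Japan's GDP was about $4.2 trillion (2023).",
}

def retrieve(query):
    """Keyword lookup standing in for a real retriever."""
    key = query.lower().rstrip("?")
    return [fact for topic, fact in FACTS.items() if topic in key]

def generate_with_rag(query):
    """RAG ordering: fetch grounding facts first, then condition the
    answer on them (here, a template instead of a real model)."""
    evidence = retrieve(query)
    if not evidence:
        return "No grounded data found; declining to guess."
    return "According to retrieved data: " + " ".join(evidence)

print(generate_with_rag("What is the population of France?"))
```

RIG differs in ordering: the model generates first and interleaves data checks during generation, whereas RAG retrieves before generating, as above.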
  • 24
    Reka Flash 3
    Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
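The quoted memory figures follow from simple bytes-per-parameter arithmetic, sketched below. The 4-bit result comes out slightly under the quoted 11GB because quantization scales and metadata add overhead on top of the raw weights.

```python
def model_memory_gib(params, bits_per_param):
    """Approximate weight memory in GiB: params * bits / 8 bytes."""
    return params * bits_per_param / 8 / 2**30

P = 21e9  # Reka Flash 3 parameter count

fp16 = model_memory_gib(P, 16)  # ~39.1 GiB, matching the quoted 39GB
int4 = model_memory_gib(P, 4)   # ~9.8 GiB raw; overhead pushes real usage toward 11GB
print(round(fp16, 1), round(int4, 1))
```

The same arithmetic applies to any dense model: halving bits per parameter halves the weight footprint, which is why 4-bit quantization is the usual route to on-device deployment.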
  • 25
    Claude Sonnet 4.5
    Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection.
  • 26
    Gemini 3 Deep Think
    The most advanced model from Google DeepMind, Gemini 3, sets a new bar for model intelligence by delivering state-of-the-art reasoning and multimodal understanding across text, image, and video. It surpasses its predecessor on key AI benchmarks and excels at deeper problems such as scientific reasoning, complex coding, spatial logic, and visual and video understanding. The new “Deep Think” mode pushes the boundaries even further, offering enhanced reasoning for very challenging tasks and outperforming Gemini 3 Pro on benchmarks like Humanity’s Last Exam and ARC-AGI. Gemini 3 is now available across Google’s ecosystem, enabling users to learn, build, and plan at new levels of sophistication. With context windows up to one million tokens, more granular media-processing options, and specialized configurations for tool use, the model brings better precision, depth, and flexibility for real-world workflows.
  • 27
    Amazon Nova 2 Lite
    Nova 2 Lite is a lightweight, high-speed reasoning model designed to handle everyday AI workloads across text, images, and video. It can generate clear, context-aware responses and lets users fine-tune how much internal reasoning the model performs before producing an answer. This adjustable “thinking depth” gives teams the flexibility to choose faster replies or more detailed problem-solving depending on the task. It stands out for customer service bots, automated document handling, and general business workflow support. Nova 2 Lite delivers strong performance across standard evaluation tests. It performs on par with or better than comparable compact models in most benchmark categories, demonstrating reliable comprehension and response quality. Its strengths include interpreting complex documents, pulling accurate insights from video content, generating usable code, and delivering grounded answers based on provided information.
  • 28
    Amazon Nova 2 Omni
    Nova 2 Omni is a fully unified multimodal reasoning and generation model capable of understanding and producing content across text, images, video, and speech. It can take in extremely large inputs, ranging from hundreds of thousands of words to hours of audio and lengthy videos, while maintaining coherent analysis across formats. This allows it to digest full product catalogs, long-form documents, customer testimonials, and complete video libraries all at the same time, giving teams a single system that replaces the need for multiple specialized models. With its ability to handle mixed media in one workflow, Nova 2 Omni opens new possibilities for creative and operational automation. A marketing team, for example, can feed in product specs, brand guidelines, reference images, and video content and instantly generate an entire campaign, including messaging, social content, and visuals, in one pass.