Hugging Face

Software Development

The AI community building the future.

About us

Website
https://huggingface.co
Industry
Software Development
Company size
51-200 employees
Type
Privately Held
Founded
2016
Specialties
machine learning, natural language processing, and deep learning


Updates

  • Hugging Face reposted this

    I am so happy to announce that ggml / llama.cpp are going to join the HF family ❤️🔥

    Georgi Gerganov (🐐) and team are joining HF with the goal of scaling and supporting the community behind ggml and llama.cpp as Local AI continues to make exponential progress in the coming years. We've been working with Georgi and team for quite some time (we even have awesome core contributors to llama.cpp like Xuan-Son and Aleksander on the team already), so this has been a very natural process.

    llama.cpp is the fundamental building block for local inference, and transformers is the fundamental building block for model definition, so this is basically a match made in heaven. 🔥

    Our shared long-term goal is to provide the community with the building blocks to make open-source superintelligence accessible to the world over the coming years. Onwards!

  • Hugging Face reposted this

    Alpamayo 1 is now Hugging Face’s top-downloaded robotics model, with 100K downloads and counting. 🎉 It helps researchers and autonomous-driving practitioners develop and evaluate vision-language-action models for complex autonomous-driving scenarios, especially rare long-tail events.

    🔗 Get started with Alpamayo 1 today: https://nvda.ws/46dtbI2
    🎥 Watch the deep-dive: https://nvda.ws/4aq613w

  • Hugging Face reposted this

    Gradio

    The biggest shift in AI development isn't a new model. It's the feedback loop🔥⬇️

    OpenClaw hit 200K GitHub stars by embracing a simple truth: LLMs are excellent at writing and running code. So let them. Gradio just made the same bet. Our new gr.HTML feature allows any LLM to generate a complete web app (frontend, backend, state management) in a single Python file.

    > No build step.
    > No dependency hell.
    > No "now add this into your config." 💯

    Why this matters:
    😥 Traditional development: Idea → plan → scaffold → configure → build → debug → deploy
    🤩 Vibe coding: Idea → describe → generate → run → iterate → ship

    Each cycle takes seconds with Gradio's reload mode. We are not saying frameworks are dead. But we are saying: if an LLM can ship a working Kanban board with drag-and-drop (check out the attached video) in one prompt and one Python file, the ceiling for "prototyping" just disappeared.

    🤔 For anyone building AI tools, ML demos, or internal apps, this changes the math on build vs. buy vs. prompt.

    Full technical breakdown with 6 working examples → https://lnkd.in/gn4mSHyy
    Access these 6 viral apps directly here → https://lnkd.in/gFA3a8g3
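    A minimal sketch of the single-file pattern described in the post, assuming Gradio's gr.HTML component; the click counter here is a hypothetical stand-in for the Kanban-board demo, not code from the post.

```python
def render_counter(count: int) -> str:
    # Frontend markup generated by backend logic, all in one file.
    return f"<div style='font-size:2em'>Clicks: {count}</div>"

def build_demo():
    import gradio as gr  # deferred so the pure logic above runs anywhere

    with gr.Blocks() as demo:
        count = gr.State(0)              # state lives in the same file
        html = gr.HTML(render_counter(0))
        btn = gr.Button("Click me")

        def on_click(c):
            # Update state and re-render the HTML in one step.
            return c + 1, render_counter(c + 1)

        btn.click(on_click, inputs=count, outputs=[count, html])
    return demo

# build_demo().launch()  # or save as app.py and run `gradio app.py` for reload mode
print(render_counter(3))
```

    No build step and no config: the markup, state, and event handling all sit in one Python file, which is what makes the generate → run → iterate loop fast.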

  • Hugging Face reposted this

    Is it worth re-OCRing digitised collections? How much does it cost to OCR the 1771 Encyclopaedia Britannica using a VLM-based OCR model?

    VLM-based OCR models have made serious progress in the last year. They have shrunk from 8B-parameter models to <1B, so they are getting cheaper and easier to run. Many libraries and GLAM institutions digitised their collections decades ago, but the OCR from that era often isn't great. The question now is whether it's practical to redo it, and how much it would cost.

    This is a big topic, but I'm starting to work on some numbers. I tested this on the 1771 Encyclopaedia Britannica: 2,724 pages of 250-year-old text. Using GLM-OCR (a 0.9B open-source model) on a single GPU, the entire collection cost about $5 to process. That's $0.002 per page. To put that in context:
    • A 10,000-page collection: ~$20
    • A million pages: ~$2,000

    This is without much optimisation, so I think these costs could go down a bit even with the current generation of models. The quality improvement can be significant: the original OCR has errors like misread words and broken layout, while the new output is clean, structured markdown. Before/after in the image.

    The tooling has also gotten simpler. The whole job runs as a single command via Hugging Face Jobs, i.e. no Docker, no environment setup. This also means you can easily do a quick test of an OCR model without spending half a day setting things up locally (which for many libraries without GPUs isn't even feasible).

    I have some more fun (if you find OCR fun...) stuff I'm working on for this topic that I hope to share soon!

    Full dataset: https://lnkd.in/eWKQn2Qb
    Scripts: https://lnkd.in/e8mkUkvz
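    A back-of-the-envelope check of the figures above: ~$5 for 2,724 pages, extrapolated to larger collections (the post's "$0.002 per page" is a rounded figure).

```python
# Cost figures from the post: ~$5 to process 2,724 pages.
total_cost_usd = 5.00
pages = 2724

cost_per_page = total_cost_usd / pages      # ~$0.0018/page, rounds to $0.002
estimate_10k = cost_per_page * 10_000       # ~$18, i.e. roughly $20
estimate_1m = cost_per_page * 1_000_000     # ~$1,835, i.e. roughly $2,000

print(f"${cost_per_page:.4f}/page | 10k pages ≈ ${estimate_10k:.0f} | 1M pages ≈ ${estimate_1m:,.0f}")
```

    The per-page rate extrapolates linearly here, which is optimistic only if GPU utilisation stays constant; as the post notes, further optimisation could push these numbers down.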

  • Hugging Face reposted this

    Just shipped 🚀 A new task guide for Audio Language Models in the Hugging Face Transformers docs!

    If you’ve been curious about bleeding-edge audio LMs like Audio Flamingo, which can combine audio + text prompts for instruction-guided generation and understanding, this one’s for you. The new Audio-Text-to-Text task guide walks through how to:
    ✅ Prepare audio + text inputs
    ✅ Load the right models + processors
    ✅ Run inference end-to-end
    ✅ Build instruction-guided audio understanding pipelines

    If you’re building assistants that can listen and reason, or exploring multimodal UX beyond pure text, this is a great place to start!
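    A hedged sketch of the audio + text inference flow such a guide covers, using Qwen2-Audio as a stand-in checkpoint (the guide itself covers models like Audio Flamingo; the file name "clip.wav" and the helper names are placeholders, not from the docs).

```python
def build_conversation(audio_path: str, question: str) -> list:
    # Chat-style input mixing an audio clip with a text instruction.
    return [
        {"role": "user", "content": [
            {"type": "audio", "audio_url": audio_path},
            {"type": "text", "text": question},
        ]},
    ]

def run_inference(audio_path: str, question: str) -> str:
    # Heavy imports deferred so the pure helper above runs anywhere.
    import librosa
    from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

    model_id = "Qwen/Qwen2-Audio-7B-Instruct"
    processor = AutoProcessor.from_pretrained(model_id)
    model = Qwen2AudioForConditionalGeneration.from_pretrained(model_id)

    conversation = build_conversation(audio_path, question)
    prompt = processor.apply_chat_template(
        conversation, add_generation_prompt=True, tokenize=False
    )
    # Resample the clip to the rate the feature extractor expects.
    audio, _ = librosa.load(audio_path, sr=processor.feature_extractor.sampling_rate)

    inputs = processor(text=prompt, audios=[audio], return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0]

# run_inference("clip.wav", "What is happening in this recording?")
conv = build_conversation("clip.wav", "Describe this recording.")
print(conv[0]["role"])
```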

  • Hugging Face reposted this

    Today, Evaluating Evaluations is introducing Every Eval Ever: a unified, open data format and public dataset for AI evaluation results.

    Evaluation data is everywhere: in model releases, competitive leaderboards, arenas, and papers, used for capability/risk measurement, model choice, and governance. But these results aren't format-compatible with each other in any meaningful way, and that has real costs for research, reproducibility, and information parsing.

    We are releasing standards for an aggregate schema and an instance-level schema! We took great care to incorporate almost every edge case we found in how people report evaluations for LLMs. And we are not done: multimodal and other support is in the works.

    Out of the gate, we wrote converters for the HELM, Inspect AI, and lm-eval formats, and we are starting to populate a massive dataset of every eval ever on Hugging Face :) We ate our own dogfood: an internal version of this dataset helped power our big AI benchmark saturation study, which we will be releasing in the coming weeks. Having all the data together unlocks levels of meta-research that are very hard to do meaningfully otherwise.

    To turbocharge the data collection, we are also launching a shared task at #ACL 2026 to solicit data submissions :) There will be prizes and paper authorship!

    Leading this project and getting feedback from incredible partners across the ecosystem has been truly a dream. Feedback came from researchers at orgs like the US CAISI, Hugging Face, EleutherAI, Inspect, HELM, Technical University of Munich, Massachusetts Institute of Technology, Northeastern University, University of Copenhagen (Københavns Universitet), IBM, Noma Security, Trustible, Meridian, AI Verification and Evaluation Research Institute (AVERI), The Collective Intelligence Project, Weizenbaum Institute, and Evidence Prime.

    Can't wait to see what you all do with this standard and with the data. All links in comments!
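    A hypothetical illustration of why a unified instance-level format helps: with one schema, records converted from any harness can be validated and compared with the same code. None of the field names below are confirmed by the post; the real ones come from the released schemas.

```python
# Hypothetical instance-level record; field names are illustrative only.
record = {
    "model": "my-org/my-model",
    "benchmark": "example-benchmark",
    "instance_id": "q-0042",
    "input": "What is 2 + 2?",
    "output": "4",
    "score": 1.0,
    "source_harness": "lm-eval",  # the format this record was converted from
}

def is_valid(rec: dict) -> bool:
    # With a shared schema, cross-harness checks like this become trivial.
    required = {"model", "benchmark", "instance_id", "score"}
    return required <= rec.keys() and 0.0 <= rec["score"] <= 1.0

print(is_valid(record))
```

    Meta-research like the benchmark-saturation study mentioned above amounts to aggregating many such records, which is only meaningful once they share one format.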

  • Great video and feedback from the University of Zurich x HF RL class 🎓 We are looking for more universities to join our Academia Hub program; get in touch with Penelope Gittos if you'd like to partner!

    Last semester, students of Giorgia Ramponi's Reinforcement Learning course used Hugging Face to develop course projects that exposed them to real-life research tasks, as part of a bigger push at the University of Zurich (and beyond) to bring real-world challenges into the classroom. The HF Academia Hub was instrumental in providing our students with compute power to do all sorts of cool things, which we got to see at the final poster session. Here's a recap filmed and edited by Oriane Pierrès! UZH Department of Informatics UZH.ai

Funding

Hugging Face: 8 total rounds
Last round: Series unknown