Discord Session: Migrating to Qdrant Edge for On-Disk Vector Storage in Rust

We're excited to host Clelia Astra Bertelli from LlamaIndex for a deep-dive session on "Migrating to Qdrant Edge for On-Disk Vector Storage in Rust." If you're building AI systems in Rust or working with production-grade vector search, this session is for you.

🗓 February 27th
⏰ 4:00 PM CET / 7:00 AM PT / 8:30 PM IST
📍 Happening live on the Qdrant Discord server
👉 Join us here: https://lnkd.in/gAGnAuqe

Clelia will cover:
- Why and when to migrate to Qdrant Edge
- Trade-offs of on-disk vector storage
- Performance considerations in Rust-based systems
- Wins, pitfalls, and lessons learned

If you're optimizing for edge deployments, reducing memory footprint, or scaling vector workloads efficiently, you'll walk away with practical insights. Bring your questions - see you there!

#Qdrant #LlamaIndex #VectorSearch #Rust #EdgeAI #OnDiskStorage #RAG #GenAI #AIEngineering
Qdrant
Software Development
Berlin, Berlin 52,012 followers
Massive-Scale AI Search Engine & Vector Database
About us
Powering the next generation of AI applications with advanced, high-performance vector similarity search technology. Qdrant is an open-source vector search engine. It deploys as an API service that provides nearest-neighbor search over high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more. Make the most of your unstructured data!
- Website
- https://qdrant.tech
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- Berlin, Berlin
- Type
- Privately Held
- Founded
- 2021
- Specialties
- Deep Tech, Search Engine, Open-Source, Vector Search, Rust, Vector Search Engine, Vector Similarity, Artificial Intelligence, Machine Learning, and Vector Database
Products
Qdrant
Machine Learning Software
Qdrant develops high-performance vector search technology that lets everyone use state-of-the-art neural network encoders at production scale. The main project is the vector search engine. It deploys as an API service that provides search over high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and many other ways of making the most of unstructured data. It is easy to use, deploy, and scale, and it is both blazing fast and accurate. The Qdrant engine is open-source, written in Rust, and is also available as a managed Vector Search as a Service (https://cloud.qdrant.io) or as a managed on-premise solution.
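As a concrete illustration of the "deploys as an API service" point above, a nearest-neighbor query against a running Qdrant instance is a single HTTP call. Here is a minimal sketch of the request body for the REST search endpoint, built as a plain dict; the 4-dimensional vector is illustrative only (real embeddings typically have hundreds of dimensions), and the collection name in the comment is hypothetical:

```python
import json

# Sketch of a request body for Qdrant's REST search endpoint,
# e.g. POST /collections/my_collection/points/search
# ("my_collection" is a made-up name for illustration).
search_request = {
    "vector": [0.05, -0.12, 0.33, 0.71],  # query embedding
    "limit": 3,                           # return the 3 nearest neighbors
    "with_payload": True,                 # include stored metadata in results
}

# Serialize for sending as the JSON body of the HTTP request.
print(json.dumps(search_request, indent=2))
```

The official client libraries (Python, Rust, Go, and others) wrap this call in typed helpers, but the underlying shape of the request is the same.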
Locations
- Berlin, Berlin 10115, DE (Primary)
- New York, New York, US
Updates
-
Qdrant reposted this
Here are some super simple tips and a reference card for using filters in Qdrant. The filtering capabilities are quite sophisticated and can really help you optimize both retrieval speed and quality.

🤔 Did you know Qdrant filtering is neither pre- nor post-filtering? Filtering happens while the HNSW graph is being traversed, allowing both efficient computation and better retrieval recall.

🤔 Did you know you can pass arrays in filter criteria? No need to limit yourself to single matches.

🤔 Did you know Qdrant has nested filters? Simply use dot notation, e.g. "user.name.address", to filter on nested payload fields.

____

💡 Want to learn more? Check out our filtering documentation and the Essentials course lesson on filtering:
Docs --> https://lnkd.in/e29222YZ
Course --> https://lnkd.in/eQeqn5Yi
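To make the tips above concrete, here is a minimal sketch of what such a filter looks like as a REST-style payload. The field names (`category`, `color`, `user.address.city`) are made up for illustration; see the linked docs for the exact schema:

```python
import json

# A Qdrant-style filter combining the three tips above:
# - a plain single-value match,
# - an array match ("any" of several accepted values),
# - a nested payload field addressed with dot notation.
qdrant_filter = {
    "must": [
        {"key": "category", "match": {"value": "books"}},            # single match
        {"key": "color", "match": {"any": ["red", "green"]}},        # array match
        {"key": "user.address.city", "match": {"value": "Berlin"}},  # nested field
    ]
}

# This dict is sent as the "filter" part of a search request body.
print(json.dumps(qdrant_filter, indent=2))
```

Because the filter is applied during HNSW traversal rather than before or after it, the same payload shape works across query types, and the client libraries (e.g. `qdrant-client` in Python) expose equivalent typed builders for it.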
-
🇳🇱 Amsterdam, Join Us for the Context Engineering Meetup

Stop by our Context Engineering Meetup in Amsterdam on the 3rd of March! Good context engineering in production systems comes from domain expertise, data structuring choices, and low-level technical decisions. Let's develop an intuition for good context engineering together and apply it in practice!

🗓️ March 3, 18:00-21:30
📍 Mr. Green Offices, Amsterdam
🔗 Register here: https://luma.com/dtswysbb

Speakers:
1️⃣ Robert Caulk, Founder of AskNews, with "From Raw Data to Queryable Structures: The High-Level Decisions That Make All the Difference" → Includes live data structuring consulting, so bring your own data!
2️⃣ Evgeniya Sukhodolskaya, Developer Advocate at Qdrant, and a special guest from Neo4j, Niels de Jong, with "Context Engineering for an Agentic Medical Research Assistant Using Vector Search and Graph Database Tooling" → A walkthrough of building an AI medical copilot powered by Qdrant vector search and Neo4j graph database tooling.

Come learn, bring your use case, ask questions, and stay for networking + refreshments. See you in Amsterdam 👋

#Qdrant #ContextEngineering #VectorSearch #RAG #AgenticAI #Neo4j #LLM #AIEngineering #Amsterdam
-
Qdrant reposted this
This week, we received boards from the Arduino team for testing Qdrant Edge deployment - and Andrey Vasnetsov just made it happen: the Qdrant Edge Engine is now running on an Arduino UNO Q.

🤖, we are here to provide you with an information retrieval layer and context for your memory functions. ⤵️ qdrant.to/edge

🤖, stay tuned.
-
🚀 Congratulations to cognee on raising a $7.5M seed round, led by pebblebed, the venture fund of Pamela Vagata (Co-founder of OpenAI) and Keith Adams (Founder of Facebook AI Research Lab).

At Qdrant, we strongly believe that agents must retrieve, reason, and remember with structure. Stateless agents hallucinate. They forget. Teams end up duct-taping RAG pipelines, vector stores, rules, and logs together. That's why we're excited to see the emergence of a clear category: AI Memory.

cognee is building structured, governed, and continuously improving memory for AI agents - bringing context engineering and knowledge engineering into production systems at scale. From a fast-growing open-source project running 1M+ pipelines per month to adoption by 70+ companies like Bayer, the momentum speaks for itself.

We're proud to partner with cognee as they bring structured AI memory to agents and push forward the future of production-grade, agentic AI systems. Excited for what this next chapter unlocks. 👏

#AIMemory #AgenticAI #ContextEngineering #KnowledgeEngineering #VectorSearch #Qdrant #AIInfrastructure
-
Qdrant reposted this
Announcing the first group of speakers for AI Dev 26 × San Francisco! This lineup brings AI founders, engineers, entrepreneurs, and researchers from Oracle, Actian, LandingAI, Silicon Valley Girl, Datadog, Neo4j, Andela, Sonar, Snowflake, Box, Unblocked, Redis, Chroma, Reducto, Qdrant, CopilotKit, Zencoder, Giskard, Agentic Fabriq (YC F25), Spice AI, and Vocal Bridge. Join them in April for insightful talks, hands-on technical workshops designed for real implementation, live demos, a new startup track, and the chance to connect with 3,000+ fellow builders. Tickets are limited. Reserve your seat before they’re gone: https://bit.ly/4aiyfNp More speakers and partners to be announced soon!
-
From Static Embeddings to Dynamic AI Assistants - Powered by Qdrant

In a recent LinkedIn Pulse article, Michael Folino revisits a Kafka AI assistant project originally built in 2022 using FAISS and upgrades it for 2025 using Qdrant.

The key shift? Moving from a static embedding setup to a more dynamic architecture where:
✅ Qdrant serves as the semantic memory layer
✅ Internal knowledge is stored as embeddings in a scalable vector store
✅ Live web search complements stored knowledge
✅ The assistant delivers more relevant, up-to-date responses

Instead of relying on stale pre-indexed vectors, the updated system combines:
• Persistent semantic memory (Qdrant)
• Real-time external retrieval
• Agent-based reasoning

This is a great example of how modern AI assistants are evolving, and of how vector search solutions like Qdrant are central to building grounded, production-ready AI systems. If you're upgrading from early RAG prototypes to more advanced, dynamic pipelines, this article is worth a read.

Read it here: https://lnkd.in/gW6w5GRp

#Qdrant #VectorSearch #RAG #AgenticAI #SemanticSearch #GenAI #AIEngineering #Kafka
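The "persistent memory + live retrieval" pattern described above can be sketched in a few lines. This is not the article's implementation; the retriever functions are stand-ins (in a real system, `search_memory` would query Qdrant and `search_web` would call a search API), and the sample hits are invented:

```python
# Minimal sketch: merge hits from a persistent vector store with live web
# results, then hand the best ones to the agent as context.

def search_memory(query: str) -> list[dict]:
    # Stand-in for a Qdrant similarity search over pre-indexed embeddings.
    return [{"text": "Kafka consumer groups balance partitions.",
             "score": 0.82, "source": "memory"}]

def search_web(query: str) -> list[dict]:
    # Stand-in for a live web search that fills gaps in stored knowledge.
    return [{"text": "Recent Kafka releases run without ZooKeeper.",
             "score": 0.74, "source": "web"}]

def retrieve_context(query: str, top_k: int = 3) -> list[dict]:
    """Combine persistent-memory hits with live results, best scores first."""
    hits = search_memory(query) + search_web(query)
    hits.sort(key=lambda h: h["score"], reverse=True)
    return hits[:top_k]

for hit in retrieve_context("What changed in recent Kafka releases?"):
    print(f'[{hit["source"]} {hit["score"]:.2f}] {hit["text"]}')
```

One caveat on the design: scores from different retrievers are generally not directly comparable, so production systems tend to rerank the merged list (e.g. with a cross-encoder or reciprocal rank fusion) rather than sorting raw scores as this sketch does.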
-
Infrastructure matters. But in AI systems, retrieval performance often determines whether your application feels instant or unusable.

In this blog post, the team at Qovery shares how they use Qdrant as the vector search backbone of their AI workflows.

Where does Qdrant fit in?
• Storing and indexing high-dimensional embeddings
• Enabling fast, low-latency semantic search
• Powering retrieval pipelines for AI applications
• Supporting scalable vector workloads in production

By combining Qovery's environment automation with Qdrant's high-performance vector search, teams can:
✅ Spin up AI projects faster
✅ Deploy retrieval systems reliably
✅ Iterate on GenAI pipelines without infra bottlenecks

This is a great real-world example of how vector search is not just a feature but core infrastructure for modern AI systems.

Read the full article here: https://lnkd.in/geteVADX

If you're building RAG systems or production AI workflows, understanding how Qdrant fits into modern platform stacks is key. Do reach out if you need help 😊

#Qdrant #VectorSearch #RAG #GenAI #AIEngineering #CloudNative #PlatformEngineering #Qovery
-
Just a quick reminder that Qdrant Office Hours is coming up on Feb 19, 2026 at 17:00 CET / 08:00 PST - and we're really excited about this one.

We'll be joined by:
• hafedh hichri from Chonkie
• Clelia from LlamaIndex, who will give us a quick glimpse into her upcoming session "Migrating to Qdrant Edge for On-Disk Vector Storage in Rust: Wins & Pitfalls to Avoid" - more details soon.

If you're working on:
– Rust-based AI systems
– On-disk vector storage
– Edge deployments
– or just scaling RAG systems in production
you'll definitely want to tune in.

As always, Office Hours are chill, interactive, and community-first - so bring your questions and let's talk vector search.

🔗 Join us here: https://lnkd.in/gYCxFJE9

See you all on the 19th 🙌

#Qdrant #VectorSearch #OfficeHours #LlamaIndex #Rust #RAG #GenAI
-