7 days ago
- https://tessl.io/registry/skills/github/tobi/qmd/qmd
10 Feb 26
06 Feb 26
Hyprnote is a private, on-device AI notepad that enhances your own notes—without bots, cloud recording, or meeting intrusion. Stay engaged, build your personal knowledge base, and export to tools like Notion on your terms.
04 Feb 26
27 Jan 26
20 Jan 26
100% Private and Local PDF Tools. Merge, compress, and convert PDFs entirely in your browser. No uploads, no cloud, no tracking. The secure alternative to iLovePDF.
19 Jan 26
15 Jan 26
- https://news.ycombinator.com/item?id=46616529
08 Jan 26
Seamlessly share your clipboard between Windows, Mac, Linux, Android, and iOS. End-to-end encrypted, decentralized, and completely private.
05 Dec 25
22 Nov 25
Run Qwen LLMs locally in your browser with WebGPU. Zero installation, instant AI chat.
03 Nov 25
https://news.ycombinator.com/item?id=45798193
02 Nov 25
Web Search MCP Server for use with Local LLMs. A TypeScript MCP (Model Context Protocol) server that provides comprehensive web search capabilities using direct connections (no API keys required), with multiple tools for different use cases.
Features:
- Multi-Engine Web Search: prioritises Bing > Brave > DuckDuckGo for optimal reliability and performance
- Full Page Content Extraction: fetches and extracts complete page content from search results
- Multiple Search Tools: three specialised tools for different use cases
- Smart Request Strategy: switches between Playwright browsers and fast axios requests to ensure results are returned
- Concurrent Processing: extracts content from multiple pages simultaneously
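The multi-engine fallback and concurrent extraction described above can be sketched in TypeScript. This is a minimal illustration, not the project's actual code: the names `searchWithFallback` and `extractAll`, and the injected engine/fetcher functions, are assumptions made here so the control flow is visible without any network access.

```typescript
// Hypothetical sketch of multi-engine fallback search with
// concurrent page extraction. Names are illustrative, not the
// server's real API.

type SearchResult = { title: string; url: string };
type Engine = (query: string) => Promise<SearchResult[]>;

// Try engines in priority order (e.g. Bing > Brave > DuckDuckGo);
// fall through to the next engine on error or empty results.
async function searchWithFallback(
  query: string,
  engines: Engine[],
): Promise<SearchResult[]> {
  for (const engine of engines) {
    try {
      const results = await engine(query);
      if (results.length > 0) return results;
    } catch {
      // Engine failed (blocked, timeout, ...): try the next one.
    }
  }
  return []; // every engine failed or returned nothing
}

// Fetch page content for all results concurrently.
async function extractAll(
  results: SearchResult[],
  fetchPage: (url: string) => Promise<string>,
): Promise<string[]> {
  return Promise.all(results.map((r) => fetchPage(r.url)));
}
```

Injecting the engines and the page fetcher keeps the fallback logic separate from the transport, which mirrors how such a server could swap between a headless Playwright browser and plain axios requests per engine.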
27 Oct 25
Hyperlink is a local-first AI agent that understands your files privately—PDFs, notes, transcripts, and more. No internet required. Data stays secure, offline, and under your control. A Glean alternative built for personal or regulated use.
24 Oct 25
Psst, kid, want some cheap and small LLMs? This blog post provides a comprehensive guide on how to set up and use llama.cpp, a C++ library, to efficiently run large language models (LLMs) locally on consumer hardware.
23 Sep 25
"'Yūrakuchō de Aimashō' (snip) JNR's Yūrakuchō Station and the Eidan subway's Nishi-Ginza Station are a stone's throw from each other. It feels like trying to catch a second loach in the same spot (a Japanese idiom for chasing a repeat of an earlier success)." I see. A perspective befitting a railway expert.