<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Wes</title>
    <description>The latest articles on DEV Community by Wes (@ticktockbent).</description>
    <link>https://dev.to/ticktockbent</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3796484%2F937e86ef-88e2-4887-914c-d79b7e29dadf.png</url>
      <title>DEV Community: Wes</title>
      <link>https://dev.to/ticktockbent</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ticktockbent"/>
    <language>en</language>
    <item>
      <title>Your Artifact Registry Doesn't Need 2 GB of RAM</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Mon, 06 Apr 2026 11:12:12 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-artifact-registry-doesnt-need-2-gb-of-ram-3ckp</link>
      <guid>https://dev.to/ticktockbent/your-artifact-registry-doesnt-need-2-gb-of-ram-3ckp</guid>
      <description>&lt;p&gt;Every team eventually needs an artifact registry. You need somewhere to push Docker images, host internal npm packages, or cache Maven dependencies so your builds don't break when a mirror goes down. The standard answer is Nexus or Artifactory. Both work. Both are also Java applications that need a JVM, a database, careful heap tuning, and at least 2 GB of RAM before they'll serve a single artifact. On a CI server that's already running builds, that memory budget hurts.&lt;/p&gt;

&lt;p&gt;One developer decided the problem was simpler than the existing solutions make it look. The result is a 32 MB Rust binary that handles seven package protocols on less than 100 MB of RAM.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Nora?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/getnora-io/nora" rel="noopener noreferrer"&gt;Nora&lt;/a&gt; is a lightweight artifact registry built by &lt;a href="https://github.com/devitway" rel="noopener noreferrer"&gt;devitway&lt;/a&gt; (Pavel Volkov). It supports Docker/OCI, Maven, npm, PyPI, Cargo, Go modules, and raw file hosting in a single binary. It includes a web UI dashboard, Prometheus metrics, token auth with Argon2id password hashing, S3 or local storage backends, mirror/proxy mode for air-gapped environments, and garbage collection. You configure it with a TOML file and run it. That's it.&lt;/p&gt;

&lt;p&gt;36 stars. Three months of focused solo development. About 19,600 lines of Rust across 45 source files. This project is doing real work and almost nobody knows about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/getnora-io/nora" rel="noopener noreferrer"&gt;Nora&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;36 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (devitway)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;411 tests, proptest, fuzz targets, 61.5% coverage, CI that would make a team project jealous&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good README, CONTRIBUTING.md with build/test/PR instructions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same-day review on PRs, warm and specific feedback&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Getting there. Mirror mode and auth are solid. Watch this space.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The architecture is what you'd hope for: each registry protocol gets its own module under &lt;code&gt;src/registries/&lt;/code&gt;, storage is abstracted behind a trait (local filesystem or S3), and the HTTP layer is axum with tokio. Config is validated at startup. There's no clever metaprogramming, no macro soup. You can open the Docker registry module and understand what it does without reading three layers of indirection first.&lt;/p&gt;
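
&lt;p&gt;The storage-trait idea is easy to picture. Here's a minimal sketch of the pattern, transliterated to JavaScript for illustration; the class and method names are my invention, not Nora's actual API.&lt;/p&gt;

```javascript
// Hypothetical sketch of a storage abstraction like Nora's (names are
// illustrative guesses, not Nora's real trait or methods).
class LocalStore {
  constructor() { this.blobs = new Map(); }
  async put(digest, bytes) { this.blobs.set(digest, bytes); }
  async get(digest) { return this.blobs.get(digest) ?? null; }
}

class S3Store {
  constructor(client, bucket) { this.client = client; this.bucket = bucket; }
  async put(digest, bytes) { /* client.putObject(bucket, digest, bytes) */ }
  async get(digest) { /* client.getObject(bucket, digest) */ return null; }
}

// Registry handlers code against the shared shape, so switching from
// local disk to S3 is a config change, not a code change.
async function storeArtifact(store, digest, bytes) {
  await store.put(digest, bytes);
  return store.get(digest);
}
```

&lt;p&gt;In the real codebase this lives behind a Rust trait; the point is that every protocol module talks to the same small interface.&lt;/p&gt;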

&lt;p&gt;The dependency list is restrained for a project this ambitious. axum and tokio handle the HTTP server. reqwest handles upstream proxy requests. serde and toml for config. The entire Cargo.toml reads like someone who picks dependencies on purpose rather than pulling in whatever shows up first on crates.io. For context: Nexus Repository Manager pulls in over 400 Maven dependencies. Nora's &lt;code&gt;Cargo.lock&lt;/code&gt; has about 350 crate entries, but most of those are transitive deps from tokio and reqwest. The direct dependency count is small.&lt;/p&gt;

&lt;p&gt;The CI pipeline is where Nora stands out from other solo projects. Most one-person repos have a test workflow and maybe clippy. Nora runs: &lt;code&gt;cargo fmt&lt;/code&gt;, clippy with &lt;code&gt;-D warnings&lt;/code&gt;, the full test suite, &lt;code&gt;cargo-audit&lt;/code&gt; for vulnerability scanning, &lt;code&gt;cargo-deny&lt;/code&gt; for license and supply-chain policy, Trivy for container scanning, Gitleaks for secret detection, CodeQL for static analysis, and OpenSSF Scorecard for security posture. That is not a typical solo developer setup. It's more thorough than plenty of team projects I've contributed to.&lt;/p&gt;

&lt;p&gt;The test suite backs it up. 411 &lt;code&gt;#[test]&lt;/code&gt; functions across 29 files. Proptest for parser fuzzing. Fuzz targets via &lt;code&gt;cargo-fuzz&lt;/code&gt; with ClusterFuzzLite integration. Integration tests that spin up the actual binary and exercise all seven protocols. Playwright end-to-end tests for the web UI. Coverage measured at 61.5% via tarpaulin.&lt;/p&gt;

&lt;p&gt;Security is taken seriously too. Credentials use Argon2id with &lt;code&gt;zeroize&lt;/code&gt; to scrub secrets from memory after use. The token verification layer has a TTL cache so it's not re-hashing on every request. There are explicit TOCTOU race condition fixes in the storage layer and request deduplication for the proxy mode so concurrent pulls for the same image don't stampede the upstream registry.&lt;/p&gt;
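
&lt;p&gt;The TTL cache is worth pausing on, because it's the standard answer to "Argon2id is deliberately slow, but it sits on the hot path." A minimal sketch of the idea, illustrative only; Nora's real layer hashes with Argon2id and zeroizes secrets:&lt;/p&gt;

```javascript
// Minimal TTL cache for token verification. The expensive check runs
// once per TTL window; everything in between is a Map lookup.
class TokenCache {
  constructor(ttlMs, verify) {
    this.ttlMs = ttlMs;
    this.verify = verify;       // the slow check, e.g. an Argon2id hash
    this.entries = new Map();   // token -> { ok, expires }
  }
  check(token, now = Date.now()) {
    const hit = this.entries.get(token);
    if (hit !== undefined) {
      if (now > hit.expires) {
        this.entries.delete(token);  // expired: fall through and re-verify
      } else {
        return hit.ok;               // fresh: skip the expensive hash
      }
    }
    const ok = this.verify(token);
    this.entries.set(token, { ok, expires: now + this.ttlMs });
    return ok;
  }
}
```

&lt;p&gt;The expensive verification runs once per TTL window per token instead of once per request, which is why a hardened hash doesn't wreck request latency.&lt;/p&gt;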

&lt;p&gt;What's rough? The web UI modules are untested. &lt;code&gt;ui/api.rs&lt;/code&gt; (1,010 lines), &lt;code&gt;ui/templates.rs&lt;/code&gt; (861 lines), and &lt;code&gt;ui/components.rs&lt;/code&gt; (783 lines) have zero test coverage between them. That's 2,654 lines of code running the dashboard with no safety net. There's also a dead code problem: &lt;code&gt;error.rs&lt;/code&gt; defines a full &lt;code&gt;AppError&lt;/code&gt; type with an &lt;code&gt;IntoResponse&lt;/code&gt; impl that's been sitting unused since v0.3. The comment says "wiring into handlers planned for v0.3" but they're on v0.4 now, and handlers still construct status code responses manually. The garbage collector has a subtler issue: &lt;code&gt;collect_all_blobs&lt;/code&gt; scans all seven registries, but &lt;code&gt;collect_referenced_digests&lt;/code&gt; only reads Docker manifests. Non-Docker blobs would look like orphans to the GC. None of these are deal-breakers, but they're the kind of gaps that matter as the project grows.&lt;/p&gt;
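
&lt;p&gt;The GC gap is easiest to see in miniature. A toy mark-and-sweep in JavaScript; the function and registry names here are hypothetical, not Nora's actual code:&lt;/p&gt;

```javascript
// Toy illustration of the GC gap: candidates come from every registry,
// but references come from Docker manifests only.
function findOrphans(blobsByRegistry, dockerManifestRefs) {
  const referenced = new Set(dockerManifestRefs);  // Docker-only scan
  const orphans = [];
  for (const [registry, digests] of Object.entries(blobsByRegistry)) {
    for (const digest of digests) {
      if (!referenced.has(digest)) orphans.push(`${registry}:${digest}`);
    }
  }
  return orphans;
}

// A live npm tarball is flagged for deletion because nothing scans
// npm metadata for references:
const orphans = findOrphans(
  { docker: ["d1"], npm: ["n1"] },
  ["d1"]  // only Docker manifests were read
);
// orphans now contains "npm:n1" even though the package is still published
```

&lt;p&gt;Until the reference scan covers all seven registries, running the sweep against non-Docker storage would delete artifacts that are still in use.&lt;/p&gt;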

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;I was reading through the metrics code when I noticed &lt;code&gt;detect_registry()&lt;/code&gt; had match arms for Docker, Maven, npm, PyPI, and Cargo, but Go and Raw requests fell through to an &lt;code&gt;"other"&lt;/code&gt; catch-all. Every request to those two registries was invisible in Prometheus. The &lt;code&gt;RegistriesHealth&lt;/code&gt; struct had the same gap: five fields for five registries, but Go and Raw weren't represented. The health endpoint would report them as down even when they were running fine.&lt;/p&gt;

&lt;p&gt;The kicker was the test suite. There was already a test for Go registry path detection. It asserted that a Go module request should be labeled &lt;code&gt;"other"&lt;/code&gt;. The test was passing because it was checking for the wrong thing.&lt;/p&gt;

&lt;p&gt;The fix was straightforward: add the missing match arms in &lt;code&gt;detect_registry()&lt;/code&gt;, add the &lt;code&gt;go&lt;/code&gt; and &lt;code&gt;raw&lt;/code&gt; fields to &lt;code&gt;RegistriesHealth&lt;/code&gt; and its construction in &lt;code&gt;check_registries_health()&lt;/code&gt;, and fix the test to assert the correct label. &lt;a href="https://github.com/getnora-io/nora/pull/97" rel="noopener noreferrer"&gt;PR #97&lt;/a&gt;.&lt;/p&gt;
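
&lt;p&gt;The shape of the fix, transliterated from Rust to JavaScript; the path prefixes below are illustrative guesses, not Nora's actual routes:&lt;/p&gt;

```javascript
// Sketch of the detect_registry fix. Before PR #97, the last two
// checks didn't exist, so Go and Raw requests fell through to "other".
function detectRegistry(path) {
  if (path.startsWith("/v2/")) return "docker";
  if (path.startsWith("/maven/")) return "maven";
  if (path.startsWith("/npm/")) return "npm";
  if (path.startsWith("/pypi/")) return "pypi";
  if (path.startsWith("/cargo/")) return "cargo";
  if (path.startsWith("/go/")) return "go";    // added
  if (path.startsWith("/raw/")) return "raw";  // added
  return "other";
}
```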

&lt;p&gt;Getting into the codebase was easy. The CONTRIBUTING.md lays out build and test commands clearly. The module structure maps directly to concepts: if you want to understand how Docker pushes work, you open &lt;code&gt;src/registries/docker/&lt;/code&gt;. If you want metrics, you open &lt;code&gt;src/metrics.rs&lt;/code&gt;. I found the bugs by reading, not by fighting the project layout. The whole thing compiled and tested cleanly on the first try.&lt;/p&gt;

&lt;p&gt;This was the first external PR the project had ever received. The maintainer reviewed it, approved it, and merged it the same day. His comment:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"This is the very first community PR that NORA has ever received, and it means a lot. The fact that you not only noticed the missing Go and Raw registries in metrics, but took the time to write a clean fix with proper tests... Welcome to the team."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That response tells you something about whether this project has a future.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Nora is for teams that need artifact hosting without the Java tax. If you're running a small engineering team, managing a homelab, or working in an air-gapped environment where you can't reach public registries, a 32 MB binary that handles seven protocols in under 100 MB of RAM is a compelling alternative to configuring Nexus heap flags.&lt;/p&gt;

&lt;p&gt;The project's trajectory is strong. January was the initial scaffold with Docker support. February added six more protocols. March brought security hardening, proptest, integration tests, and a coverage push from 22% to 61.5%. April shipped v0.4 with mirror mode and air-gap support. That's a lot of ground to cover in three months, and the commit history tells a consistent story: conventional commits, one concern per commit, no monolithic dumps.&lt;/p&gt;

&lt;p&gt;What would push Nora to the next level? Wire in the &lt;code&gt;AppError&lt;/code&gt; type so error responses are consistent across registries. Fix the GC so it doesn't treat non-Docker blobs as orphans. Get some test coverage on the UI modules. And keep doing what's already working, because the foundation is solid.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you've ever winced at your artifact registry's memory usage, &lt;a href="https://github.com/getnora-io/nora" rel="noopener noreferrer"&gt;give Nora a look&lt;/a&gt;. The codebase is clean, the CI is thorough, and the maintainer merges good work the same day you submit it.&lt;/p&gt;

&lt;p&gt;Star the repo. Try running it against your Docker workflow. If you want to contribute, the &lt;code&gt;AppError&lt;/code&gt; wiring and the GC's Docker-only reference scanning are both waiting for someone to pick them up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #N, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>I Renamed All 43 Tools in My MCP Server. Here's Why I Did It Now.</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Fri, 03 Apr 2026 23:53:10 +0000</pubDate>
      <link>https://dev.to/ticktockbent/i-renamed-all-43-tools-in-my-mcp-server-heres-why-i-did-it-now-hic</link>
      <guid>https://dev.to/ticktockbent/i-renamed-all-43-tools-in-my-mcp-server-heres-why-i-did-it-now-hic</guid>
      <description>&lt;p&gt;Charlotte has 111 stars. That's not a lot. But it's enough that a breaking change will annoy real people.&lt;/p&gt;

&lt;p&gt;I shipped one anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  The naming problem
&lt;/h2&gt;

&lt;p&gt;When I started building &lt;a href="https://github.com/TickTockBent/charlotte" rel="noopener noreferrer"&gt;Charlotte&lt;/a&gt; in February, I named every tool with a colon separator: &lt;code&gt;charlotte:navigate&lt;/code&gt;, &lt;code&gt;charlotte:observe&lt;/code&gt;, &lt;code&gt;charlotte:click&lt;/code&gt;. It looked clean. It felt namespaced. Every tool call in every session used it.&lt;/p&gt;

&lt;p&gt;The problem: the MCP spec restricts tool names to the character set &lt;code&gt;[A-Za-z0-9_.-]&lt;/code&gt;. The colon isn't in that set. It never was. I either didn't check or didn't care at the time. The MCP SDK was lenient about it until v1.26.0, which started emitting validation warnings on every tool registration.&lt;/p&gt;
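
&lt;p&gt;The check itself is one regular expression. A sketch using the charset from the spec as quoted above; the anchors and the &lt;code&gt;+&lt;/code&gt; quantifier are mine:&lt;/p&gt;

```javascript
// Tool names must match the MCP spec's allowed charset.
const VALID_TOOL_NAME = /^[A-Za-z0-9_.-]+$/;

VALID_TOOL_NAME.test("charlotte:navigate");  // false: ":" is not in the set
VALID_TOOL_NAME.test("charlotte_navigate");  // true
```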

&lt;p&gt;I had two options. Fix it now with 111 stars and a handful of active users. Or fix it later with more stars, more users, more documentation, more muscle memory, and more pain.&lt;/p&gt;

&lt;p&gt;I renamed all 43 tools from &lt;code&gt;charlotte:xxx&lt;/code&gt; to &lt;code&gt;charlotte_xxx&lt;/code&gt; in a single commit. Breaking change. Documented in the changelog. Migration note in the release.&lt;/p&gt;

&lt;p&gt;Here's the thing about MCP: clients discover tools dynamically at connection time. When an agent connects to Charlotte, it asks "what tools do you have?" and Charlotte sends the current list. The agent doesn't care what the tools were called yesterday. So for most users, the upgrade is invisible. The old names simply don't exist anymore and the new names appear automatically.&lt;/p&gt;

&lt;p&gt;The people who get hit are the ones with custom prompts or configurations that reference tool names as strings. That's a small group right now. It will be a much larger group in six months.&lt;/p&gt;

&lt;p&gt;Breaking changes are cheaper when you're small. That's the whole argument. Ship it early, pay the small cost, avoid the large one later.&lt;/p&gt;

&lt;h2&gt;
  
  
  What else is in 0.6.0
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Batch form filling.&lt;/strong&gt; This one matters for token economics.&lt;/p&gt;

&lt;p&gt;Before 0.6.0, filling a 10-field contact form meant 10 separate tool calls: &lt;code&gt;charlotte_type&lt;/code&gt; for each text field, &lt;code&gt;charlotte_select&lt;/code&gt; for dropdowns, &lt;code&gt;charlotte_toggle&lt;/code&gt; for checkboxes. Each call carries tool definition overhead: the client resends the full set of tool schemas with every model API round trip. Ten calls at ~4,000 definition tokens each is 40,000 tokens of pure overhead, before any actual content.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;charlotte_fill_form&lt;/code&gt; takes an array of &lt;code&gt;{element_id, value}&lt;/code&gt; pairs and fills everything in one call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"fields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"inp-a3f1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Jane Smith"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"inp-b7c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jane@example.com"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sel-d4e8"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Enterprise"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"chk-f9a0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One call. One set of definition tokens. The form is filled. It handles text inputs, textareas, selects, checkboxes, radios, toggles, date pickers, and color inputs. Type detection is automatic based on the element's role.&lt;/p&gt;
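
&lt;p&gt;Role-based dispatch is the interesting part of that design. A hypothetical sketch of how a batch fill might route each field; the role names and handlers are my invention, not Charlotte's internals:&lt;/p&gt;

```javascript
// Each accessibility role maps to a fill strategy (illustrative only).
const fillers = {
  textbox:  (id, v) => ({ action: "type", value: v }),
  combobox: (id, v) => ({ action: "select", value: v }),
  checkbox: (id, v) => ({ action: "toggle", value: v === "true" }),
};

// Turn the {element_id, value} array into a plan of per-field actions,
// defaulting to plain typing when the role is unknown.
function planFill(fields, roleOf) {
  return fields.map(({ element_id, value }) => {
    const fill = fillers[roleOf(element_id)] ?? fillers.textbox;
    return { element_id, ...fill(element_id, value) };
  });
}
```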

&lt;p&gt;For a testing agent running form validation across 50 pages, this is the difference between 500 tool calls and 50. The token savings compound fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lazy Chromium initialization.&lt;/strong&gt; Charlotte used to launch a Chromium instance the moment the MCP server started. The problem: MCP clients like Claude Desktop and Cursor spawn all configured servers at startup, whether you're going to use them or not. If Charlotte is in your config but you're just writing code today, you had a headless browser burning RAM for nothing.&lt;/p&gt;

&lt;p&gt;Now the browser launches on the first actual tool call. If you never browse, Chromium never starts.&lt;/p&gt;
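
&lt;p&gt;The lazy-launch pattern in miniature; illustrative only, since Charlotte's real code manages a Chromium instance rather than a stub:&lt;/p&gt;

```javascript
let browserPromise = null;

// First tool call triggers the launch; later calls reuse the same
// promise, so two concurrent first calls can't spawn two browsers.
function getBrowser(launch) {
  if (browserPromise === null) browserPromise = launch();
  return browserPromise;
}
```

&lt;p&gt;Caching the promise rather than the browser object is the detail that matters: it keeps the "launch exactly once" guarantee even when two tool calls race on startup.&lt;/p&gt;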

&lt;p&gt;&lt;strong&gt;Slow typing.&lt;/strong&gt; &lt;code&gt;charlotte_type&lt;/code&gt; gains &lt;code&gt;slowly&lt;/code&gt; and &lt;code&gt;character_delay&lt;/code&gt; parameters for character-by-character input. This sounds trivial until your agent tries to test a search-as-you-type field and the site's event handler only fires on individual keystrokes, not pasted text. Autocomplete, live validation, search suggestions. They all need real keystroke events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node.js 20 support.&lt;/strong&gt; I was requiring Node 22 for no reason. No 22-only APIs were in use. Relaxing to &amp;gt;=20 opens Charlotte to the broader LTS user base.&lt;/p&gt;

&lt;h2&gt;
  
  
  The ASI bug, one last time
&lt;/h2&gt;

&lt;p&gt;In v0.4.1, I found a bug where &lt;code&gt;charlotte:evaluate&lt;/code&gt; silently returned null on multi-statement JavaScript. The cause was &lt;code&gt;new Function('return ' + expr)&lt;/code&gt; combined with Automatic Semicolon Insertion. I &lt;a href="https://dev.to/ticktockbent/i-let-an-ai-agent-use-my-browser-tool-unsupervised-it-found-3-bugs-in-20-minutes-2c70"&gt;wrote about it&lt;/a&gt; at the time.&lt;/p&gt;

&lt;p&gt;I fixed it in &lt;code&gt;evaluate.ts&lt;/code&gt;. Then found the same pattern in &lt;code&gt;wait-for.ts&lt;/code&gt; and fixed it in 0.5.0.&lt;/p&gt;

&lt;p&gt;0.6.0 found it a third time in &lt;code&gt;pollUntilCondition&lt;/code&gt;, a utility function used by the wait system. Same bug. Same &lt;code&gt;new Function('return ' + expr)&lt;/code&gt;. Same migration to CDP &lt;code&gt;Runtime.evaluate&lt;/code&gt;.&lt;/p&gt;
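
&lt;p&gt;The failure mode is easy to reproduce. The null-returning wrapper below is my guess at how the old code swallowed the error; the ASI variant at the bottom is the textbook version of the bug:&lt;/p&gt;

```javascript
// Why "return " + expr breaks on multi-statement input.
function unsafeEval(expr) {
  try {
    return new Function("return " + expr)();
  } catch (e) {
    return null;  // the "silent null" (assumed wrapper, not Charlotte's code)
  }
}

unsafeEval("1 + 1");               // 2: a lone expression is fine
unsafeEval("const x = 1; x + 1");  // null: "return const x = 1" is a SyntaxError

// The ASI variant: a newline after "return" becomes "return;"
new Function("return\n1 + 1")();   // undefined, not 2
```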

&lt;p&gt;That's three separate files with the same broken pattern, discovered across three releases. Copy-paste bugs are persistent. If you find a pattern-level bug in your codebase, grep for every instance before you close the issue. I should have done that the first time.&lt;/p&gt;

&lt;h2&gt;
  
  
  7 strangers improved my code
&lt;/h2&gt;

&lt;p&gt;When I started Charlotte in February, it was a solo project. 100% of commits from one person. An external evaluation in early March rated sustainability 2 out of 5 and flagged "97% single-developer commits" as the primary risk.&lt;/p&gt;

&lt;p&gt;Six weeks later, seven people I've never met have merged code into Charlotte:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Teoman Yavuzkurt&lt;/strong&gt; contributed three PRs: fixing the default viewport (800x600 was unrealistically small), solving a stale compositor frame bug in screenshots on SPA transitions, and fixing macOS symlink resolution in tests. Three different areas of the codebase. That's someone who read the code deeply enough to find problems across modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;clawtom&lt;/strong&gt; submitted two PRs: an O(1) lookup optimization for the snapshot store (replacing a linear scan with a Map index) and proper error logging for CDP failures in layout extraction. Both unsolicited. Both performance or reliability improvements that I hadn't prioritized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sandy McArthur, Jr.&lt;/strong&gt; joined as a new contributor this cycle. &lt;strong&gt;Nuno Curado&lt;/strong&gt; did the original security hardening back in February. &lt;strong&gt;kai-agent-free&lt;/strong&gt; picked up the "read version from package.json" issue I had tagged as "good first issue." &lt;strong&gt;Nestor Fernando De Leon Llanos&lt;/strong&gt; added the issue templates and community links.&lt;/p&gt;

&lt;p&gt;I didn't recruit any of them. They found the project, read the code, and decided it was worth contributing to. The issue templates, the "good first issue" labels, the CONTRIBUTING guide, the test suite that gives contributors confidence their changes don't break things. All of that infrastructure exists to make contributing feel safe and worthwhile. It seems to be working.&lt;/p&gt;

&lt;p&gt;The sustainability rating would look different today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Charlotte is at 43 tools, 519 tests, and a 1.07:1 test-to-source line ratio. The structural tree view from 0.5.0 gives agents a full page map in under 2,000 characters. Iframe extraction handles embedded content. File output keeps large responses out of the context window. And now batch form fills collapse multi-step interactions into single calls.&lt;/p&gt;

&lt;p&gt;The focus for the next cycle is the connect-to-browser feature: attaching Charlotte to an already-running Chrome instance instead of launching its own. This unlocks screen recording of agent sessions, live debugging, and the kind of demo videos that are worth more than any blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @ticktockbent/charlotte@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works with any MCP client: Claude Desktop, Claude Code, Cursor, Windsurf, Cline, VS Code, Amp.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/TickTockBent/charlotte" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/@ticktockbent/charlotte" rel="noopener noreferrer"&gt;npm&lt;/a&gt; | &lt;a href="https://charlotte-rose.vercel.app/vs-playwright" rel="noopener noreferrer"&gt;Charlotte vs Playwright MCP&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open source, MIT licensed. If you're running browser-heavy agent workflows, I'd like to hear how it holds up.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Your System Is Not a State Machine</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Fri, 03 Apr 2026 12:10:55 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-system-is-not-a-state-machine-2jf1</link>
      <guid>https://dev.to/ticktockbent/your-system-is-not-a-state-machine-2jf1</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9hkrvkkd7qijfq8uqmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9hkrvkkd7qijfq8uqmc.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We were all taught the same thing. A system has states. It has transitions. Something happens, the system moves from State A to State B. You can draw it on a whiteboard. You can enumerate the possibilities. You can write tests for each branch.&lt;/p&gt;

&lt;p&gt;That was true for a long time. It's not true anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do the math
&lt;/h2&gt;

&lt;p&gt;Take a small transformer. 117 million parameters, each stored as a float32. The raw state space of just the weights is 2^(3.7 billion). The number of atoms in the observable universe is around 2^266.&lt;/p&gt;
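
&lt;p&gt;The arithmetic, for anyone who wants to check it:&lt;/p&gt;

```javascript
const params = 117e6;      // 117 million parameters
const bitsPerParam = 32;   // float32
const stateBits = params * bitsPerParam;
// stateBits is 3.744e9, so the raw weight space is 2^(3.7 billion)

// Atoms in the observable universe: roughly 10^80.
const atomsLog2 = 80 * Math.log2(10);
// atomsLog2 is about 265.75, i.e. the often-quoted 2^266
```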

&lt;p&gt;That's before you add activations, attention matrices, the KV cache growing with every token. And that's one model sitting idle. Not a system. Not an architecture. Just one small model.&lt;/p&gt;

&lt;p&gt;Now build something real. An orchestrator spawns four sub-agents. One is browsing a website. One is querying a database. One is calling an external API. One is doing a computation. Each has its own latency, its own failure modes, its own ability to return something you didn't expect.&lt;/p&gt;

&lt;p&gt;What state is that system in?&lt;/p&gt;

&lt;p&gt;You don't know. I don't know. Nobody knows, because the space of possible configurations is so absurdly vast that calling it "astronomical" is an understatement. You can't draw this on a whiteboard. You can't enumerate the branches. The flowchart is a lie.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's not random either
&lt;/h2&gt;

&lt;p&gt;Your first instinct might be to reach for probability. If we can't predict the exact state, maybe we can describe the distribution of likely states. Stochastic modeling. Markov chains. The math is right there.&lt;/p&gt;

&lt;p&gt;But that framing is wrong too, because these systems aren't rolling dice. An agent returning a useful summary of a web page isn't a random event. It's the result of a goal-directed process that evaluated and corrected itself on a token-by-token basis across thousands of sequential decisions. The output is useful precisely because it isn't random.&lt;/p&gt;

&lt;p&gt;So you're stuck in a third space. Not deterministic. Not stochastic. Something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Convergent but underdetermined
&lt;/h2&gt;

&lt;p&gt;Here's the framing I keep coming back to.&lt;/p&gt;

&lt;p&gt;An LLM doesn't select an output from a distribution and accept whatever comes up. Every token is an evaluation. The weights encode something like "given everything generated so far, what moves me closer to a coherent completion?" The model is steering. Continuously. At the lowest level of its operation.&lt;/p&gt;

&lt;p&gt;That's already not a state machine. But zoom out.&lt;/p&gt;

&lt;p&gt;Your orchestrator has four sub-agents running. Each one is internally converging toward its own useful output. The orchestrator is monitoring returns in real time, and each return reshapes how it evaluates the others. Agent 3's result might make agent 2's task irrelevant. Agent 1's failure might mean re-dispatching agent 4 with different parameters.&lt;/p&gt;

&lt;p&gt;You have nested convergence loops running at different scales, different speeds, none following a predetermined path, all goal-directed. The system isn't transitioning between states. It's navigating toward coherence through a space that only reveals itself as the system moves through it.&lt;/p&gt;

&lt;p&gt;The closest analogy isn't computer science. It's biology. A cell responding to chemical gradients isn't executing a flowchart or rolling dice. It's resolving toward a functional configuration through continuous interaction with an environment it can't fully predict.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters practically
&lt;/h2&gt;

&lt;p&gt;If this framing is right, then designing agentic systems with state-machine thinking isn't just imprecise. It's architecturally wrong. You're imposing discrete checkpoints on a system whose fundamental operation is continuous convergence. You're fighting the nature of the thing.&lt;/p&gt;

&lt;p&gt;The alternative might look something like designing around convergence envelopes. Not "what state should the system be in at step 3" but "what region of outcome space should this process be converging toward, and what boundaries should it not cross while getting there."&lt;/p&gt;

&lt;p&gt;Under that model, an orchestrator isn't a state manager. It's a convergence auditor. Its job isn't to track which step the system is on. Its job is to monitor whether the system is still heading toward a useful result, and intervene when it drifts outside acceptable bounds.&lt;/p&gt;
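
&lt;p&gt;To make that slightly more concrete, here's a toy convergence auditor. It doesn't track states; it watches whether successive outputs keep closing on the goal and flags sustained drift. Purely illustrative, not a claim about how real orchestrators should work:&lt;/p&gt;

```javascript
// Toy auditor: given a history of outputs and a distance-to-goal
// measure, flag the run once it stops improving for `patience` steps.
function audit(history, distanceToGoal, patience = 3) {
  const d = history.map(distanceToGoal);
  let stalls = 0;
  for (let i = 1; i !== d.length; i += 1) {
    if (d[i] >= d[i - 1]) stalls += 1; else stalls = 0;
    if (stalls >= patience) return { ok: false, at: i };  // drifting: intervene
  }
  return { ok: true };
}
```

&lt;p&gt;The auditor never needs to know "which step" the system is on, only whether the trajectory is still heading somewhere useful.&lt;/p&gt;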

&lt;h2&gt;
  
  
  I don't have the answers
&lt;/h2&gt;

&lt;p&gt;I want to be clear that this is half a thought. I don't have a formal model. I don't have a replacement for the state machine abstraction that you can hand to a junior engineer and say "use this." I'm not sure one exists yet.&lt;/p&gt;

&lt;p&gt;But I know the old model is broken. If you've tried to draw a flowchart for an agentic system and felt like you were lying, you were. The system you're building doesn't have states. It has trajectories. It doesn't transition. It converges.&lt;/p&gt;

&lt;p&gt;Somebody smarter than me will figure out the formalism. I just wanted to point at the gap.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>agents</category>
      <category>systems</category>
    </item>
    <item>
      <title>Anthropic Leaked Its Own Source Code and May Not Own It</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Thu, 02 Apr 2026 12:33:39 +0000</pubDate>
      <link>https://dev.to/ticktockbent/anthropic-leaked-its-own-source-code-and-may-not-own-it-3j3l</link>
      <guid>https://dev.to/ticktockbent/anthropic-leaked-its-own-source-code-and-may-not-own-it-3j3l</guid>
      <description>&lt;p&gt;On March 31st, Anthropic shipped version 2.1.88 of Claude Code to npm with a &lt;a href="https://www.bleepingcomputer.com/news/artificial-intelligence/claude-code-source-code-accidentally-leaked-in-npm-package/" rel="noopener noreferrer"&gt;60MB source map file&lt;/a&gt; that was supposed to stay internal. That file pointed to a zip archive on Anthropic's Cloudflare R2 bucket containing the entire TypeScript source. 1,900 files. 512,000 lines of code. The full architectural blueprint of one of the most commercially successful AI coding tools ever built.&lt;/p&gt;

&lt;p&gt;Security researcher &lt;a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/" rel="noopener noreferrer"&gt;Chaofan Shou spotted it within hours&lt;/a&gt;. By the time Anthropic pulled the package, the codebase had been mirrored, forked over 41,500 times on GitHub, and archived on decentralized platforms that don't respond to takedown notices.&lt;/p&gt;

&lt;p&gt;What followed was a 12-hour chain reaction that may have permanently changed the legal landscape for AI-generated code.&lt;/p&gt;

&lt;p&gt;Anthropic's response was a &lt;a href="https://github.com/github/dmca/blob/master/2026/03/2026-03-31-anthropic.md" rel="noopener noreferrer"&gt;DMCA blitz&lt;/a&gt;. GitHub disabled &lt;a href="https://piunikaweb.com/2026/04/01/anthropic-dmca-claude-code-leak-github/" rel="noopener noreferrer"&gt;over 8,100 repositories&lt;/a&gt;. The original mirror and its entire fork network went dark. Lawyers moved fast.&lt;/p&gt;

&lt;p&gt;But a Korean developer named Sigrid Jin moved faster. Jin, previously &lt;a href="https://tech.yahoo.com/ai/claude/articles/anthropic-accidentally-leaked-claude-codes-180256954.html" rel="noopener noreferrer"&gt;profiled by the Wall Street Journal&lt;/a&gt; as the heaviest Claude Code user in the world (25 billion tokens consumed), woke up at 4am and rewrote the entire core architecture in Python before sunrise. The repo hit 30,000 GitHub stars faster than any repository in GitHub history. He then rewrote it again in Rust.&lt;/p&gt;

&lt;p&gt;Anthropic can't touch those rewrites. They're clean-room reimplementations. New code, new language, new creative expression. Copyright protects specific expression, not ideas, not architectures, not design patterns. Anyone can study how a system works and build their own version from scratch. That's how the entire software industry has always operated.&lt;/p&gt;

&lt;p&gt;But the legal problems go deeper than clean-room rewrites. The DMCA takedowns themselves rest on a copyright claim that Anthropic's own public statements may have already destroyed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The authorship problem
&lt;/h2&gt;

&lt;p&gt;On March 18, 2025, the DC Circuit issued a unanimous opinion in &lt;a href="https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf" rel="noopener noreferrer"&gt;&lt;em&gt;Thaler v. Perlmutter&lt;/em&gt;&lt;/a&gt; holding that the Copyright Act requires human authorship. The case involved a computer scientist who tried to register copyright for artwork generated by his AI system. The court didn't just deny the registration on narrow grounds. It &lt;a href="https://www.skadden.com/insights/publications/2025/03/appellate-court-affirms-human-authorship" rel="noopener noreferrer"&gt;reasoned&lt;/a&gt; that the word "author" throughout the Copyright Act is structurally incoherent when applied to a non-human entity. Applying it to a machine would produce absurdities, like referring to a "machine's children" or a "machine's nationality." The Supreme Court declined to hear the appeal.&lt;/p&gt;

&lt;p&gt;Some important caveats. That case was about visual art, not code. It dealt with a specific scenario where someone listed an AI as the sole author. It's one circuit, not a Supreme Court ruling. The court &lt;a href="https://foleyhoag.com/news-and-insights/publications/alerts-and-updates/2025/march/dc-circuit-holds-that-ai-generated-artwork-is-ineligible-for-copyright-protection/" rel="noopener noreferrer"&gt;deliberately left the door open&lt;/a&gt; for "AI-assisted" works where a human contributes meaningful creative input.&lt;/p&gt;

&lt;p&gt;The reasoning matters more than the narrow holding. The court established that "author" throughout the Copyright Act is structurally tied to human beings. That's not a ruling about art. That's a philosophical foundation that any future court addressing AI-generated code will find persuasive, even if it's not technically binding.&lt;/p&gt;

&lt;p&gt;And then Anthropic's leadership walked up to that open door and welded it shut.&lt;/p&gt;

&lt;p&gt;In January 2026, Boris Cherny, the head of Claude Code, &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;posted on X&lt;/a&gt; that 100% of his code is written by Claude. No manual edits. Not even small ones. He shipped 22 pull requests in one day and 27 the next, each one "100% written by Claude." Across Anthropic, he said the figure is "pretty much 100%."&lt;/p&gt;

&lt;p&gt;An &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;Anthropic spokesperson&lt;/a&gt; softened that to 70-90% company-wide. For Claude Code specifically, about 90% of its code is written by Claude Code itself.&lt;/p&gt;

&lt;p&gt;These aren't offhand comments. They're timestamped, attributed, &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;reported by Fortune&lt;/a&gt;, &lt;a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/" rel="noopener noreferrer"&gt;The Register&lt;/a&gt;, &lt;a href="https://www.cnbc.com/2026/03/31/anthropic-leak-claude-code-internal-source.html" rel="noopener noreferrer"&gt;CNBC&lt;/a&gt;, and others. They're discoverable evidence. And they directly undermine any claim of human authorship over the leaked codebase.&lt;/p&gt;

&lt;p&gt;The court in &lt;em&gt;Thaler&lt;/em&gt; left Anthropic a door. Their marketing team closed it.&lt;/p&gt;

&lt;h2&gt;
  
  
  You can't copyright an idea
&lt;/h2&gt;

&lt;p&gt;There's a common response to this argument: "But the humans designed the system. They architected it. The AI just wrote the implementation."&lt;/p&gt;

&lt;p&gt;This gets the law exactly backwards. Copyright protects specific expression, not ideas, not designs, not architectures. You can design a system all day long. That design isn't copyrightable. Patents can protect novel functional inventions, but that's a completely different legal regime with a completely different process and standard.&lt;/p&gt;

&lt;p&gt;The part that copyright actually covers (the specific code) is the part Anthropic says the AI wrote. And the part the humans contributed (the design and architecture) is the part copyright doesn't protect.&lt;/p&gt;

&lt;p&gt;So when someone rewrites the architecture in a different language, there's nothing to claim. The ideas are free. The original expression is AI-generated. And the new expression belongs to whoever wrote it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The transitive authorship problem
&lt;/h2&gt;

&lt;p&gt;Here's where it gets speculative, and where the implications extend far beyond one leaked npm package.&lt;/p&gt;

&lt;p&gt;If 90% of Claude Code's source was written by Claude, then the training pipeline code that produces the next generation of Claude models was also substantially written by Claude. The model weights that come out of that pipeline are the output of an AI-authored system.&lt;/p&gt;

&lt;p&gt;Can you copyright the product of a system that was built by AI?&lt;/p&gt;

&lt;p&gt;The existing precedent only addresses direct AI outputs. Nobody has litigated whether a work produced by an AI-coded system inherits the copyright problem of the system that created it. But the logic is hard to avoid. If human authorship is the prerequisite for copyright, and the authorship chain passes through a substantially non-human link, the claim gets weaker at every generation.&lt;/p&gt;

&lt;p&gt;Nobody knows where the line is. No court has addressed it. But every frontier AI company should be thinking about it, because the answer affects whether their core asset is protectable at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  The moat that isn't
&lt;/h2&gt;

&lt;p&gt;Trade secret is the last legal defense in this analysis. Trade secret protection doesn't require human authorship. It doesn't care who or what created the information. It only requires that the holder took "reasonable measures" to keep it secret.&lt;/p&gt;

&lt;p&gt;Anthropic is not having a great month on that front either.&lt;/p&gt;

&lt;p&gt;Days before the Claude Code leak, &lt;a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/" rel="noopener noreferrer"&gt;Fortune reported&lt;/a&gt; that descriptions of Anthropic's upcoming model (internally called "Mythos" or "Capybara") were sitting in a publicly accessible data cache along with close to 3,000 other files. Then the Claude Code source went out on npm. Two major exposures in one week.&lt;/p&gt;

&lt;p&gt;If this ever went to court, opposing counsel would argue that Anthropic's operational security doesn't meet the "reasonable measures" threshold for trade secret protection. A single incident might be forgivable. A pattern is harder to defend.&lt;/p&gt;

&lt;p&gt;The protections collapse one by one. Copyright requires human authorship, and Anthropic publicly says AI writes the code. Trade secret requires maintained confidentiality, and Anthropic keeps accidentally publishing things. Patent requires specific novel invention claims and a formal process, not something you can retroactively blanket over a leaked codebase. DMCA takedowns require a valid underlying copyright, and they only work on centralized platforms anyway.&lt;/p&gt;

&lt;p&gt;What's left is practical barriers: the cost of compute, the difficulty of assembling training data, the head start of an established product, brand trust, enterprise relationships. Those are real advantages. But they're business moats, not legal ones. They can't be enforced in court. They erode as compute gets cheaper, as open-source models close the gap, and as competitors absorb architectural insights from leaks exactly like this one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The product demo
&lt;/h2&gt;

&lt;p&gt;There's an irony at the center of this whole story that's hard to overstate.&lt;/p&gt;

&lt;p&gt;Anthropic built Claude Code. They told the world it was so good that &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;their own engineers stopped writing code entirely&lt;/a&gt;. Then a packaging error exposed the source. The world's heaviest Claude Code user used what was almost certainly Claude to rewrite Claude Code in Python overnight. The result is legally untouchable, it's the fastest-starred repo in GitHub history, and it demonstrates exactly the capability Anthropic has been selling.&lt;/p&gt;

&lt;p&gt;Anthropic's own product, used by Anthropic's own power user, to neutralize Anthropic's own IP. Made possible because the product is exactly as good as they said it was.&lt;/p&gt;

&lt;p&gt;That's not a leak. That's a product demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means
&lt;/h2&gt;

&lt;p&gt;The Claude Code leak is entertaining. The legal questions it surfaces are not. Every frontier AI company that uses its own models to write production code is building on the same unstable ground. The more they market AI autonomy to sell products, the more they undermine the legal frameworks that protect those products. Every press quote about AI writing 100% of the code is a future exhibit in a case they hope never gets filed.&lt;/p&gt;

&lt;p&gt;The law hasn't caught up. Congress hasn't acted. The courts have addressed &lt;a href="https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf" rel="noopener noreferrer"&gt;one narrow question&lt;/a&gt; in one circuit. But the trajectory is clear, and every company in this space is exposed to it.&lt;/p&gt;

&lt;p&gt;The question isn't whether AI-generated code is copyrightable. The court already answered that. The question is whether anyone in the industry is willing to admit what that answer means for them.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm not a lawyer. This article is speculative analysis based on public reporting, public court opinions, and public statements by Anthropic leadership. If you're making business decisions about AI-generated IP, talk to an actual attorney.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf" rel="noopener noreferrer"&gt;Thaler v. Perlmutter, DC Circuit opinion (March 18, 2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.axios.com/2026/03/31/anthropic-leaked-source-code-ai" rel="noopener noreferrer"&gt;Axios: Anthropic leaked its own Claude source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/" rel="noopener noreferrer"&gt;Fortune: Anthropic leaks source code for Claude Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;Fortune: Top engineers at Anthropic say AI writes 100% of their code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/" rel="noopener noreferrer"&gt;The Register: Anthropic accidentally exposes Claude Code source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cnbc.com/2026/03/31/anthropic-leak-claude-code-internal-source.html" rel="noopener noreferrer"&gt;CNBC: Anthropic leaks part of Claude Code's internal source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know" rel="noopener noreferrer"&gt;VentureBeat: Claude Code's source code appears to have leaked&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://decrypt.co/362917/anthropic-accidentally-leaked-claude-code-source-internet-keeping-forever" rel="noopener noreferrer"&gt;Decrypt: Anthropic Accidentally Leaked Claude Code's Source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.bleepingcomputer.com/news/artificial-intelligence/claude-code-source-code-accidentally-leaked-in-npm-package/" rel="noopener noreferrer"&gt;BleepingComputer: Claude Code source code accidentally leaked&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/github/dmca/blob/master/2026/03/2026-03-31-anthropic.md" rel="noopener noreferrer"&gt;GitHub DMCA notice: Anthropic takedown (March 31, 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.skadden.com/insights/publications/2025/03/appellate-court-affirms-human-authorship" rel="noopener noreferrer"&gt;Skadden: Appellate Court Affirms Human Authorship Requirement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://foleyhoag.com/news-and-insights/publications/alerts-and-updates/2025/march/dc-circuit-holds-that-ai-generated-artwork-is-ineligible-for-copyright-protection/" rel="noopener noreferrer"&gt;Foley Hoag: DC Circuit Holds AI-Generated Artwork Ineligible for Copyright&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>security</category>
      <category>legal</category>
    </item>
    <item>
      <title>Your Encrypted Backups Are Slow Because Encryption Isn't the Bottleneck</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Thu, 02 Apr 2026 11:46:35 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-encrypted-backups-are-slow-because-encryption-isnt-the-bottleneck-62k</link>
      <guid>https://dev.to/ticktockbent/your-encrypted-backups-are-slow-because-encryption-isnt-the-bottleneck-62k</guid>
      <description>&lt;p&gt;If you encrypt files before pushing them to backup storage, you've probably assumed the encryption step is what makes it slow. That's what I assumed too. Then I looked at the numbers. On any modern x86 chip with AES-NI, AES-256-GCM runs at 4-8 GB/s on a single core. ChaCha20-Poly1305 isn't far behind. The CPU is not the problem. The problem is that your encryption tool reads a chunk of data, encrypts it, writes it out, then reads the next chunk. It's serial. The disk sits idle while the CPU works, and the CPU sits idle while the disk works.&lt;/p&gt;

&lt;p&gt;One person decided to fix that by applying the same async I/O technique that powers modern databases to file encryption. The result hits GB/s throughput on commodity NVMe hardware, and the whole thing is about 900 lines of Rust.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Concryptor?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/FrogSnot/Concryptor" rel="noopener noreferrer"&gt;Concryptor&lt;/a&gt; is a multi-threaded AEAD file encryption CLI built by &lt;a href="https://github.com/FrogSnot" rel="noopener noreferrer"&gt;FrogSnot&lt;/a&gt;. It encrypts and decrypts files using AES-256-GCM or ChaCha20-Poly1305 with Argon2id key derivation, and it does it fast by overlapping disk I/O with CPU crypto using Linux's io_uring interface. It handles single files and directories (packed via tar), runs entirely in the terminal, and installs with &lt;code&gt;cargo install concryptor&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;73 stars. One month of focused development. A six-file core with 67 tests. It deserves more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/FrogSnot/Concryptor" rel="noopener noreferrer"&gt;Concryptor&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;73&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (FrogSnot)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean architecture, 67 tests, clippy and fmt now enforced in CI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Excellent README with honest perf analysis and full format spec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fresh templates and CI, small codebase, easy to navigate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not yet for production (author's own disclaimer), but the architecture is real&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The centerpiece is a triple-buffered io_uring pipeline in &lt;code&gt;engine.rs&lt;/code&gt;. The idea is simple: keep three sets of buffers rotating through three stages. While buffer A's encrypted contents are being written to disk by the kernel, buffer B is being encrypted in parallel by Rayon worker threads, and buffer C's plaintext is being read from disk. Every component stays busy. Nothing waits.&lt;/p&gt;

&lt;p&gt;The implementation is tighter than you'd expect from a month-old project. Each io_uring submission queue entry carries bit-packed metadata in its &lt;code&gt;user_data&lt;/code&gt; field: the low two bits identify which buffer slot, bit 2 flags read vs. write, and the upper bits store the expected byte count for short-I/O detection. When completion queue entries come back, the pipeline routes them to per-slot counters without any hash lookups or allocations. The whole loop runs &lt;code&gt;num_batches + 2&lt;/code&gt; iterations to let the pipeline drain cleanly at the end.&lt;/p&gt;
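&lt;p&gt;The packing scheme is simple enough to sketch. This is my own illustration of the layout described above (slot in the low two bits, a read/write flag in bit 2, expected length in the upper bits); the names and exact bit positions are assumptions, not Concryptor's actual code:&lt;/p&gt;

```rust
// Illustrative sketch of bit-packed io_uring user_data, per the layout
// described in the article. Field names and positions are assumptions.
fn pack_user_data(slot: u64, is_write: bool, expected_len: u64) -> u64 {
    debug_assert_eq!(slot, slot % 4); // slot must fit in the low two bits
    expected_len * 8 + (is_write as u64) * 4 + slot
}

fn unpack_user_data(ud: u64) -> (u64, bool, u64) {
    // (buffer slot, write flag, expected byte count)
    (ud % 4, (ud / 4) % 2 == 1, ud / 8)
}
```

&lt;p&gt;On completion, all three fields decode with plain integer arithmetic, no hash lookups and no allocations, which is the whole point of the trick.&lt;/p&gt;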

&lt;p&gt;The file format is designed around O_DIRECT. Every encrypted chunk is padded to a 4 KiB boundary. The header is exactly 4096 bytes (52 bytes of data plus KDF parameters plus zero padding). Buffers are allocated with explicit 4096-byte alignment via &lt;code&gt;std::alloc&lt;/code&gt;. This lets Concryptor bypass the kernel's page cache entirely, talking directly to NVMe storage via DMA. It's the same technique databases use to avoid double-buffering, and it's a big part of why the throughput numbers are real.&lt;/p&gt;
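&lt;p&gt;The alignment requirement is the non-obvious part: O_DIRECT rejects buffers that aren't aligned to the storage block size. A minimal sketch of a 4 KiB-aligned allocation with &lt;code&gt;std::alloc&lt;/code&gt; (illustrative only; Concryptor's real buffer management wraps this more carefully):&lt;/p&gt;

```rust
use std::alloc::{alloc, dealloc, Layout};

// Allocate a buffer aligned to 4096 bytes, as O_DIRECT requires.
// Caller must free it with dealloc using the same Layout.
fn alloc_aligned(len: usize) -> (*mut u8, Layout) {
    let layout = Layout::from_size_align(len, 4096).expect("invalid layout");
    let ptr = unsafe { alloc(layout) };
    assert!(!ptr.is_null(), "allocation failed");
    (ptr, layout)
}
```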

&lt;p&gt;The security model is more careful than I expected from a solo hobby project. The full 4 KiB header is included as associated data in every chunk's AEAD tag, so modifying any header byte invalidates all chunks. There's a TLS 1.3-style nonce derivation scheme where each chunk's nonce is the base nonce XOR'd with the chunk index, preventing nonce reuse without coordination. A final-chunk flag in the AAD prevents truncation and append attacks. The 4032 reserved bytes in the header are authenticated too, so you can't smuggle data into them. The test suite covers chunk swapping, truncation (two variants), header field manipulation, reserved byte tampering, KDF parameter tampering, and cipher mismatch. These aren't afterthought tests. Someone thought about the threat model.&lt;/p&gt;
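&lt;p&gt;The nonce scheme is worth seeing concretely. Here's a sketch of the XOR construction as described (the byte offsets and endianness are my assumptions, not the published v4 format):&lt;/p&gt;

```rust
// Derive a per-chunk nonce by XORing the big-endian chunk index into
// the tail of the 96-bit base nonce. Offsets are illustrative.
fn chunk_nonce(base: [u8; 12], chunk_index: u64) -> [u8; 12] {
    let mut nonce = base;
    let idx = chunk_index.to_be_bytes(); // 8 bytes
    for i in 0..8 {
        nonce[4 + i] ^= idx[i];
    }
    nonce
}
```

&lt;p&gt;Two chunks can never share a nonce under the same key as long as their indices differ, which is exactly the property AES-GCM needs.&lt;/p&gt;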

&lt;p&gt;What's rough? The project is Linux-only. io_uring doesn't exist on macOS or Windows, and there's no fallback backend. If you try to build it on a Mac you'll get errors that don't explain why. The README is upfront about the experimental status, which is honest and appreciated, but it does mean you shouldn't point this at anything you can't afford to lose yet. The &lt;code&gt;rand&lt;/code&gt; dependency is still on 0.8 (0.10 is current), and until recently clippy warnings and formatting drift had been accumulating unchecked. None of these are architectural problems. They're the kind of rough edges you get when one person is focused on making the core work first.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;CONTRIBUTING.md asks you to run clippy and cargo fmt before submitting, but CI only ran &lt;code&gt;cargo test&lt;/code&gt;. No enforcement. The result was predictable: 7 clippy warnings had accumulated across &lt;code&gt;engine.rs&lt;/code&gt; and &lt;code&gt;header.rs&lt;/code&gt;, and formatting had drifted in almost every source file.&lt;/p&gt;

&lt;p&gt;I addressed all seven lints. Three were manual &lt;code&gt;div_ceil&lt;/code&gt; reimplementations (the &lt;code&gt;(a + b - 1) / b&lt;/code&gt; pattern that Rust now has a method for), one was a min/max chain that should have been &lt;code&gt;.clamp()&lt;/code&gt;, one was a manual range check, and two were &lt;code&gt;too_many_arguments&lt;/code&gt; warnings on internal pipeline functions where every parameter is essential and restructuring would just add noise. I also wired up &lt;code&gt;KdfParams::DEFAULT&lt;/code&gt; via struct update syntax to eliminate a dead-code warning, ran &lt;code&gt;cargo fmt --all&lt;/code&gt;, and added clippy and fmt checks to the CI workflow so they stay clean going forward.&lt;/p&gt;
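&lt;p&gt;For anyone who hasn't met these lints, the before/after for the arithmetic ones looks roughly like this (function names and bounds are mine, not from the codebase):&lt;/p&gt;

```rust
// The manual ceiling-division pattern clippy flags...
fn num_chunks_manual(total: u64, chunk: u64) -> u64 {
    (total + chunk - 1) / chunk
}

// ...and the stdlib method it suggests instead (Rust 1.73+).
fn num_chunks_idiomatic(total: u64, chunk: u64) -> u64 {
    total.div_ceil(chunk)
}

// Likewise, a min/max chain collapses to clamp. Bounds are illustrative.
fn worker_threads(requested: usize) -> usize {
    requested.clamp(1, 64)
}
```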

&lt;p&gt;Getting into the codebase was straightforward. Six files, clear responsibilities: &lt;code&gt;engine.rs&lt;/code&gt; handles the pipeline, &lt;code&gt;crypto.rs&lt;/code&gt; handles primitives, &lt;code&gt;header.rs&lt;/code&gt; handles the format, &lt;code&gt;archive.rs&lt;/code&gt; handles tar packing. The code is dense but not clever. You can follow the pipeline loop without needing to hold too much in your head at once. I had the PR ready in under an hour.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/FrogSnot/Concryptor/pull/10" rel="noopener noreferrer"&gt;PR #10&lt;/a&gt; is open as of this writing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Concryptor is for people who encrypt files regularly and want it to be fast. If you're backing up to cloud storage, encrypting disk images, or just moving sensitive data between machines, the throughput difference between a serial encryption tool and a pipelined one is real. On NVMe, it's the difference between saturating your drive and leaving most of its bandwidth on the table.&lt;/p&gt;

&lt;p&gt;The project is early. One maintainer, one month old, Linux-only, self-labeled experimental. It could stall. But the commit history tells a story of deliberate progression: the initial mmap approach was replaced with io_uring in the same day, security hardening followed within a week, the format was upgraded to v4 with full header authentication, and directory support landed before the first month was out. That's not hobby-project pacing. That's someone building something they intend to use.&lt;/p&gt;

&lt;p&gt;What would push Concryptor to the next level? A fallback I/O backend for macOS and Windows would be the single biggest improvement. Even a plain pread/pwrite loop, slower than io_uring but functional, would open the project to most Rust developers who want to try it. Stdin/stdout streaming for pipe composability would help too. And the rand 0.8 to 0.10 migration is a real breaking change that Dependabot can't auto-fix. That's a contribution waiting to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you care about I/O performance, encryption, or io_uring, &lt;a href="https://github.com/FrogSnot/Concryptor" rel="noopener noreferrer"&gt;Concryptor&lt;/a&gt; is worth reading. The codebase is small enough to understand in an afternoon, and the pipeline implementation is one of the cleaner io_uring examples I've seen in the wild.&lt;/p&gt;

&lt;p&gt;Star the repo. Try encrypting a large file and watch the throughput. If you want to contribute, the &lt;a href="https://github.com/FrogSnot/Concryptor/pull/6" rel="noopener noreferrer"&gt;rand 0.8 to 0.10 migration&lt;/a&gt; is sitting there waiting for someone to pick it up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #11, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>security</category>
      <category>cli</category>
    </item>
    <item>
      <title>Your Package Manager's Installer Doesn't Know Fish Exists</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Tue, 31 Mar 2026 14:58:19 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-package-managers-installer-doesnt-know-fish-exists-19bh</link>
      <guid>https://dev.to/ticktockbent/your-package-managers-installer-doesnt-know-fish-exists-19bh</guid>
      <description>&lt;p&gt;You find a new CLI tool on GitHub. The README looks good. You scroll to "Installation" and see the magic one-liner: &lt;code&gt;curl -sSL https://... | sh&lt;/code&gt;. You run it. The script downloads a binary, drops it somewhere sensible, and adds it to your PATH by appending a line to your &lt;code&gt;.bashrc&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Except you use fish. And fish doesn't understand &lt;code&gt;export PATH=&lt;/code&gt;. So the binary is on your disk, but your shell can't find it. You open the install script, figure out where it put things, and manually write &lt;code&gt;set -gx PATH ~/.local/bin $PATH&lt;/code&gt; into your &lt;code&gt;config.fish&lt;/code&gt;. You've done this before. You'll do it again.&lt;/p&gt;

&lt;p&gt;This is a small problem. But it's a revealing one. The kind of developer who installs CLI tools from GitHub release pages, who tries new package managers, who runs fish instead of bash because they actually thought about their shell choice, that's your target user. And your installer just told them you didn't think about them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is parm?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/alxrw/parm" rel="noopener noreferrer"&gt;parm&lt;/a&gt; is a binary package manager for GitHub Releases, written in Go by &lt;a href="https://github.com/alxrw" rel="noopener noreferrer"&gt;alxrw&lt;/a&gt;. You give it a repo (&lt;code&gt;parm install owner/repo&lt;/code&gt;) and it finds the latest release, picks the right binary for your platform, downloads it, and symlinks it onto your PATH. No root access, no system package manager, no registry to maintain. It queries GitHub directly.&lt;/p&gt;

&lt;p&gt;It handles updates (&lt;code&gt;parm update&lt;/code&gt;), version pinning (&lt;code&gt;parm pin&lt;/code&gt;), removal (&lt;code&gt;parm remove&lt;/code&gt;), and has a search command that queries GitHub's API. It's pre-release (v0.1.6) but functional, with a clear roadmap toward v0.2.0. About 138 stars and one very active maintainer.&lt;/p&gt;

&lt;p&gt;The interesting design choice: there is no curated package registry. Homebrew has formulae. asdf has plugins. parm has GitHub's API and your judgment. The README is upfront about this: "Users are responsible for vetting packages." That's a tradeoff, and it's a deliberate one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/alxrw/parm" rel="noopener noreferrer"&gt;parm&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~138 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, actively releasing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean Go with standard stack (Cobra, Viper, go-github), 32% test file ratio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good README with usage table, disclaimers, and package compatibility guide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Merged my PR the next day with "lgtm"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, for grabbing CLI tools from GitHub without the Homebrew ceremony&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;parm follows the standard Go CLI layout. Commands live in &lt;code&gt;cmd/&lt;/code&gt; (one file per subcommand, Cobra-based). Core business logic lives in &lt;code&gt;internal/core/&lt;/code&gt; with separate &lt;code&gt;installer&lt;/code&gt; and &lt;code&gt;updater&lt;/code&gt; packages. The GitHub client lives in &lt;code&gt;internal/gh/&lt;/code&gt; using &lt;code&gt;go-github&lt;/code&gt; v74. Configuration is TOML-based via Viper.&lt;/p&gt;

&lt;p&gt;The dependency list is heavier than you'd expect for a tool this focused. Eight direct dependencies: &lt;code&gt;go-github&lt;/code&gt; for the API, &lt;code&gt;cobra&lt;/code&gt; and &lt;code&gt;viper&lt;/code&gt; for CLI and config, &lt;code&gt;semver&lt;/code&gt; for version comparison, &lt;code&gt;mpb&lt;/code&gt; for progress bars, &lt;code&gt;gopsutil&lt;/code&gt; for platform detection, &lt;code&gt;filetype&lt;/code&gt; for binary type detection, &lt;code&gt;oauth2&lt;/code&gt; for GitHub authentication. None of these are unreasonable individually, but it's a lot of moving parts for "download a binary and symlink it." Compare to tools like &lt;code&gt;ubi&lt;/code&gt; or &lt;code&gt;eget&lt;/code&gt; that do the same thing with fewer dependencies. That said, the extra weight buys real features: progress bars, proper semver handling, and platform detection that works on all three major OSes.&lt;/p&gt;

&lt;p&gt;The architecture within &lt;code&gt;internal/&lt;/code&gt; is well-separated. The installer handles asset selection (matching your OS and architecture against release asset names), archive extraction (tar, zip, and raw binaries), and symlink management. The manifest tracks what's installed, where, and at what version. The verification package handles binary validation. Each concern has its own package and its own tests. 19 test files out of 59 Go files is a decent ratio for a project of this age.&lt;/p&gt;

&lt;p&gt;What the Go code gets right: cross-platform support. The build targets include linux/darwin/windows on both amd64 and arm64. Platform detection via &lt;code&gt;gopsutil&lt;/code&gt; picks the correct release asset. The asset name matching is smart enough to handle the inconsistent naming conventions across GitHub repos (&lt;code&gt;linux-amd64&lt;/code&gt;, &lt;code&gt;Linux_x86_64&lt;/code&gt;, &lt;code&gt;linux-x64&lt;/code&gt;, etc.).&lt;/p&gt;
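&lt;p&gt;The flavor of that matching can be sketched in a few lines of shell (illustrative only; parm's real matcher is Go and handles many more variants, and these asset names are made up):&lt;/p&gt;

```shell
# Toy version of release-asset matching: normalize the inconsistent
# OS/arch naming schemes seen across GitHub release pages.
ASSETS="tool-Linux_x86_64.tar.gz
tool-darwin-arm64.tar.gz
tool-linux-x64.zip"

# Case-insensitive OS match, then any of the common amd64 spellings.
# Prints both linux assets; a real matcher would then pick one.
echo "$ASSETS" | grep -iE 'linux' | grep -iE '(amd64|x86_64|x64)'
```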

&lt;p&gt;What the Go code doesn't cover: the shell. The install script (&lt;code&gt;scripts/install.sh&lt;/code&gt;) that handles the &lt;code&gt;curl | sh&lt;/code&gt; onboarding path was bash/zsh-only. It wrote &lt;code&gt;export PATH=...&lt;/code&gt; into &lt;code&gt;.bashrc&lt;/code&gt;, &lt;code&gt;.zshrc&lt;/code&gt;, or &lt;code&gt;.profile&lt;/code&gt;. Fish, the third most popular interactive shell, was completely unsupported. The binary would install, but the user's shell couldn't find it. For a tool whose entire value proposition is "install any program from your terminal," having the install script fail on a common shell is a gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;Issue #49 reported the fish problem in February 2026, and it was on the v0.2.0 roadmap. I picked it up.&lt;/p&gt;

&lt;p&gt;The fix was about 70 lines added to &lt;code&gt;scripts/install.sh&lt;/code&gt;. Fish uses &lt;code&gt;set -gx PATH&lt;/code&gt; instead of &lt;code&gt;export PATH=&lt;/code&gt;, and its config file lives at &lt;code&gt;~/.config/fish/config.fish&lt;/code&gt; (or wherever &lt;code&gt;$XDG_CONFIG_HOME&lt;/code&gt; points). The implementation detects fish by checking whether the config file exists or whether &lt;code&gt;fish&lt;/code&gt; is available on PATH, resolves the config path respecting XDG conventions, creates the directory if needed, and writes the PATH entry in fish syntax. It also handles &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; persistence (parm uses this for GitHub API rate limits) with the fish equivalent. If a user has both bash and fish installed, both configs get updated. The deduplication logic (grep for the bin directory before appending) follows the same pattern the script already used for bash/zsh.&lt;/p&gt;
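&lt;p&gt;The core of the fish path is small enough to sketch here (paths and variable names are illustrative, not parm's actual script):&lt;/p&gt;

```shell
# Append a PATH entry in fish syntax, respecting XDG conventions.
BIN_DIR="$HOME/.local/bin"
FISH_CONFIG="${XDG_CONFIG_HOME:-$HOME/.config}/fish/config.fish"

mkdir -p "$(dirname "$FISH_CONFIG")"
# Deduplicate: only append if the bin directory is not already referenced.
if ! grep -qs "$BIN_DIR" "$FISH_CONFIG"; then
    printf 'set -gx PATH %s $PATH\n' "$BIN_DIR" >> "$FISH_CONFIG"
fi
```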

&lt;p&gt;Getting into the codebase took about fifteen minutes. The install script was self-contained, and the existing bash/zsh code was a clear template for the fish additions. &lt;a href="https://github.com/alxrw/parm/pull/51" rel="noopener noreferrer"&gt;PR #51&lt;/a&gt; was approved with "lgtm" and merged the next day with "Merged, thank you for your contribution!" No review rounds, no changes requested. Clean in, clean out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;parm is for developers who install CLI tools from GitHub and are tired of the manual download-extract-chmod-symlink dance. If you've ever navigated to a GitHub releases page, scrolled through thirty assets to find the right one for your platform, downloaded it, extracted it, figured out which binary inside the tarball is the one you actually want, chmod'd it, and moved it to somewhere on your PATH, parm automates all of that.&lt;/p&gt;

&lt;p&gt;The no-registry approach is either a feature or a concern depending on your threat model. There's no vetting, no review process, no curated list. You point parm at a repo and trust the maintainer's releases. The README is honest about this. For tools you already trust (ripgrep, fd, bat, delta), it's faster than Homebrew. For tools you've never heard of, you're on your own.&lt;/p&gt;

&lt;p&gt;The project has momentum. Version pinning just shipped. Fish shell support (that's us) just landed. Windows shim support is on the roadmap. The maintainer is responsive and the codebase is clean enough that contributions land quickly. What would push parm further: a &lt;code&gt;parm doctor&lt;/code&gt; command that validates your setup, shell completions for the major shells, and better error messages when a release doesn't have a compatible asset. But the core works today, and it's already replaced a chunk of my manual workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you install CLI tools from GitHub, &lt;a href="https://github.com/alxrw/parm" rel="noopener noreferrer"&gt;try parm&lt;/a&gt;. &lt;code&gt;parm install junegunn/fzf&lt;/code&gt; and see how it feels. If you use fish, the installer now works thanks to &lt;a href="https://github.com/alxrw/parm/pull/51" rel="noopener noreferrer"&gt;PR #51&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Star the repo. Check the &lt;a href="https://github.com/alxrw/parm/issues" rel="noopener noreferrer"&gt;open issues&lt;/a&gt;. The v0.2.0 milestone has clear feature requests if you want to contribute.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #10, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>go</category>
      <category>cli</category>
      <category>tooling</category>
    </item>
    <item>
      <title>The Blackwall Between Your AI Agent and Your Filesystem</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Mon, 30 Mar 2026 12:38:57 +0000</pubDate>
      <link>https://dev.to/ticktockbent/the-blackwall-between-your-ai-agent-and-your-filesystem-3m05</link>
      <guid>https://dev.to/ticktockbent/the-blackwall-between-your-ai-agent-and-your-filesystem-3m05</guid>
      <description>&lt;p&gt;Every AI coding agent you run has the same permissions you do. Claude Code, Cursor, Codex, Aider. They can read your SSH keys, write to your shell config, and run any command your user account can. We accept this because the alternative is setting up Docker containers and dealing with volume mounts and broken toolchains every time we want an agent to help with a project.&lt;/p&gt;

&lt;p&gt;That trade-off has always felt wrong to me. Not because I think my AI agent is malicious, but because I know it executes code from dependencies I haven't read, runs shell commands it hallucinated, and sometimes &lt;code&gt;rm&lt;/code&gt;s things it shouldn't. The blast radius of a mistake is my entire home directory.&lt;/p&gt;

&lt;p&gt;I went looking for something between "full trust" and "Docker wrapper," and I found a project named after the barrier between humanity and rogue AIs in Cyberpunk 2077.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is greywall?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/GreyhavenHQ/greywall" rel="noopener noreferrer"&gt;greywall&lt;/a&gt; is a container-free sandbox for AI coding agents. It uses kernel-level enforcement on Linux (bubblewrap, seccomp, Landlock, eBPF) and Seatbelt profiles on macOS to isolate your agent's filesystem access, network traffic, and syscalls. Deny by default. No Docker, no VMs. One binary, four direct dependencies.&lt;/p&gt;

&lt;p&gt;It ships with built-in profiles for 13 agents (Claude Code, Cursor, Codex, Aider, and more), and it has a learning mode that traces what your agent actually touches and generates a least-privilege profile from the results. The project is three weeks old, has about 110 stars, and the sole maintainer merges external PRs within hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/GreyhavenHQ/greywall" rel="noopener noreferrer"&gt;greywall&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~109 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (tito), committing daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;17,400 lines of Go, 151 tests, clean layered architecture&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ARCHITECTURE.md, CONTRIBUTING.md, 18 doc files, a full docs site&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Merged my PR same-day, CI catches lint, good first issues labeled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes if you run AI agents on Linux or macOS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The codebase is ~17,400 lines of Go with only four direct dependencies: cobra for CLI, doublestar for glob matching, jsonc for config with comments, and x/sys for kernel syscalls. Everything else is hand-rolled against the kernel API.&lt;/p&gt;

&lt;p&gt;On Linux, greywall stacks five security layers, each covering what the others can't:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bubblewrap namespaces&lt;/strong&gt; (&lt;code&gt;linux.go&lt;/code&gt;, 1,642 lines) handle the heavy lifting. In DefaultDenyRead mode, the sandbox starts from an empty root filesystem (&lt;code&gt;--tmpfs /&lt;/code&gt;) and selectively mounts system paths read-only and your project directory read-write. Network isolation drops all connectivity, then three bridge types restore controlled access: a ProxyBridge for SOCKS5 traffic, a DnsBridge for DNS resolution, and a ReverseBridge for inbound port forwarding. All of them relay over Unix sockets via socat.&lt;/p&gt;
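&lt;p&gt;The shape of that invocation is easy to sketch. The flags and paths below are illustrative of the pattern, not greywall's exact command line:&lt;/p&gt;

```shell
# Compose a deny-by-default bubblewrap invocation: start from an empty
# root filesystem, then selectively re-expose what the agent needs.
BWRAP_ARGS="--tmpfs /"                        # empty root: nothing visible
BWRAP_ARGS="$BWRAP_ARGS --ro-bind /usr /usr"  # system paths read-only
BWRAP_ARGS="$BWRAP_ARGS --bind $PWD $PWD"     # project directory read-write
BWRAP_ARGS="$BWRAP_ARGS --unshare-net"        # drop all network access
echo "bwrap $BWRAP_ARGS -- your-agent"
```

Connectivity then comes back only through the controlled bridges, not through the namespace itself.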

&lt;p&gt;&lt;strong&gt;Seccomp BPF&lt;/strong&gt; (&lt;code&gt;linux_seccomp.go&lt;/code&gt;) blocks 30+ dangerous syscalls: ptrace, mount, reboot, bpf, perf_event_open. If your kernel doesn't support seccomp, greywall skips it and continues. This graceful fallback pattern repeats at every layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Landlock&lt;/strong&gt; (&lt;code&gt;linux_landlock.go&lt;/code&gt;) adds kernel-level filesystem access control. It opens paths with &lt;code&gt;O_PATH&lt;/code&gt; and uses &lt;code&gt;fstat&lt;/code&gt; to avoid TOCTOU races between checking a path and applying a rule to it. It handles ABI versions 1 through 5, stripping directory-only rights from non-directory paths to avoid &lt;code&gt;EINVAL&lt;/code&gt; from the kernel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;eBPF monitoring&lt;/strong&gt; traces violations in real time via bpftrace. &lt;strong&gt;Learning mode&lt;/strong&gt; runs strace under the hood, captures every file your agent touches, and collapses the results into a reusable profile.&lt;/p&gt;

&lt;p&gt;On macOS, greywall generates Seatbelt profiles for &lt;code&gt;sandbox-exec&lt;/code&gt; with deny-by-default network rules and selective file access via regex patterns. macOS actually has a cleaner security model here. Seatbelt supports both allow and deny rules with regex, so you can write "allow &lt;code&gt;~/.claude.json*&lt;/code&gt;, deny everything else in home." Linux's Landlock is additive-only. Once you grant write access to a directory, you can't deny individual files inside it. This is the project's most interesting architectural tension, and it surfaces as a real bug: issue #62, where programs that do atomic file writes (create a temp file, rename over the target) break because the temp file and the target live on different filesystems inside the sandbox.&lt;/p&gt;

&lt;p&gt;Command blocking (&lt;code&gt;command.go&lt;/code&gt;, 524 lines) doesn't just match command names. It parses shell syntax: pipes, &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt;, &lt;code&gt;||&lt;/code&gt;, semicolons, subshells, and quoted strings. &lt;code&gt;echo foo | shutdown&lt;/code&gt; gets caught. &lt;code&gt;bash -c "rm -rf /"&lt;/code&gt; gets caught. It's more parser than filter.&lt;/p&gt;
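&lt;p&gt;A toy version shows why parsing matters (greywall's real parser is Go and also handles quoting and subshells; this sketch only splits on two operators):&lt;/p&gt;

```shell
# Matching only the first word would miss the blocked command hiding
# behind the pipe. Splitting on shell operators exposes every program.
CMD='echo foo | shutdown -h now'
# Prints "echo" then "shutdown": both names can now be checked.
echo "$CMD" | tr '|;' '\n' | awk '{print $1}'
```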

&lt;p&gt;The architecture makes sense for what it's doing. Each layer has a clear file, clear responsibility, and a fallback path. The build tags (&lt;code&gt;//go:build linux&lt;/code&gt;, &lt;code&gt;//go:build darwin&lt;/code&gt;) keep platform code separated without runtime conditionals. The test suite has 151 tests across 13 files covering command blocking, Landlock rules, Seatbelt profile generation, learning mode, and config validation. For a three-week-old project, that's unusually disciplined.&lt;/p&gt;

&lt;p&gt;What's rough: the project is pre-1.0 and moving fast. Eight releases in 23 days. The DefaultDenyRead mode is ambitious and still has edge cases (the atomic writes bug, WSL DNS issues, AppArmor conflicts with TUN devices). The documentation is comprehensive but assumes you already know what bubblewrap and Landlock are. If you're new to Linux security primitives, the onboarding curve is steep.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;Issue #5 asked for a &lt;code&gt;greywall profiles edit&lt;/code&gt; command. The learning mode generates JSON profiles and saves them to &lt;code&gt;~/.config/greywall/learned/&lt;/code&gt;, but there was no way to edit them without hunting for the file path and hand-validating the JSON. The maintainer wanted an editor command that validates on close.&lt;/p&gt;

&lt;p&gt;Getting into the codebase was straightforward. The existing &lt;code&gt;profiles list&lt;/code&gt; and &lt;code&gt;profiles show&lt;/code&gt; commands were right there in &lt;code&gt;main.go&lt;/code&gt;, following the standard cobra subcommand pattern. The config validation was already built: &lt;code&gt;config.Load()&lt;/code&gt; parses JSON (with comments via jsonc) and runs &lt;code&gt;Validate()&lt;/code&gt;. I just needed to wire up an editor loop.&lt;/p&gt;

&lt;p&gt;The implementation opens the profile in &lt;code&gt;$EDITOR&lt;/code&gt; (splitting on whitespace to support &lt;code&gt;code --wait&lt;/code&gt; and &lt;code&gt;emacs -nw&lt;/code&gt;), saves the original content for rollback, and after the editor closes: detects no-change exits, validates the JSON, and on failure prompts to re-edit or discard. Discard restores the original file. About 95 lines total.&lt;/p&gt;
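&lt;p&gt;The loop has the same shape as this shell sketch (illustrative only; the real implementation is Go, and the editor is pinned to a no-op here so the sketch runs non-interactively):&lt;/p&gt;

```shell
# Edit / validate / rollback: open the profile, validate on close,
# restore the original if the edited JSON is invalid.
PROFILE="$(mktemp)"
printf '{"allow": ["/tmp"]}\n' > "$PROFILE"   # stand-in learned profile
BACKUP="$(mktemp)"
cp "$PROFILE" "$BACKUP"                       # keep the original for rollback
EDITOR=true                                   # the real command honors $EDITOR;
$EDITOR "$PROFILE"                            # pinned to a no-op for this sketch
if python3 -m json.tool "$PROFILE" 1>/dev/null 2>/dev/null; then
    echo "profile saved"
else
    cp "$BACKUP" "$PROFILE"                   # invalid JSON: restore original
    echo "validation failed, changes discarded"
fi
```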

&lt;p&gt;CI caught two lint issues I couldn't test locally (the project requires Go 1.25; I had 1.22): gocritic flagged an &lt;code&gt;append&lt;/code&gt; to a different variable, and gofumpt wanted explicit octal syntax (&lt;code&gt;0o600&lt;/code&gt; instead of &lt;code&gt;0600&lt;/code&gt;). I pushed the fix, and the maintainer merged the whole thing within hours of submission. He approved the code immediately and asked only for the lint fix. That's a three-week-old project with a same-day merge for a first-time contributor. &lt;a href="https://github.com/GreyhavenHQ/greywall/pull/64" rel="noopener noreferrer"&gt;PR #64&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;greywall is for anyone running AI coding agents who wants more than trust and less than Docker. If you use Claude Code or Cursor on a machine with real credentials, SSH keys, or cloud configs, this fills a gap that nothing else does at this weight class.&lt;/p&gt;

&lt;p&gt;The project is young and moving fast. Three weeks old, 109 stars, eight releases. The maintainer is clearly using it daily and fixing bugs as they surface. The contributor experience is excellent: labeled issues, fast merges, CI that catches real problems. The Landlock limitation (no per-file deny inside a writable directory) is a genuine technical constraint that will shape the project's future, and the maintainer's detailed write-up on issue #62 shows someone who understands the problem deeply and isn't reaching for shortcuts.&lt;/p&gt;

&lt;p&gt;What would push greywall to the next level? Solving the atomic writes problem would unblock a lot of real-world usage. A guided setup wizard (instead of requiring users to understand profiles and config files) would lower the barrier for non-security-minded developers. And more built-in profiles for common development workflows beyond AI agents could widen the audience. But the foundation is solid, the security model is sound, and the code is cleaner than most projects ten times its age.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you run AI agents on your dev machine, &lt;a href="https://github.com/GreyhavenHQ/greywall" rel="noopener noreferrer"&gt;go install greywall&lt;/a&gt; and try &lt;code&gt;greywall -- claude&lt;/code&gt; or &lt;code&gt;greywall -- cursor&lt;/code&gt;. The built-in profiles work out of the box. If you want tighter control, run &lt;code&gt;greywall --learning -- &amp;lt;your-agent&amp;gt;&lt;/code&gt; to generate a profile from actual usage, then &lt;code&gt;greywall profiles edit&lt;/code&gt; to fine-tune it.&lt;/p&gt;

&lt;p&gt;Star the repo. Try the learning mode. If something breaks in your setup, open an issue. The maintainer responds fast and the codebase is navigable enough that you might end up &lt;a href="https://github.com/GreyhavenHQ/greywall/pull/64" rel="noopener noreferrer"&gt;fixing it yourself&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #9, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>security</category>
      <category>go</category>
      <category>ai</category>
    </item>
    <item>
      <title>Why Your SFTP Transfer Is Stuck at 2 MB/s (and the Fix Is a Protocol from 1983)</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:14:56 +0000</pubDate>
      <link>https://dev.to/ticktockbent/why-your-sftp-transfer-is-stuck-at-2-mbs-and-the-fix-is-a-protocol-from-1983-5c3c</link>
      <guid>https://dev.to/ticktockbent/why-your-sftp-transfer-is-stuck-at-2-mbs-and-the-fix-is-a-protocol-from-1983-5c3c</guid>
      <description>&lt;p&gt;Two minutes to copy a 274 MB file to a VM running on localhost. Not over the internet. Not to a cloud instance across the country. Localhost. The same machine, loopback, zero network latency.&lt;/p&gt;

&lt;p&gt;That was the experience a user reported in issue #290 on cubic, a lightweight CLI for managing QEMU/KVM virtual machines. The maintainer reproduced it, traced the problem to the upstream &lt;code&gt;russh-sftp&lt;/code&gt; crate, and posted a comment asking if anyone had ideas about where the bottleneck was. I did. The answer turned out to be a protocol design decision that limits every Rust project using this crate to about 2 MB/s on file transfers, regardless of how fast the link is.&lt;/p&gt;

&lt;p&gt;The fix was to stop using SFTP entirely and fall back to a simpler, older protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is cubic?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/cubic-vm/cubic" rel="noopener noreferrer"&gt;cubic&lt;/a&gt; is a CLI tool for creating and managing lightweight virtual machines on Linux and macOS. Think of it as the middle ground between running Docker containers and spinning up full VMs in libvirt. You run &lt;code&gt;cubic create myvm --image debian&lt;/code&gt; and get a cloud-init provisioned VM with SSH access, a dedicated disk, and port forwarding. &lt;code&gt;cubic ssh myvm&lt;/code&gt; drops you into a shell. &lt;code&gt;cubic scp file.tar.gz myvm:~/&lt;/code&gt; copies files in. It's about 7,000 lines of Rust, built on QEMU/KVM with cloud-init for provisioning.&lt;/p&gt;

&lt;p&gt;Under 40 stars. The maintainer (rogkne) commits daily and reviews external PRs within hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/cubic-vm/cubic" rel="noopener noreferrer"&gt;cubic&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~37 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, committing daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~7,000 lines of clean Rust, 104 unit tests, clap + thiserror + tokio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good README, CONTRIBUTING.md with conventional commit rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fast reviews, specific feedback, merged shell completions PR in multi-round review&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, if you want lightweight VMs without libvirt's complexity&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;cubic has a clean layered architecture. CLI commands live in &lt;code&gt;src/commands/&lt;/code&gt; (one file per subcommand, clap with derive macros). Business logic lives in &lt;code&gt;src/actions/&lt;/code&gt;. The instance model, serialization (TOML and YAML), and storage live in &lt;code&gt;src/instance/&lt;/code&gt;. Image fetching and distro definitions live in &lt;code&gt;src/image/&lt;/code&gt;. SSH and file transfer live in &lt;code&gt;src/ssh_cmd/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The dependency list is lean. Four crates handle the heavy lifting: &lt;code&gt;russh&lt;/code&gt; for SSH connections, &lt;code&gt;russh-sftp&lt;/code&gt; for file transfers, &lt;code&gt;clap&lt;/code&gt; for CLI parsing, and &lt;code&gt;reqwest&lt;/code&gt; for image downloads. Everything else is standard library or small utility crates. The &lt;code&gt;Cargo.toml&lt;/code&gt; is not trying to be clever.&lt;/p&gt;

&lt;p&gt;One pattern that caught my eye: the project is async internally (tokio, russh) but sync at the CLI boundary. An &lt;code&gt;AsyncCaller&lt;/code&gt; struct wraps a tokio multi-threaded runtime and exposes a &lt;code&gt;call()&lt;/code&gt; method that blocks on a future. Every command creates one, runs its async work through it, and returns a sync result. It's simple and it works. No async bleeding into the CLI layer.&lt;/p&gt;

&lt;p&gt;The image pipeline is solid. cubic fetches cloud images from distro mirrors, verifies SHA-256/SHA-512 checksums against the upstream checksum file, shows a progress bar during download, and caches images locally. Adding a new distro means adding one entry to the &lt;code&gt;DISTROS&lt;/code&gt; static in &lt;code&gt;image_factory.rs&lt;/code&gt;. Rocky Linux was added in a recent PR following this exact pattern.&lt;/p&gt;
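&lt;p&gt;The verification step is the standard download-verify-cache pattern (a sketch; cubic implements it in Rust, and the file here is a stand-in, not a real cloud image):&lt;/p&gt;

```shell
# Verify a downloaded image against its expected SHA-256 before caching it.
printf 'hello\n' > image.img   # stand-in for a downloaded cloud image
EXPECTED=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
ACTUAL=$(sha256sum image.img | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK, caching image"
else
    echo "checksum mismatch, discarding download"
fi
```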

&lt;p&gt;The rough edges are in the SSH layer. The SFTP implementation delegates to &lt;code&gt;russh-sftp&lt;/code&gt;, which turned out to be the source of the performance bug. The progress bar during file transfers is coupled to the async read wrapper (&lt;code&gt;AsyncTransferView&lt;/code&gt;), which works but makes it hard to swap the underlying transfer mechanism without touching the view layer. The test coverage is good for models and serialization but thin for the SSH and QEMU interaction code, which is typical for tools that depend on external services.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;The performance issue (#290) reported that &lt;code&gt;cubic scp&lt;/code&gt; transferred files at roughly 2 MB/s on loopback. I dug into the &lt;code&gt;russh-sftp&lt;/code&gt; internals to find out why.&lt;/p&gt;

&lt;p&gt;The answer is in how &lt;code&gt;russh-sftp&lt;/code&gt; implements &lt;code&gt;AsyncWrite&lt;/code&gt;. Every call to &lt;code&gt;poll_write()&lt;/code&gt; creates a one-shot channel, sends an SFTP write request, and blocks until the server responds with an acknowledgment. One write in flight at a time. No pipelining. The SFTP protocol (specified in the &lt;code&gt;draft-ietf-secsh-filexfer&lt;/code&gt; drafts; it was never published as an RFC) explicitly supports pipelining: clients can send many write requests with different IDs and collect the responses asynchronously. OpenSSH's &lt;code&gt;sftp&lt;/code&gt; client does exactly this with 64 outstanding requests by default. &lt;code&gt;russh-sftp&lt;/code&gt; doesn't. The upstream issue (#70) has been open since June 2025 with no fix.&lt;/p&gt;

&lt;p&gt;For a 274 MB file at the default 255 KB max write size, that's roughly 1,075 round-trips, each waiting for an ACK. Even on loopback, the per-request overhead adds up to minutes.&lt;/p&gt;
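&lt;p&gt;The arithmetic is easy to check (decimal units):&lt;/p&gt;

```shell
# Round-trips for a 274 MB file at 255 KB per blocked write request.
echo $((274000000 / 255000))   # prints 1074, each one a blocked wait for an ACK
```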

&lt;p&gt;Wrapping the writer in a &lt;code&gt;BufWriter&lt;/code&gt; wouldn't help. It coalesces small writes into larger ones, but each &lt;code&gt;poll_write()&lt;/code&gt; still blocks on the ACK. You'd go from many small round-trips to fewer large ones, but the bottleneck is the same.&lt;/p&gt;

&lt;p&gt;The fix was to bypass SFTP for single-file transfers and use SCP instead. SCP is a much simpler protocol: open an SSH exec channel with &lt;code&gt;scp -t &amp;lt;path&amp;gt;&lt;/code&gt;, send a one-line header (&lt;code&gt;C0644 &amp;lt;size&amp;gt; &amp;lt;filename&amp;gt;\n&lt;/code&gt;), stream the raw bytes, send a null byte, done. No request IDs, no per-packet ACKs during data transfer. Just a header and a byte stream.&lt;/p&gt;
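&lt;p&gt;The whole client side of a single-file upload fits in a few lines. This sketch builds the byte stream that would be written to the &lt;code&gt;scp -t&lt;/code&gt; exec channel, writing it to a local file for illustration:&lt;/p&gt;

```shell
# Build an SCP upload stream: header line, raw bytes, null terminator.
FILE=hello.txt
printf 'hi\n' > "$FILE"
SIZE=$(wc -c "$FILE" | awk '{print $1}')
{
  printf 'C0644 %s %s\n' "$SIZE" "$FILE"   # header: mode, size, filename
  cat "$FILE"                              # the raw file bytes, no framing
  printf '\0'                              # end-of-transfer marker
} > scp_stream.bin
```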

&lt;p&gt;I added a new &lt;code&gt;scp.rs&lt;/code&gt; module (~170 lines) that implements SCP upload and download over a raw &lt;code&gt;russh&lt;/code&gt; channel via &lt;code&gt;channel.into_stream()&lt;/code&gt;. The &lt;code&gt;async_copy&lt;/code&gt; function in &lt;code&gt;russh.rs&lt;/code&gt; now detects single-file host-to-guest transfers and routes them through SCP. Directory copies and guest-to-guest transfers still use SFTP. Guest-to-host tries SCP first and falls back to SFTP if it fails (which it will for directories).&lt;/p&gt;

&lt;p&gt;The review was thorough. The maintainer requested eight changes, all cleanups: use &lt;code&gt;BufReader.read_line()&lt;/code&gt; instead of byte-by-byte loops, add error message prefixes, reuse the ack-reading function in the download path, validate the end-of-transfer marker byte. All reasonable, all addressed. He also asked (politely) whether the PR was AI-generated. I explained my workflow and he was satisfied. The PR went through two review rounds over 12 days and merged. &lt;a href="https://github.com/cubic-vm/cubic/pull/311" rel="noopener noreferrer"&gt;PR #311&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;cubic is for developers who want lightweight VMs without the weight of libvirt or the constraints of Docker. If you're testing deployment scripts, need an isolated Linux environment for a project, or just want to spin up a Debian box and SSH into it without thinking about Vagrant files, this does the job.&lt;/p&gt;

&lt;p&gt;The project is young (v0.19.0, solo maintainer) but the trajectory is good. New distros get added regularly. The contributor experience is above average: specific review feedback, no ego, merged with thanks. The maintainer is clearly using this tool daily and fixing things as they surface.&lt;/p&gt;

&lt;p&gt;What would push cubic to the next level? The SFTP performance fix helps, but the bigger opportunity is user experience. A &lt;code&gt;cubic init&lt;/code&gt; that scaffolds a project config file, better error messages when QEMU isn't installed, and a Homebrew formula for macOS users would all lower the barrier. The foundation is clean. It just needs more people kicking the tires.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you manage VMs from the command line, &lt;a href="https://github.com/cubic-vm/cubic" rel="noopener noreferrer"&gt;try cubic&lt;/a&gt;. &lt;code&gt;cubic create myvm --image debian&lt;/code&gt; and you're running in under a minute. If you've been burned by slow file transfers to VMs before, the SCP fix in &lt;a href="https://github.com/cubic-vm/cubic/pull/311" rel="noopener noreferrer"&gt;PR #311&lt;/a&gt; is worth a look for the protocol analysis alone.&lt;/p&gt;

&lt;p&gt;Star the repo. The codebase is small enough to read in a sitting, and there are &lt;a href="https://github.com/cubic-vm/cubic/issues" rel="noopener noreferrer"&gt;open issues&lt;/a&gt; at every difficulty level.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #8, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>ssh</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Finding Blocking Code in Async Rust Without Changing a Single Line</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Wed, 18 Mar 2026 12:52:00 +0000</pubDate>
      <link>https://dev.to/ticktockbent/finding-blocking-code-in-async-rust-without-changing-a-single-line-3c75</link>
      <guid>https://dev.to/ticktockbent/finding-blocking-code-in-async-rust-without-changing-a-single-line-3c75</guid>
      <description>&lt;p&gt;You know the symptoms. Latency spikes under load. Throughput that should be higher. A Tokio runtime that's doing less work than it should be, and you can't see why. Something is blocking a worker thread, starving the other tasks, and nobody's throwing an error about it.&lt;/p&gt;

&lt;p&gt;The standard advice is tokio-console. Add &lt;code&gt;console-subscriber&lt;/code&gt; to your dependencies, rebuild, redeploy, reproduce the problem, and look at task poll times. It works well. It also requires code changes, a rebuild, and a redeployment, which means it's not what you reach for when staging is melting and you need answers now.&lt;/p&gt;

&lt;p&gt;The other option is &lt;code&gt;perf&lt;/code&gt;. Attach to the process, collect stack traces, generate a flamegraph, and interpret a wall of unsymbolized frames. It'll tell you everything that's happening on every thread. The signal-to-noise ratio for "which Tokio worker is blocked and by what" is not great.&lt;/p&gt;

&lt;p&gt;There's a gap between those two. A tool that attaches to a running Tokio process, finds the blocking code, and shows you the result, without touching your source.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is hud?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/cong-or/hud" rel="noopener noreferrer"&gt;hud&lt;/a&gt; is an eBPF-based profiler for Tokio applications, built by &lt;a href="https://github.com/cong-or" rel="noopener noreferrer"&gt;cong-or&lt;/a&gt;. You give it a process name or PID, and it hooks into the Linux scheduler via eBPF tracepoints to detect when Tokio worker threads experience high scheduling latency. When a worker is off-CPU longer than a configurable threshold (default 5ms), hud captures a stack trace, resolves it against DWARF debug symbols, and shows you what was on the stack. No recompile, no instrumentation, no code changes.&lt;/p&gt;

&lt;p&gt;It runs as a real-time TUI or in headless mode with Chrome Trace JSON export. About 147 stars. For the problem it solves, it should have more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/cong-or/hud" rel="noopener noreferrer"&gt;hud&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~147 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, 178 commits and 15 releases in 3 months&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean workspace, good module boundaries, well-documented internals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Five dedicated doc files (architecture, development, exports, troubleshooting, tuning)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Both PRs merged within minutes. Would contribute again.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, if you run Tokio on Linux and have ever wondered "what's blocking?"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The project is a Rust workspace with three crates. &lt;code&gt;hud-ebpf&lt;/code&gt; (~400 lines, &lt;code&gt;#![no_std]&lt;/code&gt;) runs inside the kernel: a &lt;code&gt;sched_switch&lt;/code&gt; tracepoint for off-CPU detection and a &lt;code&gt;perf_event&lt;/code&gt; hook sampling at 99 Hz for stack traces. &lt;code&gt;hud-common&lt;/code&gt; (~330 lines) defines the shared types that cross the kernel/userspace boundary. &lt;code&gt;hud&lt;/code&gt; (~8,700 lines) is the userspace application: event processing, DWARF symbol resolution, a ratatui TUI, and Chrome Trace export. The whole thing builds with &lt;code&gt;cargo xtask build-ebpf&lt;/code&gt; for the eBPF side and a regular &lt;code&gt;cargo build&lt;/code&gt; for userspace.&lt;/p&gt;

&lt;p&gt;The interesting engineering starts with worker discovery. Tokio worker threads need to be identified before hud can filter events to just the runtime. This turns out to be harder than it sounds. The first problem is the kernel's &lt;code&gt;TASK_COMM_LEN&lt;/code&gt; limit (16 bytes including the terminating NUL, so 15 visible characters in &lt;code&gt;/proc&lt;/code&gt;), which truncates &lt;code&gt;tokio-runtime-worker-0&lt;/code&gt; to &lt;code&gt;tokio-runtime-w&lt;/code&gt;. The second is custom runtimes: if you called &lt;code&gt;thread_name("my-pool")&lt;/code&gt;, the default prefixes don't match. hud handles this with a 4-step fallback chain: explicit prefix via &lt;code&gt;--workers&lt;/code&gt;, default Tokio prefixes, stack-based classification (sample for 500ms and look for Tokio scheduler frames), and a largest-thread-group heuristic. That last one just picks the biggest group of threads following a &lt;code&gt;{name}-{N}&lt;/code&gt; naming pattern.&lt;/p&gt;
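
&lt;p&gt;Both quirks are easy to reproduce without hud. A toy illustration in plain shell (not hud's code): the kernel truncates thread names to 15 visible characters, and the last-resort heuristic amounts to stripping a trailing &lt;code&gt;-N&lt;/code&gt; and picking the most common prefix.&lt;/p&gt;

```shell
# Illustration only, not hud's code.
# 1) The comm truncation: /proc shows at most 15 visible characters.
printf '%s\n' "tokio-runtime-worker-0" | cut -c1-15   # prints tokio-runtime-w

# 2) The largest-thread-group heuristic: strip a trailing "-N" from each
#    thread name, count the prefixes, keep the biggest group.
printf '%s\n' my-pool-0 my-pool-1 my-pool-2 gc-worker-0 main |
  sed -n 's/-[0-9][0-9]*$//p' |
  sort | uniq -c | sort -rn | head -n 1       # the "my-pool" group of 3 wins
```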

&lt;p&gt;Frame classification has its own complexity. Rust statically links dependencies into the main binary, so being "inside the executable" doesn't distinguish your code from tokio's code from serde's code. hud uses a 3-tier classifier: file path patterns first (&lt;code&gt;.cargo/registry/&lt;/code&gt; means third-party, &lt;code&gt;.rustup/toolchains/&lt;/code&gt; means stdlib), then function name prefixes (&lt;code&gt;tokio::&lt;/code&gt;, &lt;code&gt;std::&lt;/code&gt;, &lt;code&gt;hyper::&lt;/code&gt;), then memory range as a last resort. The TUI highlights user code in green and dims everything else.&lt;/p&gt;
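
&lt;p&gt;The tier ordering is the interesting part: path evidence wins over symbol evidence. A minimal sketch of that ordering in shell, with the memory-range fallback omitted (illustrative, not hud's code):&lt;/p&gt;

```shell
# Minimal sketch of the classification order; the memory-range tier is omitted.
classify_frame() {
  path=$1
  symbol=$2
  # Tier 1: file path patterns
  case "$path" in
    *.cargo/registry/*) echo "third-party"; return ;;
    *.rustup/toolchains/*) echo "stdlib"; return ;;
  esac
  # Tier 2: function name prefixes
  case "$symbol" in
    tokio::*|hyper::*) echo "third-party" ;;
    std::*) echo "stdlib" ;;
    *) echo "user" ;;
  esac
}

classify_frame "/home/me/.cargo/registry/src/serde/de.rs" "serde::de::impls"  # third-party
classify_frame "src/handlers.rs" "my_app::handle_request"                     # user
```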

&lt;p&gt;The README is refreshingly honest about limitations. It measures scheduling latency, which is a symptom of blocking, not the blocking itself. It captures the victim's stack, not the blocker's. System CPU pressure can cause false positives. The comparison table with tokio-console and Tokio's built-in detection doesn't oversell hud. It positions it as a triage tool: narrow down the suspects, then dig deeper with instrumentation if needed.&lt;/p&gt;

&lt;p&gt;The rough spots are minor. Test coverage is decent for the core modules (classification, worker discovery, hotspot analysis) but thin for the event processing pipeline and TUI rendering. The project is three months old and iterating fast (15 releases), so some gaps are expected. The docs make up for it: five dedicated files covering architecture, development workflow, export format, troubleshooting, and threshold tuning. That's unusual care for a project at this scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;I submitted two PRs, targeting different layers of the stack.&lt;/p&gt;

&lt;p&gt;The first was test coverage for the blocking pool filter. Tokio's &lt;code&gt;spawn_blocking&lt;/code&gt; creates threads whose stacks bottom out in the same &lt;code&gt;Inner::run&lt;/code&gt; function as actual worker threads, because Tokio bootstraps workers through the blocking pool mechanism. The distinguishing factor is that workers also have &lt;code&gt;scheduler::multi_thread::worker&lt;/code&gt; frames higher up the stack. The &lt;code&gt;is_blocking_pool_stack()&lt;/code&gt; function filters on this distinction to suppress spawn_blocking noise from the TUI.&lt;/p&gt;
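
&lt;p&gt;The distinction is mechanical enough to sketch. A hedged shell approximation, using the Tokio frame names from the description above (not hud's actual implementation):&lt;/p&gt;

```shell
# Illustrative sketch, not hud's code: a stack containing a frame from the
# multi-thread scheduler's worker module is a real worker; one that merely
# bottoms out in the blocking pool's Inner::run is spawn_blocking noise.
classify_stack() {
  if printf '%s\n' "$@" | grep -q 'scheduler::multi_thread::worker'; then
    echo "worker"
  else
    echo "blocking-pool"
  fi
}

classify_stack "my_app::zip_files" \
               "tokio::runtime::blocking::pool::Inner::run"            # blocking-pool
classify_stack "tokio::runtime::scheduler::multi_thread::worker::run" \
               "tokio::runtime::blocking::pool::Inner::run"            # worker
```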

&lt;p&gt;This function went through four release iterations (v0.4.2 through v0.5.0) in response to a bug report where spawn_blocking tasks were showing up as false positives. The maintainer shipped multiple fix releases in rapid succession. But the function had zero test coverage. I added 9 tests covering the core logic: genuine blocking pool stacks, genuine worker stacks, empty stacks, partial matches, closure wrappers, and two realistic deep-stack scenarios. I bundled in doc fixes where TROUBLESHOOTING.md listed 3 worker discovery steps instead of the actual 4, and where the README said "x86_64 architecture" while every other doc said "x86_64/aarch64."&lt;/p&gt;

&lt;p&gt;The second PR was an eBPF fix. The &lt;code&gt;get_cpu_id()&lt;/code&gt; function in the kernel-side code always returned 0, with a TODO comment saying "aya-ebpf doesn't expose bpf_get_smp_processor_id directly yet." It does. The helper is re-exported through &lt;code&gt;pub use gen::*&lt;/code&gt; in the aya-ebpf helpers module, but it's &lt;code&gt;#[doc(hidden)]&lt;/code&gt;, so it never shows up in the generated docs. The fix was adding an import and replacing the stub with the real call. Three lines changed. Every exported trace event was silently reporting the wrong CPU core.&lt;/p&gt;

&lt;p&gt;Both PRs were &lt;a href="https://github.com/cong-or/hud/pull/4" rel="noopener noreferrer"&gt;merged&lt;/a&gt; &lt;a href="https://github.com/cong-or/hud/pull/5" rel="noopener noreferrer"&gt;within minutes&lt;/a&gt;. The codebase was easy to navigate: clear module boundaries, descriptive file names, good internal documentation. The eBPF side requires nightly Rust and &lt;code&gt;bpf-linker&lt;/code&gt;, which adds setup friction, but the build process is documented and worked on the first try.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;hud is for Rust developers running Tokio on Linux who want to understand what's blocking their runtime without adding instrumentation. The workflow is &lt;code&gt;sudo hud my-app&lt;/code&gt; and you're looking at results. If you've ever stared at a flamegraph trying to figure out which of those Tokio frames is yours, hud does that filtering for you.&lt;/p&gt;

&lt;p&gt;The project is young (three months) and solo-maintained, but the trajectory is strong. The commit history shows a developer who responds to bug reports with same-day fix releases, who writes honest documentation about tradeoffs, and who merges external contributions without friction. The codebase is clean enough that I was reading eBPF kernel code within an hour of cloning the repo. That doesn't happen by accident.&lt;/p&gt;

&lt;p&gt;What would push hud further? More metrics in the TUI (per-CPU breakdown, timeline visualization of blocking events), broader async runtime support beyond Tokio, and CI integration for the headless export mode (pipe the JSON through &lt;code&gt;jq&lt;/code&gt; for regression detection). The architecture supports all of this. The scheduling-latency metric is indirect by design, and the project is honest about that. What it offers in return is zero-friction access to information that would otherwise require a rebuild.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you run Tokio on Linux, &lt;a href="https://github.com/cong-or/hud" rel="noopener noreferrer"&gt;try hud&lt;/a&gt;. Download the pre-built binary, point it at a running process, and see what shows up. If nothing does, your runtime is clean. If something does, you just saved yourself a rebuild.&lt;/p&gt;

&lt;p&gt;Star the repo. Here are &lt;a href="https://github.com/cong-or/hud/pull/4" rel="noopener noreferrer"&gt;the tests I added&lt;/a&gt; for the blocking pool filter and &lt;a href="https://github.com/cong-or/hud/pull/5" rel="noopener noreferrer"&gt;the eBPF fix&lt;/a&gt; for the cpu_id stub. Both small, both merged.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #7, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>performance</category>
      <category>async</category>
    </item>
    <item>
      <title>Nobody Reviews Their Agent's Code</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Sat, 14 Mar 2026 20:53:36 +0000</pubDate>
      <link>https://dev.to/ticktockbent/nobody-reviews-their-agents-code-17hi</link>
      <guid>https://dev.to/ticktockbent/nobody-reviews-their-agents-code-17hi</guid>
      <description>&lt;p&gt;You tell your AI agent to implement a feature. It writes 150 lines across four files. You skim the diff, it looks reasonable, you commit. Two days later you're debugging an edge case the agent never tested, staring at a conditional that makes no sense, wondering why you didn't catch it.&lt;/p&gt;

&lt;p&gt;The problem isn't the agent. The problem is that there's no review step. When a teammate opens a PR, you read the diff, leave inline comments, request changes, and approve when it's ready. When an agent writes code, you get a wall of terminal output and a vague sense that it probably worked. The review workflow that keeps human code honest doesn't exist for agent code.&lt;/p&gt;

&lt;p&gt;Someone built it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is crit?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/tomasz-tomczyk/crit" rel="noopener noreferrer"&gt;crit&lt;/a&gt; is a PR-style review tool for LLM agent output, built by &lt;a href="https://github.com/tomasz-tomczyk" rel="noopener noreferrer"&gt;tomasz-tomczyk&lt;/a&gt;. It's a single Go binary that launches a localhost web UI, detects changed files in your git repo (or takes explicit file paths), and renders them with syntax-highlighted diffs. You click on lines to leave inline comments, just like a GitHub PR review. Then you tell your agent to address the feedback, and crit reloads the files with your comments carried forward into the next round.&lt;/p&gt;

&lt;p&gt;That multi-round loop is the core idea. Leave comments. Agent fixes. Review again. Repeat until you're satisfied. The comments persist across rounds, so you can track whether your feedback was actually addressed.&lt;/p&gt;

&lt;p&gt;46 stars. Created four weeks ago. Already has 32 Playwright E2E tests, SSE-powered real-time updates, dark mode, keyboard navigation, and a sharing feature. The velocity is unusual for a solo project this young.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/tomasz-tomczyk/crit" rel="noopener noreferrer"&gt;crit&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~46 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, daily commits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean Go backend, 2,500 lines of tests, 32+ E2E specs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;One of the best CLAUDE.md files I've read (300+ lines covering architecture, API, testing, release)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same-day review, detailed design feedback, constructive tone&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, if you use AI agents to write code&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;crit's architecture is deliberately simple. The backend is six Go files: &lt;code&gt;main.go&lt;/code&gt; for CLI and server setup, &lt;code&gt;server.go&lt;/code&gt; for HTTP handlers, &lt;code&gt;session.go&lt;/code&gt; for the core state machine, &lt;code&gt;git.go&lt;/code&gt; for git operations, &lt;code&gt;diff.go&lt;/code&gt; for LCS-based line diffing between rounds, and &lt;code&gt;status.go&lt;/code&gt; for terminal formatting. The frontend is vanilla JavaScript and CSS. No React, no build step, no bundler. The assets get embedded into the binary via Go's &lt;code&gt;embed.FS&lt;/code&gt;, so distribution is a single file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;session.go&lt;/code&gt; is where the interesting logic lives, and at 1,470 lines it's the largest file by a wide margin. It manages the review session state, watches files for changes (polling &lt;code&gt;git status --porcelain&lt;/code&gt; every second in git mode, or checking mtimes in file mode), broadcasts updates over server-sent events, and handles the multi-round workflow. When you call &lt;code&gt;crit go PORT&lt;/code&gt; from your agent's terminal, it signals the running session to advance to the next round, reloading all files while preserving comment state.&lt;/p&gt;
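
&lt;p&gt;The git-mode change detection fits in a few lines. A bounded, throwaway-repo sketch of the idea (illustrative, not crit's code):&lt;/p&gt;

```shell
# Snapshot `git status --porcelain`, mutate the tree, snapshot again, compare.
# crit does this in a once-a-second loop; here it's a single bounded pass.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
before=$(git status --porcelain)
echo "agent edit" > file.txt       # an agent modifies a file
after=$(git status --porcelain)
if [ "$before" != "$after" ]; then
  echo "files changed: reload the review"
fi
```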

&lt;p&gt;The frontend is a 3,900-line &lt;code&gt;app.js&lt;/code&gt; and a 2,000-line &lt;code&gt;style.css&lt;/code&gt;. That's a lot of vanilla JS in one file, and it'll eventually need splitting. But the code is well-organized internally: state management, rendering, comment handling, SSE listeners, and keyboard shortcuts are all in clearly separated sections. The comment forms use a gutter interaction model (mousedown, drag to select lines, mouseup to open the form) that feels natural once you discover it.&lt;/p&gt;

&lt;p&gt;What surprised me most was the test coverage. 2,500 lines of Go tests plus 32 Playwright E2E specs for a project that's been public for four weeks. The E2E suite covers both git and file modes, comment CRUD, multi-round workflows, theme persistence, keyboard navigation, and the sharing feature. That kind of test investment this early usually means the developer is building something they actually use daily, not just demoing.&lt;/p&gt;

&lt;p&gt;The CLAUDE.md deserves its own mention. At 300+ lines, it covers the full architecture, every REST endpoint, the SSE event protocol, testing conventions, the release process, and coding guidelines. It's the most comprehensive project instruction file I've seen on a repo this size. If crit is a tool for reviewing AI-generated code, its own development docs suggest the maintainer is eating his own cooking.&lt;/p&gt;

&lt;p&gt;The rough edges are what you'd expect from a young project. The vanilla JS frontend will hit a complexity wall eventually. There's no CLI-only mode for terminal purists. Comment data lives in &lt;code&gt;.crit.json&lt;/code&gt; in the working directory, which means it doesn't travel with the code unless you commit it. None of these are deal-breakers at this stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;crit's ROADMAP listed "Comment templates" as a near-term feature, so I built it. The idea: clickable pill buttons that insert common review phrases ("This will fail when...", "Missing error handling for...") into the comment textarea. A small UX improvement that saves keystrokes during reviews.&lt;/p&gt;

&lt;p&gt;My first implementation had five default templates, localStorage for persistence, and an always-visible template bar. It worked. The maintainer responded same-day with seven design changes.&lt;/p&gt;

&lt;p&gt;His reasoning was specific and well-considered. No default templates, because he wasn't confident enough in universal defaults yet. Cookies instead of localStorage, because crit launches on a random port each session and localStorage is scoped per origin, meaning templates would vanish between runs. A "Save as template" button in the actions row instead of an always-visible bar, so the feature earns its screen space. The template bar should only appear once you've saved at least one. Hover delete on chips. Truncation for long text. And E2E tests for both git and file modes.&lt;/p&gt;

&lt;p&gt;All seven points were fair. Some I wouldn't have caught on my own (the localStorage/random-port issue is subtle and specific to crit's architecture). I reworked the entire implementation: cookie-backed storage, a save dialog that pre-fills with your current comment text, chips that appear only when you have templates, hover-to-delete with an &lt;code&gt;x&lt;/code&gt; button, ellipsis truncation with title tooltips, and 14 new Playwright E2E tests covering empty state, save flow, insert, delete, persistence across page reloads, and both operating modes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/tomasz-tomczyk/crit/pull/28" rel="noopener noreferrer"&gt;PR #28&lt;/a&gt; was merged the same day I pushed the rework. The maintainer said he'd merge it and follow up with a small tweak to make the delete button always visible instead of hover-only. That's the kind of interaction that makes contributing satisfying: the feedback improved the feature, and the follow-through was fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;crit is for developers who use AI agents to write code and want the same review discipline they'd apply to human PRs. If you're using Claude Code, Cursor, Copilot Workspace, or any agent that modifies files, crit gives you a structured place to review those changes before they become your problem.&lt;/p&gt;

&lt;p&gt;The project is very early. Four weeks old, solo-maintained, under 50 stars. But the foundations are solid: clean architecture, real tests, a maintainer who responds same-day with thoughtful feedback. The multi-round workflow is the differentiator. Other diff viewers can show you what changed. crit lets you have a conversation about it.&lt;/p&gt;

&lt;p&gt;What would push it further? A VS Code extension for inline review without leaving the editor. A headless CLI mode for reviewing diffs in the terminal. Better discoverability of the gutter interaction (I clicked lines for a minute before realizing you need to click-drag). But the core loop already works, and the pace of development suggests these will come.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you use AI agents to write code, &lt;a href="https://github.com/tomasz-tomczyk/crit" rel="noopener noreferrer"&gt;try crit&lt;/a&gt;. Run it against your next agent-generated changeset. Leave comments, tell the agent to fix them, review the next round. See if it changes how much you trust the output.&lt;/p&gt;

&lt;p&gt;Star the repo. The maintainer is responsive and the contribution experience was one of the best in this series. &lt;a href="https://github.com/tomasz-tomczyk/crit/pull/28" rel="noopener noreferrer"&gt;Here's the PR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #6, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>go</category>
      <category>ai</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Secrets, Agents, and .env Files</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Tue, 10 Mar 2026 16:14:26 +0000</pubDate>
      <link>https://dev.to/ticktockbent/secrets-agents-and-env-files-40l2</link>
      <guid>https://dev.to/ticktockbent/secrets-agents-and-env-files-40l2</guid>
      <description>&lt;p&gt;Your &lt;code&gt;.env&lt;/code&gt; file has your database credentials in it. Your Stripe key. Your AWS secret. Maybe a JWT signing key you generated at 2am and never rotated.&lt;/p&gt;

&lt;p&gt;Your AI agent can see all of it.&lt;/p&gt;

&lt;p&gt;When you give an agentic coding tool access to your project directory, it can read every file in that directory. That includes &lt;code&gt;.env&lt;/code&gt;, &lt;code&gt;.env.local&lt;/code&gt;, &lt;code&gt;.env.production&lt;/code&gt;, and whatever other secrets files you've got sitting in the root of your project. The agent doesn't know those are sensitive. It just sees files. And if you ask it to "clean up the project structure" or "fix the config," there's nothing stopping it from including those files in a commit.&lt;/p&gt;

&lt;p&gt;One &lt;code&gt;git add .&lt;/code&gt; and your secrets are in version history. Even if you delete the file in the next commit, they're still there. Permanently. Unless you rewrite history, and if you don't know how to do that, you probably won't.&lt;/p&gt;
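
&lt;p&gt;You can watch this happen in a throwaway repo. Delete the file, commit the deletion, and the secret is still one command away:&lt;/p&gt;

```shell
# Throwaway demo: deleting a committed file does not delete it from history.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "STRIPE_KEY=sk_live_oops" > .env
git add .env
git commit -qm "add config"
git rm -q .env
git commit -qm "remove secrets"
# Gone from the working tree, still in the previous commit:
git show "HEAD~1:.env"             # prints STRIPE_KEY=sk_live_oops
```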

&lt;p&gt;This is preventable. Let's prevent it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layer 1: .gitignore
&lt;/h2&gt;

&lt;p&gt;This is the bare minimum. If you don't have a &lt;code&gt;.gitignore&lt;/code&gt; that covers your secrets files, stop reading and go add one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .gitignore
.env
.env.*
.env.local
.env.production
.env.staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells git to ignore these files entirely. They won't show up in &lt;code&gt;git status&lt;/code&gt;, won't get staged by &lt;code&gt;git add .&lt;/code&gt;, and won't end up in commits.&lt;/p&gt;

&lt;p&gt;Two caveats. First, &lt;code&gt;.gitignore&lt;/code&gt; only works on files that aren't already tracked. If your &lt;code&gt;.env&lt;/code&gt; file was committed at some point in the past, adding it to &lt;code&gt;.gitignore&lt;/code&gt; won't remove it from history. You need to run &lt;code&gt;git rm --cached .env&lt;/code&gt; and then commit that removal. Second, &lt;code&gt;.gitignore&lt;/code&gt; is a suggestion to git, not a security boundary. Someone (or some agent) can still force-add an ignored file with &lt;code&gt;git add -f .env&lt;/code&gt;. It's a speed bump, not a wall.&lt;/p&gt;
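
&lt;p&gt;The first caveat is worth seeing end to end. In a throwaway repo:&lt;/p&gt;

```shell
# Demo: .gitignore does nothing for a file that's already tracked.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "SECRET=1" > .env
git add .env
git commit -qm "oops: .env committed before .gitignore existed"
echo ".env" > .gitignore
git ls-files .env                  # still prints .env: the ignore rule is too late
git rm --cached -q .env            # untrack it (stays on disk, stays in history)
git add .gitignore
git commit -qm "stop tracking .env"
git ls-files .env                  # prints nothing: no longer tracked
```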

&lt;h2&gt;
  
  
  Layer 2: pre-commit secret scanning
&lt;/h2&gt;

&lt;p&gt;Speed bumps are good, but you want an actual wall. That's where pre-commit hooks come in.&lt;/p&gt;

&lt;p&gt;There are several tools that do this. &lt;code&gt;git-secrets&lt;/code&gt; from AWS, &lt;code&gt;detect-secrets&lt;/code&gt; from Yelp, &lt;code&gt;gitleaks&lt;/code&gt;, and &lt;code&gt;trufflehog&lt;/code&gt; are the most common. They all do roughly the same thing: scan staged changes for patterns that look like secrets and block the commit if they find any.&lt;/p&gt;

&lt;p&gt;Here's a quick setup with &lt;code&gt;gitleaks&lt;/code&gt; as a pre-commit hook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install gitleaks (macOS)&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;gitleaks

&lt;span class="c"&gt;# Or grab the binary from GitHub releases for other platforms&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using &lt;code&gt;pre-commit&lt;/code&gt; (the framework):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .pre-commit-config.yaml&lt;/span&gt;
&lt;span class="na"&gt;repos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/gitleaks/gitleaks&lt;/span&gt;
    &lt;span class="na"&gt;rev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v8.21.2&lt;/span&gt;
    &lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gitleaks&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pre-commit &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now every commit gets scanned before it's created. If the agent stages a file with something that looks like &lt;code&gt;AKIAIOSFODNN7EXAMPLE&lt;/code&gt; in it, the commit fails. The secret never enters version history.&lt;/p&gt;

&lt;p&gt;This matters more with agentic tools than with manual development. When you're typing code yourself, you generally know when you're touching a secrets file. An agent doesn't have that awareness. It's just completing the task you gave it. The pre-commit hook catches what the agent doesn't think about.&lt;/p&gt;
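
&lt;p&gt;At heart, the detection is pattern matching. A toy version of the idea (real scanners layer curated rules and entropy heuristics on top of this):&lt;/p&gt;

```shell
# Toy scanner, not a gitleaks replacement: flag AWS-style access key IDs.
scan() {
  if printf '%s\n' "$1" | grep -Eq 'AKIA[0-9A-Z]{16}'; then
    echo "potential secret found: blocking commit"
  else
    echo "clean"
  fi
}

scan 'aws_access_key_id = AKIAIOSFODNN7EXAMPLE'      # blocked
scan 'stripe_key = os.environ.get("STRIPE_KEY")'     # clean
```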

&lt;h2&gt;
  
  
  Layer 3: stop putting secrets in files
&lt;/h2&gt;

&lt;p&gt;This is the longer-term fix. If your secrets aren't in files in your project directory, your agent can't commit them.&lt;/p&gt;

&lt;p&gt;Environment variable managers like &lt;code&gt;direnv&lt;/code&gt;, &lt;code&gt;dotenvx&lt;/code&gt;, or platform-specific solutions like Vercel's environment variables, Netlify's env settings, or AWS Parameter Store all keep secrets out of your repo entirely. The values exist in your shell environment or your deployment platform, not in a file that git can touch.&lt;/p&gt;

&lt;p&gt;For local development, &lt;code&gt;direnv&lt;/code&gt; is the lightest lift:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .envrc (this file IS in .gitignore)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"postgres://..."&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;STRIPE_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sk_test_..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;direnv&lt;/code&gt; loads these into your shell when you &lt;code&gt;cd&lt;/code&gt; into the project and unloads them when you leave. Your application reads them from the environment the same way it would read from a &lt;code&gt;.env&lt;/code&gt; file. The difference is that &lt;code&gt;.envrc&lt;/code&gt; is loaded by your shell, not parsed by your application, and you've got it in &lt;code&gt;.gitignore&lt;/code&gt; where it belongs.&lt;/p&gt;

&lt;p&gt;The real win here is that your agent never sees the actual values. It sees &lt;code&gt;process.env.STRIPE_KEY&lt;/code&gt; in your code, not the key itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  A note on what your agent can access beyond files
&lt;/h2&gt;

&lt;p&gt;Secrets in files are the obvious risk. But agents interact with more than your filesystem.&lt;/p&gt;

&lt;p&gt;If you're using MCP servers to give your agent access to GitHub, Slack, databases, or other services, take a hard look at what tools are actually exposed. The GitHub MCP server, for example, exposes a &lt;code&gt;delete_repository&lt;/code&gt; tool. It's right there in the tool list. Your agent probably doesn't need the ability to delete repos. It definitely doesn't need it by default.&lt;/p&gt;

&lt;p&gt;Most MCP servers ship with everything enabled. That's the wrong default for agentic use. You want a deny-by-default posture: the agent gets access to the specific tools it needs and nothing else. We'll dig deeper into MCP server policies and tool scoping in a future article, but for now, audit what your agent has access to. You might be surprised.&lt;/p&gt;

&lt;h2&gt;
  
  
  The checklist
&lt;/h2&gt;

&lt;p&gt;Your secrets protection should have these layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;.gitignore&lt;/code&gt; covering all secrets files. Bare minimum. Non-negotiable.&lt;/li&gt;
&lt;li&gt;Pre-commit scanning with &lt;code&gt;gitleaks&lt;/code&gt;, &lt;code&gt;detect-secrets&lt;/code&gt;, or similar. Catches what &lt;code&gt;.gitignore&lt;/code&gt; misses.&lt;/li&gt;
&lt;li&gt;GitHub secret scanning and push protection enabled. Server-side safety net.&lt;/li&gt;
&lt;li&gt;Secrets in environment managers, not in files. Removes the risk at the source.&lt;/li&gt;
&lt;li&gt;Audit your agent's tool access. Least privilege applies to AI tools the same way it applies to everything else.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of these are hard to set up. All of them are hard to retrofit after a leak.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 2 of the Guardrails series. Previously: &lt;a href="https://dev.to/ticktockbent/stop-letting-agents-code-push-to-main-2kfk"&gt;Stop Letting Agents Push to Main&lt;/a&gt;. Next up: Why Your Agent Should Never Have Access to Production Data.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>security</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Stop Letting Agents Push to Main</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Mon, 09 Mar 2026 11:55:49 +0000</pubDate>
      <link>https://dev.to/ticktockbent/stop-letting-agents-code-push-to-main-2kfk</link>
      <guid>https://dev.to/ticktockbent/stop-letting-agents-code-push-to-main-2kfk</guid>
      <description>&lt;p&gt;You gave Claude Code access to your repo. It wrote some code. It committed. It pushed. Straight to main.&lt;/p&gt;

&lt;p&gt;No PR. No review. No status checks. Just raw, unreviewed AI-generated code landing directly on your production branch like it owns the place.&lt;/p&gt;

&lt;p&gt;And GitHub let it happen, because you never told it not to.&lt;/p&gt;

&lt;p&gt;This is the single most common mistake I see from developers using agentic coding tools. Not bad prompts. Not hallucinated dependencies. Just a completely unprotected main branch and an agent that's happy to commit wherever you point it.&lt;/p&gt;

&lt;p&gt;The fix takes five minutes. Let's do it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually happens when you don't protect main
&lt;/h2&gt;

&lt;p&gt;Here's the failure mode, step by step.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You ask Claude Code to refactor your auth module.&lt;/li&gt;
&lt;li&gt;It makes the changes, runs &lt;code&gt;git add .&lt;/code&gt;, writes a commit message, and pushes to &lt;code&gt;main&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If you've got CI/CD hooked up to main (and you probably do), that code is now deploying.&lt;/li&gt;
&lt;li&gt;The refactor has a bug. Of course it does. It's unreviewed code.&lt;/li&gt;
&lt;li&gt;Your users find out before you do.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This isn't a hypothetical. This is what happens when the shortest path between "write code" and "deploy to production" has zero gates on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The five-minute fix: branch protection rules
&lt;/h2&gt;

&lt;p&gt;Go to your repo on GitHub. Click &lt;strong&gt;Settings &amp;gt; Rules &amp;gt; Rulesets&lt;/strong&gt;, then click &lt;strong&gt;New branch ruleset&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Give it a name in "Ruleset Name".&lt;/p&gt;

&lt;p&gt;Click "Add target" -&amp;gt; "Include by pattern"&lt;/p&gt;

&lt;p&gt;In the "Branch name pattern" field, type &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now check these boxes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Require a pull request before merging.&lt;/strong&gt; This is the big one. It means nobody, not you, not your agent, not anyone, can push directly to main. All changes go through a PR. Period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Require status checks to pass before merging.&lt;/strong&gt; You might not have CI set up yet. That's fine. Once you do, add them here. They'll automatically become gates on every PR. Future you will thank present you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Block force pushes.&lt;/strong&gt; Force pushing to main should never happen. Not by you, not by your agent, not by anyone. This is non-negotiable.&lt;/p&gt;

&lt;p&gt;That covers the rules themselves. Two settings remain:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change "Enforcement status" to Active&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not allow bypassing the above settings.&lt;/strong&gt; This one matters more than people think. Without it, repo admins can skip all the rules. That includes you. The whole point of guardrails is that they work even when you're in a hurry and "just want to push this one thing real quick."&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create&lt;/strong&gt;. You're done.&lt;/p&gt;
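&lt;p&gt;If you'd rather script this than click through the UI, roughly the same ruleset can be created with GitHub's rulesets REST API. A hedged sketch, not a drop-in command: the &lt;code&gt;OWNER/REPO&lt;/code&gt; placeholder, ruleset name, and exact parameter set are assumptions, so check the rulesets endpoint docs against your setup:&lt;/p&gt;

```shell
# Assumes the gh CLI is installed and authenticated.
# OWNER/REPO and the field values below are illustrative.
gh api repos/OWNER/REPO/rulesets --method POST --input - <<'JSON'
{
  "name": "protect-main",
  "target": "branch",
  "enforcement": "active",
  "conditions": { "ref_name": { "include": ["refs/heads/main"], "exclude": [] } },
  "rules": [
    { "type": "pull_request",
      "parameters": {
        "required_approving_review_count": 0,
        "dismiss_stale_reviews_on_push": false,
        "require_code_owner_review": false,
        "require_last_push_approval": false,
        "required_review_thread_resolution": false
      } },
    { "type": "non_fast_forward" }
  ]
}
JSON
```

&lt;p&gt;&lt;code&gt;non_fast_forward&lt;/code&gt; is the API's name for blocking force pushes. Once you have CI, a &lt;code&gt;required_status_checks&lt;/code&gt; rule slots into the same &lt;code&gt;rules&lt;/code&gt; array.&lt;/p&gt;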

&lt;h2&gt;
  
  
  What this changes about your workflow
&lt;/h2&gt;

&lt;p&gt;Your agent can still write code. It can still commit. It just can't land those commits on main without going through a pull request.&lt;/p&gt;

&lt;p&gt;In practice, this means you (or your agent) work on a feature branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; feat/refactor-auth
&lt;span class="c"&gt;# ... do the work ...&lt;/span&gt;
git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"refactor auth module"&lt;/span&gt;
git push origin feat/refactor-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you open a PR, review the diff, and merge. That's it. One extra step that puts a human in the loop before code hits production.&lt;/p&gt;
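&lt;p&gt;With the GitHub CLI, that PR step is a couple of commands (assuming &lt;code&gt;gh&lt;/code&gt; is installed and authenticated, and using the branch name from the example above):&lt;/p&gt;

```shell
# Open a PR from the feature branch into main,
# prefilling title and body from the commits.
gh pr create --base main --head feat/refactor-auth --fill

# Read the diff, then merge once you're satisfied.
gh pr diff
gh pr merge --squash
```

&lt;p&gt;Whether you merge from the terminal or the web UI doesn't matter; what matters is that the diff got read before it landed.&lt;/p&gt;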

&lt;p&gt;If you're using Claude Code, you can tell it to work on a branch. It's good at following that instruction. And if it tries to push to main, GitHub will reject the push. The guardrail works even when the human forgets.&lt;/p&gt;

&lt;h2&gt;
  
  
  "But I'm a solo dev, I don't need PRs"
&lt;/h2&gt;

&lt;p&gt;Yes you do. Especially now.&lt;/p&gt;

&lt;p&gt;When it was just you writing code, pushing to main was a calculated risk. You wrote it, you understood it, you shipped it. Reckless, maybe, but at least you knew what you were deploying.&lt;/p&gt;

&lt;p&gt;That equation changes completely when an agent is generating code on your behalf. You didn't write it line by line. You prompted it. There's a real gap between "I asked for a thing" and "I understand every line of what was produced." The PR is where you close that gap.&lt;/p&gt;

&lt;p&gt;Even if the review is just you reading the diff for two minutes, that's two minutes of catching the bug that would have cost you two hours in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this doesn't cover
&lt;/h2&gt;

&lt;p&gt;Branch protection is one layer. Important, but not sufficient on its own. You still need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secret scanning&lt;/strong&gt; so your agent doesn't accidentally commit your &lt;code&gt;.env&lt;/code&gt; file (next article in the series).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI checks&lt;/strong&gt; so the code that lands on main actually passes linting and tests (coming soon).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A review process&lt;/strong&gt; that's calibrated for AI-generated code (also coming).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the foundation. Everything else stacks on top of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go do it now
&lt;/h2&gt;

&lt;p&gt;Seriously. Open your repos, the ones you're using with Claude Code or Copilot or Cursor or any agentic tool. Check if main is protected. If it's not, fix it. Five minutes.&lt;/p&gt;

&lt;p&gt;The best guardrail is the one that was already in place before something went wrong.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 1 of the Guardrails series on safe development with AI coding agents. Next up: Secrets, Agents, and .env Files.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
