<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: DHg</title>
    <description>The latest articles on Forem by DHg (@dhg).</description>
    <link>https://forem.com/dhg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1120755%2F8c0670cd-62cc-4f5a-9e1a-dfbe2b4774b7.png</url>
      <title>Forem: DHg</title>
      <link>https://forem.com/dhg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dhg"/>
    <language>en</language>
    <item>
      <title>AI Wrote My Code. I Couldn't FEEL It.</title>
      <dc:creator>DHg</dc:creator>
      <pubDate>Wed, 08 Apr 2026 13:51:28 +0000</pubDate>
      <link>https://forem.com/dhg/ai-wrote-my-code-i-couldnt-feel-it-4m51</link>
      <guid>https://forem.com/dhg/ai-wrote-my-code-i-couldnt-feel-it-4m51</guid>
      <description>&lt;p&gt;I've been using AI to write code for a while now. It's fast, it's getting more accurate, and the spec-first workflows are genuinely good. But I keep running into this feeling that I can't shake: when I let AI do too much, I feel disconnected to the codebase. I posted about this on r/ExperiencedDevs and the response confirmed something I was hoping for. It's not just me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Going fully agentic with AI coding made me faster but left me unable to understand my own codebase. I built a workflow where I write the skeleton and core logic by hand, let AI handle the deterministic stuff, and iterate on specs until they become the feature documentation. When I posted about this, hundreds of devs showed up with the same feeling, different coping mechanisms, and some sharp counterarguments that made me think harder. This post is everything I learned from that conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Disconnect
&lt;/h2&gt;

&lt;p&gt;When I let AI go fully agentic on a feature, even with a good spec, I feel disconnected from the codebase. The code is there. It works. But I don't really get how it works; I know the result without knowing what the code actually does. And that bothers me.&lt;/p&gt;

&lt;p&gt;That's a weird place to be as a software engineer. You shipped the feature, tests pass, PR got merged. But if someone asks you why you chose this table structure, or what happens when this edge case hits, you're reading your own code like it's someone else's.&lt;/p&gt;

&lt;p&gt;One commenter nailed it better than I could. They said there is none of your own intention in the codebase you are reading. If code were something you could just "prompt, then review," then reviewing until you feel ownership would work outside of generated code too. But anyone who has been forced to maintain a monolithic legacy codebase they didn't create will tell you that's not true.&lt;/p&gt;

&lt;p&gt;That hit me. Because what they're really saying is: reviewing is not the same as understanding. You can read every line and still not have the mental model. The intention gap is real.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Burn
&lt;/h2&gt;

&lt;p&gt;I got burned once. Wrote a short prompt, let AI implement a whole feature, went to test it, and the thing totally diverged from what I wanted. I couldn't even course-correct because I had no idea what it built. Had to scrap everything and start over.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Workflow I Landed On
&lt;/h2&gt;

&lt;p&gt;Since the burn, I've been doing spec-first, but with my hands in it. I generate a spec, then I argue with it, poke holes in it, point out flaws, architect it myself. Once it's workable I implement the skeleton by hand. The schema, the core logic, the architecture. Then I feed it back to improve the spec more. As I implement I find more flaws, keep iterating, and eventually the spec becomes the documentation of the feature itself.&lt;/p&gt;

&lt;p&gt;The deterministic functions, the boilerplate? AI can have those. But the core stuff, I need to touch it. I need to write it. Otherwise I don't feel ownership of it. When the shit hits the fan in production, I need to be able to jump right in and know where the logic goes and where it breaks.&lt;/p&gt;

&lt;p&gt;When working on the spec, I give the model extra context and ask it to surface two or three aspects of the problem, which I then have to really think through myself. To sum it up: AI is my assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Speed Tradeoff
&lt;/h2&gt;

&lt;p&gt;I know my approach is slower than the folks throwing unlimited tokens at the latest models like Claude 4.6 with its 1M-token context window. I accept that tradeoff. I feel all the f*cks that I did, and I want to keep feeling them.&lt;/p&gt;

&lt;p&gt;That matters. Maybe not on the sprint board this week. But it matters when production goes down and your team is looking at you.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Machine Spirit
&lt;/h2&gt;

&lt;p&gt;There's this concept in Warhammer 40k called the machine spirit. Tech priests pray to the machines, wave incense before them, maintain a spiritual connection to the technology. Before AI I never thought about "connection to the codebase" as a thing. I coded 100% by hand so the connection was just there. Now that I have the option to lose it, I can actually feel it. I need to feel the machine spirit of my code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Community Told Me
&lt;/h2&gt;

&lt;p&gt;I posted this on r/ExperiencedDevs. 324 upvotes, 154K views, and a comment section full of people who either feel the same thing or have strong opinions about why I shouldn't.&lt;/p&gt;

&lt;p&gt;Here's what stood out.&lt;/p&gt;

&lt;h3&gt;
  
  
  "You Throw the First One Away"
&lt;/h3&gt;

&lt;p&gt;One senior dev described a workflow that's almost the inverse of mine. They let agentic AI build the throwaway iterations on greenfield stuff, learn from the mistakes, then take a couple days to refactor into something reasonable. Their least favorite part of a project is when they don't know what they don't know, so they let AI eat that discovery phase.&lt;/p&gt;

&lt;p&gt;But in an established codebase, they flip. They use AI as a rubber duck: following existing patterns, debating current design, working through alternatives. They said AI is not super helpful when you have a complex and working system, but it keeps them engaged with the codebase and thinking through refactors.&lt;/p&gt;

&lt;p&gt;That's a useful frame. Greenfield vs. established codebase might need completely different levels of AI autonomy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building in Stages, Not Blobs
&lt;/h3&gt;

&lt;p&gt;One developer described exactly why full agentic output feels wrong. When they write code manually, they build in stages. Write the most basic thing. Test it. Add the next layer. Test it. Wire up the data. Test. Each cycle lets you grow the code from nothing to something functional.&lt;/p&gt;

&lt;p&gt;AI bypasses all of this. It just spews out a big blob, and sometimes it isn't even the blob you want. You're stuck debugging something that appears to have been written by a drunk intern who has no clue.&lt;/p&gt;

&lt;p&gt;Their solution: have AI write in stages too. Same iterative approach, but with AI sketching each layer. Some steps are simple enough to just code yourself. The point is to build incrementally, not to receive a finished product you don't understand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Review Fatigue is Real
&lt;/h3&gt;

&lt;p&gt;Multiple people brought up review fatigue. A lot of the time, it's more work to review code than to write it. When you write it yourself, you understand the intent. AI does silly things like adding a bunch of fallback code because it doesn't have enough info, so it implements cautiously. You tell it what not to do, but there's always something you forgot to include.&lt;/p&gt;

&lt;p&gt;I felt that. There is definitely some review fatigue where I have to read too much code.&lt;/p&gt;

&lt;p&gt;One dev pointed out there's no mandate at their workplace to go super-fast. Their team would rather they wrote high-quality code than a lot of code. If they need to spend a couple hours reviewing their own generated code, nobody holds it against them. They're already going twice as fast as they used to.&lt;/p&gt;

&lt;p&gt;That's a healthy take. The speed pressure is partly self-imposed.&lt;/p&gt;

&lt;p&gt;Another dev called this "busy waiting," borrowing from scheduling theory: the CPU is technically running instructions but producing zero useful output. With AI, you're generating code, accepting code, reviewing code, debugging code. Lots of motion. But comprehension is built through friction, not throughput. The commit history shows progress. Your mental model doesn't.&lt;/p&gt;

&lt;p&gt;Productivity is measured in problems you understand, not tokens consumed.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Flip-Flop Problem
&lt;/h3&gt;

&lt;p&gt;Someone raised a point I've been thinking about: when code is from a colleague, I already adjust my expectations and read it carefully. If something is unclear, I can poke my colleague and they'll explain why they did it that way.&lt;/p&gt;

&lt;p&gt;With AI, you ask why, and it says "Absolutely, that is my mistake..." It flip-flops. You point out the error in its backtrack and it backtracks again. A human sometimes does this too, but at least both parties learn something in the process.&lt;/p&gt;

&lt;p&gt;That matters more than it sounds. The dialogue you have with a human about code intent is fundamentally different from the dialogue you have with AI. A colleague can say "I copied from Stack Overflow and I don't fully understand it either." That honesty is more useful than AI's confident backtracking.&lt;/p&gt;

&lt;h3&gt;
  
  
  One Manual Task Per Session
&lt;/h3&gt;

&lt;p&gt;One dev shared a practical rule: keep one task per session that you do fully manually, usually the thing closest to actual system behavior. It keeps the mental model alive. If they go fully hands-off for a week, they lose track of what the code actually does vs. what they think it does.&lt;/p&gt;

&lt;p&gt;Simple. I like it.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Navigator Mode Approach
&lt;/h3&gt;

&lt;p&gt;Another dev wrote a Claude skill called Navigator mode. It's instructions for AI to act like a pair programmer sitting next to you. You look at the ticket together, it asks clarifying questions, it tells you what to do, and you drive. It cuts throughput way down, but everything that goes into the editor has to pass through your eyeballs and out your fingertips.&lt;/p&gt;

&lt;p&gt;That's an interesting middle ground. AI as navigator, you as driver. Not the other way around.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Sharpest Counterpoint
&lt;/h2&gt;

&lt;p&gt;The comment that made me think the hardest came from a retired engineer. They said: back when third-party libraries started becoming widely available, a lot of devs were on the NIH (Not Invented Here) struggle bus. "How can I trust code written by people I don't know to do these important foundational things?" The concerns weren't unfounded. Some libraries were bad, all contained bugs. But in the long run, productivity gains won out, libraries got better, and now none of us think twice about it.&lt;/p&gt;

&lt;p&gt;You don't feel ownership of those third-party libraries, nor should you. You feel ownership for what you build with them. They think AI code will eventually fall out the same way.&lt;/p&gt;

&lt;p&gt;That's a strong argument and I genuinely appreciate them taking the time to write it out. I think they might be right about the long arc. But right now, today, with AI at this level, I still need my hands in it. Maybe that changes. Maybe I'm the dev in 2003 insisting on writing my own string parsing library. I'll accept that possibility.&lt;/p&gt;

&lt;p&gt;I asked them if they're having fun in retirement. They said being retired is the bomb, and they're pretty sure they were born to sit around playing games and reading books in the sun. May we all get there someday.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Good Project Isn't One You Know by Heart
&lt;/h2&gt;

&lt;p&gt;The last piece of wisdom that stuck with me came from a dev who said: a good project isn't the one that you know by heart so you can change it with ease. A good project is one where a stranger can come into the codebase and find the relevant piece of code with ease. The stranger doesn't need to know the entire repo in order to make their fix.&lt;/p&gt;

&lt;p&gt;That reframes the whole question. Maybe the goal isn't "I must understand every line." Maybe the goal is "the codebase must be structured so that understanding is possible." If AI-generated code is clean, well-documented, and navigable, then maybe my feeling of disconnection is a signal to improve the code's structure, not to write more of it by hand.&lt;/p&gt;

&lt;p&gt;I'm still chewing on that. It's hard, and it points in a different direction than my instincts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Taught Me About Caring
&lt;/h2&gt;

&lt;p&gt;Before AI, I honestly didn't think about the connection to the codebase as something important. I coded 100% by hand, so the connection was just there by default. I never had to think about it because I never had the option to lose it.&lt;/p&gt;

&lt;p&gt;Now I can feel that feeling. The difference between code I understand and code that just exists in my repo. AI didn't just change how I write software. It made me realize I actually care about the craft in a way I couldn't articulate before.&lt;/p&gt;

&lt;p&gt;I feel all the f*cks that I did. And I want to keep feeling them.&lt;/p&gt;

&lt;p&gt;This is solvable. I'm trying to solve it. My workflow is one answer, not the answer. The community showed me there are a lot of different ways to keep your hands in it while still getting the speed benefits. The retired dev showed me this might all look different in ten years. The 40k fans reminded me that humans have always had a complicated relationship with the machines they depend on.&lt;/p&gt;

&lt;p&gt;The only wrong move is to stop caring about what your code does.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Hung, a fullstack developer building tools to help bring purpose to your life. You can follow my journey at &lt;a href="https://dhung.dev" rel="noopener noreferrer"&gt;dhung.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agentic</category>
      <category>dx</category>
      <category>engineering</category>
    </item>
    <item>
      <title>1GB VM. How do I pack whole production stack to it.</title>
      <dc:creator>DHg</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:47:56 +0000</pubDate>
      <link>https://forem.com/dhg/1gb-vm-how-do-i-pack-whole-production-stack-to-it-3ha5</link>
      <guid>https://forem.com/dhg/1gb-vm-how-do-i-pack-whole-production-stack-to-it-3ha5</guid>
      <description>&lt;p&gt;I'm running a full production backend on GCP's free-tier e2-micro: 2 vCPUs, 1GB RAM. Fastify, PostgreSQL 18, Redis, nginx, systemd, TLS, the whole thing. It works, it's fast, and it costs me nothing while I validate the product.&lt;/p&gt;

&lt;h2&gt;
  
  
  TLDR
&lt;/h2&gt;

&lt;p&gt;A 2-vCPU, 1GB VM is way bigger than you think. With some tuning you can fit a real stack on it, handle hundreds of users, and not pay a dime until you have revenue. Define the playbook once, and you can scale up or migrate to any provider by replaying the same sequence.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm running
&lt;/h2&gt;

&lt;p&gt;The app is Inner Anchor. It's a productivity app: daily management, task management, focus mode with a floating DPIP (Document Picture-in-Picture) button that reminds you what you're doing, and a purpose bar to remind you why you're doing it. The backend is Fastify with PostgreSQL and a small Redis. Right now it's pre-revenue with a few dozen beta testers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a raw VM instead of Railway or Render
&lt;/h2&gt;

&lt;p&gt;I want to deploy on the server the same way I would scale the server. I want to define the process once and reuse it whenever I migrate. If I need more power, I just upsize the VM and the whole stack stays. If I need to switch providers, I bring the script to the new provider, run the exact same sequence, and replicate the environment exactly.&lt;/p&gt;

&lt;p&gt;Define once, use forever. That's the play.&lt;/p&gt;

&lt;p&gt;A managed database on DigitalOcean, which is the cheapest I've found, is minimum $15/month. I can host this entire stack for free because I don't have many users yet. I'm not going to burn money on infrastructure before I find product-market fit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Squeezing 1GB
&lt;/h2&gt;

&lt;p&gt;At my company I usually deploy on 2GB or 4GB VMs or more. I have never deployed anything serious on 1GB of RAM, so I had to squeeze out every bit of performance here.&lt;/p&gt;

&lt;h3&gt;
  
  
  ZRAM
&lt;/h3&gt;

&lt;p&gt;Since there's only 1GB of RAM, I crank ZRAM up to 100%: one gig of RAM plus one gig of compressed swap, which works out to roughly 1.8GB of usable memory before the OOM killer steps in. Swappiness goes up to 180 so the kernel actually uses all that compressed swap.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;ALGO&lt;/span&gt;=&lt;span class="n"&gt;lz4&lt;/span&gt;
&lt;span class="n"&gt;PERCENT&lt;/span&gt;=&lt;span class="m"&gt;100&lt;/span&gt;
&lt;span class="n"&gt;PRIORITY&lt;/span&gt;=&lt;span class="m"&gt;100&lt;/span&gt;

&lt;span class="n"&gt;vm&lt;/span&gt;.&lt;span class="n"&gt;swappiness&lt;/span&gt;=&lt;span class="m"&gt;180&lt;/span&gt;
&lt;span class="n"&gt;vm&lt;/span&gt;.&lt;span class="n"&gt;vfs_cache_pressure&lt;/span&gt;=&lt;span class="m"&gt;500&lt;/span&gt;
&lt;span class="n"&gt;vm&lt;/span&gt;.&lt;span class="n"&gt;dirty_ratio&lt;/span&gt;=&lt;span class="m"&gt;5&lt;/span&gt;
&lt;span class="n"&gt;vm&lt;/span&gt;.&lt;span class="n"&gt;dirty_background_ratio&lt;/span&gt;=&lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  PostgreSQL and Redis on the same box
&lt;/h3&gt;

&lt;p&gt;I install both directly. No Docker. Docker alone has 100-200MB of overhead. On a 1GB box, that's a lot. I don't even use PM2 to manage the Node process. I just spin up systemd. Prune down everything that's unnecessary and keep the core.&lt;/p&gt;

&lt;p&gt;PostgreSQL is tuned for the constraint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;shared_buffers&lt;/span&gt; = &lt;span class="m"&gt;128&lt;/span&gt;&lt;span class="n"&gt;MB&lt;/span&gt;
&lt;span class="n"&gt;work_mem&lt;/span&gt; = &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="n"&gt;MB&lt;/span&gt;
&lt;span class="n"&gt;maintenance_work_mem&lt;/span&gt; = &lt;span class="m"&gt;32&lt;/span&gt;&lt;span class="n"&gt;MB&lt;/span&gt;
&lt;span class="n"&gt;effective_cache_size&lt;/span&gt; = &lt;span class="m"&gt;256&lt;/span&gt;&lt;span class="n"&gt;MB&lt;/span&gt;
&lt;span class="n"&gt;max_connections&lt;/span&gt; = &lt;span class="m"&gt;20&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Redis gets capped at 64MB with &lt;code&gt;allkeys-lru&lt;/code&gt; eviction. Both services get OOM score adjustments so the kernel kills the app process before it touches the database.&lt;/p&gt;
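&lt;p&gt;The Redis cap above is two lines of config; a minimal sketch, assuming the default Debian config path:&lt;/p&gt;

```conf
# /etc/redis/redis.conf -- cap memory; evict least-recently-used keys first
maxmemory 64mb
maxmemory-policy allkeys-lru
```

&lt;p&gt;With &lt;code&gt;allkeys-lru&lt;/code&gt;, Redis silently evicts old keys once it hits the cap instead of returning errors on writes, which is exactly what you want for a cache on a box this small.&lt;/p&gt;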

&lt;p&gt;Frankly, I had never tweaked any database params in my professional work, and I found it surprisingly easy. Installing a raw database and tuning it comes down to a few install commands and some config edits, and it works. I did not expect that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why not split them out?
&lt;/h3&gt;

&lt;p&gt;Because I want to scale vertically the same way. Just put in more RAM and more vCPU and it should magically scale. Eventually I'll move to a managed database, but until we have revenue, no.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it actually looks like in production
&lt;/h2&gt;

&lt;p&gt;I just pulled up htop. Memory is at 361MB used out of 965MB. Swap is at 147MB out of 965MB. CPU hovers at about 1%.&lt;/p&gt;

&lt;p&gt;When one user hits the API, both cores spike to about 6% and then drop back to 1%. Memory doesn't move much. For 10+ users spread across the day, this holds up fine.&lt;/p&gt;

&lt;p&gt;Response times from the server journalctl (processing time only, not counting latency between the US instance and Vietnam client):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average GET: ~4ms&lt;/li&gt;
&lt;li&gt;POST (update daily purpose): ~21ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;21 milliseconds server-side for a POST is pretty damn good if you ask me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security without overcomplicating it
&lt;/h2&gt;

&lt;p&gt;I block all ports except what's needed and moved SSH off port 22. fail2ban watches both SSH and nginx. There are firewall rules at both the GCP level and in UFW. Root login is disabled, password auth is disabled, and the deploy user is key-only.&lt;/p&gt;
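&lt;p&gt;The SSH hardening boils down to a few &lt;code&gt;sshd_config&lt;/code&gt; lines; a sketch (the port number here is illustrative, not the one I actually use):&lt;/p&gt;

```conf
# /etc/ssh/sshd_config -- hardening described above; port 2222 is an example
Port 2222
PermitRootLogin no
PasswordAuthentication no
AllowUsers deploy
```

&lt;p&gt;Restart sshd after editing, and keep your current session open until you've confirmed a fresh key-only login works on the new port.&lt;/p&gt;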

&lt;p&gt;I listed Tailscale as a future addition but haven't implemented it. The current setup is good enough. I don't have a reason to add it yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deploy script is dumb on purpose
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git pull origin develop
pnpm i
pnpm run migrate:deploy
pnpm run prisma:generate
pnpm run build
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Docker, no blue-green, no rollback. This is intentional. At this early stage I have to move fast, and the beta users are very forgiving. I'll add proper deployment when I have about 100 users or more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The full playbook
&lt;/h2&gt;

&lt;p&gt;This is the sequence, in order. Each step is idempotent and you can replay it on any Debian-based VM from any provider.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update base system, install essentials (openssh, curl, git, htop, ufw)&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;deploy&lt;/code&gt; user with key-only SSH, disable root login&lt;/li&gt;
&lt;li&gt;Set up ZRAM (lz4, 100% of RAM, swappiness 180)&lt;/li&gt;
&lt;li&gt;Install PostgreSQL 18 from the official repo, create database and user, tune for 1GB&lt;/li&gt;
&lt;li&gt;Install Redis, cap at 64MB with LRU eviction&lt;/li&gt;
&lt;li&gt;Install nginx as a reverse proxy to your app port&lt;/li&gt;
&lt;li&gt;Install fail2ban for SSH and nginx&lt;/li&gt;
&lt;li&gt;Move SSH to a non-default port&lt;/li&gt;
&lt;li&gt;Set OOM score adjustments (nginx &amp;gt; postgres &amp;gt; redis &amp;gt; app)&lt;/li&gt;
&lt;li&gt;TLS with certbot&lt;/li&gt;
&lt;li&gt;UFW: deny all incoming, allow your SSH port, 80, 443&lt;/li&gt;
&lt;li&gt;Set up deploy key for GitHub&lt;/li&gt;
&lt;li&gt;Install nvm, Node, pnpm&lt;/li&gt;
&lt;li&gt;Create systemd service with a 300MB memory guard&lt;/li&gt;
&lt;li&gt;Write the deploy script&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. The whole thing takes maybe 30-45 minutes if you're following along. And the next time you need to do it, you have the playbook.&lt;/p&gt;
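&lt;p&gt;Step 14 is the one piece the playbook doesn't spell out; a sketch of the unit file, with illustrative user and paths (point &lt;code&gt;ExecStart&lt;/code&gt; at wherever your build lands):&lt;/p&gt;

```conf
# /etc/systemd/system/app.service -- sketch; User and paths are examples
[Unit]
Description=Inner Anchor API
After=network.target postgresql.service redis-server.service

[Service]
User=deploy
WorkingDirectory=/home/deploy/app
ExecStart=/usr/bin/node dist/server.js
Restart=on-failure
MemoryMax=300M
OOMScoreAdjust=500

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;&lt;code&gt;MemoryMax=300M&lt;/code&gt; is the memory guard: systemd's cgroup limit kills the app before system-wide pressure reaches PostgreSQL, and the positive &lt;code&gt;OOMScoreAdjust&lt;/code&gt; makes the kernel prefer the app as a victim, matching the ordering in step 9.&lt;/p&gt;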

&lt;h2&gt;
  
  
  When to leave
&lt;/h2&gt;

&lt;p&gt;This setup can't handle everything. If you need horizontal scaling, background job workers eating memory, or you're getting serious traffic, you'll outgrow it. But for a solopreneur validating a product with no revenue, this is the right level of infrastructure. You're not paying for capacity you don't need, and when you do need more, you just upsize the machine and everything stays exactly where it is.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Hung, a fullstack developer building tools to help bring purpose to your life. You can follow my journey at &lt;a href="https://dhung.dev" rel="noopener noreferrer"&gt;dhung.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fastify</category>
      <category>postgres</category>
      <category>devops</category>
      <category>solopreneur</category>
    </item>
    <item>
      <title>Axios got compromised. They attacked the human, not code.</title>
      <dc:creator>DHg</dc:creator>
      <pubDate>Sat, 04 Apr 2026 09:30:19 +0000</pubDate>
      <link>https://forem.com/dhg/axios-got-compromised-they-attacked-the-human-not-code-35e7</link>
      <guid>https://forem.com/dhg/axios-got-compromised-they-attacked-the-human-not-code-35e7</guid>
      <description>&lt;p&gt;On March 31, 2026, two malicious versions of Axios were published to the npm registry through a compromised account. Both versions injected a dependency called &lt;code&gt;plain-crypto-js@4.2.1&lt;/code&gt; that installed a remote access trojan on macOS, Windows, and Linux. The malicious versions were live for about three hours before being removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; The Axios npm compromise wasn't a code vulnerability. Attackers social-engineered the lead maintainer with a cloned company, a convincing Slack workspace, and a fake Microsoft Teams meeting that tricked him into installing a RAT. Open source is being attacked through humans first, not code.&lt;/p&gt;

&lt;h2&gt;
  
  
  I was spooked
&lt;/h2&gt;

&lt;p&gt;I was spooked a little bit because that night I had installed &lt;code&gt;1.14.0&lt;/code&gt;, not &lt;code&gt;1.14.1&lt;/code&gt;. God damn it, I was very anxious and went straight to checking the version pinned in Inner Anchor's lockfile.&lt;/p&gt;

&lt;p&gt;If you're not sure, check yours:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"axios@(1&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;14&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;1|0&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;30&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;4)|plain-crypto-js"&lt;/span&gt; package-lock.json yarn.lock 2&amp;gt;/dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If anything comes back, treat that machine as compromised. Downgrade to &lt;code&gt;axios@1.14.0&lt;/code&gt;, delete &lt;code&gt;node_modules/plain-crypto-js/&lt;/code&gt;, rotate every fucking secret, and check your network logs for connections to &lt;code&gt;sfrclak[.]com&lt;/code&gt; or &lt;code&gt;142.11.206.73&lt;/code&gt; on port 8000. If this happened on a CI runner, rotate any secrets that were injected during the affected build.&lt;/p&gt;
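&lt;p&gt;If you want to stay pinned while the dust settles, npm (8.3+) supports an &lt;code&gt;overrides&lt;/code&gt; field in &lt;code&gt;package.json&lt;/code&gt; that forces one version across the whole dependency tree (pnpm has the equivalent &lt;code&gt;pnpm.overrides&lt;/code&gt;); a sketch:&lt;/p&gt;

```json
{
  "overrides": {
    "axios": "1.14.0"
  }
}
```

&lt;p&gt;Merge that into your existing &lt;code&gt;package.json&lt;/code&gt;, reinstall to regenerate the lockfile, then re-run the grep above to confirm nothing pulls the compromised versions transitively.&lt;/p&gt;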

&lt;h2&gt;
  
  
  What happened
&lt;/h2&gt;

&lt;p&gt;The attacker gained access to the lead maintainer's PC through a targeted social engineering campaign and RAT malware. This gave them access to the npm account credentials, which they used to publish the malicious versions.&lt;/p&gt;

&lt;p&gt;In my opinion, open source is being attacked through humans first. Lately, and especially with AI helping, attackers are going after the maintainer more than the code itself. That's what I see.&lt;/p&gt;

&lt;h2&gt;
  
  
  The timeline
&lt;/h2&gt;

&lt;p&gt;Jason Saayman, the lead maintainer, said he doesn't have the exact timeline for when the initial compromise occurred, but here's the sequence for the package itself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;About two weeks before March 31: social engineering campaign initiated against the lead maintainer&lt;/li&gt;
&lt;li&gt;March 30, 05:57 UTC: &lt;code&gt;plain-crypto-js@4.2.0&lt;/code&gt; published to npm&lt;/li&gt;
&lt;li&gt;March 31, 00:21 UTC: &lt;code&gt;axios@1.14.1&lt;/code&gt; published with the infected &lt;code&gt;plain-crypto-js@4.2.1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;March 31, around 01:00 UTC: &lt;code&gt;axios@0.30.4&lt;/code&gt; published with the same payload&lt;/li&gt;
&lt;li&gt;March 31, around 01:00 UTC: first external detections. Community members file issues reporting the compromise. The attacker deletes those issues using the compromised account.&lt;/li&gt;
&lt;li&gt;March 31, 01:38 UTC: Axios collaborator DigitalBrainJS opened a PR to deprecate the compromised versions, flagged the deleted issues to the community, and contacted npm directly&lt;/li&gt;
&lt;li&gt;March 31, 03:15 UTC: malicious versions removed from npm&lt;/li&gt;
&lt;li&gt;March 31, 03:29 UTC: &lt;code&gt;plain-crypto-js&lt;/code&gt; removed from npm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole thing ran from about 00:21 UTC to about 03:15 UTC. Call it three hours. That is fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  How they got the lead maintainer
&lt;/h2&gt;

&lt;p&gt;This is the part that matters most. Jason Saayman confirmed how the attack worked in the post-mortem thread. The attack vector mimics what Google has documented in their threat intelligence report on UNC1069 targeting cryptocurrency and AI through social engineering. But they tailored it specifically to him.&lt;/p&gt;

&lt;p&gt;They reached out masquerading as the founder of a company, having cloned both the founder's likeness and the company itself. Then they invited him to a real Slack workspace, branded with the company's corporate identity and named in a plausible manner. The Slack was thought out very well: channels where people shared LinkedIn posts (which, he presumes, linked to the real company's account), plus what he presumes were fake profiles of the company's team as well as a number of other open source maintainers. It was super convincing.&lt;/p&gt;

&lt;p&gt;They scheduled a Microsoft Teams meeting with what appeared to be a group of people involved. Then the meeting claimed something on his system was out of date: the Teams web UI popped up saying it was missing a component he had to install. He installed it because he presumed it was something to do with Teams. That was the RAT.&lt;/p&gt;

&lt;p&gt;In his own words: "everything was extremely well co-ordinated looked legit and was done in a professional manner."&lt;/p&gt;

&lt;h2&gt;
  
  
  What's changing
&lt;/h2&gt;

&lt;p&gt;Resolution meant a complete wipe of all the lead maintainer's devices and a reset of every credential across all accounts, irrespective of platform, both personal and otherwise.&lt;/p&gt;

&lt;p&gt;Going forward: immutable release setup, proper adoption of OIDC flow for publishing, improvement of overall security posture, and updating all GitHub actions to adopt best practices.&lt;/p&gt;

&lt;p&gt;The key lesson from the post-mortem: publishing directly from a personal account was a risk that could have been avoided. The OIDC flow and immutable release setup should have been in place before this happened. There was no automated way to detect an unauthorized publish. Detection depended entirely on the community noticing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open source maintainers are the target now
&lt;/h2&gt;

&lt;p&gt;This is similar to previous attacks targeting open source maintainers: they exploit the human, the crucial part of the system. We don't hear about code-level exploits nearly as much, but the social engineering is really, really dangerous. Maintainers of high-impact packages are active targets for sophisticated campaigns. Hypervigilance is needed both on the registry and in a personal capacity.&lt;/p&gt;

&lt;p&gt;Shoutout to DigitalBrainJS for acting fast when the compromised account had higher permissions than his own, and for getting npm to take action. The community response was fast. The attack was faster. That's the problem.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Hung, a fullstack developer building tools to help bring purpose to your life. You can follow my journey at &lt;a href="https://dhung.dev" rel="noopener noreferrer"&gt;dhung.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>npm</category>
      <category>security</category>
      <category>axios</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Migrated to a Monorepo in 2 Days. Things Broke.</title>
      <dc:creator>DHg</dc:creator>
      <pubDate>Thu, 02 Apr 2026 16:58:49 +0000</pubDate>
      <link>https://forem.com/dhg/i-migrated-to-a-monorepo-in-2-days-things-broke-1gif</link>
      <guid>https://forem.com/dhg/i-migrated-to-a-monorepo-in-2-days-things-broke-1gif</guid>
      <description>&lt;p&gt;I migrated my side project, Inner Anchor, from two separate repos to a monorepo. It took two days, things broke in production, and my users saw a white screen. But I'd do it again because I don't have to worry about accidentally causing a bug from mismatched contracts anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Multi-repo meant I had to define the API contract in two places. I missed it once and shipped a bug to production. So I migrated to a monorepo with Turborepo and pnpm. It was more work than I expected, things broke, but now I define a contract once and it works everywhere. That alone was worth it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Some context on the project
&lt;/h2&gt;

&lt;p&gt;Inner Anchor is a daily management app where people align their tasks with their purpose. It helps you start your day with the right task and tie each task to a purpose. It has task management, focus mode, daily purpose, rollover, settings, user management. I started listing the features and realized, oh, that's a lot. But it's still relatively small. I was managing two repos: one for the API, one for the web app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I did this
&lt;/h2&gt;

&lt;p&gt;Let me back up a little. In my company, we have a tradition of multi-repo. We create multiple repos and develop them separately, navigating through VS Code workspaces. At most I manage four workspaces at the same time. And I felt the shortcomings: I have to copy the API contract to two places. I have to redefine it. Sometimes I miss it, and that causes a bug in production.&lt;/p&gt;

&lt;p&gt;I did cause a bug when I mismatched the contract between the front-end and the back-end. That was the breaking point. I wanted to try a new approach in my personal project, so I tried monorepo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Picking the tools
&lt;/h2&gt;

&lt;p&gt;I did a little research on Turborepo and NX, and I asked Claude what the best tool was. It said Turborepo is simple with pnpm. So I just went with it, because picking the tooling is not the important part. Grab one of the two most popular options and figure out what you need later. That's my philosophy.&lt;/p&gt;

&lt;p&gt;And honestly, the tooling was easier than expected. It took me a good few minutes to get familiar with turbo.json. I had to Google how to pull up the terminal UI for the monorepo, and that's it. The tool is very pleasant.&lt;/p&gt;
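&lt;p&gt;For reference, a minimal &lt;code&gt;turbo.json&lt;/code&gt; sketch along those lines (Turborepo 2.x uses &lt;code&gt;tasks&lt;/code&gt;, 1.x called it &lt;code&gt;pipeline&lt;/code&gt;; the task names here are illustrative, not my exact config):&lt;/p&gt;

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```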

&lt;h2&gt;
  
  
  The actual migration process
&lt;/h2&gt;

&lt;p&gt;Here's the real sequence:&lt;/p&gt;

&lt;p&gt;First, enable pnpm. Create an &lt;code&gt;apps&lt;/code&gt; folder. Move the API folder inside the &lt;code&gt;apps&lt;/code&gt; folder, then move all the source code of the API into it. That's one.&lt;/p&gt;

&lt;p&gt;Second, copy the whole git tree of the front-end to a &lt;code&gt;web&lt;/code&gt; folder inside &lt;code&gt;apps&lt;/code&gt;. That's basically the restructure done.&lt;/p&gt;
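&lt;p&gt;The wiring behind that restructure is one small file. A sketch of the &lt;code&gt;pnpm-workspace.yaml&lt;/code&gt;, assuming the &lt;code&gt;apps&lt;/code&gt; layout above:&lt;/p&gt;

```yaml
# pnpm-workspace.yaml at the repo root: tells pnpm where packages live
packages:
  - "apps/*"
  - "packages/*"   # room for shared packages, like a contract package
```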

&lt;p&gt;Then the tweaking: get both of them to build and dev first. Copy the env files to each. Build them to make sure it works.&lt;/p&gt;

&lt;p&gt;After that, set up lint-staged at the root with a single git hook, and give each app its own lint-staged config so each one lints only its own staged files. That follows the best practice from the official lint-staged docs.&lt;/p&gt;

&lt;p&gt;Clean up, prune down, and it should be done.&lt;/p&gt;

&lt;p&gt;It should be.&lt;/p&gt;

&lt;h2&gt;
  
  
  What broke
&lt;/h2&gt;

&lt;p&gt;After the restructure was "done," the real work started.&lt;/p&gt;

&lt;p&gt;First, I had to switch Cloudflare from the old front-end repo to the new monorepo. Second, I had to SSH into the VM to manually pull the new code, manually build and run it to see if it broke or not. Third, I had to rewrite the GitHub workflow and GitHub Actions to point to the new location of the deploy script. And fix the deploy script too.&lt;/p&gt;

&lt;p&gt;There's no way to do a zero-downtime migration on this, I think.&lt;/p&gt;

&lt;p&gt;And then the second thing that broke: after I migrated, I went to the web app, and it was white. Something didn't build. I had to go back to the repo, spin it up, build, and preview it to find the source of the bug. It was a DayJS &lt;code&gt;extend&lt;/code&gt; call where the import wasn't right. I had to add &lt;code&gt;.js&lt;/code&gt; to the file path of the duration plugin to make it work again.&lt;/p&gt;

&lt;p&gt;That was real panic, because the project has users now. If someone was using it right then and it just went poof, white screen. I'm sorry. If you experienced this, sorry.&lt;/p&gt;

&lt;p&gt;I expected the migration to be smooth, just redirect and restructure files. I was wrong. There were multiple points of pain: restructuring, redirecting Cloudflare to the new repo, rewriting the CI to make sure it worked again. That was too much work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it looks like now
&lt;/h2&gt;

&lt;p&gt;Now I have a shared package called &lt;code&gt;contract&lt;/code&gt;. It's a shared contract between the front-end and back-end. I use Zod schema for schema parsing for both the UI forms and the back-end, so they share the same validation. That's it. We don't over-engineer it. It's just a simple contract package.&lt;/p&gt;

&lt;p&gt;Right now I'm implementing the payment workflow and I can see the difference. It's pleasant: I define things once, they work in both apps, and I don't have to worry about breaking something. "I don't have to worry about breaking something" is the moment that made the monorepo worth it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tradeoffs
&lt;/h2&gt;

&lt;p&gt;The sidebar of the folder structure is longer now. I have a hard time scrolling. And I have to filter the API and the web in my terminal commands now, so that makes my commands longer. Mild inconvenience.&lt;/p&gt;

&lt;p&gt;I haven't lived with the monorepo long enough to feel the real downsides. I could pull up Google and list 10 downsides of monorepos, but I won't. I'm not experiencing them right now. Maybe I'll revise this post in the future, but not now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;p&gt;This is for solo devs with side projects. Small teams can do this too, and honestly small teams need this most because the source of bugs is often the mismatched contract. The bigger the team, the more you need the monorepo.&lt;/p&gt;

&lt;h2&gt;
  
  
  If you're on the fence
&lt;/h2&gt;

&lt;p&gt;It will be a lot of work. But I'm happily enjoying it now because I don't have to worry about accidentally causing a bug. I don't have to worry about that anymore.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Hung, a fullstack developer building tools to help bring purpose to your life. You can follow my journey at &lt;a href="https://dhung.dev" rel="noopener noreferrer"&gt;dhung.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>monorepo</category>
      <category>turborepo</category>
      <category>dx</category>
    </item>
    <item>
      <title>Why Budget Apps Don't Work (And What to Do Instead)</title>
      <dc:creator>DHg</dc:creator>
      <pubDate>Sat, 14 Mar 2026 10:08:50 +0000</pubDate>
      <link>https://forem.com/dhg/stop-building-budget-apps-expense-tracking-isnt-the-fix-18k6</link>
      <guid>https://forem.com/dhg/stop-building-budget-apps-expense-tracking-isnt-the-fix-18k6</guid>
      <description>&lt;p&gt;Most budget apps are the same product wearing different clothes.&lt;/p&gt;

&lt;p&gt;You download the app. You log every coffee. You categorize every snack. At the end of the month, the app shows you a chart that says: &lt;em&gt;"You spent money."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Then next month, you stop logging.&lt;/p&gt;

&lt;p&gt;If you keep trying expense trackers because you want to save more but can't sustain daily tracking, this post is for you. Here's the core idea:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If a system requires daily discipline to work, it fails precisely for the people who need it most.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of tracking everything, I'll walk through a simpler setup: a fixed monthly spending allowance plus automatic saving. Your budget works even when you're tired.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Tracking expenses is measurement, not control. It reports after the money is already gone.&lt;/li&gt;
&lt;li&gt;The real bottleneck is consistent human effort, not app features.&lt;/li&gt;
&lt;li&gt;Use tracking only as a short diagnostic sprint (2 to 4 weeks), not a lifestyle.&lt;/li&gt;
&lt;li&gt;A better default: "pay yourself first" plus one spending allowance account.&lt;/li&gt;
&lt;li&gt;You check one number ("how much allowance is left?") instead of 120 transactions.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why expense tracking doesn't work for most people
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The pattern everyone recognizes
&lt;/h3&gt;

&lt;p&gt;You download a budget app. You track for a few days. Maybe two weeks if motivation is high. Then life happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A brutal week at work&lt;/li&gt;
&lt;li&gt;A trip you didn't plan for&lt;/li&gt;
&lt;li&gt;A few late nights&lt;/li&gt;
&lt;li&gt;"I'll catch up later"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you have a backlog of unlogged transactions and the whole system collapses.&lt;/p&gt;

&lt;h3&gt;
  
  
  The discipline paradox
&lt;/h3&gt;

&lt;p&gt;People adopt expense tracking because they struggle with spending control.&lt;/p&gt;

&lt;p&gt;But expense tracking only works if you &lt;em&gt;already&lt;/em&gt; have strong discipline, because it demands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Constant attention&lt;/li&gt;
&lt;li&gt;Repeated daily decisions&lt;/li&gt;
&lt;li&gt;Friction on every purchase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the exact weakness that makes you want tracking is the thing that breaks tracking.&lt;/p&gt;

&lt;p&gt;That's the paradox, and no amount of UI polish fixes it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Expense tracking is measurement, not control
&lt;/h2&gt;

&lt;p&gt;Tracking answers one question: &lt;em&gt;"What did I spend?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But by the time you need that answer, it's already too late. The money is gone.&lt;/p&gt;

&lt;p&gt;If your problem is overspending, you don't need a better ledger. You need &lt;strong&gt;constraints that act before the spend happens&lt;/strong&gt;, not analysis after.&lt;/p&gt;

&lt;p&gt;Think of it this way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tracking&lt;/strong&gt; is a speedometer. It tells you how fast you're going.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A real budget&lt;/strong&gt; is a speed limiter. It prevents you from going too fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most budget apps sell you a pretty speedometer and call it financial control.&lt;/p&gt;




&lt;h2&gt;
  
  
  The hidden cost no one talks about: category labor
&lt;/h2&gt;

&lt;p&gt;Expense trackers quietly turn you into an unpaid accountant.&lt;/p&gt;

&lt;p&gt;Even with auto-import from your bank, you still end up doing real work: categorizing ambiguous transactions, splitting shared purchases, renaming merchants, handling cash, reconciling errors.&lt;/p&gt;

&lt;p&gt;This work doesn't create value. It creates the &lt;em&gt;feeling&lt;/em&gt; of being financially responsible.&lt;/p&gt;

&lt;p&gt;And the moment you stop doing that work, the entire system dies. Your data gets stale, the categories drift, and you stop opening the app.&lt;/p&gt;

&lt;p&gt;So the question is: &lt;strong&gt;why build a financial system that only works when you do daily paperwork?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why detailed spending data is a trap
&lt;/h2&gt;

&lt;p&gt;A lot of tracking is fake progress.&lt;/p&gt;

&lt;p&gt;Knowing you spent $18 on snacks versus $22 doesn't change your financial outcome. What actually moves the needle is usually one of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total discretionary spending for the month&lt;/li&gt;
&lt;li&gt;Big recurring costs (rent, subscriptions, debt payments)&lt;/li&gt;
&lt;li&gt;Your automatic saving rate&lt;/li&gt;
&lt;li&gt;Income versus lifestyle creep&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most people don't have a data problem. They have a &lt;strong&gt;system design problem&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Adding more granularity to your spending data is like adding more decimal places to a wrong answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  When expense tracking actually makes sense
&lt;/h2&gt;

&lt;p&gt;I'm not saying tracking is useless for everyone. There are real cases where it helps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Short-term awareness building
&lt;/h3&gt;

&lt;p&gt;Tracking can be powerful as a diagnostic tool. Track for 2 to 4 weeks, learn your actual spending patterns, then switch to a lower-effort system.&lt;/p&gt;

&lt;p&gt;Use it like a blood test, not a daily vitamin.&lt;/p&gt;

&lt;h3&gt;
  
  
  External requirements
&lt;/h3&gt;

&lt;p&gt;Some situations genuinely require transaction-level records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business expenses and taxes&lt;/li&gt;
&lt;li&gt;Reimbursement claims&lt;/li&gt;
&lt;li&gt;Shared household budgets with a partner&lt;/li&gt;
&lt;li&gt;Tight debt payoff plans&lt;/li&gt;
&lt;li&gt;Irregular income with zero buffer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If tracking has a concrete external purpose, do it. Just don't confuse recordkeeping with behavior change. They're different activities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Auto-import reduces friction (but doesn't solve the core problem)
&lt;/h3&gt;

&lt;p&gt;Bank syncing makes logging easier, yes. But you still have to review, interpret, and adjust consistently. Lower friction improves adherence, but it still relies on your attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Defaults beat attention. Every time.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  A better system: the one-number budget
&lt;/h2&gt;

&lt;p&gt;Here's what to do instead of tracking every purchase.&lt;/p&gt;

&lt;p&gt;The principle is simple: &lt;strong&gt;make saving automatic, and make overspending visibly impossible.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You do that by separating money into two buckets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Money you're allowed to spend&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Money you don't touch&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Not "in your head." In actual, separate accounts.&lt;/p&gt;

&lt;p&gt;Instead of tracking 120 transactions, you check one number: &lt;em&gt;how much allowance is left this month?&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How to set this up in 30 minutes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Pick a fixed monthly allowance
&lt;/h3&gt;

&lt;p&gt;This is your guilt-free discretionary spending for the month.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Income: $4,000/month&lt;/li&gt;
&lt;li&gt;Fixed costs (rent, utilities, subscriptions): $1,600&lt;/li&gt;
&lt;li&gt;Savings goal: $800&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Allowance: $1,600&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your allowance is the money you can spend on anything without categorizing it.&lt;/p&gt;
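&lt;p&gt;The arithmetic is deliberately trivial. A sketch with the example numbers above (the function name is mine):&lt;/p&gt;

```typescript
// One-number budget: whatever is not saved or committed to bills
// is the guilt-free spending allowance.
function monthlyAllowance(income: number, fixedCosts: number, savingsGoal: number): number {
  return income - fixedCosts - savingsGoal;
}

// Example from above: 4000 income, 1600 fixed costs, 800 savings goal.
const allowance = monthlyAllowance(4000, 1600, 800); // 1600
```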

&lt;h3&gt;
  
  
  Step 2: Create two accounts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Account A:&lt;/strong&gt; Bills + Savings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Account B:&lt;/strong&gt; Spending (your allowance)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your bank supports sub-accounts or "buckets," great. If not, a second checking account works fine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Automate transfers on payday
&lt;/h3&gt;

&lt;p&gt;This is the core of the system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Transfer savings first to a "do not touch" account&lt;/li&gt;
&lt;li&gt;Pay fixed bills&lt;/li&gt;
&lt;li&gt;Move the allowance to Account B&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now your month is pre-decided. You made one disciplined choice (the transfer), and the system handles the rest.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Follow one rule
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;If the allowance account is getting low, slow down. If it hits zero, stop discretionary spending until next month.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No categories. No transaction backlog. No guilt charts.&lt;/p&gt;

&lt;p&gt;Just one number. That's it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why this works when budget apps don't
&lt;/h2&gt;

&lt;p&gt;Budget apps ask you to be disciplined every single day.&lt;/p&gt;

&lt;p&gt;This system asks you to be disciplined once a month: on payday, when you set up the transfers.&lt;/p&gt;

&lt;p&gt;That's a fair trade.&lt;/p&gt;

&lt;p&gt;Compare the failure modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tracking failure:&lt;/strong&gt; "I stopped logging." Now the system is blind and you have no data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Allowance failure:&lt;/strong&gt; "I ran out early." The system still worked. You just learned something.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that second failure mode is &lt;em&gt;useful&lt;/em&gt;, because it tells you exactly what to adjust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Was the allowance too low?&lt;/li&gt;
&lt;li&gt;Was the savings goal too aggressive?&lt;/li&gt;
&lt;li&gt;Did a fixed cost spike?&lt;/li&gt;
&lt;li&gt;Was there one big unplanned expense?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You debug at the &lt;strong&gt;budget level&lt;/strong&gt;, not the receipt level. That's a fundamentally different (and more sustainable) feedback loop.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common problems and fixes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  "I have irregular income"
&lt;/h3&gt;

&lt;p&gt;Switch to a weekly allowance. Put all income into a buffer account, then pay yourself a fixed weekly amount (every Monday, for example). Build a 1 to 2 month cash buffer if possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  "I blow through it early in the month"
&lt;/h3&gt;

&lt;p&gt;Split the allowance into weekly chunks. A monthly lump sum invites impulsive front-loading. Weekly amounts add friction in the right place.&lt;/p&gt;

&lt;h3&gt;
  
  
  "I still want some visibility into my spending"
&lt;/h3&gt;

&lt;p&gt;Track only the big stuff, lightly. Audit your subscriptions once a month. Review your top 10 merchants. Keep one "miscellaneous" category.&lt;/p&gt;

&lt;p&gt;Don't track 200 transactions. Track the 10 things that actually matter.&lt;/p&gt;




&lt;h2&gt;
  
  
  The best compromise: a tracking sprint
&lt;/h2&gt;

&lt;p&gt;If you genuinely enjoy tracking, keep doing it.&lt;/p&gt;

&lt;p&gt;But if you're tired of starting over every few months, try this: &lt;strong&gt;run a 2 to 4 week tracking sprint once or twice a year.&lt;/strong&gt; Get the awareness. Learn the patterns. Then go back to the allowance system for the other 48 weeks.&lt;/p&gt;

&lt;p&gt;That gives you insight without the permanent labor.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick-start checklist
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Choose your monthly allowance number.&lt;/li&gt;
&lt;li&gt;Open a separate spending account (or sub-account).&lt;/li&gt;
&lt;li&gt;Automate: save first, then bills, then move the allowance last.&lt;/li&gt;
&lt;li&gt;Check your allowance balance once a week.&lt;/li&gt;
&lt;li&gt;If you run out early, adjust one lever: reduce the allowance, reduce savings temporarily, cut a fixed cost, or switch to weekly allowance.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Frequently asked questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is expense tracking a waste of time?
&lt;/h3&gt;

&lt;p&gt;For most people trying to save more money, yes. Daily expense tracking creates busywork without changing spending behavior. The effort-to-outcome ratio is poor compared to automating your savings and using a fixed spending allowance. The exception is short diagnostic sprints (2 to 4 weeks) to identify patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do budget apps fail?
&lt;/h3&gt;

&lt;p&gt;Budget apps fail because they depend on sustained daily effort from users who adopted the app specifically because they lack that consistency. The app can't hold you accountable. It can only report what already happened. The better approach is building constraints into your accounts so overspending becomes structurally difficult.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the "pay yourself first" method?
&lt;/h3&gt;

&lt;p&gt;Pay yourself first means automatically transferring money to savings and investments before you spend on anything else. Instead of saving whatever is "left over" at the end of the month (which is usually nothing), you save first and spend from what remains. This flips the default: saving becomes automatic, spending becomes the constrained variable.&lt;/p&gt;

&lt;h3&gt;
  
  
  What should I do instead of tracking expenses?
&lt;/h3&gt;

&lt;p&gt;Set up a two-account system: one for bills and savings (automated), one for discretionary spending (your allowance). Check one number each week: how much allowance is left. This replaces daily transaction logging with a single glance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does the allowance method work with variable income?
&lt;/h3&gt;

&lt;p&gt;Yes. Use a buffer account that collects all income, then pay yourself a fixed weekly allowance from that buffer. This smooths out income swings and gives you a consistent spending rhythm even when earnings fluctuate.&lt;/p&gt;




&lt;h2&gt;
  
  
  The one takeaway
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Budget apps fail because they bet on daily discipline. Build a budget that works even when you're exhausted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Stop tracking every expense. Decide your allowance once. Automate the rest. Then live your life.&lt;/p&gt;

</description>
      <category>personal</category>
      <category>product</category>
      <category>behavior</category>
      <category>finance</category>
    </item>
    <item>
      <title>Modern password policy 2026: stop Password@1</title>
      <dc:creator>DHg</dc:creator>
      <pubDate>Tue, 10 Mar 2026 16:46:18 +0000</pubDate>
      <link>https://forem.com/dhg/modern-password-policy-2026-stop-password1-1lac</link>
      <guid>https://forem.com/dhg/modern-password-policy-2026-stop-password1-1lac</guid>
      <description>&lt;p&gt;If you remember one thing from this post: &lt;strong&gt;password security is mostly won by (1) length + uniqueness and (2) server-side defenses.&lt;/strong&gt; Complexity rules and forced rotation mostly make people choose &lt;em&gt;more predictable&lt;/em&gt; passwords.&lt;/p&gt;

&lt;p&gt;This is for devs building or fixing a login system. A copyable policy and the reasoning behind each rule.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR: the policy you can ship today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Min length: 12&lt;/strong&gt; when password is the only factor, &lt;strong&gt;8+&lt;/strong&gt; if used alongside MFA.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Max length:&lt;/strong&gt; allow at least &lt;strong&gt;64&lt;/strong&gt; characters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No composition rules.&lt;/strong&gt; Don't force "upper + lower + number + symbol."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No periodic rotation.&lt;/strong&gt; Only reset on compromise evidence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blocklist:&lt;/strong&gt; reject common, expected, and compromised passwords.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UX that helps:&lt;/strong&gt; allow paste, support password managers, offer "show password."&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No password hints, no security questions.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rate-limit failed logins.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These align with NIST SP 800-63B. Not a loose interpretation. This is what the standard actually recommends.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why legacy rules fail
&lt;/h2&gt;

&lt;p&gt;You've probably inherited a policy like this: minimum 8 characters, must include a symbol and uppercase letter, expires every 90 days, security questions for recovery.&lt;/p&gt;

&lt;p&gt;It looks secure on paper. In practice, it trains users to do exactly the wrong thing. They pick a base password like &lt;code&gt;Password@1&lt;/code&gt; and increment the number every quarter. They reuse it across services. The "complexity" follows a pattern attackers guess first.&lt;/p&gt;

&lt;p&gt;When I built auth for Inner Anchor, I could have gone with the standard 8-char-plus-symbol approach. But I'd seen what that produces in real systems: a wall of &lt;code&gt;P@ssw0rd&lt;/code&gt; variants in breach dumps. So I went a different direction.&lt;/p&gt;




&lt;h2&gt;
  
  
  The rules, and why
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Minimum length: 12 characters
&lt;/h3&gt;

&lt;p&gt;NIST's baseline is 8. I recommend 12 for password-only auth because it pushes users past the "single word + mutation" pattern and into passphrase territory. Something like &lt;code&gt;mycat sleeps a lot&lt;/code&gt; is 18 characters, easy to remember, and orders of magnitude harder to crack than &lt;code&gt;C@tLover1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For Inner Anchor, I set the minimum at 12. No complaints from users. Most password managers generate 16+ anyway, so the floor rarely matters for people doing the right thing. It mainly catches the people who would otherwise submit &lt;code&gt;hello123&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If your app requires MFA, 8 is a reasonable floor since the password isn't the only barrier.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Max length: at least 64 characters
&lt;/h3&gt;

&lt;p&gt;Password managers generate long random strings. Passphrases are naturally long. If you cap at 16 or 20 characters, you're punishing the users with the best security habits.&lt;/p&gt;

&lt;p&gt;Allow at least 64. Set an upper bound you can afford to hash (128 or 256 is reasonable) so nobody can DoS your server with a 10MB password.&lt;/p&gt;
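&lt;p&gt;Rules 1 and 2 together are a few lines of validation. A sketch under those limits (the function name and messages are mine):&lt;/p&gt;

```typescript
// Length policy: floor of 12 for password-only auth, generous cap so a
// multi-megabyte "password" cannot DoS the hash function.
const MIN_LENGTH = 12;
const MAX_LENGTH = 128;

// Returns an error message, or null when the length is acceptable.
function checkPasswordLength(password: string): string | null {
  if (MIN_LENGTH > password.length) {
    return "Use at least 12 characters. A short sentence works well.";
  }
  if (password.length > MAX_LENGTH) {
    return "Passwords are capped at 128 characters.";
  }
  return null;
}
```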

&lt;h3&gt;
  
  
  3. No composition rules
&lt;/h3&gt;

&lt;p&gt;When you force "must contain uppercase, number, and symbol," users converge on the same patterns: capital first letter, number at the end, &lt;code&gt;!&lt;/code&gt; at the end. Attackers know this. They try those patterns first.&lt;/p&gt;

&lt;p&gt;Drop the rules. Require length and a blocklist instead. You get less predictability, not more.&lt;/p&gt;

&lt;p&gt;Some security teams will push back on this. Point them to NIST SP 800-63B, which explicitly says not to impose composition rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. No periodic forced rotation
&lt;/h3&gt;

&lt;p&gt;Expiring passwords every 90 days creates incrementing passwords and more reuse. Users treat passwords as disposable when they know it's temporary.&lt;/p&gt;

&lt;p&gt;Instead, force a reset only when you have evidence of compromise: a breach match, a suspicious login, a leaked credential report. This means you need monitoring and an incident response flow, but you needed those anyway.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Blocklist checks
&lt;/h3&gt;

&lt;p&gt;Reject passwords that appear in breach corpuses, common password lists, dictionary words, and context-specific terms (your app name, the user's email handle, obvious derivatives).&lt;/p&gt;

&lt;p&gt;This is the hardest part to implement well. You can check passwords against the Have I Been Pwned API using k-anonymity (you send a partial hash, not the actual password). For a local check, a bloom filter over a breach corpus works. Either way, keep a generic rejection message like "this password is too common" rather than revealing &lt;em&gt;why&lt;/em&gt; it was blocked.&lt;/p&gt;
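&lt;p&gt;A sketch of the k-anonymity flow, assuming Node 18+ for the built-in &lt;code&gt;fetch&lt;/code&gt; and &lt;code&gt;node:crypto&lt;/code&gt;; error handling omitted:&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// k-anonymity: hash the password with SHA-1, send only the first five hex
// characters to the API, and match the remaining 35 locally.
export function sha1Split(password: string) {
  const digest = createHash("sha1").update(password).digest("hex").toUpperCase();
  return { prefix: digest.slice(0, 5), suffix: digest.slice(5) };
}

export async function isPwned(password: string) {
  const { prefix, suffix } = sha1Split(password);
  // Each response line is "SUFFIX:COUNT" for hashes sharing our prefix,
  // so the full hash never leaves our server.
  const res = await fetch("https://api.pwnedpasswords.com/range/" + prefix);
  const body = await res.text();
  return body.split("\n").some((line) => line.startsWith(suffix));
}
```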

&lt;h3&gt;
  
  
  6. Support password managers
&lt;/h3&gt;

&lt;p&gt;This means: allow paste in password fields, set proper &lt;code&gt;autocomplete&lt;/code&gt; attributes (&lt;code&gt;new-password&lt;/code&gt; for signup, &lt;code&gt;current-password&lt;/code&gt; for login), and offer a "show password" toggle.&lt;/p&gt;
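&lt;p&gt;In markup terms, that's roughly this (illustrative fields, not a full form):&lt;/p&gt;

```html
&lt;!-- Login: let the password manager fill the stored credential --&gt;
&lt;input type="password" name="password" autocomplete="current-password"&gt;

&lt;!-- Signup: hint that a freshly generated password belongs here --&gt;
&lt;input type="password" name="new-password" autocomplete="new-password"&gt;
```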

&lt;p&gt;Password managers are one of the few things that reliably improve password security at scale. Blocking paste actively prevents your best users from doing the right thing.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. No hints, no security questions
&lt;/h3&gt;

&lt;p&gt;"What was your first pet's name?" is guessable, researchable, and probably already in a data breach somewhere.&lt;/p&gt;

&lt;p&gt;Use real recovery flows instead: email magic link with device confirmation, recovery codes generated at signup, or authenticator-based recovery. Recovery is usually the weakest link in any auth system. Removing security questions forces you to build something better.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Rate-limit failed attempts
&lt;/h3&gt;

&lt;p&gt;Even a perfect password policy falls apart if attackers can try unlimited guesses. Rate-limit by account and by IP.&lt;/p&gt;

&lt;p&gt;Prefer progressive delays over hard lockouts. Hard lockouts let attackers lock out real users on purpose. A backoff curve (short delay after 5 failures, longer delay after 10, CAPTCHA after 20) protects accounts without creating a denial-of-service vector.&lt;/p&gt;
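&lt;p&gt;The backoff curve can be a pure function of the failure count. A sketch using the thresholds above (the exact delay values are illustrative):&lt;/p&gt;

```typescript
// Progressive delays instead of a hard lockout, so an attacker cannot
// lock a real user out on purpose by spamming bad guesses.
type Throttle = { delayMs: number; requireCaptcha: boolean };

function throttleFor(failedAttempts: number): Throttle {
  if (failedAttempts >= 20) return { delayMs: 60_000, requireCaptcha: true };
  if (failedAttempts >= 10) return { delayMs: 30_000, requireCaptcha: false };
  if (failedAttempts >= 5) return { delayMs: 2_000, requireCaptcha: false };
  return { delayMs: 0, requireCaptcha: false };
}
```

&lt;p&gt;Track the count per account and per IP, and reset it on a successful login.&lt;/p&gt;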




&lt;h2&gt;
  
  
  Hash passwords with Argon2id
&lt;/h2&gt;

&lt;p&gt;This isn't optional. If your database leaks and passwords are in plaintext or weak hashes (MD5, SHA-256 without salt), every account is compromised instantly.&lt;/p&gt;

&lt;p&gt;Use Argon2id. It's memory-hard, which means attackers can't just throw GPUs at it. OWASP has a cheat sheet with recommended parameters.&lt;/p&gt;

&lt;p&gt;For Inner Anchor, I use Argon2id with tuned parameters for my server's resources. The key idea: pick settings that take around 200-500ms on your hardware. That's invisible to a user logging in once, but brutal for an attacker trying millions of hashes.&lt;/p&gt;

&lt;p&gt;If you're currently on bcrypt, that's still acceptable. But if you're building from scratch in 2026, go with Argon2id.&lt;/p&gt;




&lt;h2&gt;
  
  
  What you trade off
&lt;/h2&gt;

&lt;p&gt;Longer minimums can frustrate some users. Mitigate this with passphrase examples during signup ("try a short sentence instead of a complex word") and password manager support.&lt;/p&gt;

&lt;p&gt;Blocklists require maintenance. You need to update your breach corpus periodically and handle the privacy implications carefully. Never log raw passwords, even rejected ones.&lt;/p&gt;

&lt;p&gt;No forced rotation means you must actually detect compromise. If you drop rotation without adding monitoring, you've made things worse, not better.&lt;/p&gt;

&lt;p&gt;Rate limiting can be weaponized. An attacker could trigger delays for a real user. Risk-based scoring (device fingerprint, geo, behavior) helps, but adds complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://pages.nist.gov/800-63-4/sp800-63b.html" rel="noopener noreferrer"&gt;NIST SP 800-63B&lt;/a&gt; for the full password requirements and rationale.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.ncsc.gov.uk/blog-post/problems-forcing-regular-password-expiry" rel="noopener noreferrer"&gt;NCSC on why forced password expiry is harmful&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html" rel="noopener noreferrer"&gt;OWASP Password Storage Cheat Sheet&lt;/a&gt; for Argon2id parameters and hashing guidance.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you're shipping auth this week, start with the TL;DR. Get the minimum length, blocklist, and Argon2id hashing in place. That alone puts you ahead of most apps still running &lt;code&gt;8 chars + symbol + 90-day rotation&lt;/code&gt;. The rest you can layer in as you go.&lt;/p&gt;

</description>
      <category>security</category>
      <category>authentication</category>
      <category>passwords</category>
    </item>
  </channel>
</rss>
