Jerod here! 👋
So, Spencer Chang made a thing. A thing that made my day.
It's called the alive internet theory and it makes the case that "the internet will always be filled with real people: looking for each other, answering calls for help, and sharing laughs even in the midst of arguing." This is a website that's better felt than tell't, so I'll leave you with this 1k word equivalent 👇
Ok, let's get into this week's news.
Andrew Nesbitt builds tools and open datasets to support, sustain, and secure critical digital infrastructure. He's been exploring the world of open source metadata for over a decade. First with libraries.io and now with ecosyste.ms, which tracks over 12 million packages, 287 million repos, 24.5 billion dependencies, and 1.9 million maintainers. 🎥 VIDEO HERE
AI-related job losses (and future non-hires) are the talk of the software town right now, but (at least in the short/near term) a new AI-led tech role has emerged with a massive increase of job postings (800%) over the last 9 months:
Forerunners in the AI race, such as Anthropic and OpenAI, are actively recruiting software engineering specialists called forward-deployed engineers (FDEs) to help with tailoring AI models to meet customer needs. More than just working with back-office coders, these engineers are embedded within customer and product engineering teams.
Still not sure what an FDE does, exactly?
Unlike traditional software engineers, FDEs go beyond writing code to go out in the field and understand where AI can make the biggest impact. Their mission is to bridge the "last mile" of AI: transforming a general-purpose model into scalable AI solutions that reflect complex client requirements and solve their problems.
If this trend has any staying power, and if you want to be in demand in 2026, now is the time to ensure you can confidently (and truthfully) put FDE on your resumé.
Corey Quinn (who is hilarious, btw) finally realized what I've known since the first time I tried shipping a Rails app on EC2: AWS, for the uninitiated, is pure pain:
Recently, I was spinning up yet another terribly coded thing for fun because I believe in making my problems everyone else's problems, and realized something that had been nagging at me for a while: working with AWS is relatively painful.
Corey lays out what a typical zero-to-one AWS setup often requires, then compares it to the silky smooth experience Vercel provides on top of AWS. His explanation for the discrepancy: it's generational.
This feels generational to me. For folks of a certain age (Gen X and Millennials), AWS and GCP have made their bones. We came of technical age with the platforms and we're used to their foibles. Azure is of course the Boomer Cloud, but Gen Z is using platforms that aren't designed as tests of skill to let customers prove how much they want something.
Hat tip to Corey for calling Azure the "Boomer Cloud". That's amazing. However, I don't think this is a generational thing. There's an entire group of elder devs, like myself, who have always preferred Heroku-style deployment platforms over AWS.
While his view of the past seems skewed from inside the AWS bubble, he might be right about the future:
AWS spent two decades building the most powerful cloud platform in the world. They may spend the next two watching it become irrelevant to anyone who wasn't already bought in.
Thomas Ptacek makes the case that to truly grok LLM agents (so you can be the best hater (or stan) that you can be) you need to write one.
Agents are the most surprising programming experience I've had in my career. Not because I'm awed by the magnitude of their powers; I like them, but I don't like-like them. It's because of how easy it was to get one up on its legs, and how much I learned doing that.
I had this experience back in April when Thorsten Ball's post walked me through it step by step. Thomas isn't wrong. Building one for yourself brings clarity to what is likely the most important developer-facing technology of the decade.
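If you've never seen one, the core of an agent really is just a loop: the model proposes a tool call, your code executes it, and the result goes back into the conversation until the model produces a final answer. Here's a minimal sketch in Python with a hard-coded stub standing in for the real LLM API (the stub, tool names, and message format are all invented for illustration; a real agent would make an API call where `stub_model` sits):

```python
# Minimal agent loop: the model proposes tool calls, we execute them
# and feed results back until the model returns a final answer.
# The "model" is a hard-coded stub so this sketch runs offline.

TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",  # stub tool
    "add": lambda a, b: a + b,
}

def stub_model(messages):
    """Stand-in for an LLM chat call. Returns a tool call first,
    then a final answer once it sees a tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": (2, 3)}  # "decides" to use a tool
    return {"answer": f"The sum is {messages[-1]['content']}"}

def run_agent(user_prompt, model=stub_model, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:  # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](*reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What is 2 + 3?"))  # -> The sum is 5
```

Swap the stub for a real chat-completions call and a real tool registry and you have the skeleton both Thorsten's and Thomas's posts build on. The surprising part is how little else there is.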
Depot just dropped another deep-dive, and this one hits home for anyone using GitHub Actions. They analyzed thousands of workflows and found that 98.5% of organizations are running actions/checkout slower than they need to.
Turns out, the default settings most teams use are… not great. Cold clones, missing shallow fetches, and bloated histories waste precious CI minutes. And this is BEFORE your build even starts. Depot's post breaks down why this happens, how much time it's costing you, and what you can do to fix it.
The takeaway? CI performance isn't just about bigger runners. It's about smarter ones. Depot's obsessed with shaving seconds off every step, and this new data proves there's a ton of low-hanging fruit hiding in your pipelines.
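For the impatient, the usual fix is making your checkout explicitly shallow. A sketch of what that looks like in a workflow (exact settings depend on your repo and which checkout version you're on, and the numbers you'll save are Depot's to defend, not mine):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 1     # shallow clone: only the commit being built
          fetch-tags: false  # skip tag refs unless the build needs them
```

If a step genuinely needs history (changelogs, `git describe`), set `fetch-depth: 0` for that job alone rather than paying for full history everywhere.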
Paul Kinlan says he was wrong last October when he predicted that LLMs would abstract away framework choice. Well, maybe not wrong. But wrong about the timeline.
The reality is more interesting and more permanent: React isn't competing with other frameworks anymore. React has become the platform. And if you're building a new framework, library or browser feature today, you need to understand that you're not just competing with React: you're competing against a self-reinforcing feedback loop between LLM training data, system prompts, and developer output that makes displacing React functionally impossible.
When he says "self-reinforcing feedback loop", he's not exaggerating. TIL Replit, Bolt, and tools like them are literally hardcoding React into their system prompts.
They have to. If you're building a tool today to attract developers, you need to give them code they can maintain. And "code developers can maintain" now means "React" for the vast majority of web developers.
I remember back in 2022 when Josh Collinsworth declared, "React isn't great at anything except being popular." (He even debated this with us on a pod)
Turns out that might be all it needed…
We're still trying to figure out this agentic coding thing.
Should we make the agent write the tests and write the implementation ourselves?
Should we write the tests and make the agent write the implementation?
Should we just sit back and say, "agent, take the wheel"?
Andrew Gallagher has thoughts:
There is a growing sentiment that LLMs are good for CRUD, boilerplate, and tests. While I am not so sure about how good AI is at making CRUD or thumping out boilerplate, a year of working as an SWE in the modern LLM-powered AI codescape has proven to me that LLMs write unconstructive, noisy, brittle, and downright-bad unit tests. Please do not vibe code your unit tests.
Andrew does say there's a way to get good tests from LLMs, but right now it requires you to make them write tests one at a time. Ain't nobody got time for that!
On this seventh iteration of our award-worthy game show filled with obscure jargon, fake definitions, and expert tomfoolery: past winners battle to determine the champion of champions. (Also, Adam.) 🎥 VIDEO HERE
Juan M. Méndez Rey shares his 20-year journey through software archaeology:
How I spent two decades tracking down the creators of a 1987 USENET game and learned modern packaging tools in the process.
Meshtastic® is a project that enables you to use inexpensive LoRa radios as a long range off-grid communication platform in areas without existing or reliable communications infrastructure. This project is 100% community driven and open source!
When they say long range they mean loooOOOooong range (331 km record)
Jessica Kerr on "three things MCP can do, and an infinite number of things it can't do (all of which make it great)."
- Tokuin
- Katakate
- The case against pgvector
- What is special about MCP?
- Build a Beeper bridge for $50k (bounty)
- Snapchatās cross-platform UI framework
- Why software quality disappeared: culture
- Googleās plan to put AI data centers in space
- Things I donāt like in configuration languages
- Building Phoenix LiveView into a single binary
- Montana enshrines āright to computeā into law
- The best programming language you havenāt heard of
That's the news for now, but stay tuned for Wednesday when Hacker News' favorite blogger, Sean Goedecke, joins the show!
Have yourself a great week,
the hand of the diligent makes rich,
and I'll talk to you again real soon.
–Jerod