Create a Phishy URL
A URL shortener that’s dodgy by design.
I’m not the only one who’s amazed by how much you can do with just a little CSS these days.
This is a nifty initiative:
This site lets you rank the proposals you care about, giving us data we can use when reviewing which proposals should be taken on for 2026.
For the record, here’s my top ten:
- Cross-document view transitions
- Speculation Rules API
- `img sizes="auto" loading="lazy"`
- Customizable/stylable `select`
- Invoker commands
- Interoperable rendering of HTML `fieldset`/`legend`
- Web Share API
- CSS scroll-driven animations
- CSS `accent-color` property
- CSS `hanging-punctuation` property
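Two of the CSS items on that list are easy to try today. Here's a minimal sketch; the selectors and values are purely illustrative, not taken from any of the linked proposals:

```css
/* accent-color: tint native form controls (checkboxes, radios, range,
   progress) without rebuilding them from scratch. Inherited, so setting
   it once on the root is enough. */
:root {
  accent-color: #bada55; /* illustrative colour */
}

/* hanging-punctuation: let opening quotation marks hang outside the
   text box so quoted lines stay optically aligned. Still a progressive
   enhancement in most browsers, so it degrades safely. */
blockquote {
  hanging-punctuation: first last;
}
```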
I prefer my tools to help me with repetitive tasks (and there are many of those in programming), understanding codebases, and authoring correct programs. I take offense at products that are designed to think for me. To remove the agency of my own understanding of the software I produce, and to cut connections with my coworkers. Even if LLMs lived up to the hype, we would still stand to lose all of that and our craft.
I’ve worked in the tech industry for close to two decades at this point. I’ve seen how difficult it is to build quality products, but I’ve also seen that it can be done. It just feels like no one gives a shit anymore, beyond a handful of independent devs and small shops. It’s wild.
Stick with this. It’s worth it.
A great interview with Ted Chiang:
Predicting the most likely next word is different from having correct information about the world, which is why LLMs are not a reliable way to get the answers to questions, and I don’t think there is good evidence to suggest that they will become reliable. Over the past couple of years, there have been some papers published suggesting that training LLMs on more data and throwing more processing power at the problem provides diminishing returns in terms of performance. They can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture. And you can bolt tools onto the side of an LLM, like giving it a calculator it can use when you ask it a math problem, or giving it access to a search engine when you want up-to-date information, but putting reliable tools under the control of an unreliable program is not enough to make the controlling program reliable. I think we will need a different approach if we want a truly reliable question answerer.
I’m fascinated by eponymous laws, and here’s a whole bunch of them gathered together, including a few I hadn’t heard of (mostly from the world of software).
I’ve personally struggled to implement a decentralized approach to quality in many of my teams. I believe in it from an academic standpoint, but in practice it works against the grain of every traditional management structure. Managers want ‘one neck to wring’ when things go wrong. Decentralized quality makes that impossible. So I’ve compromised, centralized, become the bottleneck I know slows things down. It’s easier to defend in meetings. But when I’ve managed to decentralize quality — most memorably when I was running a small agency and could write the org chart myself — I’ve been able to do some of the best work of my career.
- Start with the text
- Use size intentionally
- Contrast weights and styles
- Play with spacing
- Use colour, but don’t rely on it
- Limit your font choices (but choose well and wisely)
- Repeat, repeat, repeat
- Test your system
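Purely as an illustration of how those principles might translate into a stylesheet (none of this CSS comes from the linked article; the sizes, weights, and colours are placeholder values), a small type hierarchy could look like this:

```css
/* Start with the text: one readable base size and a comfortable measure. */
html {
  font-size: 100%;
  line-height: 1.5;
}

/* Use size intentionally: a small, deliberate scale rather than ad-hoc values. */
h1 { font-size: 2.25rem; }
h2 { font-size: 1.5rem; }
h3 { font-size: 1.25rem; }

/* Contrast weights and styles instead of adding ever more sizes. */
h1, h2, h3 { font-weight: 700; }
figcaption { font-style: italic; }

/* Play with spacing: let whitespace do the grouping. */
h2 {
  margin-block: 2em 0.5em;
  letter-spacing: -0.01em;
}

/* Use colour, but don't rely on it: the hierarchy still reads in greyscale. */
h2 { color: #1a3c6e; }

/* Limit your font choices: one family, used consistently, repeated everywhere. */
body { font-family: Georgia, serif; }
```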
A profile of Tim and the World Wide Web.
A fascinating look at the importance of undersea cables, taken from a new book called The Web Beneath the Waves.
God, I love the way that Denise writes:
On the train there’s an ad for Adobe Express: “Commercially safe AI. Trusted results”. The ad shows a photo slotting in to a design. Commercially safe for everyone but photographers and designers. I couldn’t get a seat facing forwards, so I head backwards into the future like some half-arsed AI metaphor.
Here’s a comprehensive round-up of new CSS that you can use right now—you can expect to see some of this in action at Web Day Out!
I love this conversation.
I’ve added this handy little bit of CSS to my starting styles.
I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.
React is no longer winning by technical merit. Today it is winning by default. That default is now slowing innovation across the frontend ecosystem.
I write here for you, not for the benefit of building the machines producing a firehose of spam, scams, and slop. The artificial intelligence companies have already violated the expectations of even a public web. Regardless of the benefits they have created — and I do believe there are benefits to these technologies — they have behaved unethically. Defensive action is the only control a publisher can assume right now.
A microwave isn’t going to take your job; a chef who knows how to use a microwave is going to take your job.