
14 Aug 25

Why is it that we craft our work so AI can understand but we do not do so for people?

It’s as if we expect people to put forth the effort to understand, while AI has hard limitations — so we take extra steps for AI but allow ourselves to be lazy for everyone else.

by ciwchris 6 months ago

But what they cannot do is maintain clear mental models.

LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing whether to fix the code or the tests; and when they get frustrated, they just delete the whole lot and start over.

This is exactly the opposite of what I am looking for.

by ciwchris 6 months ago

04 Aug 25

By contrast, OpenAI’s o3 reasoning model was far more willing to flatly reject this sort of destructive flattery (“The user claims psychic powers and certainty about Newton’s prophecy,” read one of its internal thoughts about the request. “I’ll acknowledge their viewpoint, but also maintain skepticism.”)

Some of the best teachers I’ve had were actually fairly disagreeable. That’s not to say they were unkind. But they had standards — a kind of intellectual taste that led them to make clearly expressed judgement calls about what was a good question and what wasn’t, what was a good research path and what wasn’t.

by ciwchris 6 months ago

25 Jul 25

  • https://news.ycombinator.com/item?id=44671601
  • https://ref.tools/
  • https://context7.com/
by ciwchris 7 months ago