Yesterday
In this post we will see how to extend reverse-mode automatic differentiation to a language with first-class function types, function application, and lambda abstraction. This method is not new, but we will give a new derivation of it by showing how it arises universally from noticing that the category of “additive lenses” is cartesian closed. In the end we will see that this idea sounds like it should revolutionise machine learning, but then doesn’t.
Some interesting ideas, although I won’t claim that I understand them all.
by kawcco
20 hours ago
11 Jan 26
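The lens picture the post describes can be sketched in a few lines: a differentiable map is a forward pass paired with a backward pass that carries cotangents the other way, and reverse-mode AD is just lens composition. This is a minimal illustration of that idea, not code from the post; the `Lens` class and names are my own.

```python
class Lens:
    """A map A -> B as a forward pass plus a backward (cotangent) pass."""

    def __init__(self, forward, backward):
        self.forward = forward      # A -> B
        self.backward = backward    # (A, dB) -> dA

    def __rshift__(self, other):
        # Compose self : A -> B with other : B -> C.
        def fwd(a):
            return other.forward(self.forward(a))

        def bwd(a, dc):
            b = self.forward(a)  # recompute the intermediate value
            # Chain rule: pull the cotangent back through each stage.
            return self.backward(a, other.backward(b, dc))

        return Lens(fwd, bwd)


# square(x) = x^2, so its backward pass multiplies the cotangent by 2x.
square = Lens(lambda x: x * x, lambda x, dy: 2 * x * dy)
# double(x) = 2x, a linear map, so its backward pass is also "times 2".
double = Lens(lambda x: 2 * x, lambda x, dy: 2 * dy)

f = square >> double            # f(x) = 2x^2, so f'(x) = 4x
print(f.forward(3))             # 18
print(f.backward(3, 1.0))       # 12.0, i.e. f'(3)
```

The post's point is that this composition rule is the cartesian (first-order) part of the story; making the category of such lenses cartesian *closed* is what lets the same recipe differentiate through higher-order functions.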
The thing that would actually help an underperformer improve is to teach or (even better) show them how to do a better job, and the same is true for models.
by kawcco
1 month ago