Have been using Neo Launcher since it had the features I needed from Nova (mostly hiding most apps from the app list while having them on the home screen in some folder so that it isn’t a mess when you want to find something specific). It hasn’t been updated in a while, but it works perfectly fine for me.
A piece of plastic broke off from my laptop once. It was supposed to hold one of the two screws fixing the cover of the RAM & drive section and now there was just a larger round hole. I’ve measured the hole and the screw, designed a replacement in Blender (not identical, I wanted something more solid and reliable) and printed it; took two attempts to get the shape perfectly right. Have had zero issues with it in all these years.
Thanks! I now see that Tai Chi is mentioned frequently online in the context of the film, unlike yoga, so that should be right; it narrows things down.
Audalinto • Books • "Hey all! anyone know a good free ereader that has accessibility functions?" • English • 1 · 11 months ago

KOReader supports custom CSS. You can certainly change the background colour with it, and I think a grid should be possible too.
Those are the ones, the 0414 release.
QWQ-32B for most questions, llama-3.1-8B for agents. I’m looking for new models to replace them though, especially the agent one.
Want to test the new GLM models, but I’d rather wait until llama.cpp has definitely fixed the bugs with them first.
Audalinto • LocalLLaMA@sh.itjust.works • "Anyone found "optimal" settings for llama.cpp partial offload?" • English • 4 · 1 year ago

What I’ve ultimately converged to without any rigorous testing is:
- using Q6 if it fits in VRAM+RAM (anything higher is a waste of memory and compute for barely any gain), otherwise either some small quant (rarely) or ignoring the model altogether;
- not really using IQ quants - afair they depend on a dataset and I don’t want the model’s behaviour to be affected by some additional dataset;
- other than the Q6 thing, in any trade-offs between speed and quality I choose quality - my usage volumes are low and I’d better wait for a good result;
- I load as much as I can into VRAM, leaving 1-3GB for the system and context.
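The last point is just budgeting arithmetic; here is a toy sketch of it in Python. All numbers are made up for illustration (real per-layer sizes depend on the model and quant), and the function name is mine, not anything from llama.cpp:

```python
# Rough estimate of how many transformer layers fit in VRAM
# once a reserve is kept back for the system and the context/KV cache.
# Illustrative only; measure real per-layer sizes for your model.

def layers_that_fit(vram_gb: float, layer_gb: float, reserve_gb: float) -> int:
    """How many whole layers fit after reserving memory."""
    usable = vram_gb - reserve_gb
    if usable <= 0:
        return 0
    return int(usable // layer_gb)

# e.g. a 24 GB card, ~0.5 GB per layer at Q6, 2 GB reserved:
n_layers = layers_that_fit(24, 0.5, 2)
print(n_layers)  # this is the kind of value you'd pass to -ngl
```

The result is what you would hand to llama.cpp’s `-ngl`/`--n-gpu-layers` option, clamped to the model’s actual layer count.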
I knew the Horn of Plenty was a good choice, but I didn’t think it was that good. Thanks!
Oh, forgot about healing wells, thanks for the reminder. You should probably be able to throw the ankh directly too? But I don’t encounter them every run (e.g. didn’t have any this one) so they aren’t reliable.
I know ascending is easy (I’ve done it many times, though only with 0-1 challenges, none of them Swarm Intelligence) and adds a 1.25 multiplier, and I’ll do it when I go for that badge. But I didn’t plan for it this run (I thought 6 challenges would be 2-3x harder than they turned out to be), so I wasn’t prepared to ascend. I’d have probably died in the 21-24 zone.
So you think it should be On Diet? Hmm, maybe. But exploration with both On Diet and Into Darkness will be challenging.
My intuition:
- There are “genuine” instances of hapax legomena which probably carry some semantic sense, e.g. a rare concept, a wordplay, an artistic invention, an ancient inside joke.
- There’s various noise: because somebody let their cat walk on the keyboard, because OCR software failed in one small spot, because somebody was copying data over a noisy channel without error correction, because somebody had a headache and couldn’t be bothered, because whatever.
- Once a dataset is too big to be manually reviewed by experts, the amount of general noise is far far far larger than what you’re looking for. At the same time you can’t differentiate between the two using statistics alone. And if it was manually reviewed, the experts have probably published their findings, or at least told a few colleagues.
- Transformers are VERY data-hungry. They need enormous datasets.
So I don’t think this approach will help you a lot even for finding words and phrases. And everything I’ve said can be extended to semantic noise too, so your extended question also seems a hopeless endeavour when approached specifically with LLMs or big data analysis of text.
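To make the statistical point concrete: finding hapax legomena is trivial, but the count alone cannot tell a genuine rarity from keyboard noise, since both occur exactly once. A minimal sketch (the sample text is mine):

```python
from collections import Counter

def hapax_legomena(text: str) -> set[str]:
    """Return the words that occur exactly once in the text."""
    counts = Counter(text.lower().split())
    return {word for word, n in counts.items() if n == 1}

# A genuine rare word and an OCR-style garble are statistically identical:
sample = "the cat sat on the mat qwertyuiop serendipity"
print(sorted(hapax_legomena(sample)))
```

Both `qwertyuiop` and `serendipity` come out as hapaxes; separating them needs semantics, not frequency.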
Audalinto • Hardware • "DOOM can now run on a quantum computer with Quandoom port — seminal FPS blood and gore mixed with spooky action" • English • 9 · 2 years ago

Of course:
The rest of the instructions are all valid n-controlled Toffolis and Hadamards, but of course mostly Toffolis since it’s replicating a classical algorithm. There is no quantum advantage; it’s just a classical algorithm written in a format compatible with a quantum computer.
Add small errors to the quantum simulator (real quantum computers always have those) and everything will break entirely: apparently (1) no error correction was used and (2) it’s just the logic gates for Doom rewritten as quantum gates. No wonder the author got bored; I’d be bored too.
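That “classical logic rewritten as quantum gates” point is easy to see: on classical basis states, a Toffoli (CCNOT) gate is just a reversible AND. A small Python sketch of its truth-table behaviour (classical bits only, no amplitudes):

```python
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Toffoli / CCNOT on classical basis states:
    flips the target bit c iff both control bits a and b are 1."""
    return a, b, c ^ (a & b)

# With the target initialised to 0, the third output is a AND b,
# which is why any classical circuit can be rebuilt from Toffolis:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", toffoli(a, b, 0)[2])
```

It is also its own inverse, which is what makes the construction reversible, as quantum gates must be.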
Audalinto • Free Open-Source Artificial Intelligence • "How does Llama3 generates images?" • English • 5 · 2 years ago

LLaMA can’t. Chameleon and similar ones can:
Audalinto • Ask Lemmy • "is there a genre of written work specifically concerned with the conception and procedure of literary works, or is it all random interviews or annotated guides?" • English • 2 · 2 years ago

> For Tolkien’s work, there is the twelve volume “The Complete History of Middle Earth” which is about as inside baseball as you can get for Tolkien.
I’d replace HoME with Parma Eldalamberon, Vinyar Tengwar and other journals publishing his early materials here.
Audalinto • Ask Lemmy • "is there a genre of written work specifically concerned with the conception and procedure of literary works, or is it all random interviews or annotated guides?" • English • 2 · 2 years ago

Recommending Italo Calvino’s Six Memos for the Next Millennium, the lectures he was preparing shortly before his death.
Not an assembly guide for a work of literature, but it’ll help your own process if it’s already ongoing and you want to improve.
The lectures also have some comments on what Calvino himself was doing here and there and why.
For me specifically, if spoilers hurt a book, it probably wasn’t worth reading in the first place. I love when authors demonstrate mastery of language and narration, and no amount of spoilers can overshadow the direct experience of witnessing it enacted.
Audalinto • LocalLLaMA@sh.itjust.works • "Are there any good open source text-to-music models, preferably with lyrical abilities?" • English • 2 · 2 years ago

ChatMusician isn’t exactly new and the underlying dataset isn’t particularly diverse, but it’s one of the few models made specifically for classical music.
Are there any others, by the way?
Should be doable with Termux:

- Termux:API provides `termux-sms-list` and `termux-sms-send` commands;
- `termux-sms-list` returns messages in JSON, which is easy enough to handle with, say, `jq` in bash or `json` in python;
- the script itself can be a simple loop that fetches the latest messages every few minutes, filters for unprocessed ones from whitelisted numbers and calls `termux-sms-send`.

Maybe it’d make sense to daemonise the script and launch it via `sv`. But the Termux app weighs quite a bit itself.
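A minimal sketch of that loop in Python, assuming the Termux:API add-on is installed. The whitelist number and the reply text are placeholders, and the `_id` field name is an assumption — check the actual JSON `termux-sms-list` emits on your device:

```python
import json
import subprocess
import time

WHITELIST = {"+15551234567"}  # hypothetical allowed sender
seen_ids: set = set()

def pick_new_whitelisted(messages, seen, whitelist):
    """Keep messages from whitelisted numbers not processed yet."""
    fresh = [m for m in messages
             if m.get("_id") not in seen and m.get("number") in whitelist]
    seen.update(m["_id"] for m in fresh)
    return fresh

def poll_once():
    """Fetch recent messages and auto-reply to new whitelisted ones."""
    raw = subprocess.run(["termux-sms-list", "-l", "20"],
                         capture_output=True, text=True, check=True).stdout
    for msg in pick_new_whitelisted(json.loads(raw), seen_ids, WHITELIST):
        subprocess.run(["termux-sms-send", "-n", msg["number"], "auto-reply"],
                       check=True)

def main():
    # On the device this runs forever; daemonise it (e.g. under sv).
    while True:
        poll_once()
        time.sleep(300)  # poll every five minutes
```

Persisting `seen_ids` to a file would survive restarts; the in-memory set is just the simplest thing that works for a sketch.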