
  • 1 Post
  • 608 Comments
Joined 10 months ago
Cake day: June 7th, 2025

  • It’s a tiny factor in most cases, but it’s there. I’m prejudiced against Bethesda’s janky engine; no matter how much they polish that turd, it’s still a turd in so many ways. I also consider Unreal Engine a cautionary flag reminding me to check whether the game’s performance is horrific. Not really UE’s fault itself, but developers love that they can easily turn on all the AAA eye-candy features without the knowledge, understanding, or frankly the budget to optimize their game to support those performance-intensive features properly.




  • Absolutely. Just like addiction to fast food causes obesity, our addiction to fast information has developed into a profound societal ignorance. Studying issues seriously takes time and effort, and if you think “ain’t nobody got time for that,” I’ll tell you right now you’re going to have to start making time for it. Because if you don’t, you’ll end up knowing nothing and being wrong about everything. That may be acceptable to anyone following all the other lemmings in the same direction (the double irony of “lemming behavior” being historical fake information itself, while posting this on lemmy, is not lost on me), but I’m also going to suggest there will be serious personal consequences from being wrong all the time, and those consequences are going to catch up with you sooner or later.


  • I dabble in local AI and this always blows my mind. How do people just casually throw 135B-parameter models around? Are people renting datacenter hardware or GPU time or something? Are they building personal AI servers with six 5090s in them? Are they quantizing the models down to 0.025 bits? What’s the secret? How does this work? Am I missing something? The Q4 of Qwen3.5 122B is between 60-80GB just for the model alone. That’s 3x 5090s minimum, unless I’m doing the math wrong, and then you need to fit the huge context windows these things have in there too. I don’t get it.

    Meanwhile I’m over here nearly burning my house down trying to get my poor consumer cards to run glm-4.7-flash.
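    For what it’s worth, the back-of-the-envelope math checks out. Here’s a rough sketch of the estimate; the ~4.5 bits per weight for a Q4_K-style quant and the 10% overhead for KV cache and runtime buffers are my own assumptions, and real numbers vary by quant and context length:

    ```python
    # Rough VRAM estimate for running a quantized model.
    # bits_per_weight ~4.5 approximates a typical Q4_K_M-style quant;
    # the 1.1 overhead factor (KV cache, buffers) is a loose assumption.
    def vram_gb(params_billions, bits_per_weight=4.5, overhead=1.1):
        bytes_total = params_billions * 1e9 * bits_per_weight / 8
        return bytes_total * overhead / 1e9

    print(round(vram_gb(122), 1))  # roughly mid-70s of GB for a 122B model at ~Q4
    ```

    That lands comfortably past two 32GB cards before you even account for a long context window, which is why the “how is everyone doing this?” question feels so fair.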


  • I hate to break it to you, but we’re never going to be able to trust anything ever again. At least, not the way we used to. In the future, without any doubt, we are going to need to develop a different model of learning, using, and processing information that considers the provenance of where the information came from and how it got there, essentially from first principles. We will have to build a web of investigation and trust to determine and mark what information is trustworthy and what is not, especially new information.

    None of this exists in any meaningful way yet, and the systems we used to have for it, like academic research and journalism, would have been catastrophically inadequate to handle this onslaught even at their peak. They are nowhere near their peak anymore, having been deliberately eroded into a shadow of their former effectiveness so some assholes could get rich and powerful.

    So hopefully we’ll be able to rely on solid ground like Wikipedia and… books as a starting point, and nobody gets around to burning the Library of Alexandria down in their rage against “woke stuff”, because otherwise we’re going to be rebuilding our information spaces pretty much from scratch in the near future, probably at the same time we’re rebuilding civilized society in general. If this sounds incredibly uncertain, tedious, and painful: yes, it will be, especially at first. But we will get better at it, eventually. We will develop new systems for it, we will become fluent in information again, and the friction will fade.

    I wish we could get to that stage right away, but unfortunately it will have to wait. We can’t do anything to improve the swimming pool while we are currently drowning in it. This is the reality that the rampant and unchecked use of AI technologies by soulless corporations and corrupt governments has wrought. Logic and reason never stood a chance, and we are entering the digital dark ages. The enlightenment is probably coming someday, but don’t hold your breath for it.

    Support your local library, that’s the most helpful thing I can think of for individuals to do. Librarians know their shit.




  • Welcome. It’s smaller, but the people are better. Most of them, anyway. Sometimes some of them are brilliant and amazing, just like on Reddit. Sometimes some of them suck (sometimes including me) but it’s not so overwhelming because there aren’t as many. Small is beautiful, even if it is a little quieter than you’re used to. You’ll get used to it though, and it’s better for your mental health.


  • Yeah, I have no interest in becoming the 51st state, but I also have no interest in fighting a protracted guerilla war under US occupation, so I’d really strongly prefer the path where neither of those things has to happen and we can think about eventually reintegrating our economy into a symbiotic relationship with our friendly and cooperative neighbor.

    I’m not holding my breath for it, and I’m not sure we’re ever going to go back to the way things were. But rest assured we’re not looking forward to a future where an authoritarian US government holds a terrifying sword of Damocles over our heads either. We really, really want you guys to fix your shit, please. Best of luck, honestly. It matters a lot to us.

    Edit to add: And to be clear, I agree Gavin Newsom suuuuuuucks, if you guys end up with him as a presidential candidate we are all cooked.



  • A business is something owned and run by a real human, who may be an evil person but is still at least a person who can potentially be reasoned with and can suffer consequences for their actions. Sociopathic business owners absolutely do exist and are a real concern, but they are a manageable one, at least theoretically, at least when the entire system isn’t stacked in their favor.

    As you say, corporations are different (and they are a significant part of the reason the economy is stacked in favor of sociopaths instead of against them). They are only nominally run by a human, and typically only in a limited capacity, temporal or otherwise. A corporation is owned by its shareholders: an anonymous, nameless, faceless mob of pitchforks and torches, a group that is constantly shifting, amorphous and fluid, impossible to solidify into anything that can be pinned down, typically represented by bankers, fund managers, and balance sheets that want to look good for their eventual consumer so they can sell financial products to them. They are inherently amoral, and like any mob they can quickly turn from vicious to apathetic and back again at the prompting of single acts or actors, without any logical reason. The sociopaths, on the other hand, can easily take advantage of this, becoming the single actor or creating the single act that incites the mobs to riot or soothes them into complacency almost at will. As a result, they control the corporations, and thus the economy.


  • You’re absolutely right and I think more people need to understand this. What we now call “AI” refers to a lot of things that are not new and have been happening for decades, just without the complete and callous disregard for humanity that the current AI companies are exemplifying.

    The thing is, the things they were doing didn’t use to be called AI; nobody pretended machine learning models were intelligent. People really need to start learning more of the terminology around these technologies, because it’s important, even if you hate them. You might even learn that you don’t have to hate all of them, because they’re not all the same. AI is an ugly label being painted onto everything in software nowadays with broad strokes, and while a lot of it is deserved, there is significant room for nuance here, and people should endeavor to have a more nuanced understanding of the topic than they currently do. AI is not LLM, which is not GPT, which is not computer vision, which is not machine learning, which is not agentic coding, which is not tool usage, which is not generative AI, which is not chatbots, which is not sentience.

    “ALL AI: BAD!” is the logic of a simpleton. Don’t be a simpleton. Educate yourself, and begin to understand what about it is bad, because there is plenty about it that is very bad indeed. This technology is going to be transformative whether you love it or hate it. Even if it’s 100% terrible (and honestly it’s not) you still need to know your enemy. Trying to fight against something you don’t understand is the first step to losing.




  • The simple, maybe unhelpful answer is that fail2ban needs to have two things at once: the logs, and a way to block the network traffic.

    Where exactly you want those things to coincide is really up to you. There might only be one point that simultaneously has access to both, or there might be multiple points, depending on how your systems, services, and network are configured. Or, if you’re in a bad situation, you might find there isn’t any single point where both are simultaneously possible, in which case you’ll need to reconfigure something until you have at least one point where they are again coincident.

    As far as best practices, I can’t really say for sure, but one of the more convenient ways is to run it on the same system. I usually run it outside of Docker, on the host, which can pretty easily get access to the containers’ logs if necessary, and let fail2ban block traffic for the whole system. For me, any system running publicly accessible network services that allow password login gets a fail2ban instance.

    A whole-network approach where you block the traffic on the firewall is fine too, if that’s what you prefer and what you want to work towards, but it’s probably going to be significantly more complex to set up because now you need to either figure out how to get fail2ban to be able to access your firewall or a way for your firewall to get the logs it needs.
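    As a concrete illustration of the host-level setup, a minimal jail.local entry might look something like this. The jail name, port, and log path are placeholders for whatever your service actually uses, and the named filter has to exist in filter.d:

    ```ini
    # /etc/fail2ban/jail.local on the host (illustrative example)
    [myservice]
    enabled   = true
    port      = https
    filter    = myservice
    logpath   = /var/log/myservice/access.log
    maxretry  = 5
    findtime  = 10m
    bantime   = 1h
    ```

    The point being: the host sees both the log file and the firewall, so this is the "both things in one place" case.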




  • The reason nationalizing things doesn’t work is that it makes billionaires mad, and billionaires use their money to get revenge: they sabotage the shit out of the process, convince you it’s a failure with propaganda you don’t understand, and get you to elect a government that undoes the nationalization, dismantles it, or sabotages it further until it dies.

    The general public can be reliably played like a fiddle to go directly against their own best interests, at full speed. Case in point: Trump. Also Reagan. Also all politics ever, really. “I love the poorly educated” is quite possibly the only true thing Trump has ever said.