
  • 43 Posts
  • 6.48K Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • Let’s set all ethics and politics and humanitarianism aside.

    From a selfish, strategic, “NonCredibleDefense” kind of perspective, Ukraine is an incredible military power. Militaries around the world should be on their knees begging for their experience, if not their units: shooting down drones, for instance.

    And it’s absolutely mind-boggling to me that Western powers don’t see that. Think how Ukraine could help them militarily all over the world… If I were Xi, I wouldn’t be so eager to invade a Taiwan with a big contingent of Ukrainians training its forces.



  • Of course it can be discussed…

    I’m really into (open weights) genAI myself, have been for years, but at the same time I’m under no illusion the space is clean. The vast majority of services are scams, many open source AI projects are autogenerated slop from someone with AI psychosis (if not outright Tech Bro scams), and that’s not even touching on what Big Tech is pushing.

    What I’m asserting is that a fat slab of skepticism is healthy in this kind of space. Be an enthusiast, not a believer. I know much less about blockchain, so perhaps I was a little zealous in judgement, but something about this project just raised a lot of red flags in my head like scam-adjacent AI projects do.


    Another thing is that the blockchain scams haven’t gone away, and in ten years they probably will still stubbornly persist. GenAI is going to be the same.


  • Corporate, for now.

    Thing is, once they’re out there, they’re free utilities, and they can’t be taken back.

    Also, they don’t really need to aggressively scrape the internet. There are many good public datasets now, and the Chinese are already making excellent use of synthetic dataset generation on (relative) shoestring budgets. Also, several nations and other large organizations are already funding open model efforts, but they just haven’t had the opportunity to catch up yet.
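To make the synthetic-dataset point concrete, here is a toy Python sketch of the basic pipeline shape: expand a few seed topics through prompt templates and have a teacher model fill in responses. The teacher here is a stub function standing in for a real model call; the topics, templates, and helper names are all invented for illustration.

```python
import json
import random

def teacher_model(prompt: str) -> str:
    # Stand-in for a real teacher-LLM call (e.g. an open-weights model
    # served locally); here we just echo a canned answer.
    return f"Answer to: {prompt}"

def generate_synthetic_dataset(seed_topics, n_per_topic=2, seed=0):
    """Expand a handful of seed topics into prompt/response pairs —
    the basic shape of a synthetic-data pipeline."""
    rng = random.Random(seed)
    templates = [
        "Explain {t} to a beginner.",
        "List three common mistakes when learning {t}.",
        "Write a short quiz question about {t}.",
    ]
    dataset = []
    for topic in seed_topics:
        for template in rng.sample(templates, n_per_topic):
            prompt = template.format(t=topic)
            dataset.append({"prompt": prompt,
                            "response": teacher_model(prompt)})
    return dataset

data = generate_synthetic_dataset(["linear algebra", "soldering"])
print(json.dumps(data[0], indent=2))
```

The real versions of this loop vary the templates, filter bad generations, and swap the stub for a strong model, but the cheap multiplicative structure (few seeds, many pairs) is why it works on shoestring budgets.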


  • brucethemoose to Science Memes@mander.xyz · “how things become science” · 41 upvotes / 1 downvote · 22 hours ago

    That’s pretty much what local ML is.

    If open weights LLMs take off, and business users realize they can just finetune tiny specialized models for stuff, OpenAI is toast. All of Big Tech’s bets are. It’s why they keep fanning the “AGI” lie, and why they’re pushing for regulation so hard, why they’re shoving LLMs where they just don’t fit and harping on safety.
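To illustrate what a “tiny specialized model” can look like, here is a toy sketch of the economics described above: a small task-specific classifier (here, routing support tickets) trained from scratch in plain Python, no giant general model required. The task, vocabulary, and training data are all made up for the example.

```python
import math

# Tiny bag-of-words feature space for one narrow business task.
VOCAB = ["refund", "invoice", "crash", "error", "login", "password"]

def featurize(text):
    words = set(text.lower().split())
    return [1.0 if w in words else 0.0 for w in VOCAB]

# Label 0.0 = billing, 1.0 = technical (invented training data).
TRAIN = [
    ("please process my refund", 0.0),
    ("where is my invoice", 0.0),
    ("the app will crash on start", 1.0),
    ("login error after my password reset", 1.0),
]

# Plain gradient descent on the logistic loss.
w = [0.0] * len(VOCAB)
b = 0.0
for _ in range(500):
    gw = [0.0] * len(VOCAB)
    gb = 0.0
    for text, label in TRAIN:
        x = featurize(text)
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = 1.0 / (1.0 + math.exp(-z)) - label
        gw = [g + err * xi for g, xi in zip(gw, x)]
        gb += err
    w = [wi - 0.5 * g / len(TRAIN) for wi, g in zip(w, gw)]
    b -= 0.5 * gb / len(TRAIN)

def route(text):
    z = sum(wi * xi for wi, xi in zip(w, featurize(text))) + b
    return "technical" if z > 0 else "billing"

print(route("crash with an error code"))      # technical
print(route("need a refund on this invoice"))  # billing
```

A real deployment would finetune a small open-weights LLM rather than a bag-of-words classifier, but the cost structure is the point: the whole thing trains in milliseconds on a CPU.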




  • To illustrate what I mean more clearly, look at the top comments/replies for the NASA Artemis posts, as an example.

    …It’s basically all conspiracy theorists, and government skeptics.

    Twitter focuses the Artemis posts on them because that’s what they want to see, and what’s most engaging for them.

    In the EFF’s case, I’m not just talking about Musk’s influence. The algorithm will only show the EFF to users who would be highly engaged by it. E.g., angry skeptics who wouldn’t be swayed by the EFF anyway, or fans who already agree with the EFF. It’s literally not going to show the EFF to people who need to see it, as Twitter’s metrics would show it as unengaging.


    This is the “false image” I keep trying to dispel. Twitter is less and less an “even spread” of exposure like people think it is (and like it sort of used to be), and more and more a hyper-focused bubble of what you want to hear, and only what you want to hear. All the changes Musk is making amplify that. Maybe that’s fine for some orgs, but there’s no point in the EFF staying in that kind of environment, regardless of ethics.
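A toy sketch of the dynamic being described, with invented users and engagement scores: a feed that ranks purely by predicted engagement never surfaces an account to the lukewarm users who might actually be persuaded by it.

```python
# Predicted engagement per user per item (all numbers invented).
users = {
    "angry_skeptic": {"EFF": 0.9,  "memes": 0.8, "outrage": 0.95},
    "eff_fan":       {"EFF": 0.95, "memes": 0.3, "outrage": 0.2},
    "persuadable":   {"EFF": 0.2,  "memes": 0.9, "outrage": 0.6},
}

def feed(user, k=2):
    """Return the top-k items by predicted engagement for this user."""
    scores = users[user]
    return sorted(scores, key=scores.get, reverse=True)[:k]

for u in users:
    print(u, feed(u))
# The "persuadable" user never sees the EFF item: the ranker predicts
# low engagement, so it is filtered out before it can change any minds.
```

Only the already-engaged (fans and angry skeptics) get the EFF in their feed, which is exactly the bubble effect: engagement-optimal, persuasion-useless.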





  • That’s the issue; hydrogen has none of the inertia of fossil fuel ICE, most of the drawbacks, and a ton of its own unique issues (like bulky/dangerous storage of hydrogen). It won’t be cheaper than ICE. It won’t be cheaper than EV-drivetrain hybrids, either.

    And even if hydrogen cars were somehow cheaper, why spend billions setting up hydrogen infrastructure? We already have gasoline. Or natural gas fuel cells, if that’s the tech angle.

    Hydrogen was an interesting “transition” fuel like two decades ago, when electric drivetrains and power transmission were less advanced and leaders thought populations would care about climate change. But society collectively decided to just ignore it and keep using gas, so I think that window has passed.









  • To add to what others said:

    LPDDRX is used in some inference hardware; it’s the same stuff you find in laptops and smartphones.

    Also, the servers need a whole lot of regular CPU DIMMs, since they’re still mostly EPYC/Xeon servers with 8 GPUs in each. And why are they “wasting” so much RAM on CPU memory that isn’t really needed, you ask? Same reason as a lot of AI: it’s immediately accessible, already targeted by devs, and AI dev is way more conservative and wasteful than you’d think.

    Same for SSDs. Regular old servers (including AI servers) need them too. In a perfect world they’d use centralized storage for images/weights with near-“diskless” inference/training servers. Some AI servers do this, but most don’t.


    Basically, the waste is tremendous, for the same reason they use cheap gas generators on-site: it’s faster-to-market.