
buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Recent posts by users in this instance

1 ★ 0 ↺

[?]Anthony » 🌐
@abucci@buc.ci

@davidgerard@circumstances.run Yes, they have been at this a long time. I was still running my startup at that point and remember laughing about this when it came out. A few people took it seriously and were like 🤯 . GAN-based image generators were popping up on the web around that same time and people were all 🤯 about those too. A preview of things to come.

    1 ★ 1 ↺
    AI Channel boosted

    [?]Anthony » 🌐
    @abucci@buc.ci

    Meanwhile on the dystopia beat: deledating!
    Lush @LushAIAgency
    Today, we’re excited to announce the public release of LetsMatch.ai - the world’s first agentic AI dating platform, powered by Lush’s infrastructure.
    ...
    You get an AI agent built directly into your social media (starting with Instagram) that acts as your personal wingman and matchmaker - booking you real dates on autopilot while you live your life.
    ...
    We believe the future of dating isn’t swiping - it’s delegating.
    From https://nitter.net/LushAIAgency/status/2042233869986845074#m (twitter/X)

    Apparently they don't think relating to other human beings is part of living.


      6 ★ 0 ↺

      [?]Anthony » 🌐
      @abucci@buc.ci

      @davidgerard@circumstances.run @spzb@infosec.exchange Oh, so it's another ruse to sucker people into training their AI for them for no pay?

        1 ★ 0 ↺

        [?]Anthony » 🌐
        @abucci@buc.ci

        @yoginho@spore.social I wonder a lot about what you dubbed phil-washing. Anyone who pushes on tech hard enough recognizes that it's a bit of an intellectual husk. There's no grounding, no there there, for the vast majority of it (in my opinion of course). Wouldn't it be lovely if one of the tech giants came up with a philosophy! Wouldn't it be remarkable if it came from a division named "deep mind"!

          0 ★ 0 ↺

          [?]Anthony » 🌐
          @abucci@buc.ci

          I just inadvertently put a pair of headphones on my computer keyboard and by the time I picked them up again there were 51 screenshots taken of the same window. Oops.

          1 ★ 0 ↺

          [?]Anthony » 🌐
          @abucci@buc.ci

          @yoginho@spore.social
          I'm positively surprised to see so much sense coming out of , for a change. What's going on?
          A guess? Critihype: hype of their viewpoints and methods clothed in what appears to be criticism, in hopes that people like us spread it (thereby achieving the goal of hyping themselves). Google has regularly done just this for roughly two decades at this point. I won't read corporate PR wrapped in a lab coat from such clearly compromised labs with obvious, deep conflicts of interest. However, that'd be my guess. A lot of these folks fight amongst themselves about whether AGI is possible, whether doom or utopia will result from attempts at creating it, and other religious nonsense.

            0 ★ 0 ↺

            [?]Anthony » 🌐
            @abucci@buc.ci

            @dingemansemark@scholar.social Oh wow thanks for sharing this, going to dig in when I get a chance next!

              3 ★ 4 ↺
              Anthony boosted

              [?]Anthony » 🌐
              @abucci@buc.ci

              This preprint on arXiv from CMU, Oxford, MIT and UCLA adds to the growing list of harms in which overuse of AI tools is implicated.

              I would suggest that folks who think using AI is great for mathematicians should think again. It seems as little as 10 minutes of use can be problematic. What else do we know of that trades short-term gains for long-term losses?

              Here, through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence for two key consequences of AI assistance: reduced persistence and impairment of unassisted performance. Across a variety of tasks, including mathematical reasoning and reading comprehension, we find that although AI assistance improves performance in the short-term, people perform significantly worse without AI and are more likely to give up. Notably, these effects emerge after only brief interactions with AI (approximately 10 minutes). These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning.
              From AI Assistance Reduces Persistence and Hurts Independent Performance, on arXiv https://arxiv.org/abs/2604.04721


                2 ★ 0 ↺

                [?]Anthony » 🌐
                @abucci@buc.ci

                @davidgerard@circumstances.run @atax1a@infosec.exchange Imagine having a conversation with the main character from Memento and he's furiously writing shorthand of everything you say onto his arms to maintain the thread.

                  2 ★ 1 ↺

                  [?]Anthony » 🌐
                  @abucci@buc.ci

                  @nielsa@mas.to @olivia@scholar.social Thanks for sharing! This is thought-provoking. Here's one reaction I had, for what it's worth.

                  I grew up in rural Pennsylvania, and though the sex ed then and there was a tiny bit better than what this author describes experiencing in Arkansas, it was not by much. I wonder sometimes whether non-Americans grasp how backwards and regressive US culture can be. Anyway, an attempt to adapt the rhetoric of abstinence-only sex "education" to shame AI critics is complicated for this reason. E.g., arguments about the need to abstain from use of AI might actually work on some people. It might fall flat or even raise the ire of others who had bad experiences. Putting on my evil tech marketer hat, I'd avoid this frame because of the complexity and unpredictability about how it might land (I don't actually have this hat). There are probably some effective wedges to drive here, and I think the linked article hits on one.

                    0 ★ 0 ↺

                    [?]Anthony » 🌐
                    @abucci@buc.ci

                    We all have good ethical and political reasons to reject the president’s words. But those who serve in government, and in the armed forces, have been placed under the legal shadow of genocide by what Trump wrote. To bomb a bridge or a dam or a power plant or a desalinization facility, very likely a war crime in any event, could very well have a different legal significance, a genocidal one, if it takes place after the expression of genocidal intent by the commander and head of state.
                    From https://snyder.substack.com/p/the-president-speaks-genocide


                      1 ★ 0 ↺

                      [?]Anthony » 🌐
                      @abucci@buc.ci

                      Trump and Vance telegraphing significant weakness and fear today.


                        3 ★ 1 ↺
                        Sely Friday boosted

                        [?]Anthony » 🌐
                        @abucci@buc.ci

                        Part of the yard right now.


                        Snowy scene. Sky is overcast and fat snowflakes are visible in the foreground. An increasingly snow-covered yard with a forsythia bush, pin cherry tree, a wrapped fig tree. Pine and bare deciduous trees in the background.


                          0 ★ 1 ↺

                          [?]Anthony » 🌐
                          @abucci@buc.ci

                          The snow is pretty. The prospect of shoveling it, not so much. It's above freezing so I could wait for it to melt, but we need the driveway clear today.


                            4 ★ 2 ↺
                            #tech boosted

                            [?]Anthony » 🌐
                            @abucci@buc.ci

                            Re: LB:
                            What appears as critique – yearning for smaller, weirder, more human spaces – often functions as brand repair. Netstalgia becomes a strategy: it restores trust without redistributing power, softens anger without changing infrastructures and reframes structural problems as matters of vibe, design or community feeling.
                            "Am I working on change, or am I working on brand repair?" is an important question to ask oneself regularly, it seems to me. It's especially relevant for the tech sector, open source, and computer science.


                              2 ★ 0 ↺

                              [?]Anthony » 🌐
                              @abucci@buc.ci

                              @nielsa@mas.to @olivia@scholar.social Likewise---I tend to be blunt, I guess, but I am legitimately interested in pushing ideas forward, especially with regard to this AI situation we've all been thrust into. Thank you for sticking with it.

                                1 ★ 1 ↺

                                [?]Anthony » 🌐
                                @abucci@buc.ci

                                @nielsa@mas.to Maybe there's a cultural thing here. I am in and from the US, where "purity test"---like the Wikipedia article Olivia linked---is very frequently used in bad faith to shut down discussion and conversation, especially by the powerful when anyone challenges them from a more principled position. For instance, the Democratic party here---a corporate, centrist or center-right party---frequently accuses anyone with left-leaning politics of unreasonably demanding ideological purity, thereby distracting from the core policy debate.

                                I've seen similar language around AI, which is also a project of the powerful, used to stifle reasonable debate about this technology. So, I'm quite sensitive to this rhetoric.

                                  2 ★ 0 ↺

                                  [?]Anthony » 🌐
                                  @abucci@buc.ci

                                  @nielsa@mas.to @olivia@scholar.social Now that I've read all the interactions after your thread I think I understand you, yes. I don't think you're speaking in bad faith, slinging dogwhistles, or anything like that. I'm sorry if I wasn't clearer about that earlier.

                                    0 ★ 0 ↺

                                    [?]Anthony » 🌐
                                    @abucci@buc.ci

                                    Long post [SENSITIVE CONTENT] @phnt@fluffytail.org
                                    In the before times (5+ years ago), very few cared who was joining the network. (Notice the "network", this place isn't Mastodon and never was.) When someone joined, it was seen as a good thing no matter who that was, because it made the network larger, the decentralization was spreading. But in the last 5 years, the goals seemingly shifted. Suddenly more people on here turned to a bad thing, a decentralized network meant to allow anyone to have a voice turned into a fractured space of gatekept echo-chambers with very little bridges between them. Some might say, that is the result of not gatekeeping the today's gatekeepers, but I don't really care and still mostly have the old mindset in my mind. It is more of a reflection on how humanity changed.
                                    I've been using "the network" since the days of USENET, 1990 onward, and I can attest that, at least in my experience, none of this rings true even a little.

                                    Even so, the discourse I'm responding to is about Mastodon, not about some nebulous or idealized "network". Goalpost shifting is not constructive.

                                    Nobody has "power" here
                                    Of course we do. I have the power to block whoever I want and whichever hashtags I want, for instance. I also have the power to restrict who registers an account on my fediverse instance. You are not permitted to join my instance, and in that sense I very much have power over you: I am able to restrict your liberty. You may not want an account and I don't blame you, but that doesn't change the equation.

                                    I said nothing about excluding people from the network. I literally said "excluding people and topics they don't wish to interact with". You seem to be arguing against something that wasn't said, which is not constructive.

                                    Oh, and if anyone cares, my little gatekept and bridgeless corner of the fediverse is quite lovely, thanks, and grand proclamations about fractured spaces or whatnot have no bearing whatsoever on this simple reality.

                                    excluding anyone from this network is equivalent for both cases, marginalized groups and "AI people".
                                    These are obviously not equivalent in any sense that matters. You might as well include "people who love putting topsoil on their pizza" as a marginalized group because someone said "eww" once. Superficial associations like this sound disingenuous to my ears, and in any case are not constructive.

                                    And that it isn't healthy.
                                    Why would excluding "AI people" in particular be unhealthy? What exactly are the ill effects?

                                      2 ★ 0 ↺

                                      [?]Anthony » 🌐
                                      @abucci@buc.ci

                                      @nielsa@mas.to If you're suggesting I've come at you with bad intent, I'd offer that this is a great way to convince me that your own words are being offered in bad faith. I think my read of your posts was a reasonable one even if it is not what you intended to express. I know it's frustrating to be misread but that's one reason we interact, isn't it? To clarify?

                                      And absolutely I've seen a bunch of people say rude stuff to @olivia@scholar.social on here. Ugly stuff, undeserved.

                                        4 ★ 2 ↺
                                        emenel boosted

                                        [?]Anthony » 🌐
                                        @abucci@buc.ci

                                        Re: https://social.coop/users/scottjenson/statuses/116352800579635299
                                        Yes, a lot of you don't want AI posts in your feed (or pick any other topic) but the solution isn't to keep "AI People" from joining Mastodon
                                        If this were not a disingenuous strawman---because it's impossible for one thing---I'd ask "why not?" I wouldn't invite the "AI People" I've encountered into my house either, because I've found them to be unpleasant and I get to choose who enters my space. This solution has worked quite well for me over the years.

                                        It seems to me that what this person is saying is that people should give up the power they have---namely, their power to exclude people and topics they don't wish to interact with---because it favors them. That's a typical rhetorical move of AI boosters: demanding you give up your power because you having and exercising that power inconveniences them.

                                        any more than it is keeping marginalized communities off of Mastodon.
                                        One should ask why this person chose to use the most offensive possible metaphor to make their case for inclusion. It's almost as though they don't believe the argument their words are shaped into resembling.


                                          16 ★ 5 ↺

                                          [?]Anthony » 🌐
                                          @abucci@buc.ci

                                          @nielsa@mas.to @olivia@scholar.social
                                          avoidance purity is incompatible with increasing AI literacy
                                          "Avoidance purity" is both a strawman and a dogwhistle. Nobody serious is doing either of these things, and a lot of bad actors use this phrase to cudgel people into submission or sow doubt. A strange take, frankly.

                                          That said, the conclusion is false. I practice an extreme form of avoidance purity when it comes to experimenting with whether murder would enhance my life. Nevertheless, I am "murder literate". I contend the overwhelming majority of folks can say the same.

                                          (I recognize that I too am whacking a strawman, but this is for effect; the point gestured at stands regardless).

                                            5 ★ 3 ↺

                                            [?]Anthony » 🌐
                                            @abucci@buc.ci

                                            Watching someone share their screen on Zoom and seeing the dozen AI-related icons, tabs, and popups polluting every application and web app is pretty amazing. It is so much like intrusive advertising, both in how invasive it is and in how much it degrades the general experience. I don't use any of these apps, and I aggressively use uBlock Origin rules to block every icon, widget, and box having to do with AI on every website I regularly visit. Having made all these rules over time, I had a sense for how bad it was getting, but until today I hadn't experienced the full frontal assault of AI cruft the tech world has gifted us.
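                                            A minimal sketch of what rules like these can look like, using uBlock Origin's cosmetic filter syntax (added under "My filters"); the domains and CSS selectors below are hypothetical illustrations, not the author's actual rules:

                                            ```
                                            ! Lines starting with "!" are comments
                                            ! "##" hides elements matching a CSS selector on the named site
                                            example.com##div[class*="ai-assistant"]
                                            ! CSS attribute selectors work, including the case-insensitive "i" flag
                                            example.com##button[aria-label*="copilot" i]
                                            ! "###id" hides an element by its id on a hypothetical docs site
                                            docs.example.org###ai-chat-widget
                                            ```

                                            Cosmetic filters like these only hide elements client-side; they don't stop the underlying scripts from loading.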


                                              1 ★ 0 ↺

                                              [?]Anthony » 🌐
                                              @abucci@buc.ci

                                              I don't know Zoom; maybe you should open another browser tab and application window. I didn't see the first four.


                                                0 ★ 0 ↺

                                                [?]Anthony » 🌐
                                                @abucci@buc.ci

                                                @lienrag@mastodon.tedomum.net It's fine if you don't like chess tournaments, but a lot of other people do, and the point was that a computer beating Kasparov did not change this.
                                                (which led to the suicide of a good player recently).
                                                Are you referring to Daniel Naroditsky?

                                                  5 ★ 4 ↺

                                                  [?]Anthony » 🌐
                                                  @abucci@buc.ci

                                                  @cwebber@social.coop @mttaggart@infosec.exchange No one with a moral compass would respond positively to someone purposely polluting a community and then writing about how they managed to make a fancy chemical that has positive uses but felt bad about it in the process. It is time to take a principled ethical stance against any use of this technology and refuse to use it altogether until it can be created and deployed in an ethical manner. It's been years at this point. There are thousands of posts and articles like this. We don't need more.

                                                    2 ★ 4 ↺
                                                    Literbook boosted

                                                    [?]Anthony » 🌐
                                                    @abucci@buc.ci

                                                    I cosign this sentiment from Ross Barkan's Substack, and would add that it extends to software development as well:
                                                    I’ve made this point before about how inane AI hype is now, but a computer beat the best chess player in the world in 1997. No one pretended, after 1997, it wasn’t worthwhile to have humans compete in chess. In fact, the world of chess developed strict protocols around computer use and you can get banned from tournaments if you use a computer program as you play. You are certainly shamed and mocked.

                                                    AI and writing needs to be treated the same way. I do think people should be shamed for using AI to help them write creatively. It’s an embarrassment, and a form of cheating.


                                                      2 ★ 0 ↺

                                                      [?]Anthony » 🌐
                                                      @abucci@buc.ci

                                                      @ngaylinn@tech.lgbt I don't know if this explains your specific experience, but generally speaking there's a fairly large loophole in pharma regulation that leads to stuff like this. Basically, if you produce a pharmaceutical or bio-based product, you can artificially extend its patent by combining it with a digital artifact and rebranding it as a medical device. It's also cheaper, faster, and easier to obtain FDA approval for medical devices than for other types of products. I've come across this numerous times in my consulting work as well as at the startup I cofounded (where I heard this strategy explained from the horse's mouth, so to speak).

                                                      I agree completely that none of this is good!

                                                        1 ★ 0 ↺

                                                        [?]Anthony » 🌐
                                                        @abucci@buc.ci

                                                        @ObsidianUrbex@mstdn.social Wow! Are you able to share where this is? I grew up in PA!

                                                          28 ★ 11 ↺
                                                          Ghostrunner boosted

                                                          [?]Anthony » 🌐
                                                          @abucci@buc.ci

                                                          Microsoft rushed Azure out of the gates under intense competitive pressure. Corners were cut. Fundamental principles of reliability and operational simplicity were quietly abandoned.
                                                          Meaning all this came straight from the top. "Intense competitive pressure" is self-induced.

                                                          Fortunately this kind of thing, rushing software out the door under self-induced competitive pressure, doesn't happen anymore. Organizations have learned their lessons about the perils of operating this way. (/s)

                                                          Layered on this chaos was an Azure-wide mandate: all new software must be written in Rust.
                                                          LOL

                                                          On a more serious note: LMAO

                                                          On top of all that, the org had a hard commitment to deliver the already long-delayed OpenAI bare-metal SKUs that had been promised for years. This work started around May 2024 with a target of Spring 2025 and was led by a Principal engineer who had evidently never tackled a task of that scale.

                                                          Fast-forward to March 10, 2025: OpenAI signed an $11.9 billion compute deal with CoreWeave for model training and services.

                                                          This detail really struck me. Microsoft's deep internal dysfunction drove OpenAI right into the outstretched arms of Datacenter Enron.

                                                          An unbelievable series---thanks for sharing. I feel like @davidgerard@circumstances.run could make quite a bit of hay out of this one.