- 5 Posts
- 1.14K Comments
pixxelkick to CoupleMemes@sh.itjust.works • You are beautiful, you just need someone who truly values you • English • 191 • 9 hours ago
I actually dislike this.
The narrative that your value comes from external validation is one you see a lot.
But a person shouldn't be told "you are beautiful based on another person's perspective," because that cuts both ways.
If you tell them this, it can easily be flipped around: "okay, but there's 10 million people on the internet who would love to call you ugly, so that's 10 million to 1."
Instead, the only person whose opinion on your body matters is YOU, and that's it.
And I'll keep banging that drum.
If I have a daughter, I'm gonna tell her this all the time: "tell anyone who tries to convince you that beauty is in the eye of the beholder that the beholder oughta keep their fuckin opinions to themself."
There's also a massive distinction between consuming something necessary/important vs consuming something 100% optional.
Harry Potter isn't food, shelter, or any other kind of critical necessity.
There are literally countless better alternatives to Harry Potter media you can choose to consume that don't directly put money straight into the pocket of someone actively funding direct harm.
This isn’t multiple layers of washing here, that money basically goes straight towards actively harming minority groups.
It's not even a good fucking book. I used to be a fan of it as a kid, but I went back and read my old books and… it just fuckin sucks dawg, it's not good lol.
Go pick like, any other fandom at least.
I mean, for what it's worth, if the dude has a partner they could've just picked the kid up. It's not exactly unheard of for one partner to do drop-off and the other do pick-up… kind of a lukewarm take here.
pixxelkick to Technology • Number of AI chatbots ignoring human instructions is increasing — Research finds sharp rise in models evading safeguards and destroying emails without permission • English • 1 • 22 hours ago
The difference, when the tool is used correctly, is so massive that only someone deeply uninformed or naive would dispute it.
I got about 4 entire days' worth of work completed in about 5 hours yesterday at my job; that's just objective fact.
Tasks that used to take weeks now take days, and tasks that used to take days now take hours. There's no "feeling" about this; I've been a software developer professionally for approaching 17 years. I know how long it takes to produce an entire gamut of integration tests for a given feature. I spend almost all of my time now reviewing mountains of code (which is fairly good quality, the machines produce fairly accurate results), and then a small amount of time refining it.
People deeply misunderstand how dramatically the results have changed over the past 2 years; their biases are based on how things were 2 years ago.
Sure, 2 years ago the quality was way worse, the security was bad, the enforcement was almost nonexistent, and people's overall skill with the tools was just beginning to grow. You can't exactly be good at using a tool that only just came out.
But it's been two years of very rapid improvement. It's good now. Anyone who has been using these tools and actually monitoring their progression can speak to this.
Things heavily shifted about 5 months ago when competition started to really fire up between different providers. I won't say it's even close to great yet, but it's definitely good, it works, it's fast, and it's pretty damn good at what I need it to do.
pixxelkick to World News • JUST IN: Iran Ends Direct Talks With US After Trump Threatens to Destroy 'Whole Civilization' • English • 64 • 3 days ago
Go find the richest person in your local city.
You have one, they exist. Probably several in the same area.
Make it their problem.
pixxelkick to World News • JUST IN: Iran Ends Direct Talks With US After Trump Threatens to Destroy 'Whole Civilization' • English • 91 • 3 days ago
We aren't in active physical danger.
You aren't in perceived active physical danger.
If Trump launches a nuke, boy, that sure will change lickety-split tho.
pixxelkick to Technology • Number of AI chatbots ignoring human instructions is increasing — Research finds sharp rise in models evading safeguards and destroying emails without permission • English • 1 • 3 days ago

> You know programmers who use LLMs believe they're much more productive because they keep getting that dopamine hit, but when you actually measure it, they're slower by about 20%.
Everyone keeps citing this preliminary study and ignores:
- It's old now
- Its sample size was incredibly tiny
- Its sample group consisted of developers who weren't using proper tooling or trained in how to use the tools
It's the equivalent of taking 12 seasoned carpenters with very little experience in industrial painting, handing them industrial-grade paint guns that are misconfigured and uncalibrated, asking them to paint some of their work, watching them struggle… and then going "wow, look at that, industrial-grade paint guns are so bad."
Anyone with any sense should look at that and go "that's a bogus study."
But people with intense anti-AI bias cling to that shoddy-ass study with such religious fervor. It's cringe.
Every professional developer with actual training and actual proper tooling can confirm that they are indeed tremendously more productive.
I find this direction is only sought by half-baked devs who aren't bothering to actually proofread the stuff their agents churn out.
They “trust” it without any true proof of trust.
Agents are INCREDIBLY prone to fudging and faking “success” metrics, especially when put under context pressure.
I've seen everything from commenting out tests to fake passes, to changing the asserts on tests to fake a success, to just slapping "to be implemented later" on it and calling that done.
You fundamentally cannot automate away proving that an agent actually did its job right, full stop. You can make it write tests, but now how do you know the tests were written right?
At some point you HAVE to actually sit and read the code, read the diffs, and check the work. If you don’t, you are opening yourself up to all manner of problems, especially if whatever you are working on is remotely sensitive. If the tool/app/whatever has any kind of auth or handles any kind of sensitive data, you MUST still be auditing every change.
And thus, the IDE continues to be the tool I prefer for sitting and sanity-checking the code as it gets produced.
Doesn't matter which one I use; I need the ability to live-read and diff code and steer the agent away from disaster.
If you blindly trust agents without constantly auditing their code, you are just setting yourself up for failure.
pixxelkick to Technology • Number of AI chatbots ignoring human instructions is increasing — Research finds sharp rise in models evading safeguards and destroying emails without permission • English • 1 • 4 days ago

> Lovely Anthropic MCP. Make sure you give Anthropic lots of money and use their tools.
It's becoming clear you have no clue wtf you are talking about.
Model Context Protocol is a protocol, like HTTP or JSON.
It's just a format for data, one that is open source and anyone can use. Models are trained to be able to invoke MCP tools to perform actions, and anyone can just make their own MCP tools; it's incredibly simple and easy. I have a pretty powerful one I personally maintain myself.
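Under the hood, MCP tool calls are just JSON-RPC messages. As a loose, stdlib-only illustration of why writing your own tool is easy (this is the general shape only, not a spec-conformant MCP server, and the `shout` tool is made up for the example):

```python
import json

# Hypothetical tool registry: name -> handler. "Making your own tool"
# amounts to adding an entry here.
TOOLS = {
    "shout": lambda args: args["text"].upper(),
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style 'tools/call' request to a registered tool."""
    req = json.loads(raw)
    params = req["params"]
    result = TOOLS[params["name"]](params["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# The kind of message a model emits when it decides to invoke a tool:
request = json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {"name": "shout", "arguments": {"text": "hello"}},
})
response = handle_request(request)
```

The real protocol layers capability negotiation, tool schemas, and transports on top, but the core loop really is this small.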
Anthropic doesn't make any money off me. In fact, I don't use any of their shit, except maybe whatever licensing fees Microsoft pays them to use Claude Sonnet; Microsoft Copilot is my preferred service overall.
> I bet you your contract with them says they're not liable for shit their LLM does to your files.
Setting aside the fact that I don't even use Anthropic's tools, my Copilot LLMs don't have access to my files either. Full stop.
The only context in which they do have access to files is inside the aforementioned Docker-based sandbox I run them in, which is an ephemeral, immutable system. They can do whatever the fuck they want inside it, because even if they manage to delete /var/lib or whatever, I click 1 button to reboot and reset it back to a working state.
The workspace directory they have access to has read-only git access, so they can pull and do work, but they literally don't even have the ability to push. All they can do is pull in the stuff to work on and work on it.
After they finish, I review what changes they made and only I, the human, have the ability to accept what they have done, or deny it, and then actually push it myself.
This is all basic shit using tools that have existed for a long time, some of which are core principles of Linux and have existed for decades.
Doing this isn't that hard, it's just that a lot of people are:
- Stupid
- Lazy
- Scared of Linux
The concept of "make a Docker image that runs an 'agent' user in a very low-privilege env with write access only to its home directory" isn't even that hard.
It took me all of 2 days to get it set up personally, from scratch.
But now my sandbox literally doesn't even expose the ability to do damage to the LLM; it doesn't even have access to those commands.
Let me make this abundantly clear if you can't wrap your head around it:
LLM agents that I run don't even have the executable commands exposed to them that can cause any damage. They literally don't have the ability to do it, full stop.
And it wasn't even that hard to do.
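For concreteness, this is roughly the shape of `docker run` invocation I'm describing: a sketch assuming a read-only root filesystem, an unprivileged user, and a throwaway container. The image name, user name, and paths are hypothetical, not a prescription:

```python
def sandbox_argv(image: str, workspace: str) -> list[str]:
    """Build a `docker run` command for a throwaway, low-privilege agent sandbox."""
    return [
        "docker", "run",
        "--rm",                    # ephemeral: container is gone on exit
        "--read-only",             # root filesystem is immutable
        "--user", "agent",         # unprivileged user baked into the image
        "--tmpfs", "/home/agent",  # the only writable spot, wiped on restart
        "-v", f"{workspace}:/home/agent/work",  # the one directory it works in
        image,
    ]

# Hypothetical image and workspace path, just to show the shape:
cmd = sandbox_argv("agent-sandbox:latest", "/srv/agent-workspaces/job-1")
```

I've left the git-remote side out; the point is just that the blast radius is the container, and even that resets on restart.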
pixxelkick to Technology • Number of AI chatbots ignoring human instructions is increasing — Research finds sharp rise in models evading safeguards and destroying emails without permission • English • 1 • 4 days ago

> You'll be the 4753rd guy with the "oops my LLM trashed my setup and disobeyed my explicit rules for keeping it in check"
Read what I wrote.
It's not a matter of "rules" it "obeys."
It's a matter of it literally not even having access to do such things.
This is what I'm talking about. People are complaining about issues that were solved a long time ago.
People are running into issues that were solved long ago because they are too lazy to use the solutions to those issues.
We now live in a world with plenty of PPE in construction and people are out here raw dogging tools without any modern protection and being ShockedPikachuFace when it fails.
The approach of "I'm gonna tell the LLM not to do stuff in a markdown file" is tech from like 2 years ago.
People still do that. Stupid people who deserve to have it blow up in their face.
Use proper tools. Use MCP. Use a sandbox environment. Use whitelist opt in tooling.
Agents shouldn’t even have the ability to do damaging actions in the first place.
pixxelkick to Technology • Number of AI chatbots ignoring human instructions is increasing — Research finds sharp rise in models evading safeguards and destroying emails without permission • English • 1 • 4 days ago
The only people who have these issues are people who are using the tools wrong or poorly.
Using these models in a modern tooling context is perfectly reasonable, going beyond just guard rails and instead outright only giving them explicit access to approved operations in a proper sandbox.
Unfortunately that takes effort, know-how, skill, and an understanding of how these tools work.
And unfortunately a lot of people are lazy and stupid, and take the “easy” way out and then (deservedly) get burned for it.
But I would say, yes, there are safe ways to grant an LLM "access" to data in a way where it does not even have the ability to muck it up.
My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is cause its Docker instance to crash.
And then setting up, via MCP tooling, an explicit opt-in whitelist of the commands and actions it can perform. It can only run commands I give it access to.
Example: I grant my LLMs access to git commit and status, but not rebase or checkout.
Thus it can only commit stuff forward; it can't change branches, rebase, or push.
This isn't hard imo, but too many people just yolo it and raw-dog an LLM on their machine like a fuckin idiot.
These people are playing with fire imo.
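The gate for that commit/status example is a few lines. A minimal sketch, assuming Python: the function name and error choice are mine, and a real MCP tool would hand the validated argv on to `subprocess.run`:

```python
import shlex

# Opt-in whitelist: everything not listed here is denied by default.
ALLOWED_GIT_SUBCOMMANDS = {"commit", "status"}

def validate_agent_git(command: str) -> list[str]:
    """Check an agent-requested git command against the whitelist.

    Returns the argv to execute, or raises before rebase/checkout/push/etc.
    ever reach a shell.
    """
    argv = shlex.split(command)
    if len(argv) < 2 or argv[0] != "git" or argv[1] not in ALLOWED_GIT_SUBCOMMANDS:
        raise PermissionError(f"not whitelisted: {command}")
    return argv
```

The key design point is that it's a whitelist, not a blacklist: a new dangerous git subcommand is denied automatically rather than needing to be enumerated.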
pixxelkick to 3DPrinting • Having issues with PETG. Is this bed leveling issue, bed dirty issue, wet filament issue, or something else? • English • 5 • 7 days ago
To start, your z offset is too high; your nozzle isn't close enough to the bed. Your lines should not have gaps like that: gapping in your lines means the nozzle isn't close enough to the bed to "squish" the plastic outwards and join with the adjacent line. I'm gonna go out on a limb and guess this print peels off the bed very easily, and as you pull it up and off it sort of "splinters" a bit, parts of it "fall apart," right?
If you get your z offset right, the print should come off as 1 solid piece without gapping.
Quick question:
If you run it again, are the failures in the exact same spots, or different spots?
Exact same spots:
Bed issues, or perhaps physical issues with the wire harness to your heater catching or stretching, causing hiccups. Watch for physical issues as it prints the problem areas, like it bumping into stuff, or scraping on things, the wire harness, etc.
Almost the same spots, but not exact:
Flow issues. Lower print speed by 20% and see if it improves a bunch. Your nozzle is backing up and then surging, causing inconsistent pressure on long runs.
Different spots:
Check your z offset, and watch your spool as it spins; if your spool is catching every time it turns, it'll inflict random jerks on your nozzle. Also check the tightness of your movement system; it might be loose and getting jerked around by pulling on the spool as it works through the filament.
What's your memory consumption like to produce responses like this?
pixxelkick to politics • Did Nazis escape on a UFO? Dev who asked the question just built the official White House app. • 4 • 8 days ago
OH!
I honestly did not understand that link there with how they wrote it initially, but that makes sense now, thank you lol.
pixxelkick to politics • Did Nazis escape on a UFO? Dev who asked the question just built the official White House app. • 1 • 8 days ago
I'm bolding the part right in the middle that seems to be a totally random non sequitur that makes no sense.
Edit: see my edit above.
pixxelkick to politics • Did Nazis escape on a UFO? Dev who asked the question just built the official White House app. • 31 • 8 days ago
That makes zero sense to me; the sentence seems to be a total non sequitur with respect to the text before and after it.
pixxelkick to politics • Did Nazis escape on a UFO? Dev who asked the question just built the official White House app. • 123 • 8 days ago

> The White House app was created by 45Press, a company based in Canfield, Ohio, a town of fewer than 8,000 people located roughly halfway between Cleveland and Pittsburgh. (Donald Trump was the 45th president of the United States.) The company's website describes it as a "design, development, and DevOps agency" and a WordPress VIP Agency Partner; it lists Amazon, NBC, and Sony as past clients.
Wat?
Anti-AI measure?
AI generated noise?
Why is that random sentence in there…?
Edit: I see now, thanks to dhork for pointing out that the aside is noting the link between the name 45Press and Trump being the 45th president. It's still weirdly written imo, but at least it makes sense.
pixxelkick to Kagi Small Web Appreciated RSS Feed • Vibecoders can't build for longevity • English • 2 • 12 days ago

> Vibecoding, understood as shipping code one hasn't even read, is the exact opposite.
Based on that definition sure.
Most people coding with LLMs, however, are reviewing the code, because that's the sane and normal thing to do.
If your company is shipping code without a human reviewing it, even if a human wrote it, you deserve to fail.
It doesn't matter if a human or a machine wrote it; companies still typically have code review processes in place…
Bypassing that is just outright stupid.
I dunno, Beholders have a tendency to think pretty highly of themselves though… 🤔