
Cryptography nerd

Fediverse accounts:
@Natanael@slrpnk.net (main)
@Natanael@infosec.pub
@Natanael@lemmy.zip

Lemmy moderation account: @TrustedThirdParty@infosec.pub - !crypto@infosec.pub

@Natanael_L@mastodon.social

Bluesky: natanael.bsky.social

  • 65 Posts
  • 1.19K Comments
Joined 1 year ago
Cake day: January 18th, 2025



  • This gets at my own personal perspective on using LLMs to respond - it’s not just about not putting effort into understanding and responding yourself. It’s about making yourself a proxy to a tool I could use myself, doing so *without even having a better understanding of how to use the tool to answer my question*, and still thinking you’ve somehow made a positive contribution. That is the most disrespectful part.

    If you genuinely thought the LLM could help me, then you should explain your process for using it and validating its responses - or at the very least ask me for more info and explain how you think its responses could help, if you really do think you’re better at operating it.

    Imagine doing the same in a workshop: taking a power tool to an object before you’ve even bothered to figure out what the other person wanted. Or trying to be helpful by asking questions on my behalf to other departments, but messing up the context and thus repeatedly producing useless answers that I have to put time into refuting.





  • At some point it comes down to incentives - refusing to shun such terrible people just helps increase their influence. Accepting their money makes it look like you think what they did isn’t bad. Terms like greenwashing exist precisely to highlight this problem: we have to make it clear that it’s unacceptable to behave like that, and that you can not buy your way out of consequences.

    It’s basic risk assessment.

    Literally everything else you’re talking about is solved by ensuring due process is followed.