
previously @jrgd@lemm.ee, @jrgd@kbin.social

Lemmy.zip

  • 2 Posts
  • 40 Comments
Joined 10 months ago
Cake day: June 3rd, 2025

  • It’s probably not a fit for everyone due to the obvious limitations, but I primarily use KeePassXC from my main workstation. I have backup scripts that run periodically for my user on said workstation; they capture my KeePass database among other user files and back it up to external and cloud storage as dictated.

    For my laptops and mobile devices, I periodically push the database from the main workstation or pull it from a backup. I do not write new entries from these devices, which avoids having to handle writeback to the main instance of my KeePass DB. Writeback can be done, but it hinges on having network access at all times to keep an up-to-date copy of the DB present, and on the DB being explicitly single-user so that a syncing protocol cannot accidentally overwrite new entries from any given device. Obviously, if network sync and the potential for multiple users are important to you, continue using Bitwarden. It is a perfectly fine solution.
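    For reference, the periodic capture step can be sketched roughly like this (a minimal Python sketch; the paths and naming scheme are hypothetical, not my actual scripts):

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_database(db_path: Path, backup_dir: Path) -> Path:
    """Copy a KeePass database to a timestamped file under backup_dir."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{db_path.stem}-{stamp}{db_path.suffix}"
    shutil.copy2(db_path, dest)  # copy2 preserves timestamps for later comparison
    return dest

# Hypothetical locations -- adjust to your own layout:
# backup_database(Path.home() / "passwords.kdbx", Path("/mnt/external/keepass-backups"))
```

    A cron job or systemd timer can then invoke this per backup target.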


  • I’ll list a few of my regularly-used tools, both CLI and GUI.

    CLI:

    • ncdu: An interactive TUI variant of du for tracing disk usage across targeted directories.
    • podman: An alternative container runtime to Docker that is arguably just better at this point. It handles rootless containers with ease, works with SELinux, consumes Docker Compose files via the add-on tool podman-compose, can automatically update containers intelligently, integrates well with systemd, and more.
    • mtr: Another network tool. It effectively traces network routes to a given IP; great for diagnosing latency faults or major packet loss.
    • ffmpeg: A very complicated but powerful tool for converting and manipulating video and audio files in all sorts of ways. FFmpeg can essentially be the answer to any ‘do X to Y file’ question.

    GUI:

    • Kdenlive: A powerful video editor developed by KDE. For free and open-source software, it is impressive how little you can’t do when editing videos with this tool. A bit buggy at times, but it has gotten significantly more reliable over the years.
    • Handbrake: An FFmpeg frontend that allows for mass transcodes of video files based on created profiles. Great for archiving, finalizing videos for web upload, or just converting content to a more efficient format. It specializes in lossy/destructive operations.
    • LosslessCut: An FFmpeg frontend for trimming videos, stripping and/or exporting tracks, editing MKV metadata, editing video chapters, and other lossless/non-destructive operations on video files.
    • Subtitle Composer: A semi-professional-grade subtitle editor developed by KDE. It supports, and can convert between, pretty much any subtitle format you might encounter. Great for creating, editing, timing, and translating subtitles for videos.
    • KeePassXC: My password manager of choice. It has browser autofill integration, though it requires some holepunch work to function with Flatpak browsers. It is explicitly based on local files and does not rely on cloud providers.
    • Limo, r2modman: Native mod managers that allow for modding various Steam games (native and Proton) on Linux.
    • Blender: A powerful 3D editor capable of hard- and soft-surface 3D modeling, character rigging, animation, material creation and UV mapping, compositing, and rendering. Pretty much an all-in-one tool for 3D art and design.
    • FreeCAD: A somewhat daunting but functional 3D CAD program. It has received a lot of patches over the past ~3 years to improve on long-standing pain points in the software.
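    As an illustration of the ‘do X to Y file’ point about ffmpeg, here is a small hypothetical Python sketch that builds (but does not run) an ffmpeg command line for re-encoding a video to a more efficient codec; the codec and quality values are just example choices:

```python
import subprocess

def ffmpeg_convert_args(src: str, dst: str, vcodec: str = "libx265", crf: int = 23) -> list[str]:
    """Build an ffmpeg argv for a simple 'convert this file to HEVC' task."""
    return [
        "ffmpeg",
        "-i", src,         # input file
        "-c:v", vcodec,    # re-encode the video stream
        "-crf", str(crf),  # constant-quality factor
        "-c:a", "copy",    # pass the audio through untouched
        dst,
    ]

# To actually run it (requires ffmpeg on PATH):
# subprocess.run(ffmpeg_convert_args("in.mp4", "out.mkv"), check=True)
```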

  • From the project’s site, an eBlockerOS-powered device allegedly uses ARP spoofing to insert itself as the default gateway, serving as a second hop in the internal network. Behaving as an NGFW can have some benefits over the pure DNS filtering that services like Pi-hole achieve. Capabilities the project lists include global VPN tunneling, DNS request masking, parental controls, and content blocking.

    It could be technically better than just Pi-hole (assuming the project is legitimate), but I will argue that so would a router running OpenWRT or similar. Depending on the SBC used, I would be concerned about network throughput (both Ethernet link-speed limits and CPU utilization from certain services). Additionally, its configuration makes it a full second-hop device on the local network, which may cause its own category of issues. The main use case I could see for this project is if you’re completely stuck with your ISP’s router and cannot change much about it.


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · Authentik Helm woes · 8 days ago

    Coming back and checking the values file posted: not sure why your authentik block won’t get used in your values file. Your current non-starting issue is likely the Authentik server container starting successfully but failing liveness while waiting for the worker container(s), which are definitely not spooling up with your current configuration.

    Something to note about Authentik itself that isn’t well explained by the Helm chart’s quickstart is that Authentik is split into two containers: server and worker. For most environment variables and mounted secrets, both the server and worker definitions should have them applied. The chart tends to handle most of the essential shared configuration in the authentik block to prevent duplication, but secrets will likely need to be mounted as volumes for both if using file or env references in the shared config, and most env overrides will need to be applied to both as well.
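    To illustrate the duplication idea, here is a hypothetical Python sketch; the key names loosely mirror the chart’s values layout but are illustrative, not authoritative:

```python
# Shared settings that must land under BOTH the server and worker sections.
shared_env = [
    {"name": "AUTHENTIK_EMAIL__HOST", "value": "mail.example.com"},  # hypothetical
]
shared_mounts = [
    {"name": "authentik-secrets", "mountPath": "/secrets", "readOnly": True},
]

def apply_to_both(values: dict) -> dict:
    """Duplicate shared env vars and secret mounts onto server and worker."""
    for component in ("server", "worker"):
        block = values.setdefault(component, {})
        block.setdefault("env", []).extend(shared_env)
        block.setdefault("volumeMounts", []).extend(shared_mounts)
    return values

values = apply_to_both({})
```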


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · Authentik Helm woes · 8 days ago

    In my case I’m running an external Postgres DB and external cache plus a handful of other settings. As such, I have a decently sized values file. All of the env vars I was looking for in my case are provided in the chart, so I didn’t need to set any directly, but just through their counterparts in the values file.

    I don’t use ArgoCD in my case, so I couldn’t really say if it would affect your deployment strategy in any way.


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · Authentik Helm woes · 9 days ago

    When I did my authentik setup through helm chart a while back, the only real problems I had were with learning blueprints and not so much with getting Authentik to do its thing.

    The main things you should check, given a liveness probe failure, are kubectl -n <namespace> describe pod <podname> to see the reason for the failure, and kubectl logs -p -n <namespace> <podname> [container], which will get you the logs of the last run of a pod that has already failed rather than the current run that may be about to fail. Those two commands should point you pretty directly at where your chart config has gone wrong. I can likely help as well if you are unsure what you are looking at.

    Additionally, once you get things working, please go back and use secrets properly with the chart. Authentik lets you substitute many values with env vars or file references, which, combined with mounted secrets, is how you can make use of them.


  • Iproute2 definitely does write things a bit compactly. ip address show and its shorthands state the routed local address space (192.168.1.x/24) and the actual /32 address (192.168.1.214) you are assigned as one unit. Additionally, it shows the broadcast address for the space. Ironically, ip route show may genuinely give you less confusing information, clearly splitting out the actual route and showing your plain IPv4 address as src.

    Typically in firewalling, you’d use /32 to target a single IPv4 host; this is analogous to using /128 for IPv6 hosts. You can absolutely use /24, /16, /8, or any other mask if you need a rule to apply to a range of IP addresses. Technically, /32 is a range itself, just with a size of 1. There are CIDR calculators available to play around with and see what different CIDR masks actually target.
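    Python’s standard ipaddress module works as a quick CIDR calculator for exactly this kind of check (the addresses are the example ones from above):

```python
import ipaddress

host = ipaddress.ip_network("192.168.1.214/32")
subnet = ipaddress.ip_network("192.168.1.0/24")

print(host.num_addresses)    # 1 -- a /32 'range' contains exactly one host
print(subnet.num_addresses)  # 256 -- a /24 covers 256 addresses
print(ipaddress.ip_address("192.168.1.214") in subnet)  # True
```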


  • Routing and firewalling are a bit different in terms of why certain CIDR masks are used. For the router, the /24 suffix is usually defined on its LAN interface to denote the address space it may send route information to and the addresses controlled by the device. Almost certainly (unless you are using a lower CIDR range and actually handing out /24 blocks to subsequent routers), your router is granting /32 IPv4 addresses to your devices.

    For your system firewall, 192.168.1.135/24 is identical to 192.168.1.0/24, as they are the same address space. You’re simply allowing a subnet of hosts to accept traffic from. Given that the /24 mask is 255.255.255.0, it does not matter what the last octet of the IPv4 address is, but the lowest possible number matching the mask is standard form. Without knowing which rule(s) specifically are being applied, I couldn’t tell you whether your firewall rules would affect hostname resolution of other hosts from your system.
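    A quick check with Python’s ipaddress module confirms the equivalence:

```python
import ipaddress

# strict=False accepts a host address with a mask and derives the containing
# network by zeroing the host bits.
a = ipaddress.ip_network("192.168.1.135/24", strict=False)
b = ipaddress.ip_network("192.168.1.0/24")

print(a == b)     # True -- the same address space
print(a.netmask)  # 255.255.255.0
```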


  • jrgd@lemmy.zip to Linux@lemmy.ml · Beginning with Linux · 2 months ago

    In addition to the other reply on the fundamentals of why not in general, maybe we don’t recommend daily driving one of DHH’s pet projects.

    If anyone is out of the loop on who DHH is, tons of people have covered the topic, but I think Niccolò Venerandi has quite comprehensive and digestible coverage, if anyone cares to read or watch it.


  • jrgd@lemmy.zip to Linux@lemmy.ml · Beginning with Linux · 2 months ago

    1 + 2:

    Typically there’s not much involved in burning an ISO to a flash drive, booting from it, and installing. Booting from one on a Mac is different, though. If you have an M-series Mac, you will be restricted mainly to anything with the experimental Asahi Linux kernel. If you have an Intel-based Mac, you should generally be good to go. When booting a Linux installer, you’ll generally be able to try out the system before installing. It’s a good time to check that things like backlight brightness and wireless capabilities are working out of the box on your distro of choice.

    Accessing the boot menu on a Mac

    3:

    OpenSUSE Tumbleweed and Fedora are generally good picks. I recommend going for KDE unless you have a strong preference for how GNOME works. As good as they otherwise are, I generally recommend staying away from distros like Linux Mint (for now), as their implementation of the newer display system called Wayland is not yet complete for Cinnamon. Desktops like KDE and GNOME have functional implementations and will overall provide a solid experience.

    4:

    You’ll see mixed opinions all over the place on this. Personally, I do sit in the GrapheneOS camp at this point. If you don’t want to purchase a secondhand Google phone, I’d wait and see on the partner device that the GrapheneOS devs are working on with a currently undisclosed manufacturer.

    I’ll repeat the core points the GrapheneOS devs drone on about regarding other AOSP distributions, but without the hyperbole the devs constantly put in. Yes, /e/OS does generally have security problems, some of which stem from the use of microG and how microG has to function on the device. It is a trade-off in security for some privacy gained. If you really don’t need anything from Google Play Services at all, you could always go for plain LineageOS without any Google services package installed.

    5:

    By all means, older laptops can definitely still be functional for lighter or alternative tasks. Even if one is not a good workstation anymore, it could be fun to experiment with. Older phones (especially Android devices) really do have a set lifespan, after which I’d recommend no longer using them as daily drivers. When the manufacturers stop supporting them, they can become horrifically vulnerable devices as exploits are found over time. You might still get use out of one without its networking capabilities, though: it likely still has functional storage, a screen, cameras, etc. If you’re lucky, you might be able to play around with true Linux projects like postmarketOS.

    For new hardware, Linux-centric vendors can be nice (though a lot of them seem to just rebadge Clevo laptops at a decent markup) as a guarantee of good hardware support. Most business laptops make for good Linux laptops. I personally bought a Framework 13 a few years back, and that’s my primary laptop. If you want to stay away from United States-based projects, though, your initial choices are probably a good fit. On the same principle, you might also lean more toward OpenSUSE than Fedora.


  • My set of recommendations:

    RPMFusion is recommended to add to your system. It’s the best way to get Steam and certain drivers (NVIDIA, v4l2loopback, etc.) as needed.

    SELinux is present, but the default policy sets are unlikely to impede your usage. The SELinux applet (seapplet) is a useful diagnostic tool on the very rare chance that you find a permission denied somewhere that cannot otherwise be explained.

    If you pull most of your software as flatpaks from Flathub already, your day-to-day experience won’t be much different from Debian.

    Fedora’s equivalent to LTS releases would be the downstream LTS releases provided by Red Hat, RockyLinux, AlmaLinux, and others. They don’t have the same package sets as base Fedora and may need extra repositories to get some of the less essential but ‘core’ software back. Ultimately there isn’t much of a reason to run them on a desktop workstation for personal use.

    Upgrading is pretty seamless. It’s as easy as graphical updates now or otherwise using the system upgrade module in dnf. I generally have the policy of waiting 2-4 weeks for any minor bugs that made it into a new release to settle. I have been expediting my upgrades for the past few releases in order to catch bugs before friends and family upgrade their machines and haven’t found any large problems regardless.

    Fedora doesn’t inherently expect a system to upgrade forever without maintenance, with 5 years being a typical target for things that may break. With that said, it is good to read the release notes before upgrading to the next edition, as there can rarely be something (like the recent recommendation and changed default for a larger /boot partition) that may require maintenance on a long-term system before upgrading. That said, you do have time to hold off on upgrading the distro, as the general lifetime of each release is ~13 months, giving 1 month overlap into a release two releases ahead. For instance, Fedora 43 will still be maintained up to a month into Fedora 45’s release.




  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 3 months ago

    It does depend on the connection type, but the general rule is "not completely", barring some connection types like DSL. Given that it sounds like you have fiber, DOCSIS, or similar, you likely fall under the general rule. That said, you can absolutely tune and test above the typical 10-15% safety margin many guides start with without actually incurring any noticeable bufferbloat. 10-15% is usually a good value for ISPs that fluctuate heavily in the bandwidth available to the customer, but for more consistent connections (or ones overprovisioned far enough that the bandwidth fluctuations sit outside the range the customer is actually paying for), you can absolutely get much closer to your rated connection speed, if not meet or even pass it.

    The general process is to tune one value at a time (starting with the bandwidth allocations for your pipes), apply the changes while noting the previous value, and perform a bufferbloat test with Waveform’s or others’ testing tools. Optionally (this will drastically slow down the process, but can be worth it), you can hammer the network with actual load for a good few hours while testing some real-world applications that are sensitive to bufferbloat. Doing this between tweaked values will help expose how stable or unstable your ISP’s connection truly is over time.
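    The starting-point math is trivial, but as a sketch (the 500 Mbps figure is just a hypothetical example connection, not yours):

```python
def shaped_bandwidth(rated_mbps: float, margin: float = 0.15) -> float:
    """Starting SQM bandwidth: the rated link speed minus a safety margin."""
    if not 0 <= margin < 1:
        raise ValueError("margin must be a fraction in [0, 1)")
    return rated_mbps * (1 - margin)

# e.g. a hypothetical 500 Mbps connection with the common 15% starting margin;
# tune upward from this figure between bufferbloat tests.
print(shaped_bandwidth(500))
```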


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 3 months ago

    Yeah, not having CAKE SQM is the one thing that will probably kill OPNsense as a choice for some people. That’s not to say you cannot get excellent results with fq_codel, because you absolutely can (I actively use both OpenWRT and OPNsense for different network applications personally). It is definitely more work to get good results, though. OPNsense’s WireGuard support has been excellent for a number of years now, and it’s exclusively what I use for tunneling into a VPC I rent.

    If you’re particularly constrained on host hardware and need a lightweight router VM alongside multiple other VMs on said host, I could definitely see the benefits of running a minimal OpenWRT over OPNsense in that case.


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 3 months ago

    I mean, the mini PCs don’t come with a managed switch, and often lack the good wireless connectivity that most home routers come equipped with. So in total, with Wi-Fi APs and a decent switch, it’s definitely more than €100.

    Also unrelated, but if you’re running an x86 system with gigabytes of RAM, why not run OPNsense at that point?


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · edited · 3 months ago

    Looking up the router, it was allegedly produced in 2024, according to the OpenWRT wiki. Barring any outliers, OpenWRT generally only sunsets hardware when a new version has higher hardware requirements than a device provides. The supported-devices page lists the hard requirements as well as recommendations: currently 8 MiB of flash storage is the minimum, with 16+ MiB recommended (for additional functions, user addons, etc.), and 64 MiB is the minimum RAM target, with 128+ MiB recommended. According to its wiki page, your chosen router exceeds both recommended requirements. Overall, the router should be suitable for a good while, barring any severe hardware- or bootloader-level exploitable vulnerabilities being discovered in the device. There is no explicit date when your router will no longer be supported, but you can check the history of the supported-devices page to get the general trend of when OpenWRT bumps up the minimum requirements. For instance, the minimums were just 4/8+ MiB of flash storage and 32/64+ MiB of RAM in early 2017.

    Depending on what you want to do with the router, getting something with more RAM and a stronger CPU could be beneficial for various tasks (e.g. adblock-fast, cake sqm, etc.). Definitely do research on what you want your router to do though before choosing to go with higher specs or not.
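    For reference, here is a small sketch encoding the minimums and recommendations quoted above, handy for sanity-checking a candidate device:

```python
# Minimums/recommendations as quoted from the supported-devices page above.
MIN_FLASH_MIB, MIN_RAM_MIB = 8, 64
REC_FLASH_MIB, REC_RAM_MIB = 16, 128

def check_device(flash_mib: int, ram_mib: int) -> str:
    """Rate a device's flash/RAM against OpenWRT's current targets."""
    if flash_mib < MIN_FLASH_MIB or ram_mib < MIN_RAM_MIB:
        return "unsupported"
    if flash_mib < REC_FLASH_MIB or ram_mib < REC_RAM_MIB:
        return "minimum only"
    return "meets recommendations"

print(check_device(16, 128))  # meets recommendations
print(check_device(8, 64))    # minimum only
```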


  • With LosslessCut, I’ve had good success doing keyframe cuts on h.264 footage in MKV containers. Frame-accurate cuts end up as broken outputs pretty much every time. There’s also Avidemux, which might be worth a try. More than likely, though, if you want frame precision in your cuts, you’ll have to re-encode, at which point you could use something as minimal as Handbrake or a full NLE like Kdenlive.