
  • 2 Posts
  • 324 Comments
Joined 3 years ago
Cake day: June 20, 2023



  • Ah ok. That explains a lot. I think what you are looking for is the git web server option for Otter wiki.

    How that probably works is that there is a git repo inside the container, likely stored in /app-data. If you have that mount set up, you may see what looks like a bare git repo in there.

    If you want to poke around inside the container, you can run a shell inside of it like this, assuming /bin/bash exists in the container:

    docker exec -it containername /bin/bash
    

    Anyway, you don’t need to mess with that, because Otter wiki exposes a git server itself. All you have to do is clone the URL it gives you in the GUI, add your files, commit and push.

    So the most straightforward way is:

    1. Clone the repo to a local working copy: git clone https://otterwiki.example.com/repopath (the URL should show in the UI)
    2. Enter the directory: cd ./repopath
    3. Make your changes, add files, etc.
    4. Stage the changes: git add .
    5. Commit the changes: git commit -m "imported buncha files"
    6. Push them to the origin remote (Otter wiki): git push
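The whole flow fits in a few lines of shell. This is just a sketch: here a local bare repo stands in for the Otter wiki git server, so the example is runnable anywhere; in real use you would clone the URL the UI gives you instead.

```shell
set -e
# Local bare repo as a stand-in for the Otter wiki git server.
# In real use, replace this with the clone URL shown in the Otter wiki UI.
workdir=$(mktemp -d)
git init --bare "$workdir/wiki.git"

git clone "$workdir/wiki.git" "$workdir/repopath"    # 1. clone the repo
cd "$workdir/repopath"                               # 2. enter the directory
echo "# Imported page" > imported.md                 # 3. make changes, add files
git add .                                            # 4. stage
git -c user.email=you@example.com -c user.name=You \
    commit -m "imported buncha files"                # 5. commit
git push origin HEAD                                 # 6. push back to the wiki
```

The `-c user.email`/`-c user.name` flags are only there so the commit works in a fresh environment; normally your global git config covers that.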

    If you’re not comfortable with the git command line, there are a bunch of TUI and GUI git clients you can use. It makes no difference which one. I usually just use the built-in vscode one for my hobby projects.


  • I understand Docker can be confusing, especially if you’re not a developer. Docker was developed to solve the problem of “it works on my computer” (but not yours), meaning it is meant to behave the same regardless of where it is installed. It does this by carrying all its dependencies with it. How well it succeeds in solving the problem is up for debate and I’m not here to debate the merits of Docker anyway.

    I would say two things: don’t get discouraged; it has a learning curve. And conversely: don’t think Docker solves everything, either. Understanding how it works and what it can do is your best bet at using it successfully, or even at deciding that your use case is not right for it.

    I’m not familiar with Otter wiki (just heard about it today in this thread), but I will go by the online docs and see if I can help you get going. Going by the docker compose instructions, the example shows this compose file:

    services:
      otterwiki:
        image: redimp/otterwiki:2
        restart: unless-stopped
        ports:
          - 8080:80
        volumes:
          - ./app-data:/app-data
    

    The last two lines are the key part: the volumes section. The first and only entry uses a bind mount to map a local directory, ./app-data, outside the container to the path /app-data (note the leading slash) inside the container. The mapping is read-write by default, so the container can write to the directory. You can also read and write it from the host side and put whatever files you want in there.
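You can see the two-way nature of a bind mount with a throwaway container (this assumes Docker is installed and uses the generic alpine image, not Otter wiki, but the mechanism is identical):

```shell
# A file written on the host is visible inside the container, and vice versa.
mkdir -p app-data
echo "hello from host" > app-data/from-host.txt

docker run --rm -v "$PWD/app-data:/app-data" alpine \
    sh -c 'cat /app-data/from-host.txt && echo "hello from container" > /app-data/from-container.txt'

cat app-data/from-container.txt
```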

    The files here are only stored in that one place (i.e., not copied), so if you want backups you need to take care of that separately.

    For your use-case, it may be enough to use the customization instructions for Otter wiki. Notice how their customized docker compose file has a second volume in it. In their example, you would put additional files into the ./custom directory on the host. It works the same as the app-data volume.
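A customized compose file would look something like the following sketch; note that the container-side path for the second volume is a placeholder here, so check the Otter wiki customization docs for the exact path it expects:

```yaml
services:
  otterwiki:
    image: redimp/otterwiki:2
    restart: unless-stopped
    ports:
      - 8080:80
    volumes:
      - ./app-data:/app-data
      # second bind mount for customization files; container path is illustrative
      - ./custom:/custom
```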

    That takes care of just running the application, but what if you wanted to customize it more? That is where you need the source and git. I am going to use a fictional example app for this, because the Otter wiki app may have some extra steps that would make it confusing. Basically, what you want to do is clone the git repo of whatever you’re building:

    git clone https://github.com/whatever
    

    Now you have the source. You can add files, make code changes, whatever. And once you’re done with that you will use git as usual and commit the changes, push it to your branch, and maybe open a PR to the original repo. That’s not important for Docker.

    The next thing you do is build the Docker image. Many people confuse Docker images and containers, but in short: the image is like the file system for the container, and the container is an instance of that file system plus processes running on a Docker host. Many containers can run the same image on the same host; in fact, that is how you horizontally scale an application.

    Once you have the image, you can just run it locally, or you can push it to a registry. Here is how you would do that:

    # Build the local image in the context . (current directory) and tag it as my-cool-app with tag latest
    # This uses the instructions in the Dockerfile to build the image. It can run commands, copy files, etc.
    docker build . -t my-cool-app:latest 
    # ...lots of messages...
    
    # Run the app - press control-C to stop it
    docker run --rm -it my-cool-app:latest 
    

    If you want to push it to Docker hub or another registry:

    # Tag the image with the repo name in front
    docker tag my-cool-app:latest registry.example.com/myuser/my-cool-app:latest 
    # Push the image to the registry
    docker push registry.example.com/myuser/my-cool-app:latest  
    

    One thing to keep in mind here: do not include any secrets in the image. If you do, anyone who can download the image can read them, because they get baked into it.
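The usual alternative is to pass secrets in at run time instead. A minimal sketch, using the generic alpine image and a made-up variable name just to show the value reaching the container:

```shell
# Supply the secret from the host environment at run time,
# instead of COPYing it into the image in the Dockerfile.
API_TOKEN="example-token"
docker run --rm -e API_TOKEN="$API_TOKEN" alpine printenv API_TOKEN
```

For more than one or two values, `docker run --env-file ./secrets.env` does the same thing from a file you keep out of version control.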

    Let me know if you have any other questions and I’ll be happy to answer.



  • It really depends what you want out of your computer, how much you like to tinker, and how comfortable you are getting your hands dirty. I got back onto a daily driver Linux desktop a little under two years ago, but I’ve been running Linux on servers since um…mid 90s? I’ve had Linux desktops mostly on secondary computers, but didn’t go back fully until more recently.

    I don’t run Arch, but I feel like that community is probably closest to the feeling Linux had back in the day–when we recompiled the kernel with the specific drivers we needed for everything to save memory, I knew every process running, every program I installed. I compiled most of my own programs from source. Or maybe Gentoo is the current version of that. If that’s your jam, go that route.

    For a while in the early aughts I ran a ton of servers with RedHat and developed an aversion to rpm and its mess of dependencies. Debian felt so much more stable, and I’ve been picking Debian for servers ever since. If you want boring and stable, you can’t go wrong with Debian. I have many times just set up Debian with automatic updates and reboots, and those machines just keep going for years. I can’t remember a Debian update ever breaking my system, which I definitely can’t say for every OS.

    Then I started wanting to game on Linux. The flip side of boring and stable is outdated. So when I planned my new Linux desktop build, I went distro shopping a bit and tried out a few live distros first. I knew I wanted up-to-date drivers (for new hardware) but not a lot of tinkering, because I had gotten a lot older and less patient by that point.

    I ended up on Fedora this time. My choice was driven by the balance of being up to date enough for my (simple) gaming needs, yet mainstream enough (read: boring) that if anything broke, there would be forums available and I could get back to just enjoying my computer. I prefer KDE Plasma over Gnome, so that’s what I ended up with.

    I’m happy with it and not planning to change. But I do get that sinking feeling of not really knowing what my computer is doing, because, just like on Windows, there are a hundred processes running in the background and I don’t know what half of them do. It’s just that at this point I’m not curious enough anymore to go digging into the man pages and the wikis and peruse the source code to find out. I just want it to work and let me get to my doom scrolling.

    So for mainstream and boring, I recommend Debian or Fedora, maybe one of the Arch derivatives like CachyOS. If you want to customize and tinker, probably plain Arch or one of the smaller distros that are well documented and less opinionated. I didn’t mention Mint, because I think it’s a bit too simplified for someone with some Linux experience. I would install it for my parents, though.




    I’ve been around long enough, I know. It is indeed good advice to make things extensible, within reason. I have written some over-engineered things that ended up being a pain to maintain.

    My best advice is to learn the domain as much as possible, then you’ll be more aware of both the concepts involved and the potential pitfalls. In the OP comic, knowing that the machines are capable of making other shapes would have helped predict the problem even when management says “we’re a triangle shop”.









    It only happens with ISO weeks. An example is 2006. The two week-52s belong to different years (so 2005-W52 and 2006-W52 are different weeks), but each can contain days from a neighboring calendar year. For example, Sunday 2006-01-01 is part of week 52 of 2005; week 1 of 2006 only starts on January 2. Then at the end of 2006 you have another week 52, but that one actually belongs to 2006.

    It’s a bit of a cheeky thing to point out, because at no point is a day in two different weeks, and a week itself only belongs to one year. It’s just that you can’t assume that any given day belongs to the same year as the week it is in. That is: 2006-01-01 is in 2005-W52, not 2006-W01.
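You can check this with GNU date, where %G is the ISO week-based year, %V the ISO week number, and %u the ISO weekday (this assumes GNU coreutils; BSD/macOS date lacks -d):

```shell
date -u -d 2006-01-01 +%G-W%V-%u   # 2005-W52-7: a Sunday, last day of week 52 of 2005
date -u -d 2006-01-02 +%G-W%V-%u   # 2006-W01-1: the Monday that starts 2006-W01
date -u -d 2006-12-31 +%G-W%V-%u   # 2006-W52-7: the second "week 52" belongs to 2006
```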