(This is a repost of this reddit post https://www.reddit.com/r/selfhosted/comments/1fbv41n/what_are_the_things_that_makes_a_selfhostable/, I wanna ask this here just in case folks in this community also have some thoughts about it)

What are the things that make a selfhostable app/project good? Maybe another way to phrase this question is: what are the things that make a project easier to self-host?

I have been developing an application that focuses on being easy to self-host. I have been looking at existing, well-regarded projects such as paperless-ngx, Immich, etc.

From what I gather, the most important things are:

  • Good docs. This is probably the most important one: the developer must document how to self-host the app
  • Few runtime dependencies. I’m not sure about this one, but the fewer other services it depends on, the better
  • Optional OIDC. I’m even less sure about this one, and I’m also not sure about implementing it in my own app, since it’s difficult to develop. After reading this subreddit/community, I concluded that many people here prefer to separate the identity/user pool from the app service, which means running a separate service for authentication and authorization.

What do you think? Another question is: are there any other projects that can serve as good examples of self-hostable apps?

Thank you


Some redditors responded on the post:

  • easy to install, try, and configure with sane defaults
  • availability of an image on Docker Hub
  • screenshots
  • good GUI

I also came across this comment on Hacker News recently, and I think about it a lot:

https://news.ycombinator.com/item?id=40523806

This is what self-hosted software should be. An app, self-contained, (essentially) a single file with minimal dependencies.

Not something so complex that it requires docker. Not something that requires you to install a separate database. Not something that depends on redis and other external services.

I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.

Do you agree with this?

  • emon@masto.top · 2 months ago

    @hono4kami To me, good documentation is the number one thing that makes a selfhostable application good.
    Second would be “is it dockerized?”

    • hono4kami@slrpnk.net (OP) · 2 months ago

      To me, good documentation is the number one thing that makes a selfhostable application good.

      I agree. If you don’t mind: what are your qualifications for good documentation? Do you have some good examples of good docs?

    • conciselyverbose@sh.itjust.works · 2 months ago

      Yep, documentation and a good base level default installation configuration/guide with minimal friction.

      I’m perfectly willing to play around once I know at the basic level that the core flow is going to work for me. If it takes me digging through a stack of documentation (especially if it’s bad) to even get something to experiment with on my own system? I won’t bother.

  • Max-P@lemmy.max-p.me · 2 months ago

    IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff tends to end up trying to be too easy, so you can’t scale up, or so unbelievably complicated you can’t scale it down. Don’t make me install an email server and set up API keys for services needed by features I won’t even use.

    I don’t particularly mind needing a database and Redis and the likes, but if you need MySQL and PostgreSQL and Redis and memcached and an ElasticSearch cluster and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s erlang phase, … no, just no, screw that.

    What really sucks is when Docker is used as a bandaid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well-written software is flexible and sane.

    My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.

    • hono4kami@slrpnk.net (OP) · 2 months ago

      A lot of stuff tends to end up trying to be too easy, so you can’t scale up, or so unbelievably complicated you can’t scale it down.

      I see, it’s probably good to have some balance between those. Noted

  • 𝘋𝘪𝘳𝘬@lemmy.ml · 2 months ago

    To me the number one thing is that it is easy to set up via Docker. One container, one network (ideally no extra network, just the default one), one storage volume, no additional manual configuration when composing the container.

    No, I don’t want a second container for a database. No I don’t want to set up multiple networks. Yes, I already have a reverse proxy doing the routing and certificates. No, I don’t need 3 volumes for just one application.

    Please just don’t clutter my environment.

    • traches@sh.itjust.works · 2 months ago

      I disagree with pretty much all of this: you are trading maintainability and security for easy setup. Providing a docker-compose file accomplishes the same thing without the sacrifice.

      • separate volumes for configuration, data, and cache because I might want to put them in different places and use different backup strategies. Config and db on SSD, large data on spinning rust, for example.
      • separate container for the database because the official database images are guaranteed to be better maintained than whatever every random project includes in their image
      • separate networks because putting your reverse proxy on a different network from your database is just prudent
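      A hedged sketch of what that separation can look like in a compose file (the service names, image tags, host paths, and network names here are placeholders, not taken from any real project):

```yaml
# Hypothetical docker-compose layout illustrating the points above:
# split volumes, an official database image, and a proxy/db network split.
services:
  app:
    image: example/app:latest            # placeholder image
    networks: [frontend, backend]
    volumes:
      - /mnt/ssd/app/config:/config      # config on SSD
      - /mnt/hdd/app/media:/data         # large data on spinning rust
      - /mnt/ssd/app/cache:/cache        # cache, excluded from backups
    depends_on: [db]

  db:
    image: postgres:16                   # official, well-maintained image
    networks: [backend]                  # not reachable from the proxy network
    volumes:
      - /mnt/ssd/app/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: change-me

networks:
  frontend:   # the reverse proxy attaches here
  backend:    # app <-> db only
```

      Each volume can then get its own backup strategy, and the database is never exposed on the same network as the reverse proxy.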
    • hono4kami@slrpnk.net (OP) · 2 months ago

      No, I don’t want a second container for a database.

      Unless you’re talking about using SQLite:

      Isn’t the point of a Docker container to have only one piece of software/process running? I’m sure you can use something like s6 or another lightweight supervisor, but that seems counterintuitive.

    • ÚwÙ-Passwort@lemmy.world · 2 months ago

      I prefer this, but if the options are available it shows me that someone actually thought about it while creating the software/container.

    • silmarine@discuss.tchncs.de · 2 months ago

      I came here to basically say this. It’s especially bad when you aren’t even sure if you want to keep the service and are just testing it out. If I already have to go through a huge setup/troubleshooting process just to test the app, then I’m not feeling very good about it.

  • thelittleblackbird@lemmy.world · 2 months ago

    My points are totally in the other direction:

    • stable. This is critical: if the app can’t perform its duties after 2 weeks of uptime, it is bad. This also applies to random failures. I don’t want to spend endless days fixing it
    • docker, with an all-in-one image, and as a nice-to-have the possibility to connect external docker composes for VPNs or databases
    • moderate use of resources. Not super critical, but nobody likes to have RAM problems

    And then, in a second tier, things that tip the balance:

    • integration with LDAP or any central user repo
    • relatively easy to back up and restore
    • relatively few breaking changes from version to version
    • the GUI / ease of use (in line with the complexity of the problem I want to address)
    • sane defaults and logging capabilities

    That’s all from my side

  • tatterdemalion@programming.dev · 2 months ago (edited)
    • Has a simple backup and migration workflow. I recently had to back up and migrate a MediaWiki database. It was pretty smooth, but not as simple as it could be. If your data model is spread across an RDBMS and files, you need to provide a CLI tool that does the export/import.

    • Easy to run as a systemd service. This is the main criteria for whether it will be easy to create a NixOS module.

    • Has health endpoints for monitoring.

    • Has an admin web UI that surfaces important configuration info.

    • If there are external service dependencies like postgres or redis, then there needs to be a wealth of documentation on how those integrations work. Provide infrastructure as code examples! IME systemd and NixOS modules are very capable of deploying these kinds of distributed systems.
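    As a sketch, “easy to run as a systemd service” can be as simple as the app shipping a unit file like this (the unit name, binary path, user, and the postgresql dependency are all hypothetical):

```ini
# /etc/systemd/system/myapp.service -- hypothetical unit for a self-hosted app
[Unit]
Description=MyApp self-hosted service
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple
User=myapp
ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.toml
Restart=on-failure
# Basic sandboxing; cheap wins for self-hosters.
ProtectSystem=strict
ReadWritePaths=/var/lib/myapp
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
```

    An app that runs cleanly under a unit like this, from a single binary and a config file, is also straightforward to wrap in a NixOS module.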

  • e0qdk@reddthat.com · 2 months ago

    Do you agree with this?

    Yes, at least for hobby use. If it really needs something more complex than SQLite and an embedded HTTP server, it’s probably going to turn into a second job to keep it working properly.
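    A toy sketch of that “SQLite plus embedded HTTP server” shape, using only Python’s standard library (the notes table and the /healthz route are made up for illustration; a real app would add auth, error handling, etc.):

```python
# Minimal single-process app: one SQLite file, one embedded HTTP server,
# no external services. Illustrative only.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "app.db"  # a single file holds all state


def init_db(conn):
    """Create the schema on first run; idempotent."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
    )
    conn.commit()


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":  # health endpoint for monitoring
            payload = b'{"status": "ok"}'
        else:  # everything else: dump the notes as JSON
            rows = self.server.conn.execute(
                "SELECT id, body FROM notes"
            ).fetchall()
            payload = json.dumps(
                [{"id": i, "body": b} for i, b in rows]
            ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


def main():
    conn = sqlite3.connect(DB_PATH)
    init_db(conn)
    server = HTTPServer(("127.0.0.1", 8080), Handler)
    server.conn = conn  # share the connection with request handlers
    server.serve_forever()
```

    Calling `main()` serves on 127.0.0.1:8080, and backing up the whole app is copying one database file.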

  • Prunebutt@slrpnk.net · 2 months ago

    Please be mindful of HDD spindown.

    If your app frequently looks up stuff in a database and also has a bunch of files that are accessed on-demand, then please have an option to separate the data-directory from the appdata-directory.

    A lot of stuff is self-hosted in homes and not everyone has the luxury of a dedicated server room.

    • hono4kami@slrpnk.net (OP) · 2 months ago

      separate the data-directory from the appdata-directory

      Would you mind explaining more about this?

      • Prunebutt@slrpnk.net · 2 months ago

        Take my setup for jellyfin as an example: There’s a database located on the SSD and there’s my media library located on an HDD array. The HDD is only spun up when jellyfin wants to access a media file.

        In my previous setup, the nextcloud database was located on an HDD, which resulted in the HDD never spinning down, even when the actual files were never accessed.

        In immich, I wasn’t able to find out if they have this separation, which is very annoying.

        All this is moot if you simply offer a tiny service that doesn’t access big files that aren’t stored on SSDs.

  • hendrik@palaver.p3x.de · 2 months ago (edited)

    I’d say it’s good if it’s easy to use, well written with maintainability in mind, offers good functionality, is reliable and follows current best practices.

    It’s easy to selfhost if it’s packaged, because then I can just run apt install gitlab, edit a few config files, and I’m done. Or click on it in Yunohost, or maybe run the Docker container.

    But just “easy” isn’t the whole story. It needs to be maintainable, still around in a few years, integrate into the rest of my ecosystem…