Just thought I’d share this since it’s working for me at my home instance of federate.cc, even though it’s not documented in the Lemmy hosting guide.
The image server used by Lemmy, pict-rs, recently added support for object storage like Amazon S3, instead of serving images directly off the disk. This is potentially interesting to you because object storage is orders of magnitude cheaper than disk storage with a VM.
By way of example, I’m hosting my setup on Vultr, but this applies to, say, Digital Ocean or AWS as well. Going from a 50GB to a 100GB VM instance on Vultr takes you from $12 to $24/month; up to 180GB, $48/month. Of course these include CPU and RAM step-ups too, but I’m focusing only on disk space for now.
Vultr’s object storage by comparison is $5/month for 1TB of storage and includes a separate 1TB of bandwidth that doesn’t count against your main VM, plus this content is served off of Vultr’s CDN instead of your instance, meaning even less CPU load for you.
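To put that in rough per-gigabyte terms: $24/month for a 100GB instance works out to about $0.24/GB, while $5/month for 1TB of object storage is about $0.005/GB, nearly 50x cheaper before you even count the bundled bandwidth and CDN.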
This is pretty easy to do. What we’ll be doing is diverging slightly from the official Lemmy ansible setup to add some extra environment variables to pict-rs.
After step 5, before running the ansible playbook, we’re going to modify the ansible template slightly:
cd templates/
cp docker-compose.yml docker-compose.yml.original
Now we’re going to edit the docker-compose.yml with your favourite text editor. Personally I like micro, but vim, emacs, nano or whatever will do:
favourite-editor docker-compose.yml
Down around line 67 begins the section for pictrs. You’ll notice under the environment section there are a bunch of things that the Lemmy guys predefined. We’re going to add some here to take advantage of the new support for object storage in pict-rs 0.4+.
At the bottom of the environment section we’ll add these new vars:
- PICTRS__STORE__TYPE=object_storage
- PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
- PICTRS__STORE__BUCKET_NAME=Your Bucket Name
- PICTRS__STORE__REGION=Your Bucket Region
- PICTRS__STORE__USE_PATH_STYLE=false
- PICTRS__STORE__ACCESS_KEY=Your Access Key
- PICTRS__STORE__SECRET_KEY=Your Secret Key
So your whole pictrs section looks something like this: https://pastebin.com/X1dP1jew
The actual bucket name, region, access key and secret key will come from your provider. If you’re using Vultr like me then they are under the details after you’ve created your object store, under Overview -> S3 Credentials. On Vultr your endpoint will be something like sjc1.vultrobjects.com, and your region is the domain prefix, so in this case sjc1.
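For example, with Vultr’s sjc1 region the finished vars might look like this (the bucket name and keys here are made up, substitute your own; depending on your pict-rs version you may or may not need the https:// scheme on the endpoint):

- PICTRS__STORE__TYPE=object_storage
- PICTRS__STORE__ENDPOINT=https://sjc1.vultrobjects.com
- PICTRS__STORE__BUCKET_NAME=my-lemmy-images
- PICTRS__STORE__REGION=sjc1
- PICTRS__STORE__USE_PATH_STYLE=false
- PICTRS__STORE__ACCESS_KEY=EXAMPLEACCESSKEY
- PICTRS__STORE__SECRET_KEY=EXAMPLESECRETKEY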
Now you can install as usual. If you have an existing instance already deployed, there is an additional migration command you have to run to move your on-disk images into the object storage.
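That migration is done with pict-rs’s migrate-store subcommand. From memory of the pict-rs docs it looks roughly like the sketch below; treat the flags and paths as illustrative and check the pict-rs README (linked in a comment further down) for the exact current syntax:

pict-rs \
  migrate-store \
  filesystem -p /mnt \
  object-storage \
    -e sjc1.vultrobjects.com \
    -b your-bucket-name \
    -r sjc1 \
    -a YOUR_ACCESS_KEY \
    -s YOUR_SECRET_KEY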
You’re now good to go, and things should pretty much behave like before, except pict-rs will be saving images to your designated cloud/object store, and when serving images it will instead redirect clients to pull directly from the object store, saving you a lot of storage, CPU use and bandwidth, and therefore money.
Hope this helps someone. I’m not an expert in either Lemmy administration or Linux sysadmin stuff, but I can say I’ve done this on my own instance at federate.cc and so far I can’t see any ill effects.
Happy Lemmy-ing!
Thank you for this write-up. Your post is the only place I can find on the internet on making the transition to object storage specifically with Lemmy.
Glad you found it helpful!
I’m using S3FS to achieve the same thing, but without modifying the ansible config or using native object storage within pict-rs.
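The rough idea with that route is to mount the bucket with s3fs-fuse over the directory pict-rs already writes to, so nothing in the compose file changes. A minimal sketch, with a hypothetical bucket name and mount path (point it at wherever your pict-rs volume actually lives):

# credentials file for s3fs-fuse
echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
# mount the bucket over pict-rs's data directory
s3fs my-lemmy-images /srv/lemmy/volumes/pictrs \
  -o passwd_file=$HOME/.passwd-s3fs \
  -o url=https://sjc1.vultrobjects.com \
  -o use_path_request_style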
Hello again! I just completed the object storage migration. Here’s what I learned, if you want to do it on an instance that’s already set up:
- Download the binary file for pict-rs from the project’s git repository.
- Stop the pict-rs container.
- Perform the migration as indicated in the pict-rs documentation. If it hangs at some point due to a missing file, re-run with --skip-missing-files.
- Verify that files have been migrated to object storage.
- Change docker-compose settings.
- And here’s the most important part… changes won’t be applied unless you run docker-compose up -d. Simply running docker-compose restart will NOT apply the new config. This might be obvious to Docker users, but I didn’t know about it and had to roll back the first time, because it wouldn’t fetch images from object storage even though they had already been migrated there.
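Putting the tail end of that together, the apply step looks something like this (the container name is from my setup; yours will differ):

# stop pict-rs before migrating
sudo docker stop lemmy_pictrs_1
# ...run the migration, verify the bucket, edit docker-compose.yml...
# then recreate the container so the new environment is actually used;
# 'docker-compose restart' would reuse the old container and old config
sudo docker-compose up -d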
I’m attempting this migration on an instance that has been running for about a month, is federated with the top 10+ instances and has synced a lot of data.
The steps I’m using are as follows:
- Stop the pict-rs container: sudo docker stop domainname_pictrs_1
- Run docker-compose to open a session in the stopped container: sudo docker-compose run pictrs sh
- Run the migration command per https://git.asonix.dog/asonix/pict-rs/#filesystem-to-object-storage-migration
When this runs, it appears to be trying to sync like… all of the Lemmy fediverse… to my object storage:
2023-08-13T17:55:44.426301Z WARN pict_rs: Running checks
2023-08-13T17:55:45.188984Z WARN pict_rs: Checks complete, migrating store
2023-08-13T17:55:45.275403Z WARN pict_rs: 56963 hashes will be migrated
Most of these fail, and I’m trying to run it again with --skip-missing-files, but based on what I’m seeing I don’t know if this is really something that can be done once an instance has federated with a lot of other instances.
Am I missing something?
Edit: with --skip-missing-files it’s telling me it’s going to take 23403 seconds (6.5 hours) to complete this migration.
When I look into the bucket, I see all kinds of random images being migrated over, so it’s definitely storing pretty much every image that my instance has ever synced. Is there a way to just migrate content that originated on my instance?