We did it not because it was easy, but because we thought it would be easy.
I switched to Immich recently and am very happy.
The bad:
Honestly, a lot of stuff in PhotoPrism feels like one developer has a weird workflow and optimized the product for it. Most of it runs counter to what I actually want to do (like automatic title and description generation, the review flow, or the automatic quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case much better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)
Most Intel GPUs are great at transcoding: reliable, widely supported, and quite a bit of transcoding throughput for very little electrical power.
I think the main thing I would check is which formats are supported. If the other GPU supports newer formats like AV1, it may be worth it (if you want to store your videos in these more efficient formats, or you have clients that can consume them and will appreciate the reduced bandwidth).
But overall I would say that if you aren’t having any problems, there’s no need to bother. The onboard graphics are simple and efficient.
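For example, on Linux with a VAAPI driver installed, something like this shows which codecs the GPU’s video engine and your ffmpeg build expose (vainfo is part of libva-utils):

    vainfo | grep -i -e av1 -e hevc
    ffmpeg -hide_banner -encoders | grep -i -e vaapi -e qsv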
Yes. As this is a workstation, the memory use is highly variable; >95% of the time I would probably barely notice if I only had 32GiB. But other times it is a huge performance win to have that capacity available. Sometimes I am compiling lots of stuff, and 32 compilers running + ample disk cache is very important. Other times I am processing lots of data, and other times I am running a few VMs.
It is a bit of a luxury. If I were on a tighter budget I would have gone for 64GiB. However, the price difference wasn’t that much, and at least a handful of times I have been quite happy to have that capacity available. And in the worst case I just have everything sitting in disk cache after a warm-up, which is a small performance win on every small task.
I have enough disk space.
Plus my /tmp is a ramdisk and sometimes I compile large things in there (Firefox), so it is nice to let it be flushed out to disk if there are more important uses for that RAM than holding a file that most likely won’t be read again.
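For reference, a /tmp ramdisk like this is usually just a tmpfs mount. A minimal sketch of the fstab line (the size cap here is my assumption):

    tmpfs  /tmp  tmpfs  defaults,size=64G,mode=1777  0  0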
is framework agnostic
But it isn’t, because the icons depend on framer-motion and React. JSX itself is framework agnostic, but these icons aren’t.
You can trivially provide on-hover animations using CSS inside the SVG itself, and then your icons really are framework agnostic. Not to mention smaller to download and more efficient to execute.
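A minimal sketch of what I mean (the icon path and class name are made up, and this assumes the SVG is inlined in the page so it receives hover events):

    <svg viewBox="0 0 24 24" width="24" height="24" xmlns="http://www.w3.org/2000/svg">
      <style>
        .arrow { transition: transform 0.2s ease; }
        svg:hover .arrow { transform: translateX(3px); }
      </style>
      <path class="arrow" d="M4 12h14M12 6l6 6-6 6"
            fill="none" stroke="currentColor" stroke-width="2"/>
    </svg>

Drop that markup into any framework, or none, and the animation comes along for free.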
There are three parts to the whole push system: (1) the protocol the app server uses to deliver messages to the push server, (2) how the client obtains a push endpoint/subscription and (3) how incoming messages are handed to the application on the device.
My point is that 1 is the core and is already available across devices, including over Google’s push notification system, and making custom push servers is very easy. It would make sense to keep that interface but provide alternatives for 2 and 3. This way browsers can use the JS API for 2 and 3 while other apps use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, the ability to offer tiny shims for people who don’t want to self-host, and still maintains the option to fully self-host as desired.
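And a custom sender really is tiny. A minimal sketch, assuming the pywebpush library and a subscription previously handed over by the client (all values are placeholders):

    # pip install pywebpush
    from pywebpush import webpush

    # The subscription as returned by PushSubscription.toJSON() in the browser.
    subscription = {
        "endpoint": "https://push.example.com/v1/abc123",  # placeholder
        "keys": {"p256dh": "<client public key>", "auth": "<auth secret>"},
    }

    webpush(
        subscription_info=subscription,
        data="hello from the app server",
        vapid_private_key="vapid_private.pem",  # your VAPID key pair
        vapid_claims={"sub": "mailto:admin@example.com"},
    )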
% free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        15Gi        90Gi       523Mi        22Gi       110Gi
Swap:           63Gi          0B        63Gi
I’ll use it eventually. Just gotta let the disk cache warm up.
I don’t want the end executable to have to bundle these files and re-parse them each time it gets run.
No matter how you persist data you will need to re-parse it; the question is really just whether the new format is more efficient to read than the old one. Some formats, such as FlatBuffers and Cap’n Proto, are designed to have very efficient loading processes (they can be read in place, straight from a memory-mapped file, with no real parse step).
(Well, technically you could persist the process image to disk, but that tends to be much larger than the serialized data would be and has issues such as defeating ASLR. It is very rarely done.)
Lots of people are talking about Pickle, but it isn’t particularly fast. That being said, with Python you can’t expect much to start with.
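If in doubt, the load cost paid on every run is easy to measure. A minimal sketch (the data is just a stand-in):

    import pickle
    import time

    # Stand-in for whatever the parsed files contain.
    data = {"items": list(range(1_000_000))}

    with open("cache.pkl", "wb") as f:
        pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)

    # This is the cost paid on every run of the executable.
    start = time.perf_counter()
    with open("cache.pkl", "rb") as f:
        loaded = pickle.load(f)
    print(f"pickle.load took {time.perf_counter() - start:.3f}s")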
Must be because Factorio released 2.0 and the Space Age DLC recently.
IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and in the browser requires it, so support is universal).
UnifiedPush would be better as a framework for WebPush providers and a client API, but using the same protocol and backends as WebPush (how to get a WebPush endpoint is defined as a JS API in browsers, which would need to be adapted).
Why are these TypeScript + JSX rather than just SVGs? It seems that the paths are defined as SVG but they are using some JavaScript framework to define the animations rather than just using SVG or CSS animations.
Why WASM? It seems to me that the additional attack surface of WASM is negligible compared to JavaScript itself (and IIUC disabling JavaScript also disables WASM).
Blocking third-party frames is definitely a good way to reduce your attack surface though. Ad embeds are often used to distribute exploits.
What are you running, MS-DOS? *laughs in multitasking*
I just drag my vi terminals to another workspace and launch a new editor.
A few hundred a month is just a few per day. That is pretty low volume by most standards.
I would say in general if the SMTP server could be replaced by a single human writing and mailing snail-mail letters by hand it qualifies as low volume.
This isn’t how YouTube has streamed videos for many, many years.
Most video and live streaming works by serving a sequence of small self-contained video files (often in the 1-5s range). Sometimes the audio is served as separate files too (this avoids duplication, since the same audio is typically used for every video quality, and it also enables audio-only streaming). This is done for a few reasons, but primarily to allow fairly seamless switching between quality levels on the fly.
Inserting ads in a stream like this is trivial: you just add a few ad chunks between the regular video chunks. The only real complication is that the ad needs to start at a chunk boundary. (And if you want it to be hard to detect, you probably want the ad’s length to be a multiple of the regular chunk length.) There is no re-encoding or other processing required at all. Just update the “playlist” (the list of chunks in the video) and the player will play the ad without knowing that it is “different” from the rest of the chunks.
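As a sketch, the spliced “playlist” looks roughly like this in HLS terms (segment names and durations are made up):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:5
    #EXTINF:4.0,
    video_000.ts
    #EXTINF:4.0,
    video_001.ts
    #EXT-X-DISCONTINUITY
    #EXTINF:4.0,
    ad_000.ts
    #EXT-X-DISCONTINUITY
    #EXTINF:4.0,
    video_002.ts

The EXT-X-DISCONTINUITY tags are only required if the ad was encoded with different parameters; encode the ad to match the main content and they can be omitted, which is exactly what makes the splice hard to detect.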
Ah, OK. You aren’t doing auth. Then I don’t understand how this is relevant.
Are you doing auth in the reverse proxy for Jellyfin? Do you use Chromecast or any non-web interface? If so, I’m very interested in how you got it to work.
It honestly sounds more like someone convincing you that crypto is great than someone convincing you that Greenpeace is great.