• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: January 4th, 2024



  • Based on how you’re observing the load move from 100% CPU to 100% GPU, I would suggest that it is “working” to some extent.

    I don’t have any experience with that GPU, but here are a few things to keep in mind:

    1. When you use a GPU for video encoding, it’s not the case that it’s ‘accelerating’ what you were doing without it. What you’re doing is switching from running a software implementation of an HEVC encoder on your CPU to running a hardware implementation of an HEVC encoder on your GPU. Hardware and software encoders are very different from one another, and they won’t combine forces; it’s one or the other.

    2. Video encoders have literally hundreds of configuration options, and how you configure the encoder has a massive impact on encoding time. Getting results I’m happy with for archiving usually means encoding slower than real-time on my 5800X CPU; if you’re getting over 100 fps on your CPU, I would guess you have it set up with some very fast settings - I wouldn’t recommend those for anything other than real-time transcoding. Conversely, it’s possible you have slower settings configured for your GPU.

    3. Video encoding is very difficult to do “well” in hardware. Generally speaking, software is better suited to the sorts of algorithms needed. GPUs can speed up an encode, but the result won’t be as good in terms of quality versus file size - for the same quality a GPU encode will be bigger, and for the same file size it will be lower quality.

    I guess this is a roundabout way of suggesting that if you’re happy with the quality of your 100fps CPU encodes, stick with it!
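    The either/or choice between software and hardware encoders described above can be sketched as the two distinct command lines you would pass to a tool like ffmpeg. This is only an illustrative sketch, assuming ffmpeg with libx265 (software) and NVIDIA’s hevc_nvenc (hardware); the exact presets and quality values are placeholders, and the commands are only assembled and printed here, not executed.

```python
def hevc_command(src: str, dst: str, use_gpu: bool) -> list[str]:
    """Build an ffmpeg HEVC encode command - one encoder or the other, never both."""
    if use_gpu:
        # Hardware path: the GPU's fixed-function encoder does the work.
        codec_args = ["-c:v", "hevc_nvenc", "-preset", "p7", "-cq", "28"]
    else:
        # Software path: libx265 runs on the CPU; slower presets trade
        # encode time for better quality per bit.
        codec_args = ["-c:v", "libx265", "-preset", "slow", "-crf", "22"]
    return ["ffmpeg", "-i", src] + codec_args + [dst]

# The two alternatives from the comment above, side by side:
print(" ".join(hevc_command("in.mkv", "out_cpu.mkv", use_gpu=False)))
print(" ".join(hevc_command("in.mkv", "out_gpu.mkv", use_gpu=True)))
```

    Note how nothing is shared between the two branches: switching `use_gpu` swaps the entire encoder, which is why the CPU and GPU can’t “combine forces” on one encode.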





  • Single GPU with scripts that run before and after the VM is active to unload the GPU driver modules from the kernel.

    I think this was my starting point and I had to do just a few small tweaks to get it right for my setup - i.e. unload and reload the precise set of kernel modules that block GPU passthrough on my machine.

    https://gitlab.com/Karuri/vfio

    At this point, from a user-experience point of view, it’s not much different from dual booting, just with a different boot sequence. The main advantage is that I can keep the Windows OS on a small virtual hard drive for ease of backup/clone/restore, and have game installs on a dedicated NVMe that doesn’t need backing up.
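    The unload/reload dance described above can be sketched as the command sequence such a hook script produces around VM start and stop. This is only a sketch: the module names assume an NVIDIA card (they differ for AMD/Intel), the phase names mirror libvirt’s “prepare”/“release” hook phases, and the commands are only assembled here, not run.

```python
# Assumed NVIDIA driver modules, listed in unload (dependency) order.
GPU_MODULES = ["nvidia_drm", "nvidia_modeset", "nvidia_uvm", "nvidia"]

def hook_commands(phase: str) -> list[list[str]]:
    """Return the modprobe calls to run for a given VM lifecycle phase."""
    if phase == "prepare":
        # Before the VM starts: unload the host driver, then bind vfio-pci
        # so the GPU can be passed through to the guest.
        return [["modprobe", "-r", m] for m in GPU_MODULES] + [["modprobe", "vfio-pci"]]
    if phase == "release":
        # After the VM stops: drop vfio-pci and hand the GPU back to the
        # host driver, reloading modules in reverse order.
        return [["modprobe", "-r", "vfio-pci"]] + [["modprobe", m] for m in reversed(GPU_MODULES)]
    return []

for cmd in hook_commands("prepare"):
    print(" ".join(cmd))
```

    The tricky part in practice is getting that module list exactly right for your machine - which is the “few small tweaks” mentioned above.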


  • FBJimmy@lemmus.org to Linux@lemmy.ml · Switching back to Windows. For now.
    6 months ago

    I’ve been 100% Linux for my daily home computing for over a year now… with one exception… To be honest, I didn’t even try particularly hard to make gaming work under Linux.

    Instead I have a Windows VM - set up with full passthrough access to my GPU and its own NVMe - just for Windows gaming. To my mind it’s now in the same category as running console emulation.

    As soon as I click shut down in Windows, it pops me straight back into my Linux desktop.











  • I agree it’s good that the article is not hyping up the idea that the world will now definitely be saved by fusion and so we can all go on consuming all the energy we want.

    There are still some sloppy things about the article that disappoint me though…

    1. They seem to be implying that 500 TW is obviously much larger than 2.1 MJ… but that’s comparing a power to an energy; without knowing how long the 500 TW is sustained for, the comparison is meaningless.

    2. They imply that using more power than available from the grid is infeasible, but it evidently isn’t as they’ve done it multiple times - presumably by charging up local energy storage and releasing it quickly. Scaling this up is obviously a challenge though.

    3. The weird mix of metric prefixes (mega) and spelled-out numbers (trillions) in a single sentence is a bit triggering - that might just be me though.
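    The power-vs-energy point in 1 above is easy to check with energy = power × time. Assuming a pulse duration of a few nanoseconds (an assumption on my part, typical of this kind of laser shot - the article gives no duration), the two numbers are entirely consistent:

```python
power_w = 500e12   # 500 TW peak laser power (a power, in watts)
pulse_s = 4.2e-9   # assumed ~4 ns pulse duration
energy_mj = power_w * pulse_s / 1e6  # energy in megajoules

print(energy_mj)   # 500 TW sustained for ~4 ns comes to about 2.1 MJ
```

    So a staggeringly large power and a modest-sounding energy describe the same shot - which is exactly why quoting one against the other without the duration is meaningless.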