That’s cool, nothing bad ever happens when Germany’s economy tanks…
Based on how you’re observing the load move from 100% CPU to 100% GPU, I would suggest that it is “working” to some extent.
I don’t have any experience with that GPU, but here are a few things to keep in mind with this:
When you use a GPU for video encoding, it’s not the case that it’s ‘accelerating’ what you were doing without it. What you’re doing is switching from running a software implementation of an HEVC encoder on your CPU to running a hardware implementation of an HEVC encoder on your GPU. Hardware and software encoders are very different to one another and they won’t combine forces; it’s one or the other.
Video encoders have literally hundreds of configuration options, and how you configure the encoder will have a massive impact on the encoding time. Getting results that I’m happy with for archiving usually means encoding at slower than real-time on my 5800X CPU; if you’re getting over 100 fps on your CPU, I would guess that you have it set up with some very fast settings - I wouldn’t recommend this for anything other than real-time transcoding. Conversely, it’s possible you have slower settings configured for your GPU (there’s a rough sketch of both paths below).
Video encoding is very difficult to do “well” in hardware. Generally speaking, software is better suited to the sort of algorithms that are needed. GPUs can be beneficial in speeding up an encode, but the result won’t be as good in terms of quality vs file size - for the same quality a GPU encode will be bigger, or for the same file size a GPU encode will be lower quality.
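To make that concrete, here’s a rough sketch of the two paths driven from Python - everything in it (the filenames, the quality numbers, and NVENC as the hardware encoder) is a placeholder assumption, since I don’t know your GPU; AMD/Intel cards would go through VAAPI or QSV instead:

```python
import subprocess

SRC = "input.mkv"  # placeholder source file

# CPU path: software HEVC via libx265. A slow preset plus CRF rate
# control is a typical archive-quality setup -- expect well under
# real-time on most CPUs.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx265", "-preset", "slow", "-crf", "22",
    "cpu_encode.mkv",
], check=True)

# GPU path: the dedicated hardware HEVC block (NVENC here, assumed).
# Much faster, but at a comparable quality target the file will
# generally come out larger than the libx265 result.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "hevc_nvenc", "-preset", "p7", "-rc", "vbr", "-cq", "22",
    "gpu_encode.mkv",
], check=True)
```

The exact numbers don’t matter - the point is that the two runs use completely different encoder implementations, so their speeds and size-vs-quality trade-offs aren’t directly comparable.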
I guess this is a roundabout way of suggesting that if you’re happy with the quality of your 100 fps CPU encodes, stick with it!
Yelp still exists!?
Who is finding a McDonalds location on their phone’s map app and then thinking “I’d better cross-check this against Yelp first”!?
1440p for the win!
Single GPU, with scripts that run before the VM starts (to unload the GPU driver modules from the kernel) and after it shuts down (to load them back).
I think this was my starting point and I only had to make a few small tweaks to get it right for my setup - i.e. unload and reload the precise set of kernel modules that block GPU passthrough on my machine (roughly what the sketch below does):
https://gitlab.com/Karuri/vfio
At this point, from a user-experience point of view, it’s not much different to dual booting, just with a different boot sequence. The main advantage though is that I can have the Windows OS on a small virtual hard drive for ease of backup/clone/restore, and have game installs on a dedicated NVMe that doesn’t need backing up.
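For flavour, here’s a rough Python sketch of what my before/after hooks amount to - the module names are examples for an NVIDIA card and the service name will vary by distro, so treat this as an illustration rather than the actual scripts from that repo:

```python
import subprocess

# Example module set for an NVIDIA card -- yours will differ.
GPU_MODULES = ["nvidia_drm", "nvidia_modeset", "nvidia_uvm", "nvidia"]

def before_vm_starts():
    # Stop the display manager so nothing holds the GPU open,
    # unload the host driver, then hand the card to vfio-pci.
    subprocess.run(["systemctl", "stop", "display-manager"], check=True)
    for mod in GPU_MODULES:
        subprocess.run(["modprobe", "-r", mod], check=True)
    subprocess.run(["modprobe", "vfio-pci"], check=True)

def after_vm_stops():
    # Reverse on VM shutdown: release vfio-pci, reload the host
    # driver, and bring the Linux desktop back.
    subprocess.run(["modprobe", "-r", "vfio-pci"], check=True)
    for mod in reversed(GPU_MODULES):
        subprocess.run(["modprobe", mod], check=True)
    subprocess.run(["systemctl", "start", "display-manager"], check=True)
```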
I’ve been 100% Linux for my daily home computing for over a year now… with one exception… To be honest, I didn’t even try particularly hard to make gaming work under Linux.
Instead I have a Windows VM - set up with full passthrough access to my GPU and its own NVMe - just for Windows gaming. To my mind it’s now in the same category as running console emulation.
As soon as I click Shut Down in Windows, it pops me straight back into my Linux desktop.
This video of one of the rioters getting repeatedly struck with bricks thrown by his own mates is well worth a watch… Or two… Or three…
I had some hard-to-track-down intermittent network issues when I upgraded from LMDE5 to LMDE6 - the solution was to get a newer kernel from backports - it’s fairly painless…
No experience myself, but one of the fitness YouTubers I like posted this recently: https://youtu.be/_ro-YvnLF-4
The real question is why did they install a system based on 5.25" floppy disks in 1998 in the first place!?
The 5.25" floppy was surpassed by the 3.5" floppy by 1988 - ten years prior to this system’s installation - and by 1998 most new software was being distributed on CD-ROM. So by my reckoning, in 1998 they installed a ‘new’ system based on hardware that was 1.5 generations out of date, and they haven’t updated it in the 26 years since.
At Kuala Lumpur International Airport half the signs were like this near our gate a couple weeks ago…
Yep, especially surface mount lithium batteries - they’re very sensitive to the solder reflow profile being juuuust right
I’ve found all of the tabs on Google have a tendency to go AWOL these days - like the other day I was searching for camera lenses and Google took away the ‘Products’ (formerly known as ‘Shopping’) tab, even though what I was searching for couldn’t have been more obviously a product. Instead, all I could get were super low quality copy-paste blogs vaguely related to the product.
Fun fact: While metric predates our full understanding of electricity, our understanding of electricity played a key role in the definition of the SI units.
As I understand it, the reason the SI unit for mass is kg not g - making it an outlier to my mind - is so that electrical engineers could keep volts and amperes as convenient numbers.
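A quick way to see the coherence argument (my own paraphrase, not taken from the paper below):

```latex
% With the kilogram as the base unit of mass, the practical
% electrical units multiply out with no stray powers of ten:
\[
1\,\mathrm{V} \times 1\,\mathrm{A}
  = 1\,\mathrm{W}
  = 1\,\mathrm{J/s}
  = 1\,\mathrm{kg\,m^{2}\,s^{-3}}
\]
% Had the gram been kept as the base unit instead, the coherent
% unit of power would be 1 g m^2 s^-3 = 10^-3 W, and the volt and
% ampere would pick up awkward factors of 10^3.
```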
Long read: https://arxiv.org/abs/1512.07306
I agree it’s good that the article doesn’t hype up the idea that the world will now definitely be saved by fusion, and that we can all therefore go on consuming all the energy we want.
There are still some sloppy things about the article that disappoint me though…
They seem to be implying that 500 TW is obviously much larger than 2.1 MJ… but one is a power and the other is an energy, so without knowing how long the 500 TW is required for, the comparison is meaningless (see the quick arithmetic below).
They imply that using more power than is available from the grid is infeasible, but it evidently isn’t, as they’ve done it multiple times - presumably by charging up local energy storage and releasing it quickly. Scaling this up is obviously a challenge though.
The weird mix of metric prefixes (mega) and standard numbers (trillions) in a single sentence is a bit triggering - that might just be me though.
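On the first point: if (my assumption, not stated in the article) the 500 TW figure is a laser pulse lasting a few nanoseconds, then the two numbers are actually the same order of magnitude:

```latex
% Energy = power x time; assuming a ~4 ns pulse:
\[
E = P\,t
  = (500 \times 10^{12}\,\mathrm{W}) \times (4 \times 10^{-9}\,\mathrm{s})
  = 2 \times 10^{6}\,\mathrm{J}
  = 2\,\mathrm{MJ}
\]
```

So 500 TW isn’t ‘much larger’ than 2.1 MJ at all - they’re two descriptions of roughly the same pulse, which is exactly why quoting them side by side without a duration tells you nothing.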
Check out this elite human, somehow certain it’s not drug or alcohol related.