It’s necessary for my very important hobby of generating anime nudes.
Can we stop using the Steven Crowder meme already. The guy is a total chode.
Lol. He gives chodes a bad rep. Call him what he is. A christofascist misogynist grifter.
I don’t really disagree, but I think that was the original intent of the meme: to show Crowder as a complete chode by having him assert really stupid, deeply unpopular ideas.
The meme’s use has become too soft on Crowder lately, though, I think.
I’ve noticed lately that many memes’ origins are worse than I’d thought from the context they’re used in. Racist, homophobic, and lying people are not something I usually accept as entertainment, but they sneak their way unnoticed into my (non-news) feed through memes. I guess most people don’t know the origins of a meme and use it according to the meaning they formed on their own. Other memes, like the distracted boyfriend meme, are meaningless stock photos, so I understand why many people use memes without thinking about their origins.
Anyway, thanks for pointing out who the person in the picture actually is.
I must admit when I learned this was Crowder I had a sad
Just change and reupload :D
Oh please. There are better templates than this stupid Nazi cunt. I really don’t want to see this fuckface.
Yes! This is a nice alternative template for example.
For the longest time I just thought he was that one guy from modern family.
I just now learned he was not
Isn’t the joke that he won’t change his mind on his stupid ideas, though?
I thought NixOS was the new Arch btw
From: https://knowyourmeme.com/memes/btw-i-use-arch
BTW I Use Arch is a catchphrase used to make fun of the type of person who feels superior because they use a more difficult Linux distribution.
I think that’s fair to apply to some NixOS users. Source: BTW I use NixOS.
I mean the barrier of entry is kind of high if you’re used to more traditional package managers.
Source: I tried using nix on my Debian machine
Check out my comment about setting up nix on Debian
Thanks, I’ll try that
There’s definitely a steep initial learning curve, as you observed, and dialing in your configuration is time-consuming in my experience, but once you’ve got things the way you like it’s pretty smooth sailing from there.
Edit: removed compared to arch references. Not relevant to the comment.
As someone who tried NixOS recently for the first time, it feels like an uphill battle.
Some immediate concerns I have as a newbie are below. Bear in mind that I’m a single user on a single system.
Organisation is daunting as fuck
Even a relatively simple desktop config seems rather large to me. I expect the complexity of my config to balloon if I were to use this as my primary OS. There seems to be no consensus on how things should be separated.
I’ve heard home-manager is good, but I don’t really get the point of it. What does it achieve for me that editing configuration.nix doesn’t? I’ve yet to find a benefit. It’s just another place to dump endless configs and another command to remember to run.
Installing software feels like the roll of a dice
I installed NixOS to try Hyprland, and their docs say to just use `programs.hyprland.enable = true`, which I’ve come to learn is a module. But that’s not the only way to install things! You also have system packages and user packages! I just want to install some software, I don’t want to have to look up whether it’s a module or a package every time I want something new. I’m never sure what I should add to which section. No other distro that I know of has this problem! Having 3 different places to add software seems excessive. What am I using? Windows? And now there’s Flakes too. I’m sure they’re great, but right now I just see them as yet another way to install software on Nix. Great.
There’s more, but I’ll leave it there for now. I’m sure there are reasonable answers to all that I’ve said, but I’m just frustrated. I really want to like Nix, but it’s not making it easy.
tl;dr: Two things. 1) Lack of consensus on how configs are organised is confusing. 2) Having 3 different ways of installing software (modules/packages/flakes) does not feel better than `apt install` or `pacman -Syu` etc.
Nix is a programming language, so you have to organize your configuration yourself like you would for any programming project, usually by splitting it into multiple files. Also, you can search system modules on the same page where you search for packages, though usually there’s not much of an explanation of what a module does beyond reading the source code.
System modules use the package from the repository while enabling some systemd stuff and whatever other options you’ll want enabled with it. On a single-user system, there is no meaningful difference between system packages and user packages.
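For what it’s worth, the three places the newbie comment complains about can all sit in one file. A minimal sketch (the user name and the specific packages here are placeholders, not from the original comments):

```nix
{ pkgs, ... }: {
  # Module: enables the package *and* wires up session files, portals, etc.
  programs.hyprland.enable = true;

  # System package: just puts the binary in the system profile for everyone
  environment.systemPackages = [ pkgs.htop ];

  # User package: same idea, but only in one user's profile
  users.users.alice.packages = [ pkgs.ripgrep ];
}
```

On a single-user machine the last two really are interchangeable, as noted above.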
Home-manager can be used to manage files in your home directory, like your configs for apps and stuff. It can also have extra module options for apps so you can set up their settings declaratively. It’s not for everyone, but this is what it’s supposed to do, outside of your normal nix configuration.
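As a sketch of what that looks like in practice (the option names are from home-manager’s module set; the values are placeholders):

```nix
{ pkgs, ... }: {
  # Declarative app settings: home-manager generates ~/.gitconfig for you
  programs.git = {
    enable = true;
    userName = "alice";
    userEmail = "alice@example.com";
  };

  # Arbitrary dotfiles: manage any file under ~/.config declaratively
  xdg.configFile."foot/foot.ini".text = ''
    font=monospace:size=11
  '';
}
```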
Nix flakes aren’t a way to install packages, but a way to manage nix-based projects, which include nix packages and your NixOS configuration, and they’re supposed to make things more reproducible, so they’re not directly related to installing packages. However, if a package for something isn’t in the repos, someone may make a nix flake for installing and building it.
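A minimal flake that pins nixpkgs for a NixOS configuration might look like this (the hostname and the channel branch here are placeholders):

```nix
{
  # Pinning the input is what buys you reproducibility
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.my-host = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```

`nix flake update` then bumps the pinned revision in `flake.lock`, which is the “control the package definitions” part.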
It’s understandable that you’re having trouble though, because the documentation for nix and NixOS is terrible, and it only got better for me once I actually spent time learning the nix programming language.
Organisation is daunting as fuck
Read up on modules. It’s obvious you haven’t even googled it.
I’ve heard home-manager is good, but I don’t really get the point of it. What does it achieve for me that editing configuration.nix doesn’t?
- You’re not supposed to use configuration.nix for userland packages. Separation of concerns, and so you don’t need to rebuild all the time.
- Declarative package management and configuration
- You only need to remember one command to install and update all your packages
Installing software feels like the roll of a dice
There are many ways to install a package, and that allows you to choose the one you want to use. Nobody’s forcing you to use the module instead of just the package…
And now there’s Flakes too. I’m sure they’re great, but right now I just see them as yet another way to install software on Nix. Great.
You don’t use flakes to install packages. You use them to control the package definitions, pin specific versions, add packages from outside of nixpkgs in a declarative manner, and so on.
I really want to like Nix, but it’s not making it easy.
You really want to like Nix, but don’t want to learn basic concepts and instead expect it to behave like every other distro.
If installing packages is too much for you, give up on NixOS and use something else. That’s literally the easiest and most issue-free part of using it. You can install Hyprland through nix on Debian or whatever distro you want.
does not feel better than `apt install` or `pacman -Syu` etc.
Yeah, why would anyone want a list of packages they currently have installed. Can’t think of any benefits, nope…
I mean at this point, people use that phrase themselves, so I don’t think it really makes fun of them anymore.
I use EndeavorOS, btw
deleted by creator
Damn you’re kinda right
Good point
At least the Arch people are not shilling for some corp.
I’m tired of people taking sides like companies give a shit about us. I wouldn’t be surprised to see five comments saying something like “you shouldn’t buy Nvidia AMD is open source” or “you should sell your card and get an amd card.”
I’d say whatever you have is fine; it’s better for the environment if you keep it longer anyway. There are so many people who parrot things without giving much thought to an individual’s situation or the complexity of a company’s behavior. Every company’s job is to maximize profit while minimizing loss.
Basically, if everyone blindly chose AMD over Nvidia, the roles would flip: AMD would start doing the things Nvidia is doing to maintain dominance, increase profit, and reduce cost, and Nvidia would start trying to win market share back from AMD by opening up and becoming more consumer-friendly and competitively priced.
For individuals, selling your old card and buying a new AMD card for the same price will generally net you a slower card, or if you go used there’s a good chance it doesn’t work properly and the seller ghosts you. I should know, I tried to get a used AMD card and it died every time I ran a GPU-intensive game.
I also went the other way upgrading my mother’s Nvidia card with a new AMD card that was three times as expensive as her Nvidia card ($50) would be on eBay and it runs a bit slower than her Nvidia card did. She was happy about the upgrade though because I used that Nvidia card in her movie server resulting in better live video transcoding than a cheap AMD card would.
Who is saying to sell your card so you can buy AMD?
A collection of folks ranging from moralizing open source fans to Wayland aficionados and AMD fanboys. They also like to blame any Linux problem the user might be having on Nvidia use, even when the user hasn’t actually mentioned Nvidia. Daughter got chlamydia? Shouldn’t have gone with Nvidia.
You’d be surprised, it happens quite a bit, especially when you’re trying to help an Nvidia user with technical issues.
The problems I see people have are kinda trivial and are usually fixed by installing a package or changing a kernel parameter. Stuff you spend a few minutes researching for a half-second fix. It’s like saying “Apple Pages unintuitive to use? Throw out your MacBook and get a PC!”
I don’t like MacBooks but I’m not going to tell them to replace it.
When there are only two options and one is nearly a monopoly, do you still think there’s no good side?
Both open source drivers from Nvidia and AMD have an MIT license, so what’s your point? Both firmwares are still proprietary. You have the option of Intel Arc now, so it’s not even a duopoly anymore.
Way to sidestep the question. I was talking about market dominance not licenses.
Why do you keep linking the Creative Commons copyright website? It’s not music, a movie, or a piece of art.
I’ve given up on whatever you’re talking about because I don’t think you even know what you’re talking about.
I have no reading comprehension, so I won’t continue the discussion
That’s actually smart 👍
Steven Crowder is a despicable human and does not deserve a meme template.
I thought we were using the Calvin and Hobbes image now.
It’s impressive you know the source of this meme
He is a despicable human. Point and laugh at the moron. Make an example out of him instead of trying to sanitize the internet.
Not a criticism of you, but a little fun fact about him for others: he has a bunch of friends who “aren’t” Nazis but who like to call themselves, or have friends who call themselves, stuff like “race realists”.
I run Stable Diffusion with ROCm. Who needs CUDA?
What distro are you using? Been looking for an excuse to strain my 6900XT.
I started looking at getting it running on Void and it seemed like (at the time) there were a lot of specific version dependencies that made it awkward.
I suspect the right answer is to spin up a container, but I resent Docker’s licensing BS too much for that. Surely by now there’d be a purpose built live image- write it to a flash drive, reboot, and boom, anime ~~vampire princes~~ hot girls
If you don’t like Docker, take a look at containerd and podman. I haven’t done any CUDA with podman, but it is supposed to work.
I resent Docker’s licensing BS too much for that.
“Pay us if you’re a mid+ sized company” is BS?
I think people don’t like dramatic changes in business model. I had installed it for like 3 days, long before the switchover, to test out something from another dev. When they made the announcements, the hammer went down in our org not to use it. But that didn’t stop them from sending sales-prospecting/vaguely threateningly worded email to me, who has no cheque-writing authority anyway.
Plus, I’m not a fan of containers.
STOP DOING CONTAINERS.
- Machines were not meant to contain other smaller machines.
- Years of virtualization yet no real-world use found for anything but SNES emulation
- Wanted to “ship your machine to the end-user” anyway for a laugh? We had a tool for that. It was called “FedEx”.
- “Yes, please give me `docker compose up meatball-hero` of something. Please give me Alpine Linux On Musl of it” – Statements dreamed up by the utterly deranged.
“Hello, I would like 7.5GB of VM worth of apples please”
THEY HAVE PLAYED US FOR ABSOLUTE FOOLS.
Poor capitalists need to pay for the tools that make them money. Stop it, I’ll break down in tears just thinking about the horror of it.
Do you use some different solution, or did you completely avoid containers and orchestration?
But that didn’t stop them from sending sales-prospecting/vaguely threateningly worded email to me, who has no cheque-writing authority anyway.
They only spam me with promotional material. You used the business email I’m guessing?
I use stable diffusion on rocm in an ubuntu distrobox container. Super easy to set up and there’s a good guide in the opensuse forum for it.
That is exactly what I do too and it works perfectly! This is a link to said guide.
It’s effectively: install distrobox, save the config, run `distrobox assemble`, then `distrobox enter rocm`, clone the Automatic1111 stable diffusion webui somewhere, and run `bash webui.sh` to launch it.
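I haven’t seen the exact manifest the guide uses, so the `distrobox.ini` below is a guess at the shape (the image and package list are assumptions); the commented commands are the sequence described above:

```shell
# Write a hypothetical distrobox manifest for a ROCm box
cat > distrobox.ini <<'EOF'
[rocm]
image=ubuntu:22.04
additional_packages="git python3 python3-venv"
init=false
EOF

# Then, on a machine with distrobox installed (not run here):
#   distrobox assemble create --file distrobox.ini
#   distrobox enter rocm
#   git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
#   cd stable-diffusion-webui && bash webui.sh
echo "wrote $(wc -l < distrobox.ini) lines"
```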
I set mine up on arch. There’s an aur package, but it didn’t work for me.
After some failed attempts, I ended up having success following this guide.
Some parts are out of date though, so if it fails to install something you’ll need to have it target a newer available package. The main example of this is inside the `webui-user.sh` file: it tells you to replace an existing line with `export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1"`. This will fail because that version of pytorch is no longer available, so instead you need to replace the download URL with an up-to-date one from the pytorch website. They’ve also slightly changed the layout of the file; right now the correct edit should be to find the `# install command for torch` line and change the command under it to `pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7`.
You may need to swap pip to pip3 too, if you get a pip error. Overall it takes some troubleshooting; look at any errors you get and see if it’s calling for a package you don’t have or anything like that.
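To make that edit concrete, here’s a sketch of the substitution as a `sed` one-liner (the sample file contents and the rocm5.7 nightly index URL are the examples from the comment above; check pytorch.org for the current one):

```shell
# Recreate the stale line the guide tells you to add (sample file)
cat > webui-user.sh <<'EOF'
# install command for torch
export TORCH_COMMAND="pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1"
EOF

# Swap the dead wheel index for the newer nightly ROCm one
sed -i 's|pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1|pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7|' webui-user.sh

grep TORCH_COMMAND webui-user.sh
```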
Ubuntu native for me…no containers needed.
Arch ofc.
I use Fedora. It works great, with some tweaks to the startup script.
I can confirm that it works just fine for me. In my case I’m on Arch Linux btw and a 7900XTX, but it needed a few tweaks:
- Having xformers installed at all would sometimes break startup of stable-diffusion, depending on the fork
- I had an internal and an external GPU, so I wanted to set HIP_VISIBLE_DEVICES so that it only sees the correct one
- I had to update torch/torchvision and set HSA_OVERRIDE_GFX_VERSION
I threw what I did into https://github.com/icedream/sd-multiverse/blob/main/scripts/setup-venv.sh#L381-L386 to test several forks.
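For anyone hitting the same two tweaks, this is roughly what they look like as environment variables. The values are assumptions for a 7900 XTX (gfx1100); other cards need other values, and device indices come from `rocminfo`:

```shell
# Only expose the discrete GPU to ROCm, not the iGPU
export HIP_VISIBLE_DEVICES=0

# Make ROCm treat the card as a supported gfx target
export HSA_OVERRIDE_GFX_VERSION=11.0.0

echo "HIP devices: $HIP_VISIBLE_DEVICES, gfx override: $HSA_OVERRIDE_GFX_VERSION"
```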
CUDA?! I barely even know’a!
Then show us your anime titty pics!
Earlier in my career, I compiled tensorflow with CUDA/cuDNN (NVIDIA) in one container and then in another machine and container compiled with ROCm (AMD) for cancerous tissue detection in computer vision tasks. GPU acceleration in training the model was significantly more performant with NVIDIA libraries.
It’s not like you can’t train deep neural networks without NVIDIA, but their deep learning libraries combined with tensor cores in Turing-era GPUs and later make things much faster.
AMD is catching up now. There are still performance differences, but they are probably not as big in the latest generation.
Things have changed.
I can now run mistral on my intel iGPU using Vulkan.
If you’re talking about “running”, that’s inference. I’m talking about elapsed training time.
Same thing. Inference just uses a lot less memory.
deleted by creator
Brother of “I need nVidia for raytracing” while only playing last decade games.
I completely unironically know people who bought a 4090 exclusively to play League
Not gonna lie, raytracing is cooler on older games than it is newer ones. Newer games use a lot of smoke and mirrors to simulate raytracing, which means raytracing isn’t as obvious of an upgrade, or can even be a downgrade depending on the scene. Older games, however, don’t have as much smoke and mirrors so raytracing can offer more of an improvement.
Also, stylized games with raytracing are 10/10. Idk why, but applying rtx to highly stylized games always looks way cooler than on games with realistic graphics.
Quake 2 does look pretty rad in RTX mode
Playing old games with ray tracing is just as amazing as playing new games with ray tracing. I know Quake RT gets too dark to play halfway through; they should have added light sources in those areas.
Then again, I played through cyberpunk 2077 at 27fps before the 2.0 update. Control was pretty good at 50fps, and I couldn’t recommend portal enough at about 40fps on my 2070 super. I don’t know if teardown leveraged rt cores but digital foundry said it ran better on Nvidia and I played through that game at 70fps.
I love playing with new technologies. I wish graphics card prices stayed down because rt is too heavy nowadays for my first gen RT card. I play newer games with rt off and most setting turned down because of it.
I love playing with new technologies. I wish graphics card prices stayed down because rt is too heavy nowadays for my first gen RT card. I play newer games with rt off and most setting turned down because of it.
I wish they stayed down because VR has the potential to bring back crossfire/SLI. Nvidia’s gameworks already has support for using two GPUs to render different eyes and supposedly, when properly implemented, it results in a nearly 2x increase in fps. However, GPUs are way too expensive right now for people to buy two of them, so afaik there aren’t any VR games that support splitting rendering between two GPUs.
VR games could be a hell of a lot cooler if having 2 GPUs was widely affordable and developers developed for them, but instead it’s being held back by single-gpu performance.
Wasn’t there an issue with memory transfer latency across the connector? I thought they killed it because the latency was too high for higher frame rates causing a consistent stuttering.
They tried to reuse that enterprise connector with higher throughput but last I heard they never fully developed support for it because of a lack of interest from devs.
I’m holding out building a new gaming rig until AMD sorts out better ray-tracing and cuda support. I’m playing on a Deck now so I have plenty of time to work through my old backlog.
I was straight up thinking of going to AMD just to have fewer GPU problems on Linux myself
In my experience, AMD is a bliss on Linux, while Nvidia is a headache. Also, AMD has ROCm, it’s their equivalent of Nvidia’s CUDA.
Yeah, but is it actually equivalent? If so I’m 100% in, but it needs to actually be a drop-in replacement for “it just works” like CUDA is.
Once I’ve actually got drivers all set, CUDA “just works”. Is it equivalent in that way? Or am I going to get into library compatibility issues in R or Python?
Not all software that uses CUDA has support for ROCM.
But as far as setup goes, I just installed the correct drivers and ROCM compatible software just worked.
So, it can be an equivalent alternative, but that depends on the software you want to run.
It’s the equivalent, but does software programmed for CUDA actually make use of ROCm?
Never had an issue with Nvidia on Linux. Yes, you have to use proprietary drivers, but outside of that I’ve been running Linux with Nvidia cards for 20 years.
Wayland is non-stop issues.
Been running Wayland for 2 years and only issue I had with it was Synergy not working.
Not even the “issue” that basically every time you update something, you have to wait a long time for the proprietary Nvidia drivers to download?
That’s what annoyed me the most back in the day with the Nvidia drivers: so many hours wasted on updating them. With AMD, this is not the case.
And haven’t even talked about my issues with Optimus (Intel on-board graphics + Nvidia GPU) yet, which was a true nightmare, took me weeks of research to finally make it work correctly.
You don’t need to update NVIDIA drivers every time there’s a release. I don’t even do that on my Windows machine. Most driver updates are just tweaks for the latest game, not bug fixes or performance improvements.
And hell, you’re using Linux. Vim updates more often than the graphics driver, what do you expect?
It automatically happened, I believe with every install of an updated Flatpak, which is rather often. Been a while though, since lately I’ve been happily using AMD for quite some time. But I do recall Nvidia driver updates slowing down my update process by a lot, while I have none of that with AMD.
Ah, I always update the driver through the package manager and it never auto-updates.
My experience with the Deck outside of CS2 has been nothing short of mind-boggling. I don’t even REALLY have a problem with CS2 but I cannot play online for VAC reasons I can’t sort out. I have a ticket open with Steam Support. 🤷
Yeah, the deck has really increased my trust in AMD hardware.
CUDA isn’t the responsibility of AMD to chase; it’s the responsibility of Nvidia to quit being anticompetitive.
It’s also not my problem either. I don’t give a shit what nvidia or AMD does, I just want to be able to run AI stuff on my rig in as open-source a manner as is possible.
…in as open-source a manner as is possible.
And that means “not with CUDA,” because CUDA is proprietary.
This is a semantic argument I don’t feel like getting into. I don’t give a shit what library it is – I want AMD to be able to crunch pytorch as well as nvidia.
The fact that CUDA is proprietary to Nvidia isn’t even slightly “semantics;” it’s literally the entire problem this thread is discussing. CUDA doesn’t work on AMD because Nvidia doesn’t allow it to work on AMD.
deleted by creator
deleted by creator
Amd
Cuda support
😐
Maybe ZLUDA also works
ROCm is the AMD version
Last I heard, AMD was working on getting CUDA running on their GPUs, and I saw a post saying it was pretty complete by now (although I don’t keep up with that sort of stuff myself)
Well, right after that Nvidia amended their license agreements stating that you cannot use CUDA with any translation layers.
The project you’re thinking of is ZLUDA.
NVIDIA finally being the whole bitch it seems, not unexpected when it comes to tech monopolies.
In the words of our lord and savior Linus Torvalds “NVIDIA, fuck you! 🖕”, amen.
In all reality, a lot of individuals aren’t gonna care when it comes to EULA BS unless they absolutely depend on it, and this whole move has me wanting an AMD GPU even more.
Ya, I was a fairly big Nvidia defender for the past few years. I used a ton of their stuff for my last job, and genuinely didn’t have any issues with them (all Linux systems, gaming and AI workloads).
But their recent actions have really soured my view of them.
It’s sad that more companies are just willing to screw their customers and squeeze them dry of every last penny just for the sake of profit and infinite growth even though we know infinite growth will never be attainable.
Every corporate entity is willing to forfeit their goals for money, especially if they hold a monopoly in a certain space and when growth slows they will look for other ways to offset that income.
I’ve learned that loyalty means jack shit to the company and it’s just another thing they can exploit you with, I’m not loyal to AMD but right now they’re the least unethical party in this race to the bottom.
That sucks if that’s true. And would also ironically not be the first time AMD is getting denied a thing after they already have an implementation ready for it lol.
It is true. Most all tech news outlets covered it.
At the very least you can use a DisplayPort-to-HDMI adapter!
I need NVDA for the gainz
Edit: btw Raspberry Pi is doing an IPO later this year, bullish on AMD
Man I just built a new rig last November and went with nvidia specifically to run some niche scientific computing software that only targets CUDA. It took a bit of effort to get it to play nice, but it at least runs pretty well. Unfortunately, now I’m trying to update to KDE6 and play games and boy howdy are there graphics glitches. I really wish HPC academics would ditch CUDA for GPU acceleration, and maybe ifort + mkl while they’re at it.
Use driver 535 until explicit sync is implemented by NVIDIA.