I’ve been an IT professional for 20 years now, but I’ve mainly dealt with Windows. I’ve worked with Linux servers throughout the years, but never had Linux as a daily driver, and I decided it was time to change that. I only had two requirements: one, I need to be able to use my Nvidia 3080 Ti for local LLMs, and two, I need to be able to RDP with multiple screens to my work laptop running Windows 10.

My hope was to be able to get this all working and create some articles on how I did it to hopefully inspire/guide others. Unfortunately, I was not successful.

I started out with Ubuntu 22.04 and could not get the live CD to boot. After some searching, I figured out I had to go in and turn off ACPI in the boot loader. After that I was able to install Ubuntu side by side with Windows 11, but the boot loader errored out at the end of the install and Ubuntu would not boot.
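
For anyone who hits the same wall, the workaround was roughly this (the exact kernel line varies by ISO, so treat it as a sketch rather than a recipe):

    # At the live USB’s GRUB menu, highlight the boot entry, press 'e',
    # and append acpi=off to the line that starts with "linux":
    linux /casper/vmlinuz ... quiet splash acpi=off
    # Then press Ctrl+X (or F10) to boot with the modified parameters.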

Okay, back into Windows to download a boot loader repair tool and boot to that. Alright, I’m finally able to get into Ubuntu, but only 1 of my 4 monitors is working. Install the NVIDIA driver and reboot. All my monitors work now, but my network card is broken.
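
If it helps anyone retracing this, the driver install itself was just the stock Ubuntu route (whatever version ubuntu-drivers recommends, not necessarily the exact one I ended up on):

    # List the detected GPU and the recommended proprietary driver:
    sudo ubuntu-drivers devices
    # Install the recommended driver, then reboot:
    sudo ubuntu-drivers autoinstall
    sudo reboot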

Follow instructions on my phone to reinstall the linux-modules-extra package. Back into Windows to download that because, you know, no network connection. Reinstall the package; it doesn’t work. Go into advanced recovery, try restoring packages, nothing works. I can either get my monitors to work or my network card, never both at the same time.
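
For reference, the offline reinstall went something like this (the .deb has to match the running kernel exactly, which is half the fun when you’re sneakernetting it from another machine):

    # Check the running kernel, grab the matching .deb from packages.ubuntu.com
    # on another machine, copy it over, then:
    uname -r
    sudo dpkg -i linux-modules-extra-$(uname -r)_*.deb
    sudo reboot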

I give up and decide it’s time to try out Fedora. The install process is much smoother. I boot up and 3 of my 4 monitors work. I find a great post on installing Nvidia drivers and CUDA. After doing that and rebooting, I have all 4 monitors and networking, woohoo!
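
The post I followed was basically the standard RPM Fusion route, which for anyone else looks roughly like this:

    # Enable the RPM Fusion free and nonfree repos:
    sudo dnf install \
        https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
        https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    # Install the NVIDIA kernel module and the CUDA support package:
    sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda
    # Give akmods a few minutes to build the module before rebooting:
    sudo reboot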

Now, let’s test RDP. Install FreeRDP, run it with /multimon, and the screen for each remote window is shifted 1/3 of the way to the left. Strange. Do a little looking online and find an issue on GitHub about how the multi-monitor layout is based on the primary monitor. Long story short, I can’t use multi-monitor RDP because my monitors have different resolutions and are stacked 2x2 instead of all in a row. Trust me, I tried every combination I could think of.
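
For the curious, the invocations were along these lines (host and user are placeholders, and the /monitors selection is just one of the many combinations I tried):

    # Basic multi-monitor session:
    xfreerdp /v:work-laptop.example.com /u:me /multimon /dynamic-resolution
    # Limiting it to specific monitors didn’t save it either; IDs come from:
    xfreerdp /monitor-list
    xfreerdp /v:work-laptop.example.com /u:me /multimon /monitors:0,1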

Someone suggested using the nightly build because that’s where this issue is being worked on. Okay, I try that out and it fails to install because of a missing dependency. Apparently, there’s a pull request from December to fix this on Fedora installs, but it hasn’t been merged. So I would need to compile that specific branch myself.
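
If I did decide to keep going, building that branch myself would look something like this (the branch name is a placeholder for the unmerged PR, and the steps are just the generic FreeRDP cmake build, not something I’ve verified end to end):

    # Pull in the build dependencies the distro package uses (needs dnf-plugins-core):
    sudo dnf builddep freerdp
    git clone https://github.com/FreeRDP/FreeRDP.git
    cd FreeRDP
    git checkout <the-unmerged-pr-branch>   # placeholder for the actual PR branch
    cmake -B build -DCMAKE_BUILD_TYPE=Release
    cmake --build build
    sudo cmake --install build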

At this point, I’m just so sick of every little thing being a huge struggle that I reboot and go back into Windows. I still have Fedora on there, but who would have thought something that sounds as simple as wanting to RDP across 4 monitors would be so damn difficult?

I’m not saying any of this to bag on Linux. It’s more of a discussion topic: yes, I agree that there needs to be more Linux adoption, but if someone with 20 years of IT experience gets this fed up with it, imagine how your average user would feel.

Of course, if anyone has any recommendations for getting my RDP working, I’m all ears on that too.

  • TWeaK@lemm.ee

    I need to be able to use my Nvidia 3080 ti for local LLM

    Well, there’s your problem. You’ve been blindly loyal to a brand that has shown no loyalty towards consumers.

    • MortalWombat@kbin.social

      Lol, have you seen the state of ROCm in the LLM space? It’s a dumpster fire. As much as everybody hates Nvidia’s profiteering and black-box drivers, at least CUDA works.

      • TWeaK@lemm.ee

        It’s not just their black-box drivers, though, it’s the way they entice businesses to work with them and use their software for their products such that no other players can perform in the market.

        I’m not familiar enough to confirm, but it would be entirely unsurprising to me if Nvidia cards only work well with LLMs because LLMs have been designed with Nvidia cards and with support from Nvidia. On the one hand, it’s nice that the manufacturer is supporting developers; on the other, the way Nvidia historically does this drastically limits consumer choice.

        • conciselyverbose@kbin.social

          They’ve been designed for Nvidia because CUDA is better.

          And because Nvidia has been pushing the hardware features needed for AI for ages, way before AMD even considered it.

        • 520@kbin.social

          You say that like OpenCL hasn’t been an option for years now.

          • Markaos@lemmy.one

            Well, Nvidia doesn’t support OpenCL 2, so if you want your software to support the most commonly used cards, you’re going to be limited to OpenCL 1.2, which is pretty crap compared to the shiny CUDA. There’s also a lot of great tooling made or heavily sponsored by Nvidia that’s only available for CUDA.

            And yes, Nvidia now supports OpenCL 3, but that’s pretty much just OpenCL 1.2 with all OpenCL 2 features marked as optional (and Nvidia doesn’t support them, obviously).
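
            You can check what your own card reports with clinfo (assuming the vendor’s OpenCL ICD is installed; the field names below are from clinfo’s output):

                # Nvidia typically advertises an OpenCL 3.0 platform but only OpenCL C 1.2:
                clinfo | grep -E 'Device Name|Device Version|Device OpenCL C Version'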

      • Possibly linux@lemmy.zip

        It’s actually not as bad as it was. It’s not good, but if you can get Docker working you might be OK.

        You could also just get two GPUs. A used AMD card shouldn’t be too expensive if you deal hunt a little.
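
        Something like this is the usual way to get a ROCm PyTorch container going (the image name and device flags are the standard ones from AMD’s docs; adjust for your own setup):

            # Pass the ROCm device nodes through to the container:
            docker run -it --device=/dev/kfd --device=/dev/dri \
                --security-opt seccomp=unconfined --group-add video \
                rocm/pytorch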