Yes, I would recommend creating a backup (perhaps on your phone or a different computer over the network) and then upgrading to 21 and then 22. IMHO Mint has steadily gotten better and there is typically no reason to stay on an older version.
This seems to be a limitation of Intel host controllers. The USB 2.0 specification (including 12 Mbps Full Speed) allows for up to 127 devices, and each of those devices can have up to 16 IN and 16 OUT endpoints, cf. https://www.usbmadesimple.co.uk/ums_3.htm Depending on how you count, that would be a maximum of roughly 2k (127 × 16) to 4k (127 × 32, counting IN and OUT separately) endpoints in total. I guess Intel thought it wasn’t worthwhile supporting that many endpoints.
Some quick searching turned up this post that claims that USB3 controllers often support up to 254 endpoints (in total). https://www.cambrionix.com/a-quick-guide-to-usb-endpoint-limitations/ Other posters have also said that AMD appears to have higher limits. You could also consider adding more USB root hubs to your system (with PCIe cards).
Yield is the percentage of chips that are functional. Roughly, you can think of it as the probability of a chip having 0 defects. The bigger the chip, or the higher the defect density, the lower this probability becomes. Chip designers will also include mitigation techniques (e.g. redundancy) to allow chips to work even with some defects.
Talking about the “yield” of a process doesn’t make any sense. Yield is a metric for a specific chip fabricated on a given process. This depends heavily on the size of the chip and mitigation techniques.
The “correct” metric to compare processes is defect density (in defects per square cm). Intel claims that their defect density is below 0.4 defects/cm²: https://www.tomshardware.com/tech-industry/intel-says-defect-density-at-18a-is-healthy-potential-clients-are-lining-up. This would be relatively high but not much worse than what TSMC has seen for their recent nodes: https://www.techpowerup.com/forums/threads/intel-18a-process-node-clocks-an-abysmal-10-yield-report.329513/page-2#post-5387835
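To make the size dependence concrete: under the common Poisson model, yield ≈ e^(−D·A) for defect density D and die area A. A quick sketch using Intel’s claimed 0.4 defects/cm² (the die areas are made-up examples):

# Poisson yield model: Y = e^(-D * A)
D=0.4                       # defects per cm² (Intel's claimed figure)
for A in 0.5 1.0 2.0 4.0    # die area in cm², made-up examples
do
  echo "area ${A} cm² -> yield $(echo "scale=3; e(-$D * $A)" | bc -l)"
done

That works out to roughly 82%, 67%, 45%, and 20%, which is why a large die is hit much harder by the same defect density than a small one.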
Unfortunately, I can’t help you with that. The machine is not running any VMs.
It’s possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.
I’m seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU but the load from ZFS is minimal. Reading / writing a 50GB /dev/urandom file (larger than the cache) gives me:
What’s your setup?
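For comparison, my test was essentially the following (pool mount point and file name are assumptions):

# write 50 GB of random data, larger than RAM/ARC so caching can't hide the disks
dd if=/dev/urandom of=/tank/testfile bs=1M count=51200 status=progress
# read it back
dd if=/tank/testfile of=/dev/null bs=1M status=progress

Note that /dev/urandom itself can bottleneck the write on weaker CPUs, so keep an eye on CPU load while it runs.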
With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS use case.
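If I understand the feature correctly, expanding a raidz vdev then looks something like this (pool, vdev, and disk names are made up):

# OpenZFS 2.3+: grow an existing raidz vdev by one disk
zpool attach tank raidz1-0 /dev/sdf
zpool status tank   # reports the expansion/reflow progress

Existing data is reflowed across the new disk in the background, so the pool stays usable during the expansion.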
We use Alma Linux at work and it’s fine, I suppose. I see two main reasons why you’d choose an EL Linux distro:
Apart from those, it’s a competent distro; Red Hat know what they’re doing. If you want the equivalent of an Ubuntu LTS / Debian in the Fedora world, it gets the job done. I quite like their approach of keeping the core OS stable while updating drivers, tools, and compilers (e.g., the kernel version number has very little meaning in RHEL).
Is the experience very different from Fedora?
Yes, the age of the core packages is very noticeable. The number of fully supported packages is also very small, and you need to go to EPEL very quickly (at which point you’re no longer getting enterprise support…). On the plus side, it’s much more stable than Fedora in my experience.
Edit: My main recommendation for a stable distro would probably be Debian unless one of the above points applies.
That system also sounds a lot more capable than mine. How did you end up with 25 VMs?
I’m running it in a regular mATX case (Node 804) but I think you can also get AM5 motherboards in rack-mount cases.
Perhaps my recent NAS/home server build can serve as a bit of an inspiration for you:
I don’t think it’s more efficient to separate processing and storage, so I’d only go for that if you want to play around with a cluster. I would also avoid SD cards as a root FS, as they tend to die early and catastrophically.
It sounds like Proton VPN (or its repo) is causing issues for you. Given that it’s a paid service, you can probably contact their support.
Alternatively, you can also look for the repo file in /etc/yum.repos.d (something like /etc/yum.repos.d/file_name.repo) for Proton VPN. You can then disable it by renaming it to .repo.disabled and try again (sudo dnf upgrade in the terminal). Note: This is not really a permanent solution, as it will disable updates for Proton VPN.
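Put together, something like this (the actual file name is a guess, check the directory listing first):

ls /etc/yum.repos.d
# rename whichever .repo file belongs to Proton VPN (name below is an assumption)
sudo mv /etc/yum.repos.d/protonvpn.repo /etc/yum.repos.d/protonvpn.repo.disabled
sudo dnf upgrade

dnf can also skip a repo for a single run via sudo dnf upgrade --disablerepo='protonvpn*' (repo id is again a guess), which avoids renaming anything.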
It sounds like the criterion is “is newer microcode available”. So it doesn’t look like a marketing strategy to sell new CPUs.
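If you want to see which microcode revision your CPU is currently running, it shows up in /proc/cpuinfo:

grep -m1 microcode /proc/cpuinfo   # prints e.g. "microcode : 0x..."

You can then compare that against the revisions shipped in your distro’s microcode package.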
Nice, congrats on getting it to work! :) Native Debian packages are also nice. It can just get difficult if you want the latest stuff.
I used the docker compose template from https://hub.docker.com/_/drupal and mostly changed the image:
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
# (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:

  drupal:
    # image: drupal:10-apache
    # image: drupal:10.3.7-apache-bookworm
    # image: drupal:10.3.6-apache-bookworm
    image: drupal:11.0.5-apache-bookworm
    # image: drupal:10-php8.3-fpm-alpine
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
    environment:
      PHP_MEMORY_LIMIT: "1024M"

  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    restart: always
The details for the v11 image are here: https://hub.docker.com/layers/library/drupal/11.0.5-apache-bookworm/images/sha256-0e41e0173b4b5d470d30e2486016e1355608ab40651549e3e146a7334f9c8f77?context=explore
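With the file saved as docker-compose.yml, bringing it up is just:

docker compose up -d   # then open http://localhost:8080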
Yes, the Docker images don’t use the sury.org PHP packages (they use the official php Docker image).
“11.0.5-apache-bookworm” also seems to work, maybe you can try that version?
I wanted to recommend using a Docker container but I ran into the same issue with the default config for “drupal:10-apache” (aka “drupal:10.3.7-apache-bookworm”). Opening “node/add/article” results in the OOM error. Downgrading to “drupal:10.3.6-apache-bookworm” resolved the issue. Looks like a Drupal regression to me. Maybe you can also try an older version of Drupal 11?
What a terrible article. The margins for board partners are small, but Nvidia’s margin is huge.