Once we have super-fast, reliable internet, we’ll likely have the whole computer as a service. We’ll basically just have access terminals and a subscription with a login, except for the nerds who want their own physical machine.
Bro just reinvented mainframes.
They’ve been reinvented repeatedly. Citrix, terminal servers, thin clients, cloud desktops, web apps, remote app delivery…
Most people (not necessarily here) need little more than a web browser and an office program; they’re well suited to terminals or something like a Chromebook.
I need actual hardware for my job and hobbies, but even I have a mini PC set up like a gaming console so that if I want to play games on my bedroom TV I don’t have to hook up my Steam Deck or gaming laptop. I just stream them.
Microsoft: Write it down! Write it down!
& thin clients
No. Just no.
And get off my lawn, ya whippersnapper.
RAM as a service can’t happen; it’s just far too slow. The whole computer can, though. Its RAM can be local so it can be accessed quickly, and then it just needs to stream the video over, which is relatively simple even if it adds some latency to deal with.
Mhmm… Computer as a service. Why does that sound familiar…?
You have to hand it to the French though, that stuff was pretty dope.
I was there, Gandalf. I was there, three thousand years ago.
Given how so many of us communicate, work, and compute using cloud platforms and services, we’re basically already there.
How many apps are basically just a dumb client using a REST API?
you will own nothing and be happy!
Wait, we already had that in the 70s.
You have to know that some dinosaur at IBM is screaming about how they gave up on the centralized computer and is salivating over gigabit fiber so he can charge everyone 15 bucks a month to use an IBM mainframe.
Stadia almost didn’t suck. I bet we’re 10 years from phones just being hand terminals that tap into a local server, and desktops won’t be far behind.
Like in The Expanse?
Exactly like The Expanse.
Fucking love those books, am listening to one now.
Have you seen the Amazon show at all? How did you feel about it?
I’m happy it happened and gave us up to book 6 on the little screen.
The showrunners for both the Syfy and Amazon seasons were the original writers, so everything is very true to their vision of the story. I’m only sad that they didn’t get to finish the final few books on television. The whole series is a masterpiece of hard science fiction.
For many of us, Stadia didn’t suck at all, except for the game library and Google’s lack of commitment.
Sweaty gamers and nerds, as always, unite over having proper physical PCs rather than online services or consoles.
That’s exactly how it works right now with VDI. I’m using one at work.
Given the digital literacy of many “regular people” (e.g. my father, and seemingly every other friend of mine), the idea is appealing, especially as most of them don’t care about privacy. Give them decent availability and they will throw money at you. And if you also give them support, I will, too.
Honestly, cloud gaming is very good… when it is good. Sometimes it sucks. But when it’s good, it’s incredible how much it feels like gaming locally.
Unsubscribe
It’ll never be fast enough. An SSD is orders of magnitude slower than RAM, which is orders of magnitude slower than cache. Internet speed is orders of magnitude slower than the slowest of hard drives, which are themselves far too slow for anything that needs its memory any time soon.
Need faster than light travel speeds and we can colocate it on the moon
A SATA SSD has ballpark 500 MB/s, a 10G ethernet link 1250 MB/s. Which means that it can indeed be faster to swap to the RAM of another box on the LAN than to your local SSD.
A Crucial P5 has a bit over 3 GB/s, but then there’s 25G ethernet. Let’s not speak of 400G direct attach.
Show me an SSD with 50GB/s, it’d need a PCIe6x8 or PCIe5x16 connection. By the time you RAID your swap you should really be eyeing that SFP+ port. Or muse about PCIe cards with RAM on them.
Speaking of: You can swap to VRAM.
My point was more that the SSD will likely have lower latency than an Ethernet link in any case, as you’ve got the extra delay of data having to traverse both the local and remote network stack, as well as any switches that may be in the way. Additionally, in order to deal with that bandwidth you’ll need to kit out not only the local machine but also the remote one with expensive 400GbE hardware and transceivers, plus switches. And in order to actually store something, the remote machine will also need either a ludicrous amount of RAM (resulting in a setup which is vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which would put us right back at square one, but with extra latency). Maybe there’s something I’m missing here, but I fail to see how this could possibly be set up in a way which outperforms locally attached swap space.
SFP direct attach: you don’t need a switch or transceivers, only two QSFP-DD ports and a cable. Also, this is a thought exercise, not a budget meeting. Start out with “We have this dual-socket EPYC system here with a full 12TB of memory and need to double that”. You have *rolls dice* 104 free PCIe 5 lanes, go.
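To put the bandwidth numbers from this exchange side by side, here’s a rough back-of-envelope comparison. These are my own ballpark theoretical line rates, not benchmarks:

```python
# Rough peak bandwidths of the interfaces mentioned above, in GB/s.
# Theoretical line rates only; real-world throughput is lower, and none
# of this says anything about latency.

def gbit(n):
    """Convert n Gbit/s to GB/s."""
    return n / 8

interfaces_gb_per_s = {
    "SATA III SSD":      0.6,          # ~6 Gbit/s minus 8b/10b encoding overhead
    "10 GbE":            gbit(10),     # 1.25
    "25 GbE":            gbit(25),     # ~3.1
    "PCIe 4.0 x4 NVMe":  4 * 1.97,     # ~7.9
    "400 GbE":           gbit(400),    # 50
    "PCIe 5.0 x16":      16 * 3.94,    # ~63
}

for name, gbs in sorted(interfaces_gb_per_s.items(), key=lambda kv: kv[1]):
    print(f"{name:18s} ~{gbs:5.1f} GB/s")
```

Which is roughly why “RAID your swap vs. eye that network port” is even a conversation.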
Bandwidth isn’t really most of the issue; it’s latency: the amount of time from the CPU requesting a segment of memory to actually receiving it, which bandwidth doesn’t affect.
Depends on your workload and access pattern.
…I’m saying can be faster. Not is faster.
Yeah, but the point of RAM is fast random (the R in RAM) access times. There are ways to make slower memory work better for this by predicting what will be needed (grab a chunk of memory, because accesses will probably want things with closer locality than pure random), but the underlying latency can’t be fixed. Cloud memory is fine for non-random storage, or storage that isn’t time critical.
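A crude model shows why latency dominates: treat the effective throughput of a 4 KiB random access as the page size divided by (round-trip latency plus transfer time). The latency and bandwidth figures below are my own rough assumptions, not measurements:

```python
# Effective throughput for 4 KiB random reads: dominated by per-request
# latency, not link bandwidth. All latency/bandwidth values are rough guesses.

PAGE = 4096  # bytes per access

def effective_mb_s(latency_s, bandwidth_b_s):
    return PAGE / (latency_s + PAGE / bandwidth_b_s) / 1e6

cases = {
    "local DRAM":         (100e-9, 50e9),     # ~100 ns, ~50 GB/s
    "local NVMe SSD":     (80e-6,  3e9),      # ~80 us, ~3 GB/s
    "RAM over 10G LAN":   (200e-6, 1.25e9),   # ~200 us round trip
    "swap over internet": (20e-3,  0.125e9),  # ~20 ms round trip, ~1 Gbit/s
}

for name, (lat, bw) in cases.items():
    print(f"{name:20s} ~{effective_mb_s(lat, bw):8.1f} MB/s")
```

Prefetching bigger chunks, as mentioned above, helps exactly because it amortises that fixed round trip over more bytes.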
So I could download more RAM?
You can do it today, just put your swapfile on sshfs and you’re done.
It will crash as soon as it needs to touch the swap due to the relatively insane latency difference.
So use a small area in memory as cache
the infinite memory paradox. quaint. (lol)
It’s just a NUMA architecture. Linux can handle it.
Imagine doing this on a dial-up 56K modem
A:\SPICYMEMES\MODEMSOUND.WAV
Bwa-hahahahhah "A:" 🤣
bro is still using floppies
all the cool kids use iomega
https://youtu.be/vvr9AMWEU-c
For those too young to remember, the A:\ drive was for the hard-cased 3.5" floppy disks and the B:\ drive was for the soft 5.25" floppy disks. The C:\ drive was for the new HDDs that came out, and for whatever reason C:\ became the standard after that.
A was the first floppy drive and B the second (in DOS and CP/M). The type of drive was irrelevant.
FWIW they were the other way around on my system. The order of A:\ vs B:\ depended on their order on the cable (“first” and “second”), not type.
Wait, didn’t some tech YouTubers like LTT try using cloud storage as swap/RAM? AFAIK they failed because of latency.
This guy used ICMP data payload as a hard drive. It kinda worked.
I remember using ICMP data to bypass my high school’s firewall. TCP and UDP were very locked down, but they allowed pings. It was slow though - I think I managed to get a few KB per sec. Maybe there’s faster/fancier firewall bypass methods these days. This was back in the 2000s when an entire school would have a single OC-1 fiber connection.
155mbps Telco trunk line for a school? Nicer school than I went to.
Around 50Mbps: https://en.wikipedia.org/wiki/Optical_Carrier_transmission_rates#OC-1
I only had dialup at the time, and the fastest home broadband available was 1.5Mbps ADSL, so it was pretty fancy!
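For the curious, the ICMP trick above works because an echo request can carry an arbitrary payload, and firewalls that allow ping usually let that payload through untouched. A minimal sketch with scapy (needs root; the address and payload are made up for illustration, and a real tunnel needs a cooperating endpoint that unwraps the data):

```python
# Minimal sketch of smuggling data inside ICMP echo payloads, the same idea
# old-school firewall-bypass tunnels and pingfs rely on. Requires root.
from scapy.all import IP, ICMP, Raw, sr1  # scapy must be installed

REMOTE = "192.0.2.1"   # hypothetical cooperating endpoint
CHUNK = 64             # bytes of data per ping

def send_chunks(data: bytes):
    for seq, off in enumerate(range(0, len(data), CHUNK)):
        pkt = IP(dst=REMOTE) / ICMP(type=8, id=0x4242, seq=seq) / Raw(load=data[off:off + CHUNK])
        reply = sr1(pkt, timeout=2, verbose=0)  # a real tunnel would carry return data here
        if reply is None:
            print(f"chunk {seq}: no reply (dropped?)")

send_chunks(b"hello from inside the school network")
```

A plain ping target just echoes the payload back, which is exactly what the pingfs project linked further down exploits as “storage”.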
Afaik they used it as redundant off-site backup
I wonder if there would be a speed boost by setting 2 gdrive as raid 0 for off site backups
The limiting factor is mostly your upload speed. You also need to have good QoS set up, or your internet becomes barely usable. Whereas on-site you can get way higher speeds for cheaper.
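To put rough numbers on the upload bottleneck (the speeds and the 80% efficiency factor are just illustrative assumptions, not anyone’s real plan):

```python
# How long pushing a backup off-site takes when residential upload speed is
# the bottleneck. The 0.8 efficiency factor is a rough allowance for protocol
# overhead; all figures are illustrative.

def upload_days(data_tb, upload_mbit_s, efficiency=0.8):
    seconds = data_tb * 8e6 / (upload_mbit_s * efficiency)
    return seconds / 86400

for up_mbit in (20, 50, 1000):
    print(f"1 TB at {up_mbit:4d} Mbit/s up: ~{upload_days(1, up_mbit):5.1f} days")
```

RAID 0 across two remotes doesn’t change that: you’re still pushing every byte through the same uplink.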
I feel like this might be a giant gaping security risk.
So is pretty much all of the cloud services the average user already subscribes to. People still use them though.
Agreed. This is especially bad, though, because if it’s compromised they basically have hardware-level access to your machine. Unless you’re using encrypted swap, and I’m not sure how standard that is.
Well, assuming you’ve already gone through the effort to write a custom kernel module to offload your swap pages to Google Drive, it doesn’t seem like that much of a stretch to have it encrypt the data before transmitting it.
Is that what this would take? Then yeah, you’d hope somewhere in the process you consider this.
Obviously you should set up device mapper to encrypt the gdrive device then put the swap on the encrypted mapper device.
If your kernel isn’t using 90% of your CPU resources, are you really even using it to its full potential? /s
Oh wow, I didn’t even know Gdrive offered a 1 petabyte option 😂
They don’t, to my knowledge. I believe that’s mounted through rclone, which usually just sets the filesystem size to 1PB so that it doesn’t have to query what the actual limit is for the various providers (and your specific plan).
Once upon a time, Google offered unlimited drive storage as part of some GSuite tiers. They stopped offering it a while ago and have kicked most/all legacy users off of it in the past few months. It was glorious while it lasted 😢
Guess they ran everyone out of business that they needed to, so now the premium features get yanked and your choice of alternatives is curtailed. Hooray for enshittification.
It’s not that; it’s that people were abusing it by using it for things like Plex libraries with 100TB+ of data, which cost Google more than the revenue they got from those accounts. Blame the people that abused the policy. They’re not a charity and can’t keep an offer that loses them money. Keep in mind that Google Drive data has several replicas and is also backed up to cold storage on LTO tapes, so people abusing the storage policy is actually pretty expensive for them.
They do still have unlimited data in some cases, for example with custom plans for large companies (like 50k+ employees).
And Google docs/sheets/slides used to not count in your used space.
At one point they offered unlimited storage for Play Music only. You could literally upload your entire collection. They later changed it to consume your Drive storage. The plans were cheap enough that I subscribed. Then they killed off Play Music. I’m still salty about that.
Yeah, where do you get that? I can’t see anything on their pricing page; it only goes up to 2TB.
Even better:
Free cloud storage that doesn’t require an account and provides no limit to the volume of data stored
https://github.com/yarrick/pingfs
The image doesn’t load.
I posted that 10 months ago.
That being said, it seems to still work for me.
deleted by creator