Size of an uncompressed image of the Washington Crossing the Delaware painting = 1 Yankee
12 Yankees in a Doodle
60 Doodles in an Ounce (entirely unrelated to the volume or weight usage of ounce)
60 Doodles in a Dandy
That’s too straightforward. It should be 113 Doodles in a Dandy. And 73 Dandies in a Macaroni.
4 Macaronis in a bit of an ounce.
8 Macaronis in a full ounce.
How many Macaronis in a Handy though? I’d say 1776.
… I’ll see myself out.
Maybe it’s the number of men in the boat for the number of dandies in a macaroni
giggity
Make sure the specific term is “Computer Ounce”, or co. oz.
Better yet, just use “cooz” as the “common unit”
Then it’s proportioned following fluid ounce measurements from there. e.g. “coc” (computer cup) is 16 coozes.
Ayyy, I’m in COLORADO so this would be great.
I second this. It makes total sense - computer memory is a volume to be filled with data. They don’t call parts of a hard drive volumes for nothing.
Sampled at what resolution, though? It’s a physical painting and the true, atomic-scale resolution would make this whole system useless.
May I suggest the entire Constitution in ASCII (American Standard Code for Information Interchange) instead? Bonus points if any future amendments change the whole system.
Edit: I suppose you actually want to start small. Maybe just the Declaration sans signatures, then. So, 6610*7 = 46,270 bits.
Congrats, in my almost year on Lemmy, this is the best comment I’ve seen!
How about feet of IBM punch cards?
A 1-foot-tall stack holds 1,647,360 bits of data if all 80 columns are used. If only 72 columns are used for data, then it’s 1,482,624 bits, and the remaining columns can be used to number each card so they can be put back in order after the stack is dropped.
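For anyone who wants to check that math, here’s a quick back-of-the-envelope sketch. The 143-cards-per-inch figure and the 12 punch rows per column are my assumptions (standard cards are roughly 0.007 in thick), not numbers from the comment above:

```python
# Rough punch-card math: bits of data in a 1-foot stack.
# Assumptions: 143 cards per inch (~0.007 in per card), 12 punch rows per column.
CARDS_PER_INCH = 143
ROWS_PER_COLUMN = 12

def bits_per_foot(data_columns: int) -> int:
    cards = CARDS_PER_INCH * 12          # cards in a 1-foot stack
    return cards * data_columns * ROWS_PER_COLUMN

print(bits_per_foot(80))  # 1647360 -> all 80 columns used for data
print(bits_per_foot(72))  # 1482624 -> 72 data columns, 8 left for card numbering
```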
I like this because the amount of bits in a stack can vary depending on whose foot you use to measure, or the thickness of the card stock.
IBM standard cards are one 48th of a barleycorn thick. I believe IBM measured from the 1932 Iowa Reference Barleycorn, now kept in the vault inside Mt Rushmore.
THIS is what I’m talking about!
bit, Nibble, Byte, Word, doubleword, longword, quadword, double-quadword, verylongword, halfword
They check all Imperial criteria:
- confusing names
- some used only in some systems
- size depends on where you are
- some may overlap
- doesn’t manage to cover all the possible needs, but do you really need more than 64 bits?
- would probably cause you to crash a rocket
Words! Of course! Imperial measurement is words. Because they are as inconsistent as other imperial units.
1 tweet = 140 bytes
1 (printed) page = 60 lines of 60 characters = 3600 bytes
1 moa (minute of audio in 128000 bps mp3) = 960000 bytes
1 mov (minute of video) = typically around 30MB but varies by resolution and encoding, like ounces vs troy ounces vs apothecary ounces.
1 loc (library of congress, used for measuring hard drive capacity) = around 10TB depending on jurisdiction.
These are all rough averages, of course, but Tweets can be rather bigger than 140 bytes since they’re Unicode, not ASCII. What’s Twitter without emoji?
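If anyone wants to sanity-check the moa figure, it’s just bitrate × seconds ÷ 8. A minimal sketch; only the 128,000 bps number comes from the list above, the rest is illustrative:

```python
# Joke-unit converter: 1 moa = one minute of audio at a given MP3 bitrate.
def moa_bytes(bitrate_bps: int = 128_000, seconds: int = 60) -> int:
    return bitrate_bps * seconds // 8  # bits -> bytes

print(moa_bytes())         # 960000 bytes, matching the list above
print(moa_bytes(320_000))  # 2400000 bytes for a higher-bitrate moa
```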
1 moa (minute of audio in 128000 bps mp3)
Give me 320000 bps or give me death!
Did anyone say Magabyte yet?
1/6th of a MAGAbyte is an insurrection
Ugh. I hate you.
Upvoted.
KiB, MiB, GiB, etc. are clearer. It makes a big difference, especially 1 TB vs 1 TiB.
The American way would probably be still using the units you listed but still meaning 1024, just to be confusing.
Either that, or maybe something that uses a physical length measurement of a hard drive (or CD?). Like, that new game is 24.0854 inches of data (maybe it could be 1.467 miles of CD?).
The American way would probably be still using the units you listed but still meaning 1024, just to be confusing.
American here. This is actually the proper way. KB is 1024 bytes. MB is 1024 KB. The terms were invented and used like that for decades.
Moving to ‘proper metric’ where KB is 1000 bytes was a scam invented by storage manufacturers to pretend to have bigger hard drives.
And then inventing the KiB prefixes was a soft-bellied capitulation by Europeans to those storage manufacturers.
Real hackers still use Kilo/Mega/Giga/Tera prefixes while still thinking in powers of 2. If we accept XiB, we admit that the scummy storage vendors have won.
Note: I’ll also accept that I’m an idiot American and therefore my opinion is stupid and invalid, but I stand by it.
Absolutely. I started with computers in 1981; for me, 1K is 1024 bytes and always will be. 1000 bytes is a scam.
Calling 1048576 bytes an “American megabyte” might be technically wrong, but it’s still slightly less goofy-looking than the more conventional “MiB” notation. I wish you good luck in making it the new standard.
Kilo comes from Greek and has meant 1000 for thousands of years. If you want 2^10 to be represented using Greek prefixes, it had better involve “deca” and “di”. Kilo (and di) would be usable for roughly 1.071508607186267 x 10^301 bytes. KB was wrong when it was invented, but at least they were only wrong for decades.
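If I’m reading the “kilo (and di)” construction right (di = 2, kilo = 1000 of them, so 2^1000 bytes), the big number does check out. A one-line sanity check:

```python
# Sanity check: if "kilo" must mean 1000, then a "kilo of di" is 2**1000 bytes.
print(f"{2**1000:.15e}")  # 1.071508607186267e+301
```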
Computers have ruled the planet for longer than the Greeks ever did. The history lesson is appreciated, but we’re living in the future, now, and the future is digital.
No, the correct way is to use the proper fucking metric standard. Use Mi or Gi if you need it. We have computers that can divide large numbers now. We don’t need bit shifting.
The metric standard is to measure information in bits.
Bytes are a non-metric unit. Not a power-of-ten multiple of the metric base unit for information, the bit.
If you’re writing “1 million bytes” and not “8 million bits” then you’re not using metric.
If you aren’t using metric then the metric prefix definitions don’t apply.
There is plenty of precedent for the prefixes used in metric to refer to something other than an exact power of 1000 when not combined with a metric base unit. A microcomputer is not one one-thousandth of a computer. One thousand microscopes do not add up to one scope. Megastructures are not exactly one million times the size of ordinary structures. Etc.
Finally: this isn’t primarily about bit shifting; it’s about computers being based on binary representation and the fact that memory addresses are stored and communicated using whole numbers of bits, which naturally leads to memory sizes (for entire memory devices or smaller structures) that are powers of two. It also helps that no one is going to do something as idiotic as introducing an expensive and completely unnecessary division by a power of ten for every memory access just so you can have 1000-byte MMU pages rather than 4096-byte ones.
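To make the 4096-vs-1000 point concrete, here’s a minimal sketch of splitting an address into page number and offset. The 12-bit offset corresponds to 4096-byte pages; the 1000-byte variant is purely hypothetical and only there to show why it would need an actual division on every access:

```python
PAGE_SIZE = 4096        # 2**12, a power of two
OFFSET_BITS = 12

def split_pow2(addr: int) -> tuple[int, int]:
    # Power-of-two page size: just a shift and a mask, trivial in hardware.
    return addr >> OFFSET_BITS, addr & (PAGE_SIZE - 1)

def split_decimal(addr: int, page_size: int = 1000) -> tuple[int, int]:
    # Hypothetical 1000-byte pages: a genuine integer division every time.
    return addr // page_size, addr % page_size

print(split_pow2(0xDEADBEEF))     # (912091, 3823)
print(split_decimal(0xDEADBEEF))  # (3735928, 559)
```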
Or maybe metric should measure in Hartleys
The metric system is fascist. It was invented by aristocratic elitist control freaks. It is arbitrary and totalitarian.
“The colorfulness and descriptiveness of the imperial system is due to the fact that it is rooted in imagery and analogies that make intuitive sense.”
I’ll save my own rant until after I’ve seen the zombies froth.
The meter is an French fascist measurement made by the court jester.
“Since 2019 the metre has been defined as the length of the path travelled by light in vacuum during a time interval of
1/299792458 of a second …” [Wikipedia]
What is wrong with this definition?
The metre claims to be a ‘non-imperial’ basis of measurement.
But the basis of the metre is the imperial or ephemeral second, which is the ultimate imperial measurement. Seconds are an imperial unit. The measurement of time is fundamental to the ruler … get it?
So the arbitrarily devised metre is founded upon the imperial second. Oops. Now why again did you say the metric system is ‘superior’ to the imperial system?
Metric supremacists are fascist rubes who don’t realize they were pwnd by the empire before their rebellion even had a name or gang sign. They wanted to overthrow the king and based their coup on the king’s fundamental unit of regal measurement: time. Oops. This is a case of killing the baby in the cradle.
Imperial units of measurement are based upon things found in nature. The second is a division of the solar and astronomical day. A second is 1/86400th of a day, and is based again on sexagesimal math, which is found EVERYWHERE in nature.
Every good programmer should already know where this is going.
Day: 86400 seconds.
Day: 24 hours.
Hour: 3600 seconds.
Hour: minute squared.
Minute: 60 seconds.
3600 seconds * 24 hours = 86400 seconds.
60 seconds * 60 minutes = 3600 seconds = 1 hour
There is nothing arbitrary about this. The imperial measurement is neatly aligned to solar and astronomical cycles and to the latitudes and longitudes of the earth. In short, the imperial system of measurement had already measured the equatorial and tropical circuits of the earth and the sun’s path over 3000 years ago, and based measurements upon that.
Then along came the metric aristocrats, who pretended this had never been done before, speculated a _false_ circumference of the earth, and came up with a flawed metre based on that false measurement, then changed it decades later to the distance traveled by light in an imperial second, unaware that no constant speed of light has yet been proved conclusively, but only assumed.
Whereas the imperial system is based upon measurements which have been observed unchanged, verifiable, and reproducible, FOR THOUSANDS OF YEARS.
Tell me again why the metric system is, ‘superior’?
The metre is merely a speculation and the so-called speed of light has NOT been conclusively proven, considering special relativity and all that other aristocratic bollocks. Also complicating the matter is the specific definition of “light traveling in a vacuum.” OK, sparky, how are you going to locate a laboratory in a vacuum at least 1 light second in length to conduct this experimental measurement and prove it?
This fallacy is called an ‘unfalsifiable’ claim. Yup, The metric system is based upon a pseudo-scientific conjecture and fallacy. Whereas the imperial system is based upon thousands of years of repeatable observation. And yet ‘scientists’ somehow are confused about the reality of the situation.
As I’ve said elsewhere, worldwide science and academia have been growing progressively more delusional for the past couple of centuries.
In the end the aristocrats will bow to the king they hate. Thank God Americans have refused to bow to this dumb idol. Stay strong Murrikanz.
Here’s a shout out to the limeys who still weigh in stones! Long live the king’s foot!
If you aren’t using metric then the metric prefix definitions don’t apply.
Yes it does wtf?
Hey, how is “bit shifting” different than division? (The answer may surprise you.)
Bit shifting only works if you wanna divide by powers of 2.
interesting, so does the computer have a special “base 10” ALU that somehow implements division without bit shifting?
In general integer division is implemented using a form of long division, in binary. There is no base-10 arithmetic involved. It’s a relatively expensive operation which usually requires multiple clock cycles to complete, whereas dividing by a power of two (“bit shifting”) is trivial and can be done in hardware simply by routing the signals appropriately, without any logic gates.
In general integer division is implemented using a form of long division, in binary.
The point of my comment is that division in binary IS bitshifting. There is no other way to do it if you want the real answer. You can estimate, you can round, but the computational method of division is done via bitshifting of binary expansions of numbers in an ALU.
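For anyone curious what “long division in binary” actually looks like, here’s a minimal shift-and-subtract (restoring division) sketch. It does use shifts, but it also compares and subtracts at every step, which is why a general divide costs more than a single shift by a power of two:

```python
def divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Unsigned integer division by shift-and-subtract (restoring division)."""
    assert dividend >= 0 and divisor > 0
    quotient, remainder = 0, 0
    for i in range(dividend.bit_length() - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down next bit
        quotient <<= 1
        if remainder >= divisor:                              # trial subtraction
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

print(divide(1_000_000, 1024))  # (976, 576)
print(divide(1_000_000, 1000))  # (1000, 0)
```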
This is such a weird take to me. We don’t even colloquially discuss computer storage in terms of 1000.
The Greek terms were used from the beginning of computing, and the new terms kibi and mebi (etc.) were only added in 1998 when members of the IEC got upset. But despite that, most personal computers still report in the binary way. Decimal is only used on boxes, as a marketing term.
most personal computers still report in the binary way.
Which ones?
Windows reports using binary and continues to use the Greek terms. Windows still holds the largest market share among PC operating systems.
Yeah well windows is a POS so
The difference really needs to be enforced.
My RAM is in GiB but advertised in GB???
Your RAM is in GiB and GB. You can measure it either way you prefer. If you prefer big numbers, you can say you have 137,438,953,472 bits of RAM
Pretty sure the commenter above meant that their RAM was advertised as X GiB but they only got X GB; substitute X with 4/8/16/your amount.
As far as I know, RAM only comes in GiB sizes. There is some overhead that reduces the amount you see in the OS though. But that complaint is valid for storage devices if you don’t know the units and expect TB/GB on the box to match the numbers in Windows
MigaBytes?
MiB = mebibyte
I would suggest:
- 1KB = storage capacity of 1 kg of 1.44 floppy disks.
- 1MB = storage capacity of 0.0106 mile of CD drives.
- 1GB = storage capacity of 1 good computer in the 2000s.
- 1TB = storage capacity of 1 truck of GB (see above)
PS: just to be clear, I meant CD drives, not CD discs.
1 kg
(͡•_ ͡• )
Don’t you mean one pound, abbreviated lb?
Naw, it’s actually one Kinda Gallon; a Kinda Gallon of course referring to the average of the masses of a gallon of water, a gallon of beer, and a gallon of whiskey.
I know you’re joking, but that first KB definition makes me grind my teeth!
1.44 floppy disks can store, well, 1.44 MEGAbytes. So how can 1 kg of floppy disks store just 1 KB?
Thank you for your compliment. I love it. The floppy disk is 1.44 non-freedom MB, not 0.015264 miles of CD drives.
lol
Most people would use “word”, “half-word”, “quarter-word” etc, but the Anglophiles insist on “tuppit”, “ternary piece”, “span” and “chunk” (that’s 5 bits, or 12 old bits).
A milebyte is 5280 bytes
my hard drive is 250 toby keiths and my processor is 500 lee greenwoods
@cupcakezealot @BmeBenji why not 100 trumps processor rate.
i dunno that much seems almost criminal
@cupcakezealot yeah but I’d love to hear about megatrumps. but that could also be a measure for mass destruction
Because trump’s processor doesn’t have an IPC. It uses CPI instead, and we’d have to start using scientific notation.
why go for RAMs when the constitution says ARMs…
and no more bits or bytes either; double bytes for small and quadbytes for regular size, all the way.
- kilobytes is a grand
- megabytes is a venti
- gigabytes is a grand venti
- terabytes is a doble venti
really large amounts of ARM is a ton
why go for RAMs when the constitution says ARMs…
x86 is heresy
MP3s, standard-def movies, HD movies, and 4K movies.
I’ve seen so many products advertised by how many “songs” or “movies” they can hold. Never mind that you can encode the same movie to be massive or small. So I think we’ve found the right answer!
From smallest to biggest:
Bits (basic unit)
Bytes (8:1 reduction)
Words (4:1 reduction)
KiB (256:1 reduction)
MiB (1024:1)
GiB (1024:1)
TiB (1024:1)
PiB (1024:1)
A normal amount of porn (237:1)
All definitely not metric, as metric uses steps of 1000 (there’s also 10 and 100 and 1/10th and 1/100th, but that doesn’t extend to 10000 and 1/10000th).
The KiB, MiB, etc. 2^10 scale is called binary prefixes (as opposed to the decimal prefixes KB, MB, etc.) and is standardised by the IEC.
And while the B in KiB is always going to mean eight bits, it’s not a given that a byte is actually eight bits; network people still use “octet” to disambiguate because back in the day there were plenty of architectures around with other byte sizes. In the context of architectures, “byte” simply means “the smallest number of bits an operation like addition will be done in”. Then you have word for two bytes, d(ouble)word for four, q(uad)word for eight, o(cto)word for 16, and presumably h(ex)word for 32, though it’s already hard to find owords in the wild. Yes, it’s off by one; of course it’s off by one, what do you expect, it’s about computers. There’s also nibble for half a byte.
EDIT: Actually that’s incorrect; word is also architecture-dependent. The word/dword/qword sequence applies to architectures (like x86) which went from being 16-bit machines to now being 64-bit while keeping backwards compatibility. E.g. RISC-V uses 32-bit words, and 16 bits there is a half-word.
The bit, at least, is not under contention; everyone agrees what it is. Though you can occasionally see people staring in wild disbelief and confusion at statements such as “this information can be stored in ~1.58 bits”. That number is ~log2 3, that is, the information that fits in one trit, such as “true, false, maybe”.
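In case the ~1.58 figure looks like magic: it’s just the base-2 log of the number of states one symbol can take. A tiny sketch:

```python
import math

# Information (in bits) carried by one symbol with n equally likely states.
def bits_per_symbol(n_states: int) -> float:
    return math.log2(n_states)

print(bits_per_symbol(2))  # 1.0    -> one bit
print(bits_per_symbol(3))  # ~1.585 -> one trit ("true, false, maybe")
```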
Great write up, glad to see mention of nibble (my favorite lol)… You forgot to mention byte order (Little/Big Endian).
So you’re saying my proposed imperial units depend on where you are, and who is using them, for what purpose? That just sells me on them as imperial units even more. :)
Thank you for the details.
Words aren’t always four bytes
Words (4:1 reduction)
Word is an imperial unit. Just like one British gallon is not equal to one US gallon, one x86 word is not equal to one ARM word.
12 bits to an eagle
27 eagles to a liberty (changes whenever an amendment is added)
1776 liberties to a freedom
Computers are still programmed in bytes, but filesize is always in freedoms.
Perhaps bandwidth could be calculated around the fire rate of an AR15?
I know you asked about memory, but the computer I just assembled had a 750-watt power supply. As an American, I think we should refer to it as a “one horsepower power supply” instead.
That’s not bad, but is there a digital equivalent of a horse we could use?
Nyan cats
One hor… Bwahahaha!
(GIF)