You seem to like the lines-of-code metric. There are many lines of GNU code in a typical Linux distribution. You seem to suggest that (more LOC) == (more important). However, I submit to you that raw LOC numbers do not directly correlate with importance. I would suggest that clock cycles spent on code is a better metric. For example, if my system spends 90% of its time executing XFree86 code, XFree86 is probably the single most important collection of code on my system. Even if I loaded ten times as many lines of useless bloatware on my system and never executed that bloatware, it certainly isn’t more important code than XFree86. Obviously, this metric isn’t perfect either, but LOC really, really sucks. Please refrain from ever using it again to support any argument.
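To make “clock cycles spent” concrete, here is a rough sketch in C (assuming a POSIX system; bloatware() and hot_path() are made-up placeholders, not real components) of measuring which piece of code actually consumes the CPU time, regardless of how many lines it took to write:

```c
/* Sketch: "importance" as CPU time actually consumed, not lines written.
 * Build: cc cputime.c -o cputime   (add -lrt on very old glibc) */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* placeholder for a huge component that is never really exercised */
static void bloatware(void) { /* imagine 100k LOC here */ }

/* placeholder for a tiny component that burns most of the cycles */
static void hot_path(void) {
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sink += i;
}

/* CPU time charged to this process so far, in seconds */
static double cpu_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double t0 = cpu_seconds();
    bloatware();
    double t1 = cpu_seconds();
    hot_path();
    double t2 = cpu_seconds();
    printf("bloatware (many LOC): %.6f s CPU\n", t1 - t0);
    printf("hot_path  (few LOC):  %.6f s CPU\n", t2 - t1);
    return 0;
}
```

On a whole system you would just read per-process CPU time from top or a profiler instead of instrumenting by hand, but it is the same measurement.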
Yes. Also, the required clock cycles depend a lot on the individual CPU architecture.
For example, division: some CPUs have hardwired logic that computes the division directly at the circuit level. Others basically run a loop of subtractions. The difference in required clock cycles for a single division can then be huge.
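As a caricature (not any particular CPU’s actual algorithm; real software fallbacks usually do shift-and-subtract, roughly one step per bit), division without a hardware divider looks something like this in C:

```c
/* Division by repeated subtraction: one loop iteration per unit of the
 * quotient, versus a single div instruction on CPUs with a hardware divider.
 * Assumes divisor != 0. */
#include <stdio.h>

static unsigned long div_by_subtraction(unsigned long dividend,
                                        unsigned long divisor,
                                        unsigned long *remainder) {
    unsigned long quotient = 0;
    while (dividend >= divisor) {   /* this loop is where the cycles go */
        dividend -= divisor;
        quotient++;
    }
    *remainder = dividend;
    return quotient;
}

int main(void) {
    unsigned long r;
    unsigned long q = div_by_subtraction(1000000UL, 7UL, &r);
    printf("software: 1000000 / 7 = %lu remainder %lu\n", q, r);
    printf("hardware: 1000000 / 7 = %lu remainder %lu\n",
           1000000UL / 7UL, 1000000UL % 7UL);
    return 0;
}
```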
Another example: is it a scalar or superscalar CPU?
A rather obvious example: the bit width of the CPU. 32-bit systems handle 64-bit data far less efficiently than 64-bit systems do.
Then there is other stuff like branch prediction, or system-level factors like memory bus width and clock, cache size and associativity, and so on.
Long story short: When evaluating the performance of code, multiple performance metrics have to be considered simultaneously and prioritized according to the development goals.
Lines of code is usually a veeery bad metric. (I sometimes spend hours just to write a few lines of code, but then they are good ones.) Cycles per code segment is better, but still not good (unless you are developing for one very specific target system). Do benchmarking and profiling, run it on different systems, and maybe design individual performance metrics based on your expectations.
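For the benchmarking part, even something this small (wall clock, best of a few runs to dampen noise; work_under_test() is just a stand-in for your real code) tells you more than counting lines ever will:

```c
/* Tiny benchmarking sketch: time a code segment with a monotonic clock,
 * repeat it, and report the best run.  Build: cc bench.c -o bench */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* stand-in for whatever you actually want to measure */
static void work_under_test(void) {
    volatile double acc = 0.0;
    for (int i = 1; i < 1000000; i++)
        acc += 1.0 / i;
}

int main(void) {
    double best = 1e300;
    for (int run = 0; run < 5; run++) {
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        work_under_test();
        clock_gettime(CLOCK_MONOTONIC, &b);
        double s = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
        if (s < best)
            best = s;
    }
    printf("best of 5 runs: %.6f s\n", best);
    return 0;
}
```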
No, he doesn’t. He suggests that there are Linux systems with no GNU code, like one I’m replying from, and whether “no” meant “no SLOC” or “no instructions spent executing” or “no packages” absolutely doesn’t matter.
Can confirm it’s a shitty metric. I once saved the company I was working at a few million by changing one line of code. It took 3 days to find it, and it was only 3 characters changed.
That’s the curse and blessing of our profession: the efficiency of our work is almost impossible to measure once you go beyond very simple code.
You can feel like a hero for changing three characters and finally fixing that nasty bug, or you can feel like an absolute disgrace for needing days to find such a simple fix. Your manager applies the same duality of judgement.
I feel like a hero in this particular case. It was a bug in code written when I was still too young to even read, and no one knew how to run it. We didn’t have access to the pipelines, so no one knew how to build it either. It was a very obscure hybrid of C and PHP. I basically had to be the compiler: I went line by line through the whole codebase, searching for the code path that caused the error. Sounds easy enough, right? Just Ctrl+click in your IDE. Wouldn’t it be a shame if someone decided that function names should be constructed as strings through at least 20 levels of nesting, where each layer adds something to the function name before it is finally called. TL;DR: it was very shitty code.
What the fuck. I’m so appalled I had to leave this useless comment to digitally stare with an agape mouth.
To be fair, they did say during the interview that I’d be dealing with legacy code from time to time.
But did you add 3 characters? Gotta bump up that code count bruh.
Nope, removed 3, added 1.
-2 on your paycheck!
There’s an ancient Apple story, from when they were programming the Lisa, about how LoC is a stupid metric.
I wrote a program that does nothing but busy loop on all cores. stylist_trend/Linux is my favourite OS.
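For the curious, a minimal sketch of it in C, assuming POSIX threads and a Linux-style sysconf(); it will peg every core until you kill it:

```c
/* stylist_trend/Linux: one spinning thread per online core, doing nothing
 * of value as fast as possible.  Build: cc -pthread busy.c -o busy
 * Stop it with Ctrl+C before your fans take off. */
#include <pthread.h>
#include <unistd.h>

static void *spin(void *arg) {
    (void)arg;
    volatile unsigned long x = 0;
    for (;;)                /* the entire workload */
        x++;
    return NULL;            /* never reached */
}

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* online core count */
    if (cores < 1)
        cores = 1;
    pthread_t t;
    for (long i = 1; i < cores; i++)              /* cores-1 extra spinners */
        pthread_create(&t, NULL, spin, NULL);
    spin(NULL);                                    /* main thread joins the fun */
    return 0;
}
```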
i’m partial to the more relaxing sleep(500)/linux os, but to each their own
Any good sleep will give back control to other threads.
that’s why sleep(500)/linux uses bad sleep
Then this:
:(){ :|:& };:
is the most important code in existence. What you refer to as Linux is actually called Forkbomb/Linux, or, as I’ve recently taken to calli-
[Process Killed]
deleted by creator