Principal Engineer for Accumulate
If you actually have deep knowledge in a specialty, then you describe yourself as that specialty. ‘Full stack engineer’ conveys that you don’t have a specialty, that you’re a master of nothing, that your skills are ‘_’ shaped.
Experience != expertise or skill. I have never met someone who was actually good at both. Maybe if your backend is just some SQL queries. I am a backend engineer and I’m adequate at front end but I’d never hire someone whose skills were merely adequate unless I thought they had the potential to reach ‘good’.
Scripting languages being languages that are traditionally source-distributed.
So how does the distribution mechanism matter beyond that? And even those points don’t really hold up:
They tend to be much easier to write
I’m assuming you are not saying “real” languages should be hard to write…
run slower
Objective-C and Go run slower than C, and they’re all compiled languages. Sure, an interpreter will be slower than compiled code, but modern languages aren’t simply interpreted (e.g. JIT compilation).
often but not always dynamically typed, and operate at a higher level
There are dynamically typed compiled languages, and high level compiled languages.
It’s not a demeaning separation, just a useful categorization IMO.
Calling one class of languages “real” and another class something else is inherently demeaning. I wouldn’t have cared enough to type this if you used “compiled vs scripting” instead of “real vs scripting”. Though I disagree with using “scripting” at all to describe a language since that’s an assertion of how you use the language, not of the language itself. “Interpreted” on the other hand is a descriptor of the language itself.
As someone who loves C there are lots of languages that seem too limiting and high level, doesn’t mean they aren’t useful tho.
I personally can’t stand Java because the language designers decided to remove ‘dangerous’ features like pointers and unsigned integers because apparently programmers are children who are incapable of handling the risk. On the other hand I love Go. It’s high level enough to be enjoyable and easy to write, but if you want to get into the weeds you can.
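To make that concrete, here’s a minimal sketch of the two features I’m talking about, both of which Go exposes directly:

package main

import "fmt"

func main() {
    // Unsigned integers: a real uint8, not a sign-extended int.
    var flags uint8 = 0b1010_0101
    fmt.Printf("%08b\n", flags>>1)

    // Pointers: take an address and write through it.
    n := 42
    p := &n
    *p = 100
    fmt.Println(n) // prints 100
}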
That line is blurring to the point where it barely exists any more. Compiled languages are becoming increasingly dynamic (e.g. JIT compilation, code generation at runtime) and interpreted languages are getting compiled. JavaScript is a great example: V8 profiles running code and JIT-compiles hot functions into optimized machine code.
IMO the only definition of “real” programming language that makes any sense is a (Turing complete) language you can realistically build production systems with. Anything else is pointlessly pedantic or gatekeeping.
I’d rather use a language that doesn’t treat me like an incompetent child, removing unsigned ints because “they’re a source of bugs”.
Or use a statically typed language that’s actually modern instead of C
Why? In my experience using a real debugger is always the superior choice. The only time I don’t is when I can’t.
Huh? Main file? Do you mean main package? A module can contain an arbitrary number of main packages but I don’t see how that has anything to do with this post. Also are you saying modules are equivalent to classes? That may be the strangest take I’ve ever heard about Go.
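To sketch what I mean (module name and paths made up), one module can hold any number of main packages, each building to its own binary:

// go.mod: module example.com/tools

// cmd/server/main.go
package main

func main() { /* one binary: go build ./cmd/server */ }

// cmd/cli/main.go
package main

func main() { /* a second binary in the same module: go build ./cmd/cli */ }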
How so?
The person who uses the shitty tool is a moron. The person who makes the shitty tool is an asshole. At least in this case where the shitty tool is actively promoting shitty PRs.
Of course, but presumably on occasion you do work in other languages? I work in all kinds of languages, so when jumping between them it’s pretty handy for bridging the gap.
If I were jumping languages a lot, I definitely think it would be helpful. But pretty much 100% of what I’ve done for the last 3-4 years is Go (mostly) or JavaScript (occasionally). I have used chatgpt the few times I needed to work in some other language, but that has been pretty rare.
I think you could definitely still get value out of generating simple stuff though. At least for me, it really helps get projects done quickly without burning myself out.
If simple stuff == for loops and basic boilerplate, the kind of stuff that copilot can autocomplete, I write that on autopilot and it doesn’t really register. So it doesn’t contribute to my burnout. If simple stuff == boring, boilerplate tests, I’ll admit that I don’t do nearly enough of that. But doing the ‘prompt engineering’ to get copilot to write that wasn’t any less painful than writing it myself.
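For reference, the boring boilerplate tests I’m talking about mostly look like this table-driven pattern; the clamp function and cases here are made up:

package example

import "testing"

// clamp is a stand-in for whatever function is under test.
func clamp(v, lo, hi int) int {
    if v < lo {
        return lo
    }
    if v > hi {
        return hi
    }
    return v
}

func TestClamp(t *testing.T) {
    cases := []struct {
        name     string
        in, want int
    }{
        {"below", -5, 0},
        {"inside", 7, 7},
        {"above", 200, 100},
    }
    for _, c := range cases {
        if got := clamp(c.in, 0, 100); got != c.want {
            t.Errorf("%s: clamp(%d) = %d, want %d", c.name, c.in, got, c.want)
        }
    }
}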
For small one-off scripts, it makes them actually save more time than they take to write
The other day I wrote a duplicate image detector for my sister (files recovered from a dying drive). In hindsight I could have asked chatgpt to do it. But it was something I’ve never done before and an interesting problem so it was more fun to do it myself. And most of the one off stuff I’m asked to do by coworkers is tied to our code and our system and not the kind of thing chatgpt would know how to do.
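For what it’s worth, the core of that kind of tool is pretty small. Roughly this, assuming exact duplicates found by content hash (the real thing for recovered images might want a perceptual hash instead):

package main

import (
    "crypto/sha256"
    "fmt"
    "io"
    "io/fs"
    "os"
    "path/filepath"
)

// Hash every file under a directory and report files with identical contents.
func main() {
    root := os.Args[1]            // directory to scan (sketch: assumes an argument is given)
    seen := map[[32]byte]string{} // digest -> first path seen with that digest

    filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return err
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        var sum [32]byte
        copy(sum[:], h.Sum(nil))

        if first, ok := seen[sum]; ok {
            fmt.Printf("duplicate: %s == %s\n", path, first)
        } else {
            seen[sum] = path
        }
        return nil
    })
}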
// Needs crypto/sha256, encoding/binary, and image/color.
// randomRGB derives a stable color from a user ID by hashing it.
func randomRGB(uid int) color.RGBA {
    b := binary.BigEndian.AppendUint64(nil, uint64(uid)) // uid as 8 big-endian bytes
    h := sha256.Sum256(b)
    return color.RGBA{h[0], h[1], h[2], 255} // first three digest bytes as R, G, B
}
That took me under three minutes and half of that was remembering that RGBA is in the color package, not the image package, and uint-to-bits is in the binary package, not the math package. I have found chatgpt useful when I was working in a different language. But trying to get chatgpt or copilot to write tests or documentation for me (the kind of work that bores me to death), doing the prompt engineering to get it to spit out something useful was more work than just writing the tests/documentation myself. Except for the time when I needed to write about 100 tests that were all nearly the same. In that case, using chatgpt was worth it.
If I’ve been working in the same language for at least a year or two, I don’t have to look up any of that. Copilot might be actually helpful if I’m working in a language I’m not used to, but it’s been a long time since I’ve had to look up syntax or functions (excluding 3rd party packages) for the language I work in.
I won’t say copilot is completely useless for code. I will say that it’s near useless for me. The kind of code that it’s good at writing is the kind of code that I can write in my sleep. When I write a for-loop to iterate over an array and print it out (for example), it takes near zero brain power. I’m on autopilot, like driving to work. On the other hand, when I was trialing copilot I’d have to check each suggestion it made to verify that it wasn’t giving me garbage. Verifying copilot’s suggestions takes a lot more brain power than just writing it myself. And the difference in time is minimal. It doesn’t take me much longer to write it myself than it does to validate copilot’s work.
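To be clear about the kind of loop I mean, the autopilot stuff is along these lines:

package main

import "fmt"

func main() {
    names := []string{"alpha", "beta", "gamma"}
    for i, name := range names {
        fmt.Printf("%d: %s\n", i, name)
    }
}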
I have to strongly disagree with you. I’ve used WSL 2 with VSCode, and I experienced waaaaaaaay more weird broken shit than I ever have running Linux. And even if it weren’t for that, it’s still not at all worth it IMO because using WSL 2 means every interaction I have with my development environment has to go through a Linux-to-Windows translation layer. I will never use Windows again for anything beyond testing unless I’m forced to.
How are you using it for data crunching? That’s an honest question, based on my experiences with AI I can’t imagine how I’d use them to crunch data.
So I always have to check its work to some degree.
That goes without saying. Every AI I’ve seen or heard of generates some level of garbage.
My point is that I strongly feel that the kind of “AI” we have today is much closer to bacteria than to cats on that scale. Not that an LLM belongs on the same scale as biological life, but the point stands insofar as “is this thing intelligent?” goes, as far as I’m concerned.
it’s not inconceivable it could happen in the next two generations.
I am certain that it will happen eventually. And I am not arguing that something has to be human-level intelligent to be considered intelligent. See dogs, pigs, dolphins, etc. But IMO there is a huge qualitative difference between how an LLM operates and how animal intelligence operates. I am certain we will eventually create intelligent systems but there is a massive gulf between what LLMs are capable of and abstract reasoning. And it seems extremely unlikely to me that linear algebraic models will ever achieve that type of intelligence.
Intelligence is just responding to stimuli
Bacteria respond to stimuli. Would you call them intelligent?
I don’t know, have you ever used JavaScript? I’ve run into some really fucking weird bugs. I’ve also spent hours trying to find the source of an error message only to discover the error message was lying and caused by some other error.
If your job is to make websites and you make sites that don’t work on a browser that has over 100 million users you’re not doing your job.