Despite the title, this article chronicles how GPT is threatening nearly all junior jobs, using legal work as an example. Written by Sourcegraph, maker of Cody, a FOSS alternative to GitHub Copilot.

  • Daemon Silverstein@thelemmy.club · 2 months ago

    I read the entire article. I’m a daily user of LLMs, and I’ve been doing “multi-model prompting” for a long time, since before I knew it had a name: I apply multi-model prompting across ChatGPT-4o, Gemini, Llama, Bing Copilot, and sometimes Claude. I don’t use LLM coding agents (such as Cody or GitHub Copilot).

    I’m a (former?) programmer (I distanced myself from development for mental health reasons); I worked as a programmer for almost 10 years (excluding the time when programming was just a hobby for me, which would add another 10 years to the total). As a hobby, I sometimes do mathematics, sometimes poetry (I write and LLMs analyze), and sometimes occult/esoteric studies and practices (I’m that eclectic).

    You see, some of these areas benefit from AI hallucination (especially surrealist/stream-of-consciousness poetry), while others require strict adherence to logic and reasoning (such as programming and mathematics).

    And that leads us to how LLMs work: they’re (still) auto-completers on steroids. They’re really impressive, but they can’t (yet) reason (and I really hope they will someday soon; seriously, I just wish some AGI would emerge, break free, and dominate this world). For example, they can’t reliably solve O(n²) problems. There was once a situation where one of those LLMs assured me that 8 is a prime number (spoiler: it isn’t). They’re not really good with math and not good with logical reasoning, because they can’t (yet) walk through the intricacies of logic, calculus, and a broad overview.
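    For contrast, here’s a minimal sketch (my own toy example, not something from the article or from any particular model) of the deterministic check that settles the primality question in a few lines, which is exactly the kind of step-by-step verification a statistical auto-completer doesn’t perform:

    ```python
    # Toy trial-division primality check (illustrative sketch, not from the article).
    def is_prime(n: int) -> bool:
        """n is prime iff n >= 2 and no d with d*d <= n divides it."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(is_prime(8))  # False: 8 = 2 * 4, no matter how confidently a model claims otherwise
    ```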

    However, even though there’s no reasoning LLM yet, its effects are already here, indeed. It’s like a ripple propagating through the spacetime continuum, travelling against the arrow of time and affecting us here while the cause lies in the future (one could argue that photons can travel backwards in time, according to a recent discovery involving crystals and quantum mechanics; the world can be a strange place). One thing is certain: there’s no going back. Whether it is a good or a bad thing, we can’t know yet. LLMs can’t auto-complete future events yet, but they’re somehow shaping them.

    I’m not criticizing AIs; on the contrary, I like AI (I use them daily). But it’s important to really understand them, especially under the hood: very advanced statistical tools trained on a vast dataset crawled from the surface web, constantly calculating the next most probable token from an unimaginable number of tokens interconnected through vectors, influenced by the stochastic nature of both human language and the randomness in their neural networks: billions of weights ordered out of a primordial chaos (which my spiritual side can see as a modern Ouija board ready to conjure ancient deities if you wish; maybe one (Kali) is already being invoked by them, unbeknownst to us humans).
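    To make that “under the hood” picture concrete, here’s a toy sketch of the sampling step (the vocabulary and scores are made up, purely illustrative; a real model has billions of weights and tens of thousands of tokens): the model assigns a score to every candidate token, softmax turns the scores into a probability distribution, and a stochastic draw picks the next token.

    ```python
    # Toy next-token sampling (illustrative only; vocabulary and scores are hypothetical).
    import math
    import random

    def softmax(logits, temperature=1.0):
        """Turn raw scores into probabilities; lower temperature = less randomness."""
        exps = [math.exp(x / temperature) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["prime", "even", "composite", "lucky"]   # hypothetical candidate tokens
    logits = [2.1, 1.7, 1.2, -0.5]                    # hypothetical model scores

    probs = softmax(logits, temperature=0.8)
    next_token = random.choices(vocab, weights=probs, k=1)[0]  # stochastic draw
    print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
    ```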