As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • BluesF@feddit.uk · 1 year ago

    I think *LLMs to do everything* is the bubble. AI isn’t going anywhere; we’ve just had a little peak of interest thanks to ChatGPT. Midjourney and the like aren’t going anywhere either, but I’m sure we’ll all figure out soon enough that LLMs can’t really be trusted.

      • Lazz45@sh.itjust.works · 1 year ago

        I just want to make the distinction that AI like this literally are black boxes. We (currently) have no way to know why it chose the word it did, for example. You train it, and under the hood you can’t actually read out a logic tree of why each word was chosen. That’s a major pitfall of AI development: it’s very hard to know how the AI arrived at a decision. You might know it’s right, or it’s wrong… but how did the AI decide this?

        At a very technical level we understand HOW it makes decisions; we just don’t understand every individual decision it makes (it’s simply beyond our ability currently, from what I know).

        Example: https://theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888
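
        To make that concrete, here’s a minimal sketch of what “opening the box” actually yields, assuming PyTorch and a toy untrained network (both my stand-ins, not anything from the linked article): every weight is fully readable, but none of it reads as a logic tree.

        ```python
        # Hypothetical toy model: every parameter is inspectable, but the raw
        # floats never say *why* a particular word or class was chosen.
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

        for name, param in model.named_parameters():
            # Full visibility into every weight...
            print(name, tuple(param.shape), param.flatten()[:3].tolist())
            # ...yet nothing here resembles "chose X because of rule Y".
        ```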

        • barsoap@lemm.ee · 1 year ago

          > You train it, and under the hood you can’t actually read out the logic tree of why each word was chosen.

          Of course you can: you can look at every single activation and weight in the network. It’s tremendously hard to predict what the model will do, but once you have an output it’s quite easy to see how it came to be. How could it be bloody otherwise? You calculated all that stuff to get the output; the only thing you have to do is prune off the non-activated pathways. That kind of asymmetry is in the nature of all non-linear systems, and a very similar thing applies to double pendulums: once you’ve observed one moving in a certain way, it’s easy to say “oh yes, the initial conditions must have looked like this”.

          What’s quite a bit harder to do for the likes of ChatGPT, compared to double pendulums, is to see where they can possibly swing. That’s because LLMs have a fuckton more degrees of freedom than two.
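
          For what it’s worth, a minimal sketch of the “look at every single activation” point, assuming PyTorch (my choice of framework, not something named above): forward hooks capture every intermediate value for one output, including which ReLU pathways were non-activated for that input.

          ```python
          # Minimal sketch: record every intermediate activation for one forward pass.
          import torch
          import torch.nn as nn

          model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
          activations = {}

          def make_hook(name):
              def hook(module, inputs, output):
                  # Save a detached copy of this layer's output.
                  activations[name] = output.detach().clone()
              return hook

          for name, module in model.named_modules():
              if name:  # skip the top-level container itself
                  module.register_forward_hook(make_hook(name))

          output = model(torch.randn(1, 8))

          for name, act in activations.items():
              print(name, tuple(act.shape))
          # The "non-activated pathways" for this input are the zeroed ReLU units:
          print("inactive units:", (activations["1"] == 0).sum().item())
          ```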

          • BackupRainDancer@lemmy.world · 1 year ago

            I don’t disagree with anything you said, but I wanted to weigh in on the “more degrees of freedom” point.

            One major thing to consider is that unless we have 24/7 sensor recording with the AI out in the real world, plus continuous monitoring of sensor/equipment health, we’re not going to have the “real” data that the AI triggered on.

            Version and model updates will also likely continue to cause drift unless they’re managed through some sort of central distribution service (a minimal sketch of one safeguard follows below).

            Any large corp will have this organization and review in place, or is in the process of figuring it out. Small NFT/crypto bros who jump to AI will not.

            IMO the space will either head towards larger AI ensembles that try to work out where an exact rubric applies versus where more AGI-like human reasoning is needed, or we’ll have to rethink the nuances of our train/test setup and how humans use language to interact with others versus to understand the world (we all speak the same language as someone else, but there’s still a ton of inefficiency).
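
            One hypothetical way to handle that version-drift point, sketched under the assumption that models are distributed as artifact files (the path and digest below are made up): pin the approved artifact by checksum so a silent update can’t quietly change behaviour in production.

            ```python
            # Hypothetical sketch: refuse to serve a model whose bytes differ
            # from the version that was reviewed and approved.
            import hashlib
            from pathlib import Path

            MODEL_PATH = Path("models/classifier-v1.bin")     # made-up path
            EXPECTED_SHA256 = "replace-with-approved-digest"  # recorded at review time

            def sha256_of(path: Path) -> str:
                h = hashlib.sha256()
                with path.open("rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                return h.hexdigest()

            actual = sha256_of(MODEL_PATH)
            if actual != EXPECTED_SHA256:
                raise RuntimeError(f"Model drift detected: got {actual}")
            ```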

      • yata@sh.itjust.works · 1 year ago

        The thing is, a lot of people are not using it for that. They think it is a living, omniscient sci-fi computer capable of answering everything, just like they saw in the movies. No one thought that about keyboard auto-suggestions.

        And with regard to people who aren’t very knowledgeable on the subject, it is difficult to blame them for thinking so, because that is how it is presented to them in a lot of news reports as well as adverts.

        • barsoap@lemm.ee · 1 year ago

          > They think it is a living, omniscient sci-fi computer capable of answering everything

          Oh that’s nothing new:

          > On two occasions I have been asked [by members of Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

          – Charles Babbage
      • Ragnell@kbin.social · 1 year ago

        @Reva “Hey, should we use this statistical model that imitates language to replace my helpdesk personnel?” is an ethical question because bosses don’t listen when you outright tell them that’s a stupid idea.

      • Aceticon@lemmy.world · 1 year ago

        There are people who genuinely think there’s actual intelligent thinking behind something like ChatGPT.

        Reminds me of my grandmother, a poor illiterate peasant woman, who got really confused when she came to live with us in the big city and saw the same actor appear in multiple soap operas on TV. She saw the “living truthfully in imaginary circumstances” of good actors (or, let’s be honest, the make-believe of most soap opera actors) and, because of her complete ignorance of the subject, confused acting with real life.

        I think there’s a lot of this going on and, hopefully, like my grandmother, most such people will eventually understand that a well-done, lifelike surface-level impression does not guarantee that what is behind it is a matching reality (people really living that life in the soap opera, or an actual intelligence in this case).

        • Freesoftwareenjoyer@lemmy.world · 1 year ago

          The term “AI” has at least three different meanings. People who understand the subject usually just mean machine learning. But there is also the AI we see in movies (usually a sentient computer) and the AI in games (which is just scripted NPC behaviour). I think most people confuse the stuff they see in movies with machine learning.

          • Ragnell@kbin.social · 1 year ago

            I think marketing execs are COUNTING on that misinterpretation to make the product seem like more than it is.

      • Flying Squid@lemmy.world · 1 year ago

        Are you familiar with Racter, the 1980s program? It wasn’t trained on the entire internet like LLMs are, but they kind of feel like an extension of it. Except Racter’s output was more amusing.

      • Freesoftwareenjoyer@lemmy.world · 1 year ago

        Yeah, it’s kinda scary to see how little people understand modern technology. If some non-expert tells them AI can’t be trusted, they just believe it. I’ve noticed the same thing with cryptocurrencies: a non-expert says it’s a scam, and people believe it even though it’s clear they don’t understand anything about the technology or what it’s made for.