Barack Obama: “For elevator music, AI is going to work fine. Music like Bob Dylan or Stevie Wonder, that’s different”

Barack Obama has weighed in on AI’s impact on music creation in a new interview, saying, “For elevator music, AI is going to work fine”.

    • ISometimesAdmin@the.coolest.zone

      I very rarely care for what most 62-year-olds have to say about the theoretical limits of computation.

      This isn’t much different.

      • sir_reginald@lemmy.world

        If the 62 year old had studied computer science and had specialized in AI, I would listen closely to them.

        But I definitely do not care about the opinion of a politician who has no idea about technology.

        • WashedOver@lemmy.ca

          Unfortunately when it comes to medical experts, many ignore them and listen to their aunt with the healing crystals, or their buddy that skipped most of his high school science classes to go smoke behind the school instead…

    • phoneymouse@lemmy.world

      I mean — he’s defending human creativity and he’s kind of right. AI can recreate variations of the things it is trained on, but it doesn’t create new paradigms.

      • Sprokes@lemmy.ml

        People always say AI only creates variations, but many successful TV shows are variations. I started watching sitcoms from the 70s, and many things from them were copied/adapted in recent shows.

        • Fungah@lemmy.world

          99% of everything people create is a variation.

          Truly innovative anything is RARE.

          There’s just stuff and things people haven’t thought to combine with stuff yet.

      • MrMcGasion@lemmy.world

        Yeah, also I think there is something about the human connection and communicating personal ideas and feelings that just isn’t there with AI-generated art. I could see a case for an argument that a lot of music today is recorded by artists who didn’t write that music, and that they are expressing their own feelings through their performance of someone else’s creation. And is it really all that different if an AI wrote something that resonated with an artist who ultimately performed it? Which for a good chunk of pop-culture regurgitations may be completely valid. But in my opinion, the best art communicates emotion, which is an experience unique to biology. AI might be able to approximate it, and sure, there’s a human prompting the AI who might genuinely have those feelings, but there’s a hollowness to it that I struggle to ignore. But maybe I’m just getting older and will be yelling at clouds before long.

        • canni@lemmy.one

          Literally the world’s oldest, continuous civilization. Pretty sure they got one or two things out there in the last 4000 years

    • WashedOver@lemmy.ca

      If he got super wild and crazy by wearing a tan suit to work again, would you?

  • Sterile_Technique@lemmy.world

    …so I’ve been on a shit load of elevators, and I don’t recall a single one of them having music. For as common a trope as it is, you’d think elevator music would be more common in actual elevators.

    • GrapesOfAss@lemmy.world

      It’s like porn: they all used to have music, and people still make jokes about how bad it was, but it’s just gone now.

    • prole@sh.itjust.works

      It’s not as common as it used to be, but I think the point was kind of that you’re not supposed to notice it?

      Look into “muzak” (the style of music. Apparently it’s also a brand according to Google), and some of Brian Eno’s ambient albums like “Music for Airports” (which is definitely a bit more sparse than elevator music, which was often like smooth jazz versions of classic songs), but along similar lines.

      I don’t like to think I’m that old, and I 100% remember elevator music.

      Edit: was possibly thinking of “musique concrete” rather than muzak.

      • Triple_B@lemmy.zip

        or more recently, ads

        Try this one trick to get me to take the stairs every single time.

    • knotthatone@lemmy.one

      I don’t think I can actually recall one either.

      Maybe in a department store or mall in the 80s. It was just so deliberately bland I never noticed when it became less common.

    • scarabic@lemmy.world

      I worked in an office that installed music in the bathrooms. It wasn’t there for a long time, and then they added it. An email went out at one point instructing people to stop turning off the music (someone figured out where the Sonos controls were I guess). Someone at the top had decided it was IMPERATIVE to have something to listen to other than the coworker grunting next to you.

    • LCP@lemmy.world

      Some hotel elevators have it.

      But yeah, I don’t recall the last time I heard music in a residential elevator.

    • Otter@lemmy.ca

      I think the statement was more about the impact, which will depend on each person’s subjective experience

      Personally I agree. Even if AI could produce identical work, the impact would be lessened. Art is more meaningful when you know it took time and was an expression/interpretation by another human (rather than a pattern prediction algorithm Frankenstein-ing existing work together). Combine that with the volume of AI content that’s produced, and the impact of any particular song/art piece is even more limited.

      • 5BC2E7@lemmy.world

        I’d say art is more meaningful when it’s a unique experience. It’s like those myths about glassmakers being killed or blinded after the cathedral is finished so that no one can replicate the glass color… without the killing.

      • Even_Adder@lemmy.dbzer0.com

        People are social; if enough people feel the same way about one thing, it’ll succeed. It doesn’t matter where it came from or how it was made, like how people can still admire and appreciate nature. Or maybe the impact will be that it reduces all impacts. Every group and subgroup might be able to have their own thing.

    • gregorum@lemm.ee

      I don’t know. I think Obama kind of nailed it. AI can create boring and mediocre elaborations just fine. But for the truly special and original? It could never.

      For the new and special, humans will always be required. End of line.

      • kromem@lemmy.world

        At this point I want a calendar of at what date people say “AI could never” - like “AI could never explain why a joke it’s never seen before is funny” (such as March 2019) - and at what date it happens (in that case April 2022).

        (That “explaining the joke” bit is actually what prompted Hinton to quit and switch to worrying about AGI sooner than expected.)

        I’d be wary of betting against neural networks, especially if you only have a casual understanding of them.

        • rambaroo@lemmy.world

          I mean the limitations of LLMs are very well documented, they aren’t going to advance a whole lot more without huge leaps in computing technology. There are limits on how much context they can store for example, so you aren’t going to have AIs writing long epic stories without human intervention. And they’re fundamentally incapable of originality.

          General AI is another thing altogether that we’re still very far away from.

          • kromem@lemmy.world

            Nearly everything you wrote is incorrect.

            As an example, rolling context windows paired with RAG would easily allow for building an implementation of LLMs capable of writing long stories.

            And I’m not sure where you got the idea that they were fundamentally incapable of originality. This part in particular tells me you really don’t know how the tech is working.
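
            For anyone who hasn’t met the jargon, here is a minimal, purely illustrative sketch of what “a rolling context window paired with RAG” could look like; the toy word-overlap “retrieval” and every name below are made up for the example, not any real product’s pipeline:

            ```python
            # Illustrative only: rolling context window + naive retrieval (RAG-style).
            from collections import deque

            def embed(text):
                # Stand-in "embedding": a bag of lowercased words; real systems use vector embeddings.
                return set(text.lower().split())

            def similarity(a, b):
                return len(a & b) / (len(a | b) or 1)

            def llm(prompt):
                # Placeholder for a real text-generation call (an API request or a local model).
                return "[next chapter, continuing from: " + prompt[-60:] + "]"

            archive = []               # long-term store of older chapters (the retrieval side)
            recent = deque(maxlen=3)   # rolling context window: only the last few chapters

            def write_chapter(outline_point):
                query = embed(outline_point)
                # Pull the most relevant older chapters back in, instead of keeping everything in context.
                retrieved = sorted(archive, key=lambda item: similarity(item[0], query), reverse=True)[:2]
                prompt = "\n\n".join(text for _, text in retrieved) \
                         + "\n\n" + "\n\n".join(recent) \
                         + "\n\nNext beat: " + outline_point
                chapter = llm(prompt)
                archive.append((embed(chapter), chapter))
                recent.append(chapter)
                return chapter

            for beat in ["the hero leaves home", "an old ally returns", "the final confrontation"]:
                print(write_chapter(beat))
            ```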

            • rambaroo@lemmy.world

              A rolling context window isn’t a real solution and will not produce works that even come close to matching the quality of human writers. That’s like having a writer who can only remember the last 100 pages they wrote.

              The tech is trained on human created data. Are you suggesting LLMs are capable of creativity and imagination? Lmao - and you try to act like I’m the one who’s full of shit.

              • kromem@lemmy.world

                That’s like having a writer who can only remember the last 100 pages they wrote.

                That’s why you pair it with RAG.

                The tech is trained on human created data. Are you suggesting LLMs are capable of creativity and imagination?

                They are trained by iterating through network configurations until there’s diminishing returns on how accurately they can complete that human created data.

                But they don’t just memorize the data. They develop the capabilities to extend it.

                So yes, they absolutely are capable of generating original content that’s not in the training set. As has been demonstrated over and over. From explaining jokes not found in the training data, solving riddles not found in it, or combining different concepts to result in a new synthesis not found in the original data.

                What do you think it’s doing? Copy/pasting or something?

    • Knusper@feddit.de

      I think it will eventually become obsolete, because we keep changing what ‘AI’ means, but current AI largely just regurgitates patterns; it doesn’t yet have a way of ‘listening’ to a song and actually judging whether it’s good or bad.

      So, it may expertly regurgitate the pattern that makes up a good song, but humans spend a lot of time listening to perfect every little aspect before something becomes an excellent song, and I feel like that will be lost on the pattern regurgitating machine, if it’s forced to deviate from what a human composed.

      • TopRamenBinLaden@sh.itjust.works

        I have seen a couple of successful artists in different genres admit to using AI to help them write some of their most popular songs, and describe its use in the songwriting process. You hit the nail on the head with AI not being able to tell if something is good or bad. It takes a human ear for that.

        AI is good at coming up with random melodies, chord progressions, and motifs, but it is not nearly as good at composing and producing as humans are, yet. AI is just going to be another instrument for musicians to use, in its current form.

        • Knusper@feddit.de

          Yeah, I do imagine it won’t be just AIs either. And then, it will obviously be possible to take it to an excellent song, given enough human hours invested.

          I do wonder how useful it will actually be for that, though. Oftentimes, it really fucks you up to try to go from good to excellent, and it can be freeing to start fresh instead. In particular, ‘excellent’ does require creative ideas, which are easier for humans to generate with a fresh start.
          But AI may allow us to start over fresh more readily, if it can just give us a full song when needed. Maybe it will even be possible to give it some of those creative snippets and ask it to flesh it all out. We’ll have to see…

    • takeda@lemmy.world

      As someone doing software engineering whose company jumped on the AI bandwagon and got us GitHub Copilot: after using it for a while, I think the overall experience is actually a net negative. Yes, sometimes it gets things right and provides a correct solution, but often I can write much more concise code myself. Many times it provides code that looks like it is correct, but after looking in more detail it actually is wrong. So now I need to be on guard about what code it inserts, which kills all the time it supposedly saved me. It makes things harder because the code does look like it might work.

      It is like pair programming with a complete moron that is very good at picking up patterns and trying to use them in the following code. So if you do a lot of copy and paste, I think it will help.

      I think this technology can make bad programmers suck less at programming. I think the LLM problem is that it was trained on existing works, and the way it works is that its goal is to convince a human that the result was created by another human, but it isn’t capable of doing any actual reasoning.

      • TrickDacy@lemmy.world

        Wow, my experience has been pretty much the exact opposite of this. Copilot is amazing and I’d rather not go without it ever again

        Edit: for the life of me I’ll never understand people. This comment got a bunch of downvotes and yet some douchebag who blindly accuses me of being bad at my job gets upvoted. Fuck people.

        • takeda@lemmy.world

          What language do you program in and what kind of code do you develop? Before Copilot, were you frequently searching for answers on Stack Overflow?

          • TrickDacy@lemmy.world

            Typescript, JavaScript, php, bash, scss/css… And isn’t every dev on SO or at least a search engine with some frequency?

            I don’t actually think the reason I like it is dependent on the language at all. The reason I like it is that it will often basically notice what I’m doing and save me from typing a repetitive 3-5 line block. Things like that and if I can’t remember a specific syntax, I’ve found that I can write a comment saying what the following code will do and boom, suddenly copilot writes a version of that code close to what I would’ve written.

            I mean you’re right that it can write stuff that doesn’t work, I just find that I can usually filter that out pretty quickly. The times I can’t, I’m a bit stuck anyway and it’s worth a shot to try their mysterious solution. But since I always treat its solutions with skepticism I haven’t been bitten yet.

            For me, copilot just takes the monotony out of the job. Instead of spending as much time writing boring stuff I get to focus on the more interesting parts

          • TrickDacy@lemmy.world

            Maybe you aren’t that good at being a human, this comment being good evidence of that

        • interceder270@lemmy.world

          Ignore them. At some point you gotta realize most people are losers trying to bring others down with them.

          Do what works for you :)

          • TrickDacy@lemmy.world

            I appreciate this comment. You inspire me to not only ignore more assholes, but maybe I’ll also be one myself less often :)

        • deur@feddit.nl

          I’ll blindly accuse you of being bad at your job too, bud.

  • remus989@sh.itjust.works

    Do people actually care what Obama has to say about AI? I’m just having a hard time seeing where his skillset overlaps with this topic.

    • EnderMB@lemmy.world

      Probably as much as I care about most other people’s thoughts on AI. As someone that works in AI, 99% of the people making noise about it know fuck all about it, and are probably just as qualified as Barack Obama to have an opinion on it.

      • krazzyk@lemmy.world

        What do you do exactly in AI? I’m a software engineer interested in getting involved.

        • EnderMB@lemmy.world

          I work for Amazon as a software engineer, and primarily work on a mixture of LLMs and compositional models. I work mostly with scientists and legal entities to ensure that we are able to reduce our footprint of invalid data (i.e. anything that includes deleted customer data, anything that is blocked online, things that are blocked in specific countries, etc). It’s basically data prep for training and evaluation, alongside in-model validation for specific patterns that indicate a model contains data it shouldn’t have (and then releasing a model that doesn’t have that data within a tight ETA).

          It can be interesting at times, but the genuinely interesting work seems to happen on the science side of things. They do some cool stuff, but have their own battles to fight.

          • krazzyk@lemmy.world

            That sounds cool, I’ve had roles that were heavy on data cleansing, although never on something so interesting. What languages/frameworks are used for transforming the data? I understand if you can’t go into too much detail.

            I did wonder how much software engineers contribute in the field; is it the scientists doing the really interesting stuff when it comes to AI? Not surprising, I guess 😂

            I’m a full-stack engineer and I was thinking of getting into contracting, but now I’m not so sure. I don’t know enough about AI’s potential coding capabilities to know whether I should be concerned about job security in the short or long term.

            Getting involved in AI in some capacity seems like a smart move though…

            • EnderMB@lemmy.world

              We do a lot of orchestration of closed environments, so that we can access critical data without worry of leaks. We use Spark and Scala for most of our applications, with step functions and custom EC2 instances to host our environments. This way, we build verticals that can scale with the amount of data we process.
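
              Purely as an illustration of the kind of data-prep filtering described above (they said Scala; this is the same idea sketched in PySpark, and the paths, column names and block lists are invented, not Amazon’s actual pipeline):

              ```python
              # Hypothetical data-prep filter: drop deleted-customer, region-blocked and online-blocked rows.
              from pyspark.sql import SparkSession
              from pyspark.sql import functions as F

              spark = SparkSession.builder.appName("training-data-prep").getOrCreate()

              records = spark.read.parquet("s3://example-bucket/raw-training-data/")         # made-up path
              deleted_ids = spark.read.parquet("s3://example-bucket/deleted-customer-ids/")  # made-up path
              blocked_countries = ["XX", "YY"]                                               # placeholder codes

              clean = (
                  records
                  .join(deleted_ids, on="customer_id", how="left_anti")  # remove deleted customers' data
                  .filter(~F.col("country").isin(blocked_countries))     # remove rows blocked in specific countries
                  .filter(~F.col("blocked_online"))                      # remove content that is blocked online
              )

              clean.write.mode("overwrite").parquet("s3://example-bucket/clean-training-data/")
              ```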

              If I’m perfectly honest, I don’t know how smart a move it is, considering our org just went through layoffs. We’re popular right now, but who knows how long for.

              It can be interesting at times, but to be honest if I were really interested in it, I would go back and get my PhD so I could actually contribute. Sometimes, it feels like SWE’s are support roles, and science managers only really care that we are unblocking scientists from their work. They rarely give a shit if we release anything cool.

        • unexpectedteapot@lemmy.ml

          There is a tad bit of difference between caring about an opinion and tolerating one. Obama’s opinions on AI are unqualified pop culture nonsense. They wouldn’t be relevant in an actual discussion that would cite relevant technical, economical and philosophical aspects of AI as points.

          • lugal@lemmy.world

            Sure, care about it or don’t, I don’t care. It was the “being qualified to have an opinion” bit I didn’t like. I don’t have to be qualified to have an opinion, and I can write an opinion piece, and sure enough, fewer people will read it than Obama’s. I might not be qualified to teach on that subject, but everyone is qualified to form their own opinion.

            But maybe that’s just overly pedantic on my side. You are qualified to have a different opinion.

      • Queen HawlSera@lemm.ee

        I know this was once said about the automobile, but I am confident in the knowledge that AI is just a passing fad

        • ricecake@sh.itjust.works

          Why? It’s a tool like any other, and we’re unlikely to stop using it.

          Right now there’s a lot of hype because some tech that made a marked impact of consumers was developed, and that’s likely to ease off a bit, but the actual AI and machine learning technology has been a thing for years before that hype, and will continue after the hype.

          Much like voice-driven digital assistants, it’s unlikely to redefine how we interact with technology, but every other way I set a short timer has been obsoleted at this point, and I’m betting that autocomplete having insight into what you’re writing will just be the norm going forward.

            • Trantarius@programming.dev

              The Chinese room argument doesn’t have anything to do with usefulness. It’s about whether or not a computer that passes the Turing test is conscious. Besides, the argument is a ridiculous one to begin with. It assumes that if a subcomponent of a system (i.e. the human) lacks “understanding”, then the system itself (the human + the room + the program) lacks understanding.

              • ricecake@sh.itjust.works

                Anything else aside, I wouldn’t be so critical of the thought experiment. It’s from 1980 and was intended as an argument against the idea that symbolic manipulation is all that’s required for a computer to have understanding of language.
                The fact that it’s a thought experiment examining where understanding originates in a system, and that it has been given serious reply and discussion for 43 years, makes me feel like it’s not ridiculous.

                https://plato.stanford.edu/entries/chinese-room/#LargPhilIssu

            • ricecake@sh.itjust.works

              What?

              At best you’re arguing that because it’s not conscious it’s not useful, which… No.
              My car isn’t conscious and it’s perfectly useful.

              A system that can analyze patterns and either identify instances of the pattern or extrapolate on the pattern is extremely useful. It’s the “hard but boring” part of a lot of human endeavors.

              We’re gonna see it wane as a key marketing point at some point, but it’s been in use for years and it’s gonna keep being in use for a while.

              • aesthelete@lemmy.world

                A system that can analyze patterns and either identify instances of the pattern or extrapolate on the pattern is extremely useful. It’s the “hard but boring” part of a lot of human endeavors.

                I agree with most of what you’re saying here, but just wanted to add that another really hard part of a lot of human endeavors is actual prediction, which none of these things (despite their names) actually do.

                These technologies are fine for figuring out that you often buy avocados when you buy tortillas, but they were utter shit at predicting anything about, for instance, pandemic supply chains…and I think that’s at least partially because they expect (given the input data and the techniques that drive them) the future to be very similar to the past. Which holds ok, until it very much doesn’t anymore.

                • jasondj@ttrpg.network

                  I’m sorry, they aren’t good at predicting?

                  My man, do you have any idea how modern meteorology works?

                  A ton of data gets dumped into a ton of different systems. That data gets analyzed against a bunch of different models to produce forecasts. The median of all those models is essentially what makes it into the forecast on the news.
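
                  As a toy illustration of that “median of the models” idea (the numbers are made up):

                  ```python
                  import numpy as np

                  # Toy ensemble: each "model" forecasts tomorrow's high temperature in °C.
                  model_forecasts = np.array([21.0, 23.5, 22.0, 19.5, 24.0, 22.5])
                  print(np.median(model_forecasts))  # 22.25 -> the kind of consensus number a broadcast reports
                  ```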

                • ricecake@sh.itjust.works

                  Well, I would disagree that they don’t predict things. That’s entirely what LLMs and such are.

                  Making predictions about global supply chains isn’t the “hard but boring” type of problem I was talking about.
                  Circling a defect, putting log messages under the right label, or things like that is what it’s suited for.

                  Nothing is good at predicting global supply chain issues. It’s unreasonable to expect AI to be good at it when we are also shit at it.

            • SCB@lemmy.world

              You not having a job where you work at a level to see how useful AI is just means you don’t have a terribly important job.

              • aesthelete@lemmy.world

                What a brain-drained asshole take to have. But I’ve seen your name before in my replies and it makes sense that you’d have it.

                AI is useful for filling out quarterly goal statements at my job, and boy are those terribly important… 😆

      • Rooskie91@discuss.online

        Absolutely not. We need to learn the difference between intelligence and expertise. Is Obama an intelligent person? Of course. Is he allowed to have and voice an opinion? Sure, it’s a free country. Does that mean that his opinion is informed by expertise and should dictate people’s actions and therefore the direction of an industry? No.

        This is the same logic that allows right wing ideologues to become legitimate sources of information. A casual interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than people who “sound like they know what they’re talking about”.

        • aesthelete@lemmy.world

          This is the same logic that allows right wing ideologues to become legitimate sources of information. A casual interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than people who “sound like they know what they’re talking about”.

          And your logic is the same followed by government agencies when they effectively agree to regulatory capture because all of the industry experts work at this company, so why not just let the company write the rulebook? 🤔

          I personally don’t believe we need “industry experts” in every new, emerging type of tech to be the sole voices considered about them because that’s how we largely arrived at the great enshitterment we’re already experiencing.

          Edit: It’s really quite a baffling take (given a moment’s thought) that the big problem and/or a large problem facing America is that we aren’t cozy enough with “industry experts”. Industry practically writes the policy in this country, and the only places where we have any kind of great debate (e.g. net neutrality, encryption) are where there are conflicting industry concerns.

    • Mango@lemmy.world

      I’m just a dude who does general labor and have lots of insights about AI just because I’m interested and smart. People tend to come to me just to hear what I have to say.

      Now look at Obama. He’s all of that and much more in the eyes of a society that’s put Obama in the spotlight. He can talk about totally boring stuff and people will still respect his opinion.

  • BeautifulMind ♾️@lemmy.world

    But do we really need AI to generate art?

    Why can’t AI be used to automate useful work nobody wants to do, instead of being a way for capital to automate skilled labor out of high-paying jobs?

    • notapantsday@feddit.de

      Because AI is unpredictable. Which is not a big issue for art, because you can immediately see any flaws and if you can’t, it doesn’t matter.

      But for actually useful work, you don’t want to find out that the AI programmer completely made up a few lines of code that are only causing problems when the airplane is flying with a 32° bank angle on a saturday with a prime number for a date.

    • logicbomb@lemmy.world

      It’s virtually guaranteed that at some point, robots and/or AI will be capable of doing almost every human job. And then there will be a time when they can do every job better than any human.

      I wonder how people will react. Will they become lazy? Depressed? Just have sex all the time? Just have sex with robots all the time?

      • Echo Dot@feddit.uk

        It depends on whether the government introduces universal basic income or not. If they do, I couldn’t care less if I don’t have a job. The only reason I have a job is so that I have money. I don’t do it for some kind of fulfillment in my life, because it isn’t a fulfilling job.

        Just have sex all the time?

        I’m confused about how this one tracks. Is the AI going to make me more attractive or is it just going to lower everyone else’s standards?

        • logicbomb@lemmy.world

          Like all other animals, humans evolved to be more likely to procreate. There is an argument that all of that other stuff we do is just in support of procreation. But in a way, it’s also a distraction and an impediment to procreation. It just so happens that we’ve been unable to avoid doing that other stuff so far.

          • Echo Dot@feddit.uk

            Even animals do other things though; they play and socialise.

            Personally if AI makes working irrelevant I’m just going to spend most of my time in Disneyland.

    • Stuka@lemmy.world

      You talk like AI is a singular entity that can only do one thing.

    • interceder270@lemmy.world

      Why should we stifle technological progress so people can still do jobs that can be done with a machine?

      If they still want to create art, nobody is stopping them. If they want to get paid, then they need to do something useful for society.

      • BeautifulMind ♾️@lemmy.world

        Nobody’s calling to stifle technology or progress here. We could develop AI to do anything. The question is what should that be?

        There’s a distinction to be drawn between ‘things that are profitable to do and thus there isn’t any shortage of’ and ‘things that aren’t profitable and so there’s a shortage of it’ here. Today, the de facto measure of ‘is it useful for society?’ seems to be the former, and that doesn’t measure what’s useful for society; it’s what’s useful for people that have money to burn.

        Fundamentally, there isn’t a shortage of art, or copy writers, or software developers, or the things they do- what there is, that AI promises to change, is the inconvenient need to deal with (and pay) artisans or laborers to do it. If the alternative is for AI vendors to be paid instead of working people, is it really the public interest we’re talking about, or the interests of corporate management that would rather pocket the difference in cost between paying labor vs. AI?

    • Riskable@programming.dev

      AI is an enabler. I have no patience for sitting and drawing for hours on end to make extremely detailed art, but I’m a creative individual and would love to have the power to bring my ideas into reality. That’s what AI art does.

      The problem with that, of course, is it means that if I’m really serious about an idea I won’t be paying some artist(s) to make it happen. I’ll just whip open an AI art prompt (e.g. Stable Diffusion or any online AI art generators) and go to town.

      It often takes a lot of iteration and messing with the prompt, but eventually you’ll get what you want (90% of the time). Right now you need a decent PC to run Stable Diffusion (got 8GB of VRAM? You too can generate all the AI images you want 👍) but eventually people’s cell phones in their pockets will be even better at it.
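
      For the curious, this is roughly what “running it locally” looks like in code, using Hugging Face’s diffusers library rather than the automatic1111 web UI; the model name and options below are just common defaults I’m assuming, not a recommendation:

      ```python
      # Rough local Stable Diffusion sketch with the diffusers library (assumed defaults).
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )
      pipe = pipe.to("cuda")           # needs an NVIDIA GPU
      pipe.enable_attention_slicing()  # trades a little speed for lower VRAM use

      image = pipe("a scroll of truth, four-panel comic style").images[0]
      image.save("output.png")
      ```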

      Civitai is having a contest to make a new 404 error page graphic using AI. Go have a look at some of the entries:

      https://civitai.com/collections/104601

      I made one that’s supposed to be like the Scroll of Truth meme:

      Scroll of Truth meme 404 error page

      I made that on my own PC with my limited art skills using nothing but automatic1111 stable diffusion web UI and Krita. It took me like an hour of trying out various prompts and models before I had all the images I wanted then just a few minutes in Krita to put them into a 4-panel comic format.

      If I wanted to make something like that without AI it just would never have happened.

      • KaJedBear@lemmy.world

        Not that it really matters in this case, but AI art just seems inconsistent in silly ways. That girl’s shirt changes each frame, her hair gets more braided, and the 3rd frame has 2 left hands. I guess at first glance you don’t really notice, but it’s not hard to spot and it hurts my brain once I do.

        • erwan@lemmy.ml

          Also, her facial expression is so generic it doesn’t convey the meaning like the original did.

          Honestly if I didn’t know the original I wouldn’t understand the point of this one.

        • lloram239@feddit.de

          Not that it really matters in this case, but AI art just seems inconsistent in silly ways.

          It’s to be expected given that the AI has no access to the previous image.

    • Thorny_Insight@lemm.ee

      But do we really need AI to generate art?

      No, but we want it to. It’s probably only a matter of time until AI can do anything better than humans can, including art. Now if there’s an option to view great art done by humans or amazing art done by AI, I’ll go with the latter. It can already generate better photographs than I can capture with my camera, but I couldn’t care less. It takes zero joy out of my photography hobby. I’m not doing it for money.

    • thoughtorgan@lemmy.world

      We don’t need it. It’s just cool tech. I’ve messed around with stable diffusion a lot and it’s a cool tool.

    • IchNichtenLichten@lemmy.world

      I don’t think it’s really helpful to group a bunch of different technologies under the banner of A.I. but most people aren’t knowledgeable enough to make the distinction between software that can analyze a medical scan to tell me if I have cancer and a fancy chat bot.

      • lloram239@feddit.de

        The whole point of AI is that those systems aren’t fundamentally different. There is little to no human expertise that goes into those systems; it’s all self-learned from the data. That’s why we are getting AIs that can do images, music, chess, Go, chatbots, etc. all in short order. None of them are built on decades’ worth of human expertise in music or art; they are simply created by throwing data at the problem and letting the AI algorithms figure out the rest.

        • BeautifulMind ♾️@lemmy.world

          There is little to no human expertise that goes into those systems, it’s all self learned from the data.

          The human expertise is in the data. There’s no such thing as spontaneous AI generation of expertise from nothing. If you train up an AI on information that doesn’t have it, the AI won’t learn it. In a very real way, the profit margins of AI-generated content rest wholly on its ability to consume and derive output from source material developed by unpaid experts.

          Also, when the data is the output of people with biases, the AI will do the same.

          • lloram239@feddit.de

            The human expertise is in the data.

            There are no art lessons in the training data, just labeled images. The AI has to figure out how to actually draw by itself. Even the labels on the data aren’t strictly needed; they are just there so the humans can interact with the AI by text.

            Same with AlphaZero: it didn’t learn to play Go from humans, it learned by playing against itself, and it not only beat humans but also previous versions that were still based on records of human Go games.

            That’s the bitter lesson of AI research: Throwing data and computation at the problem gives you much better results than human experts.

            And we have barely even begun to explore this. What ChatGPT does is still just reciting information from books and websites, it can’t interact with the real world to learn by itself. It being based on books written by human experts is not a benefit, but what’s holding it back.

      • Inmate@lemmy.world

        If he’s an unqualified bystander, then what the fuck are you?

        I’m always surprised that the people with all the answers only share them with thirty other assholes on the Internet.

        I’m confident a 14 year old can write their own AI, maybe even a smart 10 year old.

        Here’s instructions for kids:

        https://youtu.be/XJ7HLz9VYz0?si=1QN3fqT03HSMufib

        You think Obama can’t wrap his head around a little algebra?

        Why, when speaking intelligently and thoughtfully on the subject, is he so wrong in his assessment, when you, in one lazy sentence, are so right?

        I’m really worried about would-be wise people just throwing in the towel cause they don’t know how much better they could be with a little discipline, and settle for being clever here and there.

        • lloram239@feddit.de

          Why, when speaking intelligently and thoughtfully in the subject, is he so wrong in his assessment, when you, in one lazy sentence, are so right?

          Obama is employing good old human exceptionalism and moving the goalposts. A tried and true method of argumentation that has continued to fail for the last 50+ years when it comes to AI. “AI is good at X, but not Y” becomes “AI is good at X and Y, but not Z” the next year. Focus on a tiny niche that AI hasn’t covered yet, while ignoring the pace at which AI is advancing. Wasn’t too long ago that people were proclaiming that computers could never be creative. Nowadays that switched to “but it can’t beat the human masters”. Well, guess what? That held true for neither Chess nor Go, and it won’t work out for Bob Dylan music either. Be prepared for a future where AI is better at everything. It will come, and much sooner than people expect.

          It’s also worth keeping in mind the quantity of AI generated content. I still hear tons of artists talk as if AI were competing with them on a level playing field. But in the time they finish one image, AI finished thousands or even millions. This is not just about AI replacing the human, but completely shifting how we deal with information in general. Something like ChatGPT isn’t interesting because it can write better websites than a human, but because it completely bypasses the need to visit websites in the first place. You ask the AI and the AI delivers the answers. There is no intermediate step where knowledge needs to get dumped into a static website or a book.

        • Fisch@lemmy.ml

          As far as I know, Obama has nothing to do with IT and doesn’t have a big interest in it. A lot of people on here are probably more qualified than he is when it comes to these topics simply because they spent a lot of their free time learning about it.

    • Sagifurius@lemm.ee

      Because he’s a world leader, and AI programs are answering search engine queries with what you want to hear now, not actual answers. Ain’t no way he’s unaware of that.

    • Inmate@lemmy.world

      Because you can teach a teen to do it in two weeks. He was a constitutional law professor, as well as the first elected African-American president in the United States. I learned LLMs in a couple months and I never used a comp until 2021. Why are you gatekeeping?

      • Daxtron2@lemmy.ml

        Using the end product and having any idea how it works are two VERY different things.

        • Inmate@lemmy.world

          I agree, my argument is that both aren’t challenging for even the average person if they really want/need to understand how these models produce refined noise informed by human patterns.

          There are electricians everywhere you know.

          This isn’t a random person thoughtlessly yelling one-sentence nonsense pablum on the Internet like you.

          You think this person can’t understand something as straightforward as programming, coming from law?

          https://en.wikipedia.org/wiki/Barack_Obama

          Please link your Wikipedia below 🫠

          • Daxtron2@lemmy.ml

            It’s a bit more complicated than you’re making it out to be lmfao, there’s a reason it’s only really been viable for the past few years.

            • skulkingaround@sh.itjust.works

              The principles are really easy though. At its core, neural nets are just a bunch of big matrix multiplication operations. Training is still fundamentally gradient descent, which while it is a fairly new concept in the grand scheme of things, isn’t super hard to understand.

              The progress in recent years is primarily due to better hardware and optimizations at the low levels that don’t directly have anything to do with machine learning.

              We’ve also gotten a lot better at combining those fundamentals in creative ways to do stuff like GANs.
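
                A toy, self-contained illustration of those fundamentals (one “layer”, a mean-squared-error loss, nothing like a real model, but the moving parts are the same):

                ```python
                import numpy as np

                # Learn y = 2*x1 - 3*x2 from noisy samples with a single weight vector.
                rng = np.random.default_rng(0)
                X = rng.normal(size=(256, 2))
                y = X @ np.array([2.0, -3.0]) + rng.normal(scale=0.1, size=256)

                W = np.zeros(2)   # the "network": one layer, no activation
                lr = 0.1          # learning rate

                for _ in range(200):
                    pred = X @ W                           # forward pass: matrix multiplication
                    grad = 2 * X.T @ (pred - y) / len(y)   # gradient of mean squared error w.r.t. W
                    W -= lr * grad                         # gradient descent step

                print(W)   # ends up close to [2, -3]
                ```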

    • lledrtx@lemmy.world

      AI researcher (PhD) here and for what it’s worth, Obama got it extremely right. I saw this and went “holy shit, he gets it”

      • gmtom@lemmy.world

        Yeah, I don’t believe you at all. I got my master’s in AI 8 years ago and have been working in the field ever since, and no one with any knowledge would agree with you at all. In fact I showed a couple of my colleagues the headline of this article and they both just laughed.

      • Azhad@lemmy.world

        If you don’t think AI will get there and surpass everything humans have done in the past, you should change careers.

        • lledrtx@lemmy.world

          I’m saying this because I do this for a living. It has become obvious to everyone in research (for example - https://arxiv.org/abs/2311.00059) that "AI"s don’t understand what they are outputting. The secret sauce with all these large models is the data scale. That is, we have not had real algorithmic breakthroughs - it’s just model scale and data scale. So we can make models that mimic human language and music etc but to go beyond, we need multiple fundamentally different breakthroughs. There is a ton of research attention now so it might happen, but it’s not guaranteed - the improvements we’ve seen in the past few years will plateau as data plateaus (we are already there according to some, i.e we’ve used all the data on the Internet). Also, this - https://arxiv.org/abs/2305.17493v2

          • Azhad@lemmy.world

            You do it for a living and you can’t even understand what a general AI is. Alas, I long since understood that mostly everyone is profoundly incompetent at their own jobs.

    • davidgro@lemmy.world

      While I agree, it’s also the case that those …Creations… are extremely human-directed. As far as I know the maker is not only training the models for the voices, but also specifying each output word, and then its timing and pitch(es).

      And of course placing the siren whistle.

  • eronth@lemmy.world

    While reassuring for many to hear, that’s only going to be true for so long. Eventually it’s going to be real fucking good at making “real” music. We need to be preparing for those advancements rather than acting like they’ll never come.

    • IDontHavePantsOn@lemm.ee

      I feel very reassured to hear that from the AI expert / musical virtuoso himself, 62-year-old former United States President Barack Obama.

    • 31337@sh.itjust.works

      To make “real” music, AI will probably need a lot of help. Image generators and chat bots seem to have their own, very boring style. I’ve seen videos of artists using AI tools in their workflow, but it’s still a very involved process. I think it will just be another tool for musicians and sound engineers.

      • eronth@lemmy.world

        In the immediate term, yeah, I 100% agree. However, I’m not thinking we should bank on that being true forever.

  • kandoh@reddthat.com

    One of my jobs involved updating blogs for small businesses. I had a Shutterstock subscription for the images that go along with these blog posts. For this task, I think AI-generated images work a lot better than stock photography.

    • boogetyboo@aussie.zone

      There’s some recruitment company advertising jobs on LinkedIn. All the pictures are clearly AI generated and they’re terrifying. Uncanny Valley freaks grinning at you from your screen.

    • b3nsn0w@pricefield.org

      you can already api into chatgpt and dall-e 3 as one cohesive service, and make a system in an afternoon’s work that reads the article, decides on a thumbnail, and automatically generates one. the whole thing costs like 8 cents per article.
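
      roughly what that afternoon of work could look like with the openai python client (v1+); the model names and prompts here are just placeholders, not the exact setup:

      ```python
      # rough sketch: chat model picks a thumbnail concept, dall-e 3 renders it
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def thumbnail_for(article_text):
          # 1) have a chat model decide what the thumbnail should depict
          idea = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system", "content": "Describe one blog thumbnail image in a single sentence."},
                  {"role": "user", "content": article_text[:4000]},
              ],
          ).choices[0].message.content

          # 2) hand that description to dall-e 3 and return the image url
          image = client.images.generate(model="dall-e-3", prompt=idea, size="1024x1024", n=1)
          return image.data[0].url
      ```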

      • Fisch@lemmy.ml

        You could also self host Stable Diffusion to save some money

        Edit: Or is it free to use with ChatGPT Premium (or whatever it’s called)? Then that would actually be cheaper

        • b3nsn0w@pricefield.org

          no, the premium stuff doesn’t give you api access. which is total bs, but yeah, it’s only for that grey interface. (i’m also quite salty that the playground has no easy to access image inputs but that’s beside the point)

          you’re completely right about self-hosting sd, it’s just a matter of prompting. sd workflows tend to get a little more experimental but i guess you could still make chatgpt write a few prompts that are close to correct and just manually rerun if an image failed

          • Fisch@lemmy.ml

            Maybe ChatGPT knows how to write SD prompts at this point. You could try just telling it to generate a prompt specifically for SD.

    • S_204@lemmy.world

      My office is using AI to do this year’s xmas card. It’s pretty cool.

  • the_q@lemmy.world

    Young people think all this AI stuff is great and older folks are suspicious. I think older folks are right this time.

    • interceder270@lemmy.world

      Eh. Hard to take people’s opinion on art seriously considering what’s popular.

      This is an interesting crossroads where greedy creators have to fight against greedy owners.

    • Blueberrydreamer@lemmynsfw.com

      My experience has been the opposite. The boomers think it’s gonna do everything for them, and the young people I know think it’s gonna destroy the world.

    • desconectado@lemm.ee

      Like any tool, it’s as great as its user. I think younger generations are probably more eager to explore and expand, but it’s ok to be suspicious when it’s used incorrectly.

      AI is great when used for some specific applications, but I had a discussion last week with someone asking chatgpt for immigration advice… Ehh, no thanks, I’d rather talk to an actual expert.

    • egeres@lemmy.world

      I agree with the conclusions of the boomers, but for very different reasons. I think long-term AI will produce vastly more harm than good. Just this week we got a headline about Google, which is a serious and grown company that already makes billions, being up to some fuckery against Firefox; Facebook has been fined a million times for not respecting privacy, and Amazon workers have to pee in bottles. To my sadness, all movement against the integration of AI in weapons, basically to “kill people”, will be very noble but won’t do jackshit. Do we think China/Russia are going to give a single fuck about this? Even the US will start selling AI drones when it becomes normalized. And that’s just AI in war, but there’s another trillion things where AI will fuck things up: artists will be devalued, misinformation will reach a new all-time high, captchas are long dead making the internet a more polluted place, surveillance will be more toxic, the list goes on.

  • Loqzer@lemmings.world

    Definitely need more people to tell me about AI and what it will be capable of. Make a daily show so that every shitty celebrity can tell us about AI; there might still be plenty of word combinations that haven’t been used!

  • takeda@lemmy.world

    Elevator music, as well as the mainstream music that the majority of people listen to, like pop etc.

    That music is already very formulaic, almost as if it were generated by AI.

  • Mr PoopyButthole@lemm.ee

    Already likely to be untrue, but honestly I’d happily sign up for a world where “hold music” isn’t the same 20-second loop of shit jazz.

  • spudwart@spudwart.com

    Okay, I love the elevator music idea as a gag in media.

    But I’ve never been in an elevator that has ever played music, and I can completely understand why. Elevator music sounds obnoxious.