• Grandwolf319@sh.itjust.works
    25 days ago

    I read this comment chain and... no? They are giving you actual criticism about the fundamental behaviour of the technology.

    The person basically explained the broken telephone game and how “summarizing” will always have data loss by definition, and you just responded with:

    In this case it actually summarized my post (I guess you could make the case that my post is an opinion that's shared by many people, so Forer-y in that sense)

    Just because you couldn’t notice the data loss doesn’t mean the principle isn’t true.

    You're basically saying that translating something from English to Spanish and then back to English again is flawless because it worked for some words for you.

    • cybersandwich@lemmy.world
      25 days ago

      I’m not saying anything you guys are saying that I’m saying. Wtf is happening? I never said anything about data loss. I never said I wanted people using LLMs to email each other. So this comment chain is a bunch of internet commenters making weird cherry-picked, straw-man arguments and misrepresenting or miscomprehending what I’m saying.

      Legitimately, the LLM grokked the gist of my comment while you all are arguing against your own straw-man arguments.

      • Grandwolf319@sh.itjust.works
        25 days ago

        I have it parse huge policy memos into things I actually might give a shit about.

        I’ve used it to run through a bunch of semi-structured data on documents and pull relevant data. It’s not necessarily precise but it’s accurate enough for my use case.

        Those are two cases from your original comment that would have data loss. I get that you didn’t use the phrase “data loss”, but that doesn’t mean your examples don’t have that flaw.

        Sorry if you view all this as Lemmy being “anti-AI”. For me, I’m a big fan of ML and what things like image recognition can do. I’m just not a fan of LLMs becoming so overhyped that they’ve basically given the other ML use cases a bad name.