• ReallyZen@lemmy.ml · 17 points · edited · 7 months ago

    DDG has had its non-tracking version online for a while now. Use the !ai bang to get to it.

    Also, you have the choice of Claude instead of ChatGPT, and your queries aren’t harvested for further AI training.

    In any case, it’s a completely separate tab; it isn’t mingled into the general search results.

    • brie@beehaw.org · 15 points · 6 months ago

      DDG’s AI chat isn’t a search engine. It’s just a chat interface for GPT-3.5 Turbo and Claude, without any search access. It’s also not very up to date.

      GPT-3.5 Turbo:

      Who is the PM of the UK?

      As of my last update, the Prime Minister of the United Kingdom is Boris Johnson. Please note that political positions can change, so I recommend checking the latest news sources for the most up-to-date information.

      Is Queen Elizabeth still alive?

      Yes, as of my last update, Queen Elizabeth II is still alive.

      The Claude version doesn’t fare any better:

      Is Queen Elizabeth still alive?

      Yes, Queen Elizabeth II is still alive. She is the current and longest-reigning monarch of the United Kingdom, having ascended to the throne in 1952. As of 2023, she is 96 years old.

      • averyminya@beehaw.org · 7 points · 6 months ago

        These are never the sorts of answers I would want to ask an AI for anyway (not a slight against your example; this is a common thing I see).

        @u_tamtam@programming.dev

        I also haven’t seen any practical advantage to using LLM prompts vs. traditional search engines in the general case:

        For general temporary facts I would agree. Take Amazon’s summarized reviews: it can be handy to know that “adhesive issues” are commonly cited… but I’d learn that from reading the reviews anyway. A lot of the time, it comes down to AI being used where the human should do their own due diligence. I will even admit to this in the very next paragraph.

        I find AI to be especially good at things I am not, like math. I am very good at estimations, and I can work some things out over time. However, I am much slower than asking “I currently make 2.1-Z a month and I have 397-Z earning that interest. I would like to make 65-Z a month; how much do I need earning interest to make that?” (roughly 13,100, btw) and getting that answer along with the formula showing its work. It spits out the answer in the time it takes me to phrase the verbal question, both of which are far faster than pulling up a calculator and doing the same math. It’s not that I can’t; it’s that it takes time that could be better spent actually doing the thing I want to do, which is figuring out how many months, based on what I earn, it will take to reach that number.
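        The question above is just a proportion, so it can be sanity-checked without an LLM. A minimal sketch, assuming the monthly interest scales linearly with the balance (the “Z” amounts are the commenter’s anonymized currency units, and `principal_needed` is a made-up helper name; note the simple proportion lands a bit below the rough 13,100 quoted, which may reflect a different rate assumption or rounding):

```python
def principal_needed(current_principal, current_monthly_interest, target_monthly_interest):
    """Assuming interest scales linearly with the balance, return the
    balance needed to earn the target monthly interest."""
    monthly_rate = current_monthly_interest / current_principal
    return target_monthly_interest / monthly_rate

# 397-Z currently earns 2.1-Z a month; how much must earn interest
# to bring in 65-Z a month?
needed = principal_needed(397, 2.1, 65)
print(round(needed))  # simple proportion gives about 12288
```

        The implied monthly rate here is 2.1 / 397 ≈ 0.53%, so the target balance is 65 divided by that rate.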

        Similarly, this rings true for a lot of things with “facts.” Perpetual or immutable facts are the best use for AI. In my opinion, based on experience, of course.

        A song will always be in the key it was written in. A key will always have a specific set of scales that can be used with it. An equation will always work out to the same answer. These are, for the most part, immutable facts. A person, on the other hand, will not always be their age, or even living, nor will their net worth stay the same. Let’s not even get started on the weather! These are temporary facts.

        Quite a few people tend to ask AI for temporary facts (rightfully so; it’s what we’d like to do day to day for casual questions), and it gets a lot of flak for not doing a great job at it (again rightfully so, since these are basic questions). But I have found that AI is actually quite strong on perpetual facts. When time is short and at the end of the day I just want to jam to my favorite songs, I can get a quick reminder of the key and the scales I can use to play along. I know and can remember these things on my own, but asking a question and getting an answer possibly even faster is really nice.

        Not to be pro-AI, but in this case I really think it comes down to using the tool for what it is. We live in the present and the future, so it seems ridiculous to rely on something trained on data rooted in the past and expect that things will always be as they were. Hence immutable facts tend to be more reliable to work with when using AI.

        I like tech, so I have used and played with local LLMs and Stable Diffusion models, and I’ve worked on a model based on my own art of Zentangles. Still, I don’t think I would ever actively rely on this technology for anything more than cursory fun when I’m short on time and energy, or as a supplement to something that, frankly, would take me far too long to learn and that I’d forget within a couple of months of no longer needing it. I don’t exactly feel the need to memorize the 300,000 Excel sheet tricks, but I will sure as shit ask BarGemeni about them. Using it to confirm my estimations, seeing that I was roughly accurate compared to an AI that is roughly accurate, is good enough for me for some quick and dirty math.

        Ultimately, that’s what the LLM-AI debate is for me. Relying on it for anything that is ever-changing, or using it for anything more than basic fun, is setting yourself up for a bad time. Using it here and there as a calculator, or for some non-important details about something that has remained static since the dawn of time? You can net yourself some pretty nice futuristic “hell yeahs.” Packing these things into little boxes (supplanting a phone, or adding it to your phone), using it to create non-existent support (both support staff and support for terrible products, to trick people into buying them), or adding it to rice cookers and refrigerators is… the direction I expected, but not the one I was hoping for.

      • ReallyZen@lemmy.ml · 2 points · 6 months ago

        You can also ask it for the cutoff date of its training data. There seems to be a gentlemen’s agreement among providers not to let AI weigh in on news and current politics in their public chats.

        I tried them on a topic I’m pretty proficient in (a spaghetti recipe, lol), and the answer was the blandest imaginable.

        The way DDG has set it up, the restrictions and the bland, shallow replies give me peace of mind when a “natural language” query is the easiest option. And Claude wouldn’t give me the DOB of that queen, because it’s Personal Info!