College professors are going back to paper exams and handwritten essays to fight students using ChatGPT
The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

  • TimewornTraveler@lemm.ee · 1 year ago (edited) · +8 / −15

    The only real requirement for AI is that a machine take actions in an intelligent manner.

    There’s the rub: defining “intelligent”.

    If you’re arguing that traffic lights should be called AI, then you and I might have more in common than we thought. We both believe the same thing: that ChatGPT isn’t any more “intelligent” than a traffic light. You just want to call them both intelligent, and I want to call neither that.

    • throwsbooks@lemmy.ca · 1 year ago · +13 / −3

      I think you’re conflating “intelligence” with “being smart”.

      Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.
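
      To make that concrete, here’s a toy sketch of a traffic light as a minimal “take in information, make a decision” agent. The function name and rules are purely illustrative, not any real controller’s API:

```python
# Illustrative sketch: a traffic light as a minimal agent that takes in
# information (a car-presence sensor) and "decides" on an action.
# All names and rules here are made up for the example.

def decide_light(current: str, car_waiting: bool) -> str:
    """Return the next light state given the sensor reading."""
    if current == "red" and car_waiting:
        return "green"  # a car is waiting, let it through
    if current == "green" and not car_waiting:
        return "red"    # no traffic, fall back to the default state
    return current      # otherwise, keep the current state

print(decide_light("red", car_waiting=True))   # green
```

      That really is just “sense, then decide” — the whole debate is whether that bar is low enough to count as intelligence.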

      Acting like some GPT is on the same level as a traffic light is silly though. On a base level, yes, it “reads” a text prompt (along with any message history) and decides what to write next. But the decision it’s making is much more complex than “stop or go”.

      I don’t know if this is an ADHD thing, but when I’m talking to people, I sometimes finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happened to read a huge amount of text on the web, so it has context for a bunch of things. It doesn’t care whether it’s right, and it doesn’t look things up before it says something.
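
      A deliberately tiny stand-in for that “sentence finisher” idea (a toy bigram model, nothing like ChatGPT’s actual architecture): continue a sentence by repeatedly picking whichever word most often followed the current word in the text seen so far.

```python
# Toy "sentence finisher": predict each next word as the one that most
# often followed the current word in the training text. This is a
# deliberately tiny illustration, not how ChatGPT is implemented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def finish(word, length=4):
    """Greedily continue from `word` using the most common successor."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return out

print(" ".join(finish("the")))
```

      Note that the sketch never checks whether its continuation is true; it just follows frequency — the same spirit as the point above about not caring whether it’s right.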

      But to have a computer be able to do that at all?? That’s incredible, and it took over 50 years of AI research to reach that point (yes, it’s been an academic field for a very long time, with people spending most of that time saying it was impossible), and we only got there because our computers became powerful enough to do it at scale.

      • ParsnipWitch@feddit.de · 1 year ago (edited) · +2 / −1

        Intelligence is more about taking in information and being able to make a decision based on that information.

        Where does that definition come from? A better gauge of intelligence is whether someone or something is able to solve a problem they have not encountered before. And arguably, all current models completely suck at that.

        I also think the word “AI” is used quite a bit too liberally. It confuses people who have zero knowledge of the topic. And when an actual AI comes along, we will have to make up a new word, because “artificial general intelligence” won’t be distinctive enough for corporations to market their new giant leap in technology….

        • throwsbooks@lemmy.ca · 1 year ago (edited) · +1

          I would suggest the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig. It’s a good overview of the field and has been in circulation since 1995. https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach

          Here’s a photo as an example of how this book approaches the topic: there’s an entire chapter on it, with sections on four approaches, and essentially even the researchers have been arguing about what intelligence is since the beginning.

          But all of this has been under the umbrella of AI. Just because corporations have picked up on the term doesn’t invalidate the decades of work done by scientists in the name of AI.

          My favourite way to think of it is this: people have forever argued whether or not animals are intelligent or even conscious. Is a cat intelligent? Mine can manipulate me, even if he can’t do math. Are ants intelligent? They use the same biomechanical constructs as humans, but at a simpler scale. What about bacteria? Are viruses alive?

          If we can create an AI that fully simulates a cockroach, down to every firing neuron, does it mean it’s not AI just because it’s not simulating something more complex, like a mouse? Does it need to exceed a human to be considered AI?

          • ParsnipWitch@feddit.de · 1 year ago · +1

            Intelligence (in a biological sense) is defined differently from how computer scientists describe artificial intelligence. “Making a decision based on information” is not a criterion sufficient to declare something intelligent in a biological sense. But that’s what a lot of people (wrongly) assume when they hear artificial “intelligence”.

            To describe an AI as intelligent in the sense used in the natural sciences, you obviously can’t apply the criteria from computer science. There is broad consensus in the biological sciences that animals have intelligence; only the scope of that intelligence is heavily debated, or rather, which level of intelligence each species reaches.

            Viruses are not considered lifeforms, by the way. Naturally, there is no 100% answer to anything in science, but that shouldn’t be confused with there being no substance to those answers.

    • sin_free_for_00_days@sopuli.xyz · 1 year ago · +9

      I’m with you on this and think the AI label is just stupid and misleading. But times and language change, and you end up being a Don Quixote-type figure.