• metaStatic@kbin.earth
    2 months ago

If he gets over his fear of lidar and stops trying to do everything with cameras, this could actually work out … so basically this isn’t going to work out.

    • Buffalox@lemmy.world
      2 months ago

      this could actually work out

No it won’t. Elon “next year” Musk has promised fully autonomous driving “next year” since 2016. He has sold his cars with FSD subscriptions for years, even though it doesn’t work. It’s even illegal to call it FSD now, so Tesla has to call it assisted FSD, which is an oxymoron.

With this move, Enron Musk will more likely ensure the continued decline of Tesla. The Cybercab most likely will not be a moneymaker, and the focus on developing it will detract from Tesla developing much-needed new EV models for a market with increasing competition.

Tesla is far from #1 in developing autonomous driving, so the chances are very slim that they would be even close to being first to market.

AFAIK this is pretty much the current ranking:
1. Waymo (Google)
2. Mercedes
3. Mobileye
4. GM (Cruise)
5. Baidu
6. Tesla

      Possibly Nissan-Renault (WeRide) and Nvidia can match Tesla too.

Notice that Tesla originally used Mobileye, up to 2016, but Mobileye ended the partnership after a Tesla Model S had a fatal crash. I suspect Elon Musk’s irresponsible claims, implementation, and practices were too much.

Elon Musk is insane and a con man; believing anything he claims about the future of his companies is naive.

      • JohnEdwa@sopuli.xyz
        2 months ago

Because Musk has a weird obsession that everything needs to work with just cameras, and that no other sensors should help. While it might work someday in the far future with proper AGI (e.g. Delamain from Cyberpunk 2077), until then it’s a pretty hopeless endeavour.

        • IphtashuFitz@lemmy.world
          2 months ago

The problem is that computer vision has a LONG way to go before it’s truly on par with human eyesight. Musk loves to crow about how cameras are sufficient since we use our eyes to drive.

          The thing is, eyes have special neural circuits that detect motion. They essentially filter out unnecessary information and send just the motion details to the brain. This prevents the brain from being overloaded with every detail the eye constantly sees.

And being overloaded with everything is exactly how computer vision currently operates: it’s just a stream of images that the computer must analyze in full. So it works exactly opposite to how the eye and brain do.
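The kind of filtering the eye does can be crudely imitated in software. A toy sketch (the threshold is an arbitrary value I picked for illustration, not anything from a real vision stack) that keeps only the pixels that changed between two frames:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Crude 'retinal' filter: keep only pixels that changed noticeably.

    Both frames are HxW grayscale arrays with values in [0, 1].
    The threshold is illustrative, not tuned.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff > threshold  # boolean mask of "moving" pixels

# A static scene where one pixel changed: only that pixel survives the filter,
# so downstream processing sees 1 pixel instead of all 16.
prev = np.zeros((4, 4))
cur = prev.copy()
cur[1, 2] = 1.0  # something moved here
mask = motion_mask(prev, cur)
print(mask.sum())  # only the changed pixel makes it through
```

The point of the sketch is the data reduction: the full frame never needs to reach the expensive analysis stage, which is roughly what the retina’s motion circuits achieve.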

          • JohnEdwa@sopuli.xyz
            2 months ago

It’s a long way off, but not an impossible task: at its core the eye is nothing but a bunch of light sensors that spit out a result, and we just need to figure out how to calculate that result ourselves. Motion amplification could be one solution, for example, given enough computing power to do it in real time.

But we agree: safe and accurate camera-based self-driving isn’t going to happen for a long, long time.

        • Buffalox@lemmy.world
          2 months ago

I’m not sure the approach Elon Musk is taking to developing self-driving will ever work.
From what I’ve heard about how they “teach” the AI, it probably won’t, because loading massive amounts of new data is rewarded, but there is no proper quality control.

      • gravitas_deficiency@sh.itjust.works
        2 months ago

Man, how did Mobileye not just tack an “s” on the end? The pun is right there, and it’s frankly excellent and super topical to what they’re trying to do.

    • ContrarianTrail@lemm.ee
      2 months ago

      What benefit would a lidar bring that they haven’t already achieved with cameras and radar? The car not seeing where it’s going is not exactly an issue they’re having with FSD.

      • Num10ck@lemmy.world
        2 months ago

Lidar could tell the difference between a person on a bus billboard and an actual person. It brings 3D to a 2D party.
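To illustrate the point (toy numbers, not any real perception stack): every lidar return from a flat billboard sits at roughly the same range, while a real person produces a spread of depths. A minimal sketch of that flatness check:

```python
import numpy as np

def looks_flat(depths: np.ndarray, tolerance: float = 0.05) -> bool:
    """Return True if a region's lidar range readings are all roughly
    the same distance away, i.e. it could be a flat surface like a
    billboard rather than a 3D body. Tolerance is illustrative only."""
    return float(depths.std()) < tolerance

# Invented range readings (metres) over a detected "person" region:
billboard = np.array([12.00, 12.01, 11.99, 12.00])  # flat poster
pedestrian = np.array([11.2, 11.5, 11.9, 12.3])     # real 3D depth spread
print(looks_flat(billboard))   # True
print(looks_flat(pedestrian))  # False
```

A camera-only system has to infer this same distinction indirectly from appearance and parallax; the lidar hands you the geometry directly.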

        • ContrarianTrail@lemm.ee
          2 months ago

Lidar alone can’t do that. It just builds a 3D point cloud; you still need software to detect the individual objects in it, and that’s easier said than done. So far Tesla seems to be achieving this just fine using cameras alone. Human eyes can tell the difference between an actual person and a picture of a person too, so I don’t see how this is supposed to be something you can’t do with just cameras.

          • Buffalox@lemmy.world
            2 months ago

            So far Tesla seems to be achieving this just fine by using cameras alone.

            Funny, last I heard, Tesla FSD has a tendency to run into motorcycles.
            With lidar there would be no doubt that there is an actual object, and obviously you don’t drive into it.

            • ContrarianTrail@lemm.ee
              2 months ago

              No, and neither are your eyes, but you can still see the world in 3D.

You can use normal cameras to create 3D images by placing two cameras next to each other and creating a stereogram. Alternatively, you can do it with just one camera by taking a photo, moving the camera slightly, and then taking another photo, which is exactly what the cameras in a moving vehicle are doing all the time. Objects closer to the camera shift more between views than the background does. If you have a billboard with a person on it, the background in that picture moves differently relative to the person than the background behind an actual person would.
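The parallax effect described above is the classic pinhole-stereo relation, depth = focal length × baseline / disparity. A minimal sketch with invented camera numbers (not Tesla’s actual setup):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo relation Z = f * B / d.
    Nearby objects shift a lot between the two views (large disparity);
    distant backgrounds barely shift (small disparity)."""
    return focal_px * baseline_m / disparity_px

# Invented numbers: 800 px focal length, 0.5 m between the two shots.
f, b = 800.0, 0.5
print(depth_from_disparity(f, b, 40.0))  # big shift  -> 10.0 m (close)
print(depth_from_disparity(f, b, 4.0))   # tiny shift -> 100.0 m (far)
```

The hard part in practice is not this formula but reliably matching the same point in both images (the correspondence problem), which is where camera-only depth estimation tends to fail.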

              • Buffalox@lemmy.world
                2 months ago

                neither are your eyes

That’s a grossly misleading statement.
We definitely use two eyes to achieve a 3D image with depth perception.

                So the question is obviously whether Tesla does the same with their Camera AI for FSD.

                IDK if they do, but if they do, they apparently do it poorly. Because FSD has a history of driving into things that are obviously (for a human) in front of it.

              • zbyte64@awful.systems
                2 months ago

                Talk about making a difficult problem (self-driving) more difficult to solve by solving another hard problem.

                • ContrarianTrail@lemm.ee
                  2 months ago

Just slapping on a lidar doesn’t solve that issue for you either. Making out individual objects in the point cloud data is equally difficult, plus you then have to deal with cameras too, because Waymo uses both. I don’t see how you imagine that having lidar and cameras would be easier to deal with than just cameras.

Also, Tesla has already more or less solved this issue. FSD works just fine with cameras only, and the new HW4 models have radar too.