• Spaz@lemmy.world · 1 year ago

    I agree. Let’s cut out the middleman and force 100% automated driving. People can fuck around in the back with less chance of dying than with human drivers in cars without driver-assistance aids. Driving is extremely dangerous, and honestly I trust AI over other people (in the USA).

    • Da_Boom@iusearchlinux.fyi · 1 year ago

      Nah, I don’t know if AI will ever be 100% perfect, and I don’t want to trust it fully. AI is human-built, and it’s my personal belief that humans aren’t perfect, so AI will therefore never be perfect.

      Also, you will always want a qualified driver to be able to take over should some part of the car’s sensor systems fail.

      Sensors, unlike humans, have a tendency to fail quickly, sometimes instantly, and even AI and autopilot can behave erratically if they get bad or false inputs from failing sensors.

      It’s like in an airliner: even though autopilot is at this point practically capable of flying a plane completely from takeoff to landing, there will always be pilots on duty in the cockpit to account for unforeseen circumstances and failures, even if they never actually fly the plane themselves.

      • cm0002@lemmy.world · 1 year ago

        AI doesn’t need to be perfect, it just needs to be better than your average human driver. Which, you know, isn’t a very high bar…

        Comparing it to an airline pilot isn’t the same: a pilot goes through years of training to be able to fly passengers (well, beyond a dinky Cessna or whatever), and you need years of experience on top of that before you’re even considered by the big airlines.

        A human driver can get a license in as little as a few days.

        • fuckwit_mcbumcrumble@lemmy.world · 1 year ago

          Or hear me out… What if we had really long cars, sometimes chained together, put them on rails, and have just 1 human drive hundreds of them.

      • Spaz@lemmy.world · 1 year ago

        Oh, it seems I wasn’t clear. Sentient AI should drive us. Give it 30 years and I bet we’ll be close to that outcome, if not on the cusp of it.

        • Da_Boom@iusearchlinux.fyi · 1 year ago

          Even if we somehow manage to create a sentient AI, it will still have to rely on the information it receives from the various sensors in the car. If those sensors fail and it doesn’t have the information it needs to do the job, it could still make a mistake due to missing or completely incorrect data; and if it does manage to realise the data is erroneous, it could still flatly refuse to work. I’d rather keep people in the loop as a final failsafe, just in case that should ever happen.

          • wabafee@lemm.ee · 1 year ago

            I see your point, but at what point should a sentient AI be able to decide for itself? What makes it different from a human by then? We humans rely on sensors too to react to the world, and we also make mistakes, even dangerous ones. I guess we just want to make sure this sentient AI isn’t working against us?

            • Da_Boom@iusearchlinux.fyi · 1 year ago

              That’s why it’s layers of security. Humans have a natural instinct: usually we can tell if our own eyesight is getting worse. And any mistake we make is most likely due to not noticing something or not reacting in time, something the AI should be able to compensate for.

              The only time this isn’t true is when we have a medical episode, like a grand mal seizure or something. But everyone knows safety is always relative, and we mitigate that with redundancies. Sensors will have redundancies, and we ourselves are an additional redundancy. Heck, we could also put in sensors to monitor the occupants’ vitals. There is once again a question of privacy, but really that’s all we should need to protect against that.

              A sentient AI, not counting any potential issues with its own sentience, would still have issues with suddenly failed or poorly maintained sensors. Usually when a sensor fails, it either zeroes out, maxes out, or starts outputting completely erratic readings.

              If any of these failure modes produce readings that look normal, the AI can have a hard time telling. We can reconcile sensor readings against our own human senses and tell when they’ve failed; a car only has its sensors to know what it needs to know, so if one fails, will it be able to tell? Sure, sensor redundancy helps, but there is still a small chance that all the redundant sensors fail in a way the AI cannot detect, and in that case a driver should be there to take over.
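              To sketch the point: a basic range check catches a sensor stuck at zero or pegged at its rail, but an erratic reading that stays inside the plausible range sails straight through. A toy illustration in Python (the airspeed limits here are made up for the example, not taken from any real avionics):

```python
def plausible(reading_knots, lo=30.0, hi=400.0):
    """Flag readings outside a hypothetical airspeed envelope.

    Catches a sensor that has zeroed out (0.0) or maxed out at
    its rail value, but NOT one drifting erratically in-range.
    """
    return lo < reading_knots < hi

print(plausible(182.0))   # healthy cruise reading -> True
print(plausible(0.0))     # sensor zeroed out      -> False
print(plausible(9999.0))  # sensor maxed out       -> False
print(plausible(95.0))    # erratic but in-range   -> True (undetected!)
```

              That last case is exactly the hard one: the value is wrong, but nothing about it looks broken.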

              Again I will refer to aircraft: even if it’s a one-in-a-billion chance, there have been a few instances where this has happened and the autopilot nearly pitched the plane into the ground or ocean, and the plane was only saved by the pilots taking over. In one of those cases, a faulty sensor reported that the angle of attack was pitched up too steeply, so the stick-pusher mechanism tried to pitch the nose down to save the plane, when in fact it was already down. An autopilot, even an AI one, has no choice but to trust its sensors, as those are the only mechanism it has.

              When it comes to a faulty redundant sensor, the AI also has to work out which sensor to trust, and if it picks the wrong one, well, you’re fucked. It might not be able to work out which sensor is more trustworthy…
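              One common way to pick among redundant sensors (a toy sketch, not how any real car or autopilot actually implements it) is to use an odd number of them and take the median, so a single faulty sensor gets out-voted. Note, though, that two sensors failing the same way still win the vote:

```python
import statistics

def voted_reading(readings):
    """Median-vote across redundant sensors: a single outlier
    (stuck low, stuck high, or erratic) is simply out-voted."""
    return statistics.median(readings)

# One sensor zeroed out: the two healthy ones carry the vote.
print(voted_reading([0.0, 181.9, 182.5]))   # -> 181.9

# Two sensors failed the same way: the vote picks the bad pair,
# and nothing inside the system can tell.
print(voted_reading([0.0, 0.0, 182.5]))     # -> 0.0
```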

              We keep ourselves safe with layered safety mechanisms and redundancy, including ourselves. That way, if any one layer fails, another can hopefully catch the failure.

              • wabafee@lemm.ee · 1 year ago

                Wow, I appreciate the response; it must have taken a while to write.