• Mossheart@lemmy.ca · 17 hours ago

    Using AI to filter applicants must be fraught with legal risks. Discrimination, unknown biases in the model used, all kinds of things that risk screening someone out illegally. Hopefully someone loses a big lawsuit to scare the industry into sense.

    • CosmicTurtle0@lemmy.dbzer0.comOP · 16 hours ago

      Courts have ruled that using AI allows companies to avoid liability.

      Schrödinger’s corporations: corporations are people and not people, depending on how much money they stand to gain or lose.

      • Mossheart@lemmy.ca · 15 hours ago

        Out of curiosity, do you have any references to those rulings? It’s a topic that comes up at work a lot, I’d love to read more about it!

        • CosmicTurtle0@lemmy.dbzer0.comOP · 15 hours ago

          I can’t remember the court case per se, but I think it involved a self-driving car hitting a pedestrian.

          A cursory Google search suggests companies are now being held liable for AI hallucinations, with one case currently pending in which an AI chatbot allegedly encouraged a teen to commit suicide.

          Given the current administration and the makeup of SCOTUS, I’m not holding my breath.