AI hiring tools may be filtering out the best job applicants

As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.

  • Gaywallet (they/it)@beehaw.orgM · 38 points · 10 months ago

    This really does not surprise me one bit. But also, nobody using these tools really cares. It reduces the number of applications they need to review, which is often all they care about. Can’t wait for the inevitable company to pop up that will do the AI equivalent of SEO stacking your resume so you can get a job.

    Also, perhaps more importantly, this is just going to undo fifty years of antiracism and antisexism work. The biggest problem with AI is that it’s trained on a bigoted system and when it’s used to gatekeep said system, it just creates additional inequality.

    • TehPers@beehaw.org · 12 points · 10 months ago

      Building off your last point, with AI models, bias can come in ways you might not expect. For example, I once saw a model that was trained with diversity in mind, but then only ever output Asian people with a high bias towards women. It seems to me like diversity is difficult to train into a model, since it’d be really hard not to overfit it on a specific demographic.

      It might be interesting to see if a random input into the model could be used to increase the diversity of the model outputs. This doesn’t really help with resume screening tools though (which are probably classifiers), only really generative models.
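The random-input idea above can be sketched with a toy model. Everything here is hypothetical (there is no real image model behind it); the point is only that a generator with no latent randomness collapses to a single output mode, while feeding it a random latent input spreads outputs across all modes.

```python
import random

# Hypothetical output modes of a toy "generator" (not a real model).
DEMOGRAPHICS = ["group_a", "group_b", "group_c", "group_d"]

def generate(latent=None):
    """Map a latent value in [0, 1) to one of several output modes.
    With no latent input, the toy model always falls back to its most
    probable mode (the overfit demographic)."""
    if latent is None:
        return DEMOGRAPHICS[0]  # mode collapse: same output every time
    return DEMOGRAPHICS[int(latent * len(DEMOGRAPHICS))]

rng = random.Random(0)  # seeded for reproducibility

fixed = {generate() for _ in range(1000)}                # no latent input
seeded = {generate(rng.random()) for _ in range(1000)}   # random latent input

print(len(fixed), len(seeded))  # 1 4
```

With no latent input the sampler only ever produces one group; with a uniform random latent, 1000 samples cover all four.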

      • AggressivelyPassive@feddit.de · 3 points · 10 months ago

        There isn’t really a good way to even define diversity.

        The bad approach is corporate token diversity, where every picture has to include a white, a Black, and an Asian person, at least 50% have to be women, and one of them has to wear a hijab. That might include many groups, but isn’t really representative.

        You could also use the “blind test” approach many tech companies use, where you simply leave out any hints about cultural background. But as has been shown, if the underlying data is biased, AIs will find the bias anyway (for example by devaluing certain zip codes).
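The zip-code effect described above can be sketched with synthetic data. All names, rates, and zip codes below are hypothetical; the point is only that a model “blinded” to the protected attribute can still reproduce the historical disparity through a correlated proxy.

```python
import random
from collections import defaultdict

rng = random.Random(42)  # seeded for reproducibility

# Synthetic historical hiring data: the protected attribute is correlated
# with zip code (residential segregation), and the historical labels are
# biased against the minority group.
rows = []
for _ in range(10_000):
    group = rng.choice(["majority", "minority"])
    zipcode = (rng.choice(["10001", "10002"]) if group == "majority"
               else rng.choice(["10003", "10004"]))
    hired = rng.random() < (0.6 if group == "majority" else 0.3)
    rows.append((group, zipcode, hired))

# "Blind" model: score applicants by historical hire rate per zip code,
# with the protected attribute dropped entirely.
hires, totals = defaultdict(int), defaultdict(int)
for _, zipcode, hired in rows:
    totals[zipcode] += 1
    hires[zipcode] += hired
zip_score = {z: hires[z] / totals[z] for z in totals}

# Average model score per (hidden) group: the gap survives blinding.
def mean_score(target):
    scores = [zip_score[z] for group, z, _ in rows if group == target]
    return sum(scores) / len(scores)

print(mean_score("majority"), mean_score("minority"))
```

Even though the model never sees the group label, its scores differ between groups by roughly the same margin as the biased historical data.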

        And of course there’s the “equal opportunity” approach, where you try to represent the relevant groups in your selection like they are in the underlying population, but that is essentially *-ism by another name.
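For what it’s worth, the “represent groups like they are in the population” criterion in the last paragraph is close to what the fairness-ML literature calls demographic parity (equal selection rates across groups), which is at least easy to measure. A minimal sketch on made-up data:

```python
def selection_rate(decisions, groups, target_group):
    """Fraction of positive decisions among members of target_group."""
    picks = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(picks) / len(picks)

# Hypothetical screening outcomes: 1 = advanced to interview.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = selection_rate(decisions, groups, "a")  # 3/5 = 0.6
rate_b = selection_rate(decisions, groups, "b")  # 2/5 = 0.4
parity_gap = round(abs(rate_a - rate_b), 2)      # 0.2 = a 20-point gap
print(parity_gap)
```

A gap of 0 would mean both groups advance at the same rate; whether that is the *right* target is exactly the definitional problem raised above.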

    • acockworkorange@mander.xyz · 6 points · 10 months ago

      The key point is “missing the best applicants”. Companies care about good enough, not best, most of the time. There are only a few positions where they truly worry about having actually good people, and they’re often wrong about which ones and how many they should care about.

      • jarfil@beehaw.org · 1 point · 10 months ago

        “Good enough”… is going to be AIs themselves, way cheaper than people. Some of the “actually good” will also be AIs… just the expensive version. A few people will need to stay around to write “general vision” prompts, oversee the lower-level AIs, and press Enter.

        The interesting part is that it will be much easier to control 100% of the AIs’ work output, letting businesses make data-driven optimizations (via manager AIs) and become way more competitive.