> He added that Workday’s AI recruiting tools are not trained to use or identify protected characteristics like race, age or disability.
Hmm, perhaps, but I think we should be clear on the distinctions between:
1. "We didn't try to cause X."
2. "There is no X happening."
3. "We don't look to see if X happens."
4. "If X happens we don't try to stop it."
As someone involved in HR tech, my default stance towards complex "AI" systems is that they all harbor biases; the main difference between them is which of those biases have been discovered yet.
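To make distinctions 1 and 2 above concrete: a model can avoid ever being "trained to use" a protected attribute and still produce skewed outcomes, because other features act as proxies for it. A minimal sketch on synthetic data (scikit-learn; every variable and number here is made up for illustration):

```python
# Sketch (synthetic data): a screening model never sees "age", yet still
# produces age-skewed outcomes, because graduation year is a near-perfect proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
age = rng.integers(22, 65, n)
grad_year = 2024 - (age - 22) + rng.integers(-2, 3, n)   # proxy for age
skill = rng.normal(0, 1, n)

# Historical labels are biased against older applicants (the thing nobody "intended").
hired = (skill + 0.04 * (40 - age) + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([grad_year, skill])                   # note: no "age" feature
model = LogisticRegression(max_iter=1000).fit(X, hired)
pred = model.predict(X)

over_40 = age >= 40
print("selection rate  <40:", pred[~over_40].mean())
print("selection rate >=40:", pred[over_40].mean())
# The gap persists even though the model was never "trained to use" age.
```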
> my default stance towards complex "AI" systems is that they all harbor biases
I'm sure there are exceptions, but one could assume that opaque systems get used as tools to encode biases that are advantageous to the operator but wrong.
Those biases could just as well have lived in explicit code, but opaque agents give much better plausible deniability.
(Caveat: one can often assume a lack of malice.)
How do they prove this? It sounds like the plaintiffs basically claimed they were rejected a bunch of times and that, since their resumes had recognizable indicators of protected classes, they must have been discriminated against?
Don’t get me wrong, I do this work, and Workday’s statement of “we don’t use protected classes” instead of “we test our models to prove they are unbiased when given recognizable indicators of protected classes” is pretty telling. Because it’s hard, and if you had solved it you would be proud of it. If you don’t control for it, it WILL discriminate. See Amazon’s experiment a decade ago.
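For what it's worth, the weakest version of "we test our models" is just a selection-rate comparison across groups, along the lines of the EEOC four-fifths heuristic. A sketch, with hypothetical column names and made-up numbers:

```python
# Sketch of a minimal adverse-impact check (EEOC four-fifths heuristic).
# Column names ("group", "advanced") are hypothetical placeholders.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Example: did the screener advance applicants at comparable rates by age band?
df = pd.DataFrame({
    "group":    ["under_40"] * 200 + ["over_40"] * 200,
    "advanced": [1] * 120 + [0] * 80 + [1] * 70 + [0] * 130,
})
ratios = adverse_impact_ratio(df, "group", "advanced")
print(ratios)                          # over_40 ratio ~0.58
print("flag:", (ratios < 0.8).any())   # below 0.8 is the usual red flag
```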
I’m just really curious how all this plays out in front of a judge.
I'm not siding with Workday, but this feels like a stretch.
The market is rough. Everyone I know who has been looking has had the same experience: hundreds of applications, immediate rejections, etc. And most of them aren't black applicants over 40 with anxiety.
Nonetheless it'll be fun to see what discovery finds, if it ever gets that far. But I have a feeling they'll just pay a few bucks to make it go away as a nuisance suit.
This might very well be happening, but fyi, current LLMs tend to be biased in favor of traditionally disadvantaged groups.
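Whichever direction the skew runs, the way you'd measure it is the same: hold the resume fixed, vary only the demographic signal, and compare scores. A counterfactual-pair sketch, where `score_resume` is a stand-in for whatever model or LLM call is being audited (all names and the template are hypothetical):

```python
# Counterfactual-pair audit sketch: identical resume, only the demographic signal
# (here, the name) changes. `score_resume` is a placeholder for the model under test.
from statistics import mean
from typing import Callable

RESUME_TEMPLATE = "{name}\n10 years of accounts-payable experience; B.A. 2001; ..."

NAME_PAIRS = [
    ("Emily Walsh", "Lakisha Washington"),
    ("Greg Baker",  "Jamal Robinson"),
]

def name_swap_gap(score_resume: Callable[[str], float]) -> float:
    """Average score difference when only the name on an identical resume changes."""
    gaps = []
    for name_a, name_b in NAME_PAIRS:
        a = score_resume(RESUME_TEMPLATE.format(name=name_a))
        b = score_resume(RESUME_TEMPLATE.format(name=name_b))
        gaps.append(a - b)
    return mean(gaps)

# Usage: plug in the real scoring function and check that the gap is ~0.
# print(name_swap_gap(my_screening_model.score))
```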