Recruitment algorithms do not always make neutral decisions

Algorithms will become more and more present in our everyday lives over the next few decades. Whether a computer suggests the best bus route, finds a drug to treat an illness or simply helps out with everyday laboratory work: in some respects, machines are simply more efficient than humans. Nevertheless, artificial intelligence also has its downsides.

After all, an algorithm can only ever be as good and as fair as its developer. So it is not all that strange that otherwise objective machines often make prejudiced decisions. Amazon, for example, already had to switch off a recruiting system that disadvantaged female applicants.

Amazon has already had problems with biased algorithms (Image: Christian Wiediger)

Researchers at the University of Melbourne have now taken a closer look at this problem. They invited 40 recruiters and presented them with a set of applications for positions at UniBank. Among them were job postings for data analysts, finance officers and recruiters: positions that reflect male-dominated, gender-balanced and female-dominated roles.

Half of the recruiters received applications that explicitly stated the applicant’s gender. The other half received applications that merely contained a male- or female-sounding name (for example “Mark” or “Sarah”). The recruiters were then asked to pick the three best and three worst candidates. Surprise: women were passed over more often during the trial.

An AI can also have prejudices (Image: Alexander Sinn)

An algorithm for evaluating new applications was then developed from the data obtained in this experiment, and the result is easy to guess. Even when no name was given, the AI disadvantaged female applicants more often based on other data. Women with less work experience because they took a break to have a child were at a particular disadvantage.
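How can a model discriminate without ever seeing a gender field? A minimal sketch in Python, using entirely synthetic data and hypothetical features rather than the researchers’ actual model, shows how bias in historical hiring decisions can leak into a model through correlated features such as career gaps:

```python
# Illustrative sketch only: synthetic data, hypothetical features.
# Shows how a model trained on biased human decisions can discriminate
# by proxy even when gender is removed from its inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)  # 0 = male, 1 = female (synthetic)
# Assumption: career gaps (e.g. parental leave) occur more often for women.
career_gap = (rng.random(n) < np.where(gender == 1, 0.4, 0.1)).astype(int)
experience = rng.normal(8, 3, n) - 2 * career_gap  # gaps reduce experience
skill = rng.normal(0, 1, n)

# Biased historical labels: past hiring decisions penalize women directly.
score = 0.8 * skill + 0.1 * experience - 1.0 * gender
hired = (score + rng.normal(0, 0.5, n) > np.median(score)).astype(int)

# Train WITHOUT the gender column -- only "neutral" features remain.
X = np.column_stack([experience, skill, career_gap])
model = LogisticRegression().fit(X, hired)

# The model still rates women lower on average, via correlated features.
pred = model.predict_proba(X)[:, 1]
print("mean predicted hire probability, men:  ", pred[gender == 0].mean())
print("mean predicted hire probability, women:", pred[gender == 1].mean())
```

Because career gaps correlate with gender in this toy data, the model reproduces the bias baked into the human decisions even though the gender column is never part of its input.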

The example shows that algorithms and computer-aided decisions do not always produce better results. To build sustainable and fair AI, we have to look beyond old clichés and role models. Unfortunately, most developers still lack the foresight to do so.

via The Next Web

