
Twitter’s image-cropping algorithm is biased

Last Monday we reported on Twitter’s challenge to uncover bias in its own image-cropping algorithm. After the feature was switched off in March because it reflected certain biases, Twitter wanted to be on the safe side before reintroducing it.

Now the results are in, and despite Twitter’s revisions, the artificial intelligence still favors certain user groups. The first-place entry showed that the algorithm prefers thin, young, feminine-looking faces with light or warm skin tones.

Gaps in the algorithm were found using computer-generated faces (Image: Bogdan Kulynych)

Bogdan Kulynych, a graduate student at a Swiss university, took home the top prize. He probed the algorithm’s weaknesses by having an AI program generate realistic faces that differed only in attributes such as skin tone or age, then fed those images to Twitter’s cropping function.
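To illustrate the idea behind this kind of probe (this is not Kulynych’s actual code), here is a minimal Python sketch: pairs of synthetic faces that differ in a single attribute are scored, and we count how often the edit raises the score the cropping model assigns. The `saliency_score` callable, the helper name `attribute_bias_rate`, and the file names are assumptions standing in for a real wrapper around Twitter’s cropping model.

```python
# Minimal sketch of the pairwise comparison idea (assumed names, toy data).
from typing import Callable, Iterable, Tuple


def attribute_bias_rate(
    pairs: Iterable[Tuple[str, str]],          # (original_path, edited_path)
    saliency_score: Callable[[str], float],    # hypothetical model wrapper
) -> float:
    """Return the fraction of pairs where the edited face scores higher."""
    wins, total = 0, 0
    for original, edited in pairs:
        if saliency_score(edited) > saliency_score(original):
            wins += 1
        total += 1
    return wins / total if total else 0.0


if __name__ == "__main__":
    # Toy demo with made-up scores instead of a real saliency model.
    fake_scores = {
        "face_a.png": 0.41, "face_a_lighter.png": 0.57,
        "face_b.png": 0.38, "face_b_lighter.png": 0.52,
    }
    rate = attribute_bias_rate(
        [("face_a.png", "face_a_lighter.png"),
         ("face_b.png", "face_b_lighter.png")],
        fake_scores.__getitem__,
    )
    print(f"Edit increased the saliency score in {rate:.0%} of pairs")
```

If the edited versions win far more often than half the time across many generated pairs, that is evidence the model systematically favors the edited attribute.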

The second-place entry demonstrated that the AI disadvantages people with gray or white hair, a clear indicator of age discrimination. Third place went to the finding that Latin script takes precedence over Arabic script. Other interesting properties were uncovered as well; for example, Vincenzo di Cicco showed that the algorithm prefers emoji with lighter skin tones.

Latin characters were preferred by the AI (Image: Roya Pakzad)

In summary, the challenge has yielded a lot of positives. Twitter can now close gaps that its own development teams had not discovered. Amazon’s handling of its own facial recognition showed that companies can also approach such problems the wrong way: there, users who reported issues were publicly disparaged.

Our take:

With this challenge, Twitter is taking a step in the right direction. The teams that develop software often overlook errors themselves, partly because it is easy to miss things you work with every day. Thorough testing and uncovering as many gaps as possible are essential, especially for algorithms that interact with people.

Via The Verge
