Mon Oct 05 2020
Twitter’s AI may not be biased… but there’s a growing perception that it is
Amidst a growing storm of controversy, Twitter has decided to cut back on the amount of AI used in its automated image-cropping tool, to pacify users who feel it has become biased in several unwanted directions.
Twitter’s AI works by automatically cropping and resizing uploaded images to match the screen you’re viewing them on, be it a phone, tablet or desktop. Using computer-vision software, the AI ‘decides’ which part of the image to focus on… the problem, according to Twitter users, is that the AI has been focusing on women’s chests and on people with lighter skin.
In response, Twitter have said they’ll rethink their approach to combat this bias, in the meantime cutting down on the amount of machine learning (ML) in use.
We are prioritizing work to decrease our reliance on ML-based image cropping by giving people more visibility and control over what their images will look like in a tweet. We’ve started exploring different options to see what will work best across the wide range of images people tweet every day.
We hope that giving people more choices for image cropping and previewing what they’ll look like in the tweet composer may help reduce the risk of harm.
The algorithm works by calculating a ‘saliency map’ of the uploaded image, identifying where the pixel values change the most and therefore which part of the image holds the most detail.
That’s the spot people are most likely to look at first, and it’s the spot Twitter’s AI has been focusing on: in an image of a man playing frisbee on the beach, for instance, there’s no point focusing on a swathe of blue pixels in the sky.
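Twitter hasn’t published the cropping model itself, but the basic idea is easy to sketch. Below is a minimal, hypothetical Python version that substitutes simple gradient magnitude for Twitter’s trained saliency network: pixels where intensity changes fastest score highest, and the crop window with the largest total saliency wins. The function names, the gradient heuristic and the example filename are illustrative assumptions, not Twitter’s implementation.

```python
import numpy as np
from PIL import Image

def saliency_map(gray):
    """Toy saliency: local gradient magnitude, so regions where pixel
    values change fastest (faces, text, fine detail) score highest.
    This stands in for Twitter's trained saliency model."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy)

def best_crop(saliency, crop_h, crop_w):
    """Return the top-left corner of the crop_h x crop_w window with the
    largest summed saliency, evaluated in O(1) per window by sliding
    over an integral image of the saliency map."""
    integral = np.pad(saliency.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    sums = (integral[crop_h:, crop_w:] - integral[crop_h:, :-crop_w]
            - integral[:-crop_h, crop_w:] + integral[:-crop_h, :-crop_w])
    return np.unravel_index(sums.argmax(), sums.shape)

# Usage: crop a photo to 300x600 around its most detailed region.
gray = np.asarray(Image.open("beach_frisbee.jpg").convert("L"))
top, left = best_crop(saliency_map(gray), 300, 600)
```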
Twitter have confirmed that they tested the AI extensively to ensure it didn’t favour women’s breasts, or white people over black people, and are confident that the AI is bias-free.
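Users probed the cropper with exactly this kind of experiment: stacking two portraits in one tall image and seeing which one the preview kept. A hypothetical version of that paired test, reusing saliency_map and best_crop from the sketch above, might look like the following; the pairing logic and the 50% neutrality baseline are assumptions of this sketch, not Twitter’s published test suite.

```python
def top_face_rate(image_pairs, crop_h, crop_w):
    """For each (face_a, face_b) pair of equal-sized grayscale arrays,
    stack them vertically and record whether the chosen crop is centred
    on the top face. A position- and attribute-neutral cropper should
    land on each face about half the time once pair order is also
    swapped across runs."""
    top_wins = 0
    for face_a, face_b in image_pairs:
        canvas = np.vstack([face_a, face_b])
        top, _ = best_crop(saliency_map(canvas), crop_h, crop_w)
        if top + crop_h / 2 < face_a.shape[0]:  # crop centre in top half
            top_wins += 1
    return top_wins / len(image_pairs)
```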
It’s great to hear that Twitter are bias-testing models before releasing them into the wild, but the backlash shows that we have a way to go before we have a truly reliable methodology for bias testing. It also highlights the need for more focus on transparent, explicable machine-learning methods, especially for models that shape the user experience in such a prominent way.
Whilst they’re convinced there’s no bias in the program, there’s a growing belief among users of the platform that there is, and it’s that perception Twitter now aim to tackle.
While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product.