Interesting (and concerning) interview with a YouTube spokesperson yesterday on BBC Radio. It seems some far-right race-hate material is still online a year after notification, unlike IS-style propaganda and “radicalisation” videos, which are taken down in a matter of hours. This opens up questions about what markers are used in machine-learning technologies, as well as about human decision-making processes. The default argument by Zuckerberg et al. that these AI technologies will solve the problem of hate groups on social media (and of fake news) is perhaps not stacking up.
YouTube: Not removing far-right video ‘missed the mark’
There is more opinion in The Guardian today on Facebook, asking why, after Charlottesville, big tech can’t delete white supremacists.