2020-12-30, 13:00–13:40, r3s - Monheim/Rhein
I talk about failures of AI/ML: how they more often affect minorities and can even have deadly consequences, what the reasons for those failures are, and what we can do to prevent them in the future.
Nowadays “Artificial Intelligence” is everywhere! And rightly so: it enables us to do really cool things, things we couldn’t even imagine doing just a decade ago. In fact, it sometimes feels like magic. This ‘magic’ is often powered by “Machine Learning”. But even “AI” has its limitations. I’ll show examples where “AI” and ML have failed (sometimes with horrible consequences), explain why failures are unavoidable in ML, and discuss what we can do to reduce them in the future. Furthermore, I’ll showcase how current AI implementations discriminate against minorities and how that in some cases even leads to a higher risk of death for those groups. I’ll cover the bias that humans introduce and explain how poor choices of data make our world even more unjust than it already is. The takeaway for the audience: AI can fail, and sometimes it has horrible consequences. Why is AI so hard to “do right”? How can we make AI better?
Senior Coding Monkey @ ThoughtWorks Singapore