Google AI: understanding automated decisions
Google AI asked Projects by IF to look at explainability in personalised machine learning, particularly in the context of federated machine learning—an intriguing new technology that has recently been open-sourced by Google.
Challenge

If devices present a personalised view of the world, people should be able to understand how and when personalisation happened. But how do you explain that in a way people can actually digest, especially in complex scenarios like federated learning? And can you take the leap from explaining what is happening at certain points to creating interfaces where people can stop or influence it? Those are some of the questions we prototyped in this project.
Image credits: IF, CC-BY