Lisa-Maria Neudert and Philip Howard write in TechCrunch about the risks that accompany AI-driven solutions to the global health crisis:
It is evident that data revealing the health and geolocation of citizens is as personal as it gets. The potential benefits weigh heavy, but so do concerns about the abuse and misuse of these applications. There are safeguards for data protection — perhaps the most advanced being the European GDPR — but during times of national emergency, governments hold rights to grant exceptions. And the frameworks for the lawful and ethical use of AI in democracy are much less developed — if at all…
Regulators are unlikely to generate special new terms for AI during the coronavirus crisis, so at the very least we need to proceed with a pact: all AI applications developed to tackle the public health crisis must end up as public applications, with the data, algorithms, inputs and outputs held for the public good by public health researchers and public science agencies. Invoking the coronavirus pandemic as a sop for breaking privacy norms and reason to fleece the public of valuable data can’t be allowed.
Read the full piece here.