Whether and to what extent privacy engineering might be open to automation is a question I have been asking myself for quite some time now. Luckily, I finally found the time to dig deeper into the topic over the last few weeks. My research resulted in a first paper, for which I happily handed in the camera-ready version last week.
Originally, the paper was to be presented at the Open Identity Summit, but due to the Corona situation, this year’s OID has been canceled. However, accepted papers will still be published this summer in the LNI series (with open access, under the CC BY-SA 4.0 license). Until the paper is available in the GI library, you can find it here.
As usual, lots of new questions came up while I was writing the paper. Obviously, there is a lot of research and development left to be done on technology for automating at least parts of the privacy engineering process.
But there is also a lot to be done regarding the impact of automating privacy engineering on developers, decision makers, data subjects, and privacy in general. Will automation make the job of privacy engineers more interesting, less interesting, or even obsolete? Will automating Privacy Impact Assessments lead to more privacy-preserving systems, or to more invasive processing and systems? Is it ethically and legally justifiable to automate (parts of) a process that aims at selecting and implementing measures to safeguard the rights and freedoms of data subjects? How, if at all, can the required balancing tests be automated?
Hopefully, I’ll find some time to look into these questions soon. I’d be happy about any recommendations for literature on these topics.