Resolving the conflicts between the masses of data AI requires and an individual’s right to privacy
The unlimited potential Artificial Intelligence can bring to an organisation is far too broad a subject to discuss here (although we have discussed it at length elsewhere), but unfortunately, with the benefits of AI also come the pitfalls…
The main pitfall we’ll be focussing on here is that of data protection, and all its associated concerns, such as privacy and data security.
Under GDPR (or whatever equivalent legislation applies within your territory), using technology for the processing and handling of personal data through complex computer systems (and often rather opaque algorithms) is something you’ll need to consider very, very carefully as an organisation.
Let’s make it clear: we’re not trying to put you off Artificial Intelligence, merely stating that its use requires due consideration.
Its potential, when applied to a business’s processes through the right partner, is nearly limitless.
But there are things you need to consider (and again… the right partner should be able to help you with that).
The following should give you a brief overview of how to mitigate any data protection risks arising from the implementation of an AI project within your organisation, without scaring you so much that you lose sight of the benefits such a project can deliver.
If at the end however, you still have questions, feel free to get in touch to discuss your needs further…
First things first… This is going to be a lot less complicated than you thought it might be.
When you’re assessing the impact Data Protection may have on your Artificial Intelligence project (no matter how complex), it’s worth remembering that the questions needing answers will be exactly the same ones that needed answering for all your other projects.
The starting point for this is completing a Data Protection Impact Assessment (DPIA) and deciding (and documenting) what data is relevant and needed for the project. This, alongside the steps you’ve taken to secure that data, will be key.
If no personal data is being captured, everything becomes much simpler.
Don’t forget that personal data is any information relating to an identifiable natural person who can be identified, directly or indirectly, from the information being processed. If you absolutely must use personal data, then ensure adequate controls are in place to restrict access, keep it safely encrypted and pseudonymise it where possible.
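To make the pseudonymisation step concrete, here is a minimal Python sketch of one common approach: replacing a direct identifier with a keyed hash, so the data set alone no longer identifies the individual. The field names and key handling below are illustrative assumptions, not a production recipe (in practice the key would live in a key vault, held separately from the data).

```python
import hmac
import hashlib

# Hypothetical secret key; in a real system this would be stored in a
# key vault, separately from the pseudonymised data set.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a keyed
    SHA-256 hash. The mapping is repeatable for whoever holds the key,
    but the data set on its own no longer reveals the identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative record: the direct identifier is replaced before the
# data is handed to the AI pipeline.
record = {"email": "jane@example.com", "purchase_total": 42.50}
record["email"] = pseudonymise("jane@example.com")
```

Because the hash is keyed and deterministic, records about the same person can still be linked together for analysis without exposing who that person is.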
As with all projects, the key to getting this right will be through a principle of Privacy By Design.
If you make it your goal to mitigate privacy risks as part of the initial project design, rather than as a rushed (and potentially bodged) bolt on at the end of the project, it’s likely you’ll be successful in coming up with a valid and compliant Data Protection solution.
No matter what the project, good Data Protection governance has always depended on specific factors such as…
The types of data being captured, what the data will be used for, how and where it will be processed, and whether any special categories of data are involved.
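One practical way to keep those factors documented is to record them in a structured form alongside the project. The sketch below is a hypothetical illustration in Python (the field names are our own, not a formal DPIA template):

```python
from dataclasses import dataclass, field

@dataclass
class DpiaRecord:
    """Illustrative record of the governance factors above for one project."""
    project: str
    data_types: list          # e.g. ["contact details", "usage history"]
    purpose: str              # what the data will be used for
    processing_location: str  # where the data will be processed
    special_categories: list = field(default_factory=list)

    def needs_extra_safeguards(self) -> bool:
        # Special category data demands stricter controls under the GDPR.
        return len(self.special_categories) > 0

# Hypothetical example entry
entry = DpiaRecord(
    project="Churn prediction pilot",
    data_types=["contact details", "usage history"],
    purpose="Predict likelihood of contract cancellation",
    processing_location="EU-hosted cloud region",
)
```

However you capture it, the point is the same: the answers should be written down per project, so you can demonstrate them later if challenged.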
Whilst AI technologies do make this trickier and are likely, by design, to include automated decision making (AI by its very nature requires as much unfettered data as possible, whilst Privacy By Design focusses on data minimisation), the important thing is to be able to demonstrate the steps you have taken to mitigate as many risk factors as possible.
It’s important that this task isn’t just delegated to your Data Scientists or Developers though.
AI developers may have a tendency to prioritise data collection, and a wider view than that will be needed.
These steps can’t be a tick box exercise either; you should never underestimate the amount of time, expertise or resources your AI governance and risk management efforts deserve if you want to be compliant.
A risk-based approach to data protection and AI means you must consider (and document) how you comply with your obligations under the law, taking specific measures appropriate to your organisation and showing you’ve balanced the risks to an individual’s rights and freedoms against your legitimate business interests.
Setting your risk appetite should be an intentional decision, and one that forms part of an AI strategy document. This will also affect the range of algorithms you can choose from for your solution.
The AI strategy document should scope out frameworks for applying AI within your organisation over a 3-5 year horizon, and it should assess the risks posed to individuals’ rights and freedoms.
When it comes to AI technology, the various risks posed by your project and how data is to be processed will mean you need to take a balanced approach between those competing interests to ensure you remain compliant.
That doesn’t mean you need to assume a zero-risk stance.
A zero-risk stance would, in and of itself, be immensely impractical (the law even recognises this).
What it’s about is assessing your own use of AI and doing your utmost, at an organisational level, to mitigate the Data Protection risks.
Consider the following:
Whilst this process may feel long-winded and a ‘nuisance’, doing it correctly will give you a much better picture of your organisation’s risk proposition and exposure, and of how adequate your governance is in balancing out the various conflicts. It will also help you to justify your actions if you are challenged in the future.
It’s not uncommon to have several different organisations involved in the planning, development and then deployment of Artificial Intelligence technology.
Whilst the GDPR does recognise that not all the parties involved in processing this data will have the same degree of control over it, it’s still incredibly important to identify who’s the controller, who’s a joint controller and who’s simply processing the data… and then to document these facts.
GDPR’s data minimisation principle states you should be storing and processing the minimum amount of personal data you can to achieve your business’s goals.
“Personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’)” (GDPR, Article 5(1)(c))
As we’ve already mentioned, though, AI technology requires pretty much the exact opposite… so how do you reconcile these two seemingly diametrically opposed stances?
Whilst it may sound like a huge obstacle to overcome, a closer look at the legislation shows the way forward.
It clearly states that the data used must be limited to only what is necessary to achieve your stated goals. Whilst AI certainly pushes that limit, it is still possible to conform to both.
How you go about determining what is ‘adequate, relevant and limited’ is therefore going to be specific to your circumstances and should be captured within your DPIA.
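As a simple illustration of data minimisation in practice, the Python sketch below drops every field a (hypothetical) DPIA has not justified before records reach an AI pipeline. The field names and the stated purpose are assumptions for the example only:

```python
# Hypothetical outcome of a DPIA: only these fields were judged
# necessary for the stated purpose (e.g. churn prediction).
NECESSARY_FIELDS = {"customer_id", "tenure_months", "monthly_spend"}

def minimise(record: dict) -> dict:
    """Drop every field the DPIA has not justified, so only the
    minimum necessary personal data reaches the AI pipeline."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "customer_id": "C-1001",
    "full_name": "Jane Doe",      # not justified for this purpose
    "home_address": "1 High St",  # not justified for this purpose
    "tenure_months": 18,
    "monthly_spend": 29.99,
}

minimal = minimise(raw)  # only the three justified fields remain
```

The allow-list approach (keep only what is justified, rather than stripping what looks sensitive) mirrors the wording of the principle: data must be limited to what is necessary for the stated purpose.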
Once you understand what data is being captured, the purpose it will be used for, and what measures are needed to manage the security risks of processing that data, then when it comes time to implement AI, Microsoft Azure can help.
We end by simply listing a few products from Microsoft within Azure, Windows or Office 365 that illustrate the options that can be layered up to create defence-in-depth for your data:
If you're struggling to introduce AI within your organisation, or are worried about the implications of GDPR, then feel free to speak to one of cloudThing's experts today...