

Data Protection & Artificial Intelligence: Best Practice

Resolving the conflicts between the masses of data AI requires and an individual’s right to privacy

The unlimited potential Artificial Intelligence can bring to an organisation is far too broad a subject to cover here (although we have discussed it at length elsewhere), but unfortunately, with the benefits of AI also come the pitfalls…

 

The main pitfall we’ll be focussing on here is that of data protection, and all its associated concerns, such as privacy and data security.

 

Under GDPR (or whatever equivalent legislation applies within your territory), using technology for the processing and handling of personal data through complex computer systems (and often rather opaque algorithms) is something you’ll need to consider very, very carefully as an organisation.

 

Let’s make it clear: we’re not trying to put you off Artificial Intelligence, merely stating that its use requires due consideration.

 

Its potential, when applied to a business’s processes through the right partner, is nearly limitless.

But there are things you need to consider (and again… the right partner should be able to help you with that).

 

The following should give you a brief overview of how to mitigate any data protection risks arising from the implementation of an AI project within your organisation, without scaring you so much that you lose sight of the benefits such a project can deliver.

If at the end however, you still have questions, feel free to get in touch to discuss your needs further…

Taking A Risk Based Approach To Data Protection & Artificial Intelligence

First things first… This is going to be a lot less complicated than you thought it might be.

 

When you’re assessing the impact Data Protection may have on your Artificial Intelligence project (no matter how complex), it’s worth remembering that the questions needing answers will be the exact same questions that needed answering for all your other projects.

 

  • What data will be used, and if personal data is being processed, do I really need it? 
  • Is the data being processed under one of the lawful bases for processing?
  • Have I adequately informed my end-users of how their data is being used?
  • Is the data being processed securely?

 

The starting point for this is completing a Data Protection Impact Assessment (DPIA) and deciding (and documenting) what data is relevant and needed for the project. This, alongside the steps you’ve taken to secure that data, will be key.
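
To make the ‘document it’ part concrete, here’s a minimal sketch (in Python, purely illustrative; every field name is our own invention, not a prescribed DPIA format) of how a data inventory for the project might be recorded:

    from dataclasses import dataclass

    @dataclass
    class DataItem:
        """One entry in a hypothetical DPIA data inventory."""
        name: str               # e.g. "email address"
        is_personal: bool       # does it relate to an identifiable person?
        purpose: str            # why the project needs it
        lawful_basis: str       # e.g. "consent", "legitimate interests"
        security_measures: str  # how it is protected

    inventory = [
        DataItem("email address", True, "account notifications",
                 "consent", "encrypted at rest, role-based access"),
        DataItem("page-view counts", False, "model training feature",
                 "n/a", "standard controls"),
    ]

    # Flag any personal data so its necessity has to be justified explicitly.
    for item in inventory:
        if item.is_personal:
            print(f"Justify: do we really need '{item.name}'? ({item.purpose})")

However you record it, the point is the same: every item of personal data should have a documented purpose, lawful basis and protection measure before the project starts.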

 

If no personal data is being captured, everything becomes much simpler.

 

Don’t forget that personal data is any information relating to a natural person who can be identified, directly or indirectly, from the information being processed. If you absolutely must use personal data, then ensure adequate controls are in place to restrict access, keep it safely encrypted and pseudonymise it where possible.
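
As a concrete illustration of pseudonymisation, here’s a minimal Python sketch using a keyed hash (HMAC). It’s a sketch under our own assumptions, not a complete solution: the key and record fields are invented, and in practice the key would live in a secrets store, kept separate from the data. Remember too that pseudonymised data still counts as personal data under GDPR, because the key can re-link it to an individual.

    import hmac
    import hashlib

    # Illustrative only: in production, fetch this from a key vault and keep it
    # away from the dataset, since whoever holds it can reverse the pseudonyms.
    SECRET_KEY = b"replace-with-a-managed-secret"

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier with a stable, keyed pseudonym."""
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    record = {"email": "jane.doe@example.com", "age_band": "35-44"}
    record["email"] = pseudonymise(record["email"])
    print(record)  # the email is now a pseudonym; the other field is untouched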

 

As with all projects, the key to getting this right will be through a principle of Privacy By Design.

If you make it your goal to mitigate privacy risks as part of the initial project design, rather than as a rushed (and potentially bodged) bolt on at the end of the project, it’s likely you’ll be successful in coming up with a valid and compliant Data Protection solution.

 

No matter what the project, good Data Protection governance has always depended on specific factors such as the types of data being captured, what the data will be used for, how and where it will be used, and whether any special categories of data are involved.

Whilst AI technologies do make this trickier and are likely, by design, to include automated decision making (AI by its very nature requires as much unfettered data as possible, whilst Privacy By Design focusses on data minimisation), the important thing is to be able to demonstrate the steps you have taken to mitigate as many risk factors as possible.
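
In practice, one demonstrable minimisation step is making sure anything not justified in your DPIA never reaches the training pipeline. A hedged sketch (the column names and the pandas-based pipeline are our own assumptions):

    import pandas as pd

    # Columns the DPIA concluded are necessary for the model (illustrative).
    APPROVED_FEATURES = ["age_band", "region", "purchase_count"]

    raw = pd.DataFrame({
        "email":          ["jane.doe@example.com", "j.smith@example.com"],
        "age_band":       ["35-44", "25-34"],
        "region":         ["West Midlands", "London"],
        "purchase_count": [12, 3],
    })

    # Drop everything that isn't documented as necessary before training,
    # so undocumented personal data is never processed by the model.
    training_data = raw[APPROVED_FEATURES]
    print(training_data)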

 

It’s important that this task isn’t just delegated to your Data Scientists or Developers though.

AI developers may have a tendency to prioritise data collection, and a wider view than that will be needed.

These steps can’t be a tick box exercise either; you should never underestimate the amount of time, expertise or resources your AI governance and risk management efforts deserve if you want to be compliant.

Data Protection & Artificial Intelligence: How To Set Your Risk Appetite

A risk based approach to data protection and AI means you must consider (and document) how you comply with your obligations under the law by taking specific measures that are appropriate to your organisation, and showing you’ve balanced the risks to an individual’s rights and freedoms against your legitimate business interests.

 

Setting your risk appetite should be an intentional exercise and should form part of an AI strategy document. This will also affect the range of algorithms you can choose from for your solution.

 

The AI strategy document should scope out frameworks for applying AI within your organisation over a horizon of 3-5 years, and it should assess the risks posed to individuals’ rights and freedoms.

 

When it comes to AI technology, the various risks posed by your project and how data is to be processed will mean you need to take a balanced approach between those competing interests to ensure you remain compliant.

 

But…

That doesn’t mean you need to assume a zero-risk stance.

A zero-risk stance in and of itself would be immensely impractical (the law even recognises this).

What it’s about is assessing your own use of AI and doing your utmost, at an organisational level, to mitigate the Data Protection risks.

Consider the following:

 

  • Have you thoroughly (and accurately) assessed the risks to an individual’s rights and privacy that may come about due to your AI activities?
  • Have you determined how all these risks will be mitigated?
  • How will you collate, store and use this data?
  • What volume and sensitivity of data are you collecting?
  • What’s the final outcome you’re attempting to achieve by collating and processing this data?
  • Have you clearly documented these risk assessments?

 

Whilst this process may feel long-winded and a ‘nuisance’, doing it correctly will give you a much better picture of your organisation’s risk proposition and exposure, and how adequate your governance is in balancing out the various conflicts. It will also help you to justify your actions if you are challenged in the future.
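
What might ‘clearly documented’ look like? Even something as lightweight as the following hypothetical risk-register entry (all field names and values are our own invention) can evidence your reasoning later:

    # One documented AI data-protection risk, its mitigation and sign-off.
    risk_register = [
        {
            "risk": "Training set contains direct identifiers",
            "likelihood": "medium",
            "impact": "high",
            "mitigation": "Pseudonymise identifiers before ingestion; "
                          "restrict raw-data access to named engineers",
            "residual_risk": "low",
            "owner": "Data Protection Officer",
            "reviewed": "2020-06-01",
        },
    ]

    for entry in risk_register:
        print(f"{entry['risk']} -> residual risk: {entry['residual_risk']}")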

Identifying The Controller Of Your AI Technology

It’s not uncommon to have several different organisations involved in the planning, development and then deployment of Artificial Intelligence technology.

 

Whilst GDPR legislation does recognise that not all the parties involved in the processing of this data will have the same degree of control over the data being processed, it’s still incredibly important to identify who’s the controller, who’s a joint controller and who’s just processing the data… and then document these facts.

How To Make AI Systems Conform To The Data Minimisation Principle

GDPR’s data minimisation principle states you should be storing and processing the minimum amount of personal data you can to achieve your business’s goals.

“Personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’)”

GDPR Article 5(1)(c)

As we’ve already mentioned though, AI technology requires pretty much the exact opposite… so how do you reconcile these two seemingly diametrically opposed stances?

Whilst it may sound like a huge obstacle to overcome, a closer look at the legislation shows the way forward.

 

It clearly states that the data used needs to be limited to only what is necessary to complete your stated goals. Whilst AI certainly pushes that limit, it’s still possible to conform to both.

How you go about determining what is ‘adequate, relevant and limited’ is therefore going to be specific to your circumstances and should be captured within your DPIA.

 

Once you understand what data is being captured, the purpose it will be used for, and what measures are needed to manage the security risks of processing that data, Microsoft Azure can help when it’s time to implement AI.

 

We’ll end by simply listing a few Microsoft products within Azure, Windows and Office 365 that illustrate the options that can be layered up to create defence-in-depth for your data:

 

  • Azure Information Protection
  • SQL Server Transparent Data Encryption (TDE)
  • Dynamic Data Masking
  • Always Encrypted
  • Data Classification
  • Azure Advanced Threat Protection
  • Privileged Identity Management (PIM)
  • Group Policy
  • Conditional Access Policies
  • MFA through Azure AD
  • Azure Recovery Services Vault
  • Intune
  • Azure Application Gateway
  • Azure API Gateway
  • Azure Firewall
  • Azure Sentinel
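
As one concrete example from that list, Dynamic Data Masking lets non-privileged users query a table without ever seeing the real values. Here’s a hedged sketch of switching it on from Python with pyodbc, assuming an Azure SQL database; the connection string, table and column names are placeholders:

    import pyodbc

    # Placeholder connection details: in practice, pull credentials from a vault.
    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=your-server.database.windows.net;"
        "DATABASE=your-db;UID=your-user;PWD=your-password"
    )

    # Built-in email() masking function: non-privileged users see aXXX@XXXX.com.
    MASK_EMAIL = """
    ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
    """

    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    cursor.execute(MASK_EMAIL)
    conn.commit()
    conn.close()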

Contact Us

If you're struggling to introduce AI within your organisation or are worried about the implications of GDPR, then feel free to speak to one of cloudThing's experts today...
