The Privacy and Data Protection Journal has published an article by Duc Tran, Senior Associate from our Digital TMT, Sourcing & Data Team, exploring automated decision making under the General Data Protection Regulation (GDPR).
In recent times, forward-thinking organisations have sought to automate their operations and decision making processes, and to make them more effective and efficient, using new and disruptive technologies such as AI and machine learning. However, whilst the efficiency gains and other benefits may be considerable, it is important for these organisations to be aware of the legal implications of using such technology.
One of these considerations is the restriction on the use of machines and automated systems to make decisions about individuals.
Article 22 of the GDPR seeks to protect individuals from having important decisions (those with a legal or ‘similarly significant effect’) made about them by solely automated means (“automated decision making”). Indeed, automated decision making is only permitted under Article 22 in certain, limited situations.
However, there is a significant amount of ambiguity surrounding the application of the rules on automated decision making, including in relation to when a given process will amount to automated decision making for the purposes of Article 22.
The article seeks to explore this ambiguity, examining the following issues in the context of real-world decision making processes:
- The meaning of ‘similarly significant effect’;
- When a decision is deemed ‘solely automated’; and
- The level of human intervention required to take a decision outside the scope of Article 22, and whether this human intervention can take place at the ‘input’ or ‘output’ stage of a given decision making process.
Further guidance on automated decision making is available on the ICO’s AI Framework blog.