Data Protection in Guernsey

Security in Guernsey

Security features more prominently under the DPL 2017 than under its predecessor. Whilst implementing appropriate security measures to safeguard personal data from unauthorised or unlawful processing continues to be a feature of the DPL 2017 (see Principle 6, 'Integrity and Confidentiality'), the DPL 2017 (unlike its predecessor) sets out more clearly the steps required to ensure compliance.

Data controllers must take reasonable steps to ensure a level of security which is appropriate to the personal data. In doing so, they must take into account: the nature, scope, context and purpose of the processing; the likelihood and severity of the risks to data subjects if the personal data is not secure (including the risk of unlawful or accidental destruction, loss or alteration, and/or unauthorised disclosure of personal data); best practice; and the costs of implementing appropriate measures.

Section 41 of the DPL 2017 provides some assistance as to what may be regarded as a reasonable 'step' to ensure appropriate security. In essence, to ensure compliance with this obligation, a controller should consider:

  • pseudonymising and encrypting personal data
  • ensuring that the controller or processor has and retains the ability to:
    • ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services; and
    • restore access to personal data in a timely manner in the event of a physical or technical incident; and
  • establishing and implementing a process for regular testing and evaluation of the effectiveness of the technical and organisational measures.
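By way of illustration only, the first of the section 41 measures, pseudonymisation, might be sketched as follows in Python. This is a hypothetical outline, not a prescribed method: the sample record, key handling and function names are invented for the example, and encrypting personal data at rest would additionally require a vetted cryptographic library and proper key management.

```python
# Illustrative sketch of pseudonymisation: replacing a direct identifier
# with an irreversible keyed token (HMAC-SHA256). All names and data here
# are hypothetical examples, not part of the DPL 2017 or ODPA guidance.
import hashlib
import hmac
import secrets

# The key must be held separately from the pseudonymised data set;
# in practice it would live in a key vault or HSM, not in code.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Return a keyed, non-reversible token standing in for the identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "island": "Guernsey"}
record["name"] = pseudonymise(record["name"])  # direct identifier replaced by a token
```

A keyed hash is used rather than a plain hash because low-entropy identifiers such as names can be recovered from an unkeyed hash by brute force; keeping the key apart from the data is what preserves the pseudonymous character of the token.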

Several provisions throughout the DPL 2017 touch on the security obligations. The key provisions appear not only in the main security section (Part VI of the DPL 2017) but are also a key consideration (amongst other things) when undertaking a data protection impact assessment, when giving effect to the right to erasure, in a controller's duty to take reasonable steps to achieve compliance, and in the measures that should be in place when choosing a processor. For example, when assessing the suitability of a processor, a controller must ensure that the processor provides sufficient guarantees that reasonable technical and organisational security measures governing the processing will be established to meet the requirements of the DPL 2017.

The ODPA has published targeted cybersecurity guidance (including a cybersecurity checklist), emphasising practical steps organisations should follow to mitigate modern threat vectors such as phishing and credential compromise. Examples of recommended cybersecurity measures include software updates, staff awareness, incident planning and access restrictions. This guidance reinforces the statutory security obligations under the DPL 2017.

Security under the DPL 2017 is intrinsically linked to accountability, meaning controllers must be able to demonstrate the security measures implemented, including maintaining internal policies, risk assessments, training records, incident logs, and testing records.

The ODPA also published a Ten‑Step Practical AI Guidance ("AI Guidance"), recognising that AI technologies are increasingly forming part of everyday business processes and rely heavily on personal data. The AI Guidance emphasises that AI must be used in a way that is fair, transparent, accountable, and respectful of individuals' rights, and that organisations must ensure compliance with the DPL 2017 when deploying AI systems.

The AI Guidance encourages organisations to identify whether the AI system will process personal data, understand their role as controller or processor, and identify the lawful basis relied upon for AI-related processing. It further notes that AI systems must operate transparently, meaning organisations should be clear about their use of AI systems, how decisions are made, what data is used and how individuals may be affected, and should be prepared to explain AI decisions. The AI Guidance also stresses the importance of conducting appropriate data protection impact assessments before deploying AI systems that may significantly affect individuals, ensuring that data-subject rights can be exercised within the AI environment, and implementing robust technical and organisational security measures throughout the lifecycle of the AI system.
